Search Results

Search found 42321 results on 1693 pages for 'sql reporting services 05'.


  • How do I stop CRM from asking for admin credentials when printing a report

    - by Ac0ua
    Problem: When printing a report in CRM, I am asked by the UAC for admin credentials. There is a long URL: https://(crm_hostname)/reserved.ReportViewerWebcontrol.axd?ReportSession=(ID)ControlID=(ID)Culture=1333&UICulture=1033&ReportStack=1&OpType=PrintCab Info: There are three users that have the issue laid out here. They are not admins of any kind. It looks like it is asking for permission to allow SQL Server Reporting Services 2008 to run. When I put my credentials in, it just brings up the Print dialog box (this is fine, just stop asking for credentials). I know this might sound silly, but I downloaded and installed "SQL Server Reporting Services 2008" hoping my credentials would give permission right from the beginning. Giving the users local admin is, I was told, not an option. Note: I did post this on community (dot) dynamics (dot) com, but they are not a very active site. Thanks for any help, even if it just points me in the right direction!

    Read the article

  • What's the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application that processes high volumes of transactions 24/7, and we are currently investigating a better way of doing DB replication to our disaster recovery site. Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor; however, their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them, so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write it to the DR site, but this is (a) a bad solution and (b) getting the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB intensive. We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives. RPO: Zero loss - this is transactional data with financial impact, so we can't lose anything. RTO: Tending to zero - the business depends on our ability to process transactions; every minute we are down, we are losing money. I have looked at a few of the other questions/answers but none meet our case exactly: SQL Server 2008 failover strategy - Log shipping or replication? How to achieve the following RTO & RPO with logshipping only using SQL Server? What is the best of two approaches to achieve DB Replication? My current thinking is that we should use mirroring, but I am concerned that for an RPO of zero we will need to delay commits, and this could impact the performance of the primary DB, which is not an option. Our current DR process is to: (1) stop incoming traffic to the primary site and allow all in-flight transactions to complete; (2) allow the replication to DR to complete; (3) change network routing to route to the DR site; (4) start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions). In other words, the DR database needs to catch up with the primary as quickly as possible and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log shipping too), and can anyone suggest other considerations that we should keep in mind?
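
    A minimal T-SQL sketch of the synchronous (high-safety) mirroring setup being considered - the database name, server names and port below are hypothetical, and it assumes the mirroring endpoints already exist and the DR copy was restored from a full backup WITH NORECOVERY:

        -- On the mirror (DR) server: point the partnership at the principal.
        ALTER DATABASE Payments SET PARTNER = N'TCP://primary.example.local:5022';

        -- On the principal: complete the partnership, force full safety,
        -- and add a witness so failover can be automatic.
        ALTER DATABASE Payments SET PARTNER = N'TCP://drserver.example.local:5022';
        ALTER DATABASE Payments SET PARTNER SAFETY FULL;
        ALTER DATABASE Payments SET WITNESS = N'TCP://witness.example.local:5022';

        -- Confirm the configuration.
        SELECT mirroring_state_desc, mirroring_safety_level_desc, mirroring_witness_state_desc
        FROM sys.database_mirroring
        WHERE database_id = DB_ID(N'Payments');

    With SAFETY FULL a commit on the principal waits for the mirror to harden the log, which is exactly the latency trade-off raised above; only testing the payment workload over the actual WAN link will show whether that delay is acceptable.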

    Read the article

  • Backup SQL databases and folders, 7-Zip them, and copy to FTP

    - by laurens
    Hi all, We are quite stuck on which solution to choose for this backup issue. What should happen: First, there should be an interface where we can choose several SQL databases (checkboxes or similar); a few folders should also be backed up - this could be part of the program or could be separate. I am thinking of an interface where we select folders, but a txt file (or XML) with paths to folders is just as good. Next, everything should be 7-Zipped, with the SQL databases and the files kept separate. Finally, everything should be copied to a local network drive and then copied via FTP. Also important: it could be programmed or (partly) bought, but it can't be one of those expensive $1000+ backup tools. I already found a fairly priced tool that already does most of the tasks (7-Zip and copy to FTP): sqlbackupandftp.com. For your information: we had a kind of self-made tool created by a colleague (some time ago), but it became very untrustworthy and, as the databases grew, it couldn't handle them anymore... moving on. Please come up with suggestions. Thanks in advance!
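
    A minimal T-SQL sketch of the database-backup half of this, assuming the chosen databases and the staging folder come from your own interface (the names and path below are placeholders); the 7-Zip and FTP steps would then run against the staging folder:

        -- Back up each selected database to a local staging folder.
        DECLARE @name sysname, @file nvarchar(260);
        DECLARE chosen CURSOR LOCAL FAST_FORWARD FOR
            SELECT name FROM sys.databases
            WHERE name IN (N'SalesDb', N'AccountingDb');   -- the databases ticked in the UI
        OPEN chosen;
        FETCH NEXT FROM chosen INTO @name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @file = N'D:\BackupStaging\' + @name + N'_'
                      + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak';
            BACKUP DATABASE @name TO DISK = @file WITH INIT;
            FETCH NEXT FROM chosen INTO @name;
        END;
        CLOSE chosen;
        DEALLOCATE chosen;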

    Read the article

  • SQL Server 2008 Log-shipping: Without a UNC drive: how?

    - by samsmith
    My real question here is... is there a tool I can use? (E.g. I have a lot to do, and would prefer not to script it all up myself!) Is anyone using the Redgate tool for this? (Hmmm, they had one, but I do not see it on their web site now...) I have a primary web app at Rackspace and am setting up a backup copy of the app in another data center. I want to use SQL log shipping to sync the db. We are using SQL Server Web Edition. TIA for suggestions and insight!
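
    For reference, a hedged sketch of the "roll your own" variant when no UNC share is available - the database name and paths are placeholders, and the copy between data centers (FTP, robocopy over VPN, etc.) happens outside SQL Server:

        -- On the primary, in a scheduled job: back up the log to a local folder.
        BACKUP LOG WebAppDb TO DISK = N'D:\LogShip\WebAppDb_201201011200.trn' WITH INIT;

        -- ...copy the .trn file to the secondary data center by whatever transport you have...

        -- On the secondary (database restored earlier WITH NORECOVERY or STANDBY):
        RESTORE LOG WebAppDb
            FROM DISK = N'D:\LogShip\WebAppDb_201201011200.trn'
            WITH STANDBY = N'D:\LogShip\WebAppDb_undo.dat';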

    Read the article

  • Where to download MS SQL Server 2005 Developer Edition?

    - by Mark
    Just got put in charge of a big web project. All I know is the web server is running MS SQL 2005, so I need something comparable to test locally. I figure developer edition is my best bet because it offers everything that the enterprise edition does, but is for development purposes only. But this page is pretty worthless http://www.microsoft.com/sqlserver/2005/en/us/developer.aspx Where do I actually download it? What about SQL 2005 Express? Would that meet my needs? I can't figure out all the differences between these stupid MS products.

    Read the article

  • MSSQL, ASP.NET, IIS. SQL Server perfmon log question

    - by Datapimp23
    Hi, I'm testing a web application that runs on a hypervisor. The database server and the web server are separate VMs that run on the same hypervisor. We did some tests and the functions perform OK. I'd like you to look at a screenshot of a perfmon log of the SQL 2005 server at the busiest moment. The web server's perfmon log looks fine, and it's obvious that we have enough resources to present the page in a timely fashion. http://d.imagehost.org/view/0919/heavyload http://d.imagehost.org/0253/heavyloadz.jpg (zoomed out) The striped blue line maxing out is the processor queue length (scale 100.0); the green line at around value 30 is Available MBytes (scale 0.01); the rest of the counters are visible on the screenshot. The SQL Server machine has no CPU limitations on the hypervisor resources and has 5 vCPUs and 5 GB RAM. Can someone help me interpret this log? Thanks

    Read the article

  • SQL Server suddenly using only a small portion of CPU.

    - by hermiod
    We've got a Windows 2008 R2 server running SQL Server 2008. All of a sudden, the SQLServer process is refusing to go above 20% CPU usage. As of last week, when running a heavy query against the db it would rise to 100% usage, as I would expect. We've had this server for a while, and it seems strange that it would just suddenly have this limit. This limit is causing our queries to take a lot longer than they normally would. No one has (knowingly at least) made any changes to the server configuration. After a bit of investigation, I discovered the sys.dm_os_sys_memory view. This shows 'available physical memory is high', but at the same time the available physical memory is 339552kb whereas the total is 4193848kb. It is worth noting that this is a virtual server running on VMware. Is there a setting somewhere within SQL Server that sets the maximum CPU usage? I've found the settings in Resource Governor, although this is currently off, as it always has been. We have recently started using Spotlight for SQL Server by Quest Software. Its playback database was located on this server for a short time this morning; I first noticed the problem shortly afterwards, although I hadn't been doing any queries prior to this, so I don't know if this is the point at which the problem began; however, the database was working as expected on Friday afternoon. The Windows log shows that the following settings were applied to the SpotlightPlaybackDatabase when it was created. 02/21/2011 08:45:02,spid60,Unknown,Setting database option TORN_PAGE_DETECTION to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option MULTI_USER to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option READ_WRITE to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_UPDATE_STATISTICS to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_CREATE_STATISTICS to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option ANSI_WARNINGS to OFF for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option CONCAT_NULL_YIELDS_NULL to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option RECOVERY to SIMPLE for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option QUOTED_IDENTIFIER to OFF for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_CLOSE to OFF for database SpotlightPlaybackDatabase. Could any of these setting changes have modified the settings applied to the whole server?
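
    A hedged diagnostic sketch to rule out instance-level throttles before blaming the VM or the monitoring tool (these are all plain catalog/DMV reads):

        -- Settings that can cap CPU or memory for the whole instance.
        SELECT name, value, value_in_use
        FROM sys.configurations
        WHERE name IN ('affinity mask', 'affinity64 mask',
                       'max degree of parallelism', 'max server memory (MB)');

        -- Resource Governor should report 0 here if it really is off.
        SELECT is_enabled FROM sys.resource_governor_configuration;

        -- The memory figures quoted above, straight from the DMV.
        SELECT total_physical_memory_kb, available_physical_memory_kb,
               system_memory_state_desc
        FROM sys.dm_os_sys_memory;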

    Read the article

  • Intermittent timeout when connecting to SQL Server, what do I look for?

    - by Will
    SQL Server 2008 Standard 64-bit on a Windows Server 2008 R2 virtual machine hosted on a Hyper-V server. I'm getting intermittent timeouts when connecting to the server. This happens for both Windows and SQL authentication, and it may time out on 2 out of 5 tries in different applications. When the connection times out, I can see (in Profiler) that no connection was made. The firewall has the necessary holes, and the server port is static (good ol' 1433). If I ping /t the server I get a steady connection that wavers between 1 and 2 ms. Any ideas on what else to try would be appreciated, thanks.
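
    One hedged place to look is the connectivity ring buffer, which records login timeouts and network errors that never reach the SQL Server error log:

        -- Recent connectivity events (errors, login timeouts) kept in memory by SQL Server.
        SELECT CAST(record AS xml) AS record_xml, [timestamp]
        FROM sys.dm_os_ring_buffers
        WHERE ring_buffer_type = N'RING_BUFFER_CONNECTIVITY'
        ORDER BY [timestamp] DESC;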

    Read the article

  • A good SQL database for processing a lot of data?

    - by Dorian
    I have to process something like 10-100 million records and give the data to the client when it's finished. The data is delivered as SQL statements to execute in the database. He has a powerful server with MySQL; I think it will be fast enough. The issue is that my computer is not as powerful as his server, so I would like to use another SQL server that is compatible with MySQL (I export his database and import it on my computer) but more powerful. What should I use? Or am I doomed to use MySQL?

    Read the article

  • Running 32-bit SQL Server 2005 on 64-bit Windows Server?

    - by TooFat
    If I have a 32-bit version of SQL Server 2005 running on a 64-bit Windows Server, does the maximum amount of memory available to the SQL Server process increase from 2 GB to 4 GB? I have been reading this blog entry by Mark Russinovich, in which he states that "All Microsoft server products and data intensive executables in Windows are marked with the large address space awareness flag" and "Because the address space on 64-bit Windows is much larger than 4GB, something I’ll describe shortly, Windows can give 32-bit processes the maximum 4GB that they can address and use the rest for the operating system’s virtual memory." This leads me to believe that the answer is "yes", but I am not totally confident.

    Read the article

  • Get percentage free space on database volumes w/ SQL Server 2005?

    - by Allen
    I am currently using SQL Server 2005 and (undocumented, I believe) master..xp_fixeddrives to get free space on my database volumes as part of my monitoring. However, this only gives me an absolute number of MB free. What I really need is the percentage free. Is there another way in SQL Server 2005 to get this? If not, is there some other lightweight way to get it? If I can, I want to avoid installing a Java JRE, or Perl, or Python on my database server. Perhaps VBScript, or a small Windows executable on the file system? Yes, I know I can Google this, and I have. It looks like there are a few ways to accomplish it, and I'm curious how my DBA brethren have handled this.
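
    One lightweight, hedged option for 2005 (sys.dm_os_volume_stats does not exist in that version) is to shell out to WMIC for total size as well as free space - this requires enabling xp_cmdshell, which is a security trade-off to weigh:

        -- Capture drive letter, free bytes and total bytes for local fixed disks.
        CREATE TABLE #disk (line nvarchar(4000) NULL);
        INSERT #disk
        EXEC master..xp_cmdshell
            'wmic logicaldisk where drivetype=3 get deviceid,freespace,size /format:csv';

        -- Data rows look like "HOST,C:,<free bytes>,<total bytes>";
        -- split the last two fields to compute the percentage free.
        SELECT line FROM #disk WHERE line LIKE '%:,%';

        DROP TABLE #disk;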

    Read the article

  • SQL server could not connect: Lacked Sufficient Buffer Space...

    - by chumad
    I recently moved my app to a new server - the app is written in c# against the 3.5 framework. The hardware is faster but the OS is the same (Win Server 2003). No new software is running. On the prior hardware the app would run for months with no problems. Now, in this new install, I get the following error after about 3 days, and the only way to fix it is to reboot: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.) I have yet to find a service I can even shut down to make it work. Anyone had this before and know a solution?
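
    A hedged diagnostic sketch: this error usually means the client box has run out of ephemeral ports or socket buffers, most often because connections are not being closed/disposed and so cannot be pooled. Counting open connections per client address on the SQL Server side can confirm a leak:

        -- A steadily growing count for the app server's address points to leaked connections.
        SELECT client_net_address, COUNT(*) AS open_connections
        FROM sys.dm_exec_connections
        GROUP BY client_net_address
        ORDER BY open_connections DESC;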

    Read the article

  • How to include error messages in backup reports for SQL Server 2008 R2?

    - by avs099
    Right now I have daily (differential) and weekly (full) backups set up on my SQL Server 2008 R2 as SQL Server Agent jobs, with email notifications if a job fails. I do get emails like this: JOB RUN: 'Daily backup.Diff backup' was run on 4/11/2012 at 3:00:00 AM DURATION: 0 hours, 0 minutes, 28 seconds STATUS: Failed MESSAGES: The job failed. The Job was invoked by Schedule 9 (Daily backup.Diff backup). The last step to run was step 1 (Diff backup). But often that happens because we delete/create databases - and the diff backup fails. The only way for me to see the actual reason is to go to Log Viewer - Maintenance Plans logs. Is it possible to include the "Error Message" field from the logs in the notification emails? And more generally - is it possible to change the notification email templates somehow? Thank you.
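
    A hedged sketch of where that error text lives; a final "run on failure" job step could query it and mail the result (for example via msdb.dbo.sp_send_dbmail with its @query parameter):

        -- Maintenance plan step errors.
        SELECT start_time, error_number, error_message
        FROM msdb.dbo.sysmaintplan_logdetail
        WHERE succeeded = 0
        ORDER BY start_time DESC;

        -- Or the failed step messages from Agent job history.
        SELECT j.name, h.step_name, h.run_date, h.run_time, h.message
        FROM msdb.dbo.sysjobhistory AS h
        JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
        WHERE h.run_status = 0          -- 0 = failed
        ORDER BY h.instance_id DESC;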

    Read the article

  • After moving our servers to a virtual environment using VMware, SQL timeouts appeared - why?

    - by RayofCommand
    We moved our servers to a virtual cloud (VMware) in which only our servers run. But ever since we finished migrating everything, we have been fighting SQL timeouts and machine slowdowns we can't explain, even though we roughly doubled the servers' capacity when switching from physical to virtual. I googled and found that we are not alone: people are complaining about poor performance after moving to a cloud managed by VMware. Are there any known issues? Sometimes our services can't access a disk, or SQL receives a timeout, and we have no idea why.
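
    A hedged first step when the slowdown has no obvious cause is to look at what SQL Server itself is waiting on; heavy disk-related waits versus CPU/scheduler waits point in very different directions (storage latency vs. vCPU scheduling on the host):

        -- Top waits since the instance started (filter out the usual idle/background waits).
        SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP',
                                N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR', N'XE_TIMER_EVENT')
        ORDER BY wait_time_ms DESC;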

    Read the article

  • Error installing SQL Server 2005 Express on AMD Turion?

    - by Angel
    Hello. Is there some compatibility problem between SQL Server 2005 Express and AMD processors? I read something along those lines on various sites, but found nothing 100% definitive. I am trying to install SQL Server 2005 Express on a computer that runs Windows Vista Home Premium, 32-bit edition, and the CPU is "AMD Turion Dual-Core RM-70". I tried both the 32-bit and 64-bit installers, just in case, and the process runs fine until the moment the service attempts to start, where it fails with the same error every time. I'm sorry I can't provide more details or the exact message, it was on a client computer to which I have no access at the moment. I hope someone could shed some light on this. Thank you.

    Read the article

  • SSAS Tabular Workshop online and other upcoming dates (and updates!) #ssas #tabular

    - by Marco Russo (SQLBI)
    After many conferences and travels, this summer I had some time to write and prepare new sessions for the next wave of conferences. In reality I am still doing just that, even though I have already restarted traveling for consulting and training. So expect new content about DAX and Tabular in the coming months! Starting to see real customers adopt Tabular is revealing many new challenges, and there is still a lot to learn and to create. If you still haven't started working on Tabular, well, you should. As I always say, as a BI developer you should be able to choose between Tabular and Multidimensional, and in order to do that you should know both of them! One thing that I don’t like very much about marketing is that “Tabular is simpler”, because it’s often translated into “Tabular is for simple projects”, and this last statement is not true. Actually, I see a lot of good reasons to adopt Tabular in complex data models, especially in non-traditional scenarios. I know, this is because I love to understand what the actual limits of a technology are, and I’m learning that there is simply a lot of room for improvement in Tabular, too. It’s already fast, but it could be faster! How can you start? Well, first of all, by reading our book. Then, by attending our SSAS Tabular workshop. There is an online edition of the workshop on September 3-4, 2012 (hurry up if you want to register), and there are already several dates planned for the next months (and others will be added soon!). And, of course, by installing SQL Server 2012 and trying to create models over your databases. If you are too lazy, just start with PowerPivot. As soon as you start working with Tabular or PowerPivot, you will see that there is one important skill you need: learning DAX. In the next few days I should publish an article that I’m finishing about best practices using SUMMARIZE and ADDCOLUMNS. If only someone had published this article a year ago, I would have saved many hours of my life. But, you know, flight manuals are written in blood… and someone has to write them! Stay tuned.

    Read the article

  • Windows Azure: Major Updates for Mobile Backend Development

    - by ScottGu
    This week we released some great updates to Windows Azure that make it significantly easier to develop mobile applications that use the cloud. These new capabilities include: Mobile Services: Custom API support Mobile Services: Git Source Control support Mobile Services: Node.js NPM Module support Mobile Services: A .NET API via NuGet Mobile Services and Web Sites: Free 20MB SQL Database Option for Mobile Services and Web Sites Mobile Notification Hubs: Android Broadcast Push Notification Support All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Mobile Services: Custom APIs, Git Source Control, and NuGet Windows Azure Mobile Services provides the ability to easily stand up a mobile backend that can be used to support your Windows 8, Windows Phone, iOS, Android and HTML5 client applications.  Starting with the first preview we supported the ability to easily extend your data backend logic with server side scripting that executes as part of client-side CRUD operations against your cloud back data tables. With today’s update we are extending this support even further and introducing the ability for you to also create and expose Custom APIs from your Mobile Service backend, and easily publish them to your Mobile clients without having to associate them with a data table. This capability enables a whole set of new scenarios – including the ability to work with data sources other than SQL Databases (for example: Table Services or MongoDB), broker calls to 3rd party APIs, integrate with Windows Azure Queues or Service Bus, work with custom non-JSON payloads (e.g. Windows Periodic Notifications), route client requests to services back on-premises (e.g. with the new Windows Azure BizTalk Services), or simply implement functionality that doesn’t correspond to a database operation.  The custom APIs can be written in server-side JavaScript (using Node.js) and can use Node’s NPM packages.  We will also be adding support for custom APIs written using .NET in the future as well. Creating a Custom API Adding a custom API to an existing Mobile Service is super easy.  Using the Windows Azure Management Portal you can now simply click the new “API” tab with your Mobile Service, and then click the “Create a Custom API” button to create a new Custom API within it: Give the API whatever name you want to expose, and then choose the security permissions you’d like to apply to the HTTP methods you expose within it.  You can easily lock down the HTTP verbs to your Custom API to be available to anyone, only those who have a valid application key, only authenticated users, or administrators.  Mobile Services will then enforce these permissions without you having to write any code: When you click the ok button you’ll see the new API show up in the API list.  Selecting it will enable you to edit the default script that contains some placeholder functionality: Today’s release enables Custom APIs to be written using Node.js (we will support writing Custom APIs in .NET as well in a future release), and the Custom API programming model follows the Node.js convention for modules, which is to export functions to handle HTTP requests. The default script above exposes functionality for an HTTP POST request. To support a GET, simply change the export statement accordingly.  
Below is an example of some code for reading and returning data from Windows Azure Table Storage using the Azure Node API: After saving the changes, you can now call this API from any Mobile Service client application (including Windows 8, Windows Phone, iOS, Android or HTML5 with CORS). Below is the code for how you could invoke the API asynchronously from a Windows Store application using .NET and the new InvokeApiAsync method, and data-bind the results to control within your XAML:     private async void RefreshTodoItems() {         var results = await App.MobileService.InvokeApiAsync<List<TodoItem>>("todos", HttpMethod.Get, parameters: null);         ListItems.ItemsSource = new ObservableCollection<TodoItem>(results);     }    Integrating authentication and authorization with Custom APIs is really easy with Mobile Services. Just like with data requests, custom API requests enjoy the same built-in authentication and authorization support of Mobile Services (including integration with Microsoft ID, Google, Facebook and Twitter authentication providers), and it also enables you to easily integrate your Custom API code with other Mobile Service capabilities like push notifications, logging, SQL, etc. Check out our new tutorials to learn more about to use new Custom API support, and starting adding them to your app today. Mobile Services: Git Source Control Support Today’s Mobile Services update also enables source control integration with Git.  The new source control support provides a Git repository as part your Mobile Service, and it includes all of your existing Mobile Service scripts and permissions. You can clone that git repository on your local machine, make changes to any of your scripts, and then easily deploy the mobile service to production using Git. This enables a really great developer workflow that works on any developer machine (Windows, Mac and Linux). To use the new support, navigate to the dashboard for your mobile service and select the Set up source control link: If this is your first time enabling Git within Windows Azure, you will be prompted to enter the credentials you want to use to access the repository: Once you configure this, you can switch to the configure tab of your Mobile Service and you will see a Git URL you can use to use your repository: You can use this URL to clone the repository locally from your favorite command line: > git clone https://scottgutodo.scm.azure-mobile.net/ScottGuToDo.git Below is the directory structure of the repository: As you can see, the repository contains a service folder with several subfolders. Custom API scripts and associated permissions appear under the api folder as .js and .json files respectively (the .json files persist a JSON representation of the security settings for your endpoints). Similarly, table scripts and table permissions appear as .js and .json files, but since table scripts are separate per CRUD operation, they follow the naming convention of <tablename>.<operationname>.js. Finally, scheduled job scripts appear in the scheduler folder, and the shared folder is provided as a convenient location for you to store code shared by multiple scripts and a few miscellaneous things such as the APNS feedback script. 
Lets modify the table script todos.js file so that we have slightly better error handling when an exception occurs when we query our Table service: todos.js tableService.queryEntities(query, function(error, todoItems){     if (error) {         console.error("Error querying table: " + error);         response.send(500);     } else {         response.send(200, todoItems);     }        }); Save these changes, and now back in the command line prompt commit the changes and push them to the Mobile Services: > git add . > git commit –m "better error handling in todos.js" > git push Once deployment of the changes is complete, they will take effect immediately, and you will also see the changes be reflected in the portal: With the new Source Control feature, we’re making it really easy for you to edit your mobile service locally and push changes in an atomic fashion without sacrificing ease of use in the Windows Azure Portal. Mobile Services: NPM Module Support The new Mobile Services source control support also allows you to add any Node.js module you need in the scripts beyond the fixed set provided by Mobile Services. For example, you can easily switch to use Mongo instead of Windows Azure table in our example above. Set up Mongo DB by either purchasing a MongoLab subscription (which provides MongoDB as a Service) via the Windows Azure Store or set it up yourself on a Virtual Machine (either Windows or Linux). Then go the service folder of your local git repository and run the following command: > npm install mongoose This will add the Mongoose module to your Mobile Service scripts.  After that you can use and reference the Mongoose module in your custom API scripts to access your Mongo database: var mongoose = require('mongoose'); var schema = mongoose.Schema({ text: String, completed: Boolean });   exports.get = function (request, response) {     mongoose.connect('<your Mongo connection string> ');     TodoItemModel = mongoose.model('todoitem', schema);     TodoItemModel.find(function (err, items) {         if (err) {             console.log('error:' + err);             return response.send(500);         }         response.send(200, items);     }); }; Don’t forget to push your changes to your mobile service once you are done > git add . > git commit –m "Switched to use Mongo Labs" > git push Now our Mobile Service app is using Mongo DB! Note, with today’s update usage of custom Node.js modules is limited to Custom API scripts only. We will enable it in all scripts (including data and custom CRON tasks) shortly. New Mobile Services NuGet package, including .NET 4.5 support A few months ago we announced a new pre-release version of the Mobile Services client SDK based on portable class libraries (PCL). Today, we are excited to announce that this new library is now a stable .NET client SDK for mobile services and is no longer a pre-release package. Today’s update includes full support for Windows Store, Windows Phone 7.x, and .NET 4.5, which allows developers to use Mobile Services from ASP.NET or WPF applications. You can install and use this package today via NuGet. Mobile Services and Web Sites: Free 20MB Database for Mobile Services and Web Sites Starting today, every customer of Windows Azure gets one Free 20MB database to use for 12 months free (for both dev/test and production) with Web Sites and Mobile Services. 
When creating a Mobile Service or a Web Site, simply chose the new “Create a new Free 20MB database” option to take advantage of it: You can use this free SQL Database together with the 10 free Web Sites and 10 free Mobile Services you get with your Windows Azure subscription, or from any other Windows Azure VM or Cloud Service. Notification Hubs: Android Broadcast Push Notification Support Earlier this year, we introduced a new capability in Windows Azure for sending broadcast push notifications at high scale: Notification Hubs. In the initial preview of Notification Hubs you could use this support with both iOS and Windows devices.  Today we’re excited to announce new Notification Hubs support for sending push notifications to Android devices as well. Push notifications are a vital component of mobile applications.  They are critical not only in consumer apps, where they are used to increase app engagement and usage, but also in enterprise apps where up-to-date information increases employee responsiveness to business events.  You can use Notification Hubs to send push notifications to devices from any type of app (a Mobile Service, Web Site, Cloud Service or Virtual Machine). Notification Hubs provide you with the following capabilities: Cross-platform Push Notifications Support. Notification Hubs provide a common API to send push notifications to iOS, Android, or Windows Store at once.  Your app can send notifications in platform specific formats or in a platform-independent way.  Efficient Multicast. Notification Hubs are optimized to enable push notification broadcast to thousands or millions of devices with low latency.  Your server back-end can fire one message into a Notification Hub, and millions of push notifications can automatically be delivered to your users.  Devices and apps can specify a number of per-user tags when registering with a Notification Hub. These tags do not need to be pre-provisioned or disposed, and provide a very easy way to send filtered notifications to an infinite number of users/devices with a single API call.   Extreme Scale. Notification Hubs enable you to reach millions of devices without you having to re-architect or shard your application.  The pub/sub routing mechanism allows you to broadcast notifications in a super-efficient way.  This makes it incredibly easy to route and deliver notification messages to millions of users without having to build your own routing infrastructure. Usable from any Backend App. Notification Hubs can be easily integrated into any back-end server app, whether it is a Mobile Service, a Web Site, a Cloud Service or an IAAS VM. It is easy to configure Notification Hubs to send push notifications to Android. 
Create a new Notification Hub within the Windows Azure Management Portal (New->App Services->Service Bus->Notification Hub): Then register for Google Cloud Messaging using https://code.google.com/apis/console and obtain your API key, then simply paste that key on the Configure tab of your Notification Hub management page under the Google Cloud Messaging Settings: Then just add code to the OnCreate method of your Android app’s MainActivity class to register the device with Notification Hubs: gcm = GoogleCloudMessaging.getInstance(this); String connectionString = "<your listen access connection string>"; hub = new NotificationHub("<your notification hub name>", connectionString, this); String regid = gcm.register(SENDER_ID); hub.register(regid, "myTag"); Now you can broadcast notification from your .NET backend (or Node, Java, or PHP) to any Windows Store, Android, or iOS device registered for “myTag” tag via a single API call (you can literally broadcast messages to millions of clients you have registered with just one API call): var hubClient = NotificationHubClient.CreateClientFromConnectionString(                   “<your connection string with full access>”,                   "<your notification hub name>"); hubClient.SendGcmNativeNotification("{ 'data' : {'msg' : 'Hello from Windows Azure!' } }", "myTag”); Notification Hubs provide an extremely scalable, cross-platform, push notification infrastructure that enables you to efficiently route push notification messages to millions of mobile users and devices.  It will make enabling your push notification logic significantly simpler and more scalable, and allow you to build even better apps with it. Learn more about Notification Hubs here on MSDN . Summary The above features are now live and available to start using immediately (note: some of the services are still in preview).  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Trash Destination Adapter

    The Trash Destination and this article came from early experiences of using SSIS and community feedback at the time. When developing a package it is very useful to have a destination adapter that does nothing but consume rows with no setup requirement. You often want run a package part way through development, or just add a path so you can set a Data Viewer. There are stock tasks that can be used, but with the Trash Destination all columns are treated as selected automatically (usage type of read-only), so the pipeline knows they are required. It is also obvious that this is for development or diagnostic purposes, and is clearly not a part of the functional design of the package. It is also ideal for just playing around and exploring concepts in SSIS, and is often used in conjunction with the Data Generator Source. Using these two components it is easy to setup a test of an expression in the Derived Column Transformation for example. The Data Generator Source provides some dummy data, and the Trash Destination allows you to anchor the output path and set a Data Viewer to examine the results. It can also be used when performance tuning packages. It is a consistent and known quantity that has no external influences, so it is ideal as a destination when breaking the data flow into sections to isolate a bottleneck. The adapter is really simple to use and requires no setup. Simply drop it onto the pipeline designer and use it to terminate your data flow path. Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, for 2005/2008, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trash Destination transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations. Downloads The Trash Destination is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Trash Destination for SQL Server 2005 Trash Destination for SQL Server 2008 Trash Destination for SQL Server 2012 Version History SQL Server 2012 Version 3.0.0.34 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012) SQL Server 2008 Version 2.0.0.33 - SQL Server 2008 release. Includes support for upgrade of 2005 packages. RTM compatible, previously February 2008 CTP. (4 Mar 2008) Version 2.0.0.31 - SQL Server 2008 November 2007 CTP. (14 Feb 2008) SQL Server 2005 Version 1.0.2.18 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006) Version 1.0.1.1 - SQL Server 2005 IDW 15 June CTP. 
    Minor enhancements over v1.0.1.0. (11 Jun 2005) Version 1.0.1.0 - SQL Server 2005 IDW 14 April CTP. First Public Release. (30 May 2005) Troubleshooting Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component, along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed.  ... The data flow object "Konesans ..." is not installed correctly on this computer, it usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. The full error message is shown below for reference: TITLE: Microsoft Visual Studio ------------------------------ The component could not be added to the Data Flow task. Please verify that this component is properly installed. ------------------------------ ADDITIONAL INFORMATION: The data flow object "Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=1.0.1.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc" is not installed correctly on this computer. (Microsoft.DataTransformationServices.Design) For 2005/2008, once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component? This is not necessary for SQL Server 2012, as the new SSIS toolbox automatically detects components. If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones ( Blog | Twitter) – “What the Business Says Is Not What the  Business Wants.” Steve posed the question, “What issues have you had in interacting with the business to get your job done?” My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do technological-savvy barriers exist, but there are also linguistic barriers between the two worlds. So how do I cope? The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture: When we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right. When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh. I prefer to take a right-to-left approach. Preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we’re back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it’s a painful project for everyone – not because of me, but because the approach just doesn’t get to what the business wants in the most effective way.) By using a right to left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explaining the decision-making process that relates to these reports. Sometimes I have them explain to me their business processes, or better yet show me their business processes in action because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above. My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube. 
Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build. So right to left design is not truly moving from the right to the left. But it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that’s how I get past what the business says to what the business wants.

    Read the article

  • Multitenant Design for SQL Azure: White Paper Available

    - by Herve Roggero
    Cloud computing is about scaling out all your application tiers, from web application to the database layer. In fact, the whole promise of Azure is to pay for just what you need. You need more IIS servers? No problemo... just spin up another web server. You expect to double your storage needs for Azure Tables? No problemo; you are covered there too... just pay for your storage needs. But what about the database tier, SQL Azure? How do you add new databases easily, and transparently, so that your application simply uses more of SQL Azure if it needs to? Without changing a single line of code? And what if you need to scale back down? Welcome to the world of database scalability. There are many terms that describe database scalability, including data federation, multitenant designs, and even NoSQL, depending on the technical solution you are implementing.  Because SQL Azure is a transactional database system, NoSQL is not really an option. However, data federation and multitenant designs offer some very interesting scalability options that are worth considering. Data federation, a feature of SQL Azure that will be offered in the future, offers very interesting capabilities available natively on the SQL Azure platform. More to come in a few weeks... Multitenant designs, on the other hand, are design practices and technologies designed to help you reach flexible scalability options not available otherwise. The first incarnation of such a method was made available on CodePlex as an open source project (http://enzosqlshard.codeplex.com).  This project was an attempt to provide a sharding library for educational purposes.  All that sounds really cool... and really esoteric... almost a form of database "voodoo"... However, after being on multiple Azure projects I am starting to see a real need. Customers want to be able to free themselves from the database tier, so that if they have 10 new customers tomorrow, all they need to do is add 2 more SQL Azure instances. It's that simple. How you achieve this, and suggested application design guidelines, are available in a white paper I just published.  The white paper offers two primary sections. The first section describes the business and technical problem at hand, and how to classify it according to specific design patterns. For example, I discuss compressed shards through schema separation. The second section offers a method for addressing the needs of a multitenant design using a new library, the big brother of the CodePlex project mentioned previously (that I created earlier this year), complete with management interface and such. A Beta of this platform will be made available within weeks, as soon as the documentation is ready.   I would like to ask you to drop me a quick email at [email protected] if you are going to download the white paper. It's not required, but it would help me get in touch with you for feedback.  You can download this white paper here:   http://www.bluesyntax.net/files/EnzoFramework.pdf . Thank you, and I am looking for feedback, thoughts and implementation opportunities.

    Read the article

  • Calling services from the Orchestrating layer in SOA?

    - by Martin Lee
    The Service Oriented Architecture Principles site says that Service Composition is an important thing in SOA. But Service Loose Coupling is important as well. Does that mean that the "Orchestrating layer" should be the only one that is allowed to make calls to services in the system? As I understand SOA, the "Orchestrating layer" 'glues' all the services together into one software application. I tried to depict that in Fig.A and Fig.B. The difference between the two is that in Fig.A the application is composed of services and all the logic is done in the "Orchestrating layer" (all calls to services are done from the "Orchestrating layer" only). In Fig.B the application is also composed of services, but one service calls another service. Does the architecture in Fig.B violate the "Service Loose Coupling" principle of SOA? Can a service call another service in SOA? And more generally, can the architecture in Fig.A be considered superior to the one in Fig.B in terms of service loose coupling, abstraction, reusability, autonomy, etc.? My guess is that the A architecture is much more universal, but it can add some unnecessary data transfers between the "Orchestrating layer" and all the called services.

    Read the article

  • Common methods/implementation across multiple WCF Services

    - by Rob
    I'm looking at implementing some WCF Services as part of an API for 3rd parties to access data within a product I work on. There are currently a set of services exposed as "classic" .net Web Services and I need to emulate the behaviour of these, at least in part. The existing services all have an AcquireAuthenticationToken method that takes a set of parameters (username, password, etc) and return a session token (represented as a GUID), which is then passed in on calls to any other method (There's also a ReleaseAuthenticationToken method, no guesses needed as to what that does!). What I want to do is implement multiple WCF services, such as: ProductData UserData and have both of these services share a common implementation of Acquire/Release. From the base project that is created by VS2k8, it would appear I will start with, per service: public class ServiceName : IServiceName { } public interface IServiceName { } Therefore my questions would be: Will WCF tolerate me adding a base class to this, public class ServiceName : ServiceBase, IServiceName, or does the fact that there's an interface involved mean that won't work? If "No it won't work" to Question 1, could I change IServiceName so it extends another interface, IServiceBase, thus forcing the presence of Acquire/Release methods, but then having to supply the implementation in each service. Are 1 and 2 both really bad ideas and there's actually a much better solution that, knowing next to nothing about WCF, I just haven't thought of?

    Read the article

  • I Hereby Resolve… (T-SQL Tuesday #14)

    - by smisner
    It’s time for another T-SQL Tuesday, hosted this month by Jen McCown (blog|twitter), on the topic of resolutions. Specifically, “what techie resolutions have you been pondering, and why?” I like that word – pondering – because I ponder a lot. And while there are many things that I do already because of my job, there are many more things that I ponder about doing…if only I had the time. Then I ponder about making time, but then it’s back to work! In 2010, I was moderately more successful in making time for things that I ponder about than I had been in years past, and I hope to continue that trend in 2011. If Jen hadn’t settled on this topic, I could keep my ponderings to myself and no one would ever know the outcome, but she’s egged me on (and everyone else that chooses to participate)! So here goes… For me, having resolve to do something means that I wouldn’t be doing that something as part of my ordinary routine. It takes extra effort to make time for it. It’s not something that I do once and check off a list, but something that I need to commit to over a period of time. So with that in mind, I hereby resolve… To Learn Something New… One of the things I love about my job is that I get to do a lot of things outside of my ordinary routine. It’s a veritable smorgasbord of opportunity! So what more could I possibly add to that list of things to do? Well, the more I learn, the more I realize I have so much more to learn. It would be much easier to remain in ignorant bliss, but I was born to learn. Constantly. (And apparently to teach, too– my father will tell you that as a small child, I had the neighborhood kids gathered together to play school – in the summer. I’m sure they loved that – but they did it!) These are some of things that I want to dedicate some time to learning this year: Spatial data. I have a good understanding of how maps in Reporting Services works, and I can cobble together a simple T-SQL spatial query, but I know I’m only scratching the surface here. Rob Farley (blog|twitter) posted interesting examples of combining maps and PivotViewer, and I think there’s so many more creative possibilities. I’ve always felt that pictures (including charts and maps) really help people get their minds wrapped around data better, and because a lot of data has a geographic aspect to it, I believe developing some expertise here will be beneficial to my work. PivotViewer. Not only is PivotViewer combined with maps a useful way to visualize data, but it’s an interesting way to work with data. If you haven’t seen it yet, check out this interactive demonstration using Netflx OData feed. According to Rob Farley, learning how to work with PivotViewer isn’t trivial. Just the type of challenge I like! Security. You’ve heard of the accidental DBA? Well, I am the accidental security person – is there a word for that role? My eyes used to glaze over when having to study about security, or  when reading anything about it. Then I had a problem long ago that no one could figure out – not even the vendor’s tech support – until I rolled up my sleeves and painstakingly worked through the myriad of potential problems to resolve a very thorny security issue. I learned a lot in the process, and have been able to share what I’ve learned with a lot of people. But I’m not convinced their eyes weren’t glazing over, too. I don’t take it personally – it’s just a very dry topic! 
So in addition to deepening my understanding about security, I want to find a way to make the subject as it relates to SQL Server and business intelligence more accessible and less boring. Well, there’s actually a lot more that I could put on this list, and a lot more things I have plans to do this coming year, but I run the risk of overcommitting myself. And then I wouldn’t have time… To Have Fun! My name is Stacia and I’m a workaholic. When I love what I do, it’s difficult to separate out the work time from the fun time. But there are some things that I’ve been meaning to do that aren’t related to business intelligence for which I really need to develop some resolve. And they are techie resolutions, too, in a roundabout sort of way! Photography. When my husband and I went on an extended camping trip in 2009 to Yellowstone and the Grand Tetons, I had a nice little digital camera that took decent pictures. But then I saw the gorgeous cameras that other tourists were toting around and decided I needed one too. So I bought a Nikon D90 and have started to learn to use it, but I’m definitely still in the beginning stages. I traveled so much in 2010 and worked on two book projects that I didn’t have a lot of free time to devote to it. I was very inspired by Kimberly Tripp’s (blog|twitter) and Paul Randal’s (blog|twitter) photo-adventure in Alaska, though, and plan to spend some dedicated time with my camera this year. (And hopefully before I move to Alaska – nothing set in stone yet, but we hope to move to a remote location – with Internet access – later this year!) Astronomy. I have this cool telescope, but it suffers the same fate as my camera. I have been gone too much and busy with other things that I haven’t had time to work with it. I’ll figure out how it works, and then so much time passes by that I forget how to use it. I have this crazy idea that I can actually put the camera and the telescope together for astrophotography, but I think I need to start simple by learning how to use each component individually. As long as I’m living in Las Vegas, I know I’ll have clear skies for nighttime viewing, but when we move to Alaska, we’ll be living in a rain forest. I have no idea what my opportunities will be like there – except I know that when the sky is clear, it will be far more amazing than anything I can see in Vegas – even out in the desert - because I’ll be so far away from city light pollution. I’ve been contemplating putting together a blog on these topics as I learn. As many of my fellow bloggers in the SQL Server community know, sometimes the best way to learn something is to sit down and write about it. I’m just stumped by coming up with a clever name for the new blog, which I was thinking about inaugurating with my move to Alaska. Except that I don’t know when that will be exactly, so we’ll just have to wait and see which comes first!

    Read the article
