Search Results

Search found 28297 results on 1132 pages for 'sql azure'.


  • Converting SQL Query to LINQ 2 SQL expression

    - by Shyju
    How can I rewrite the SQL query below as its equivalent LINQ to SQL expression (in both C# and VB.NET)?

        SELECT t1.itemnmbr, t1.locncode, t1.bin, t2.Total
        FROM IV00200 t1 (NOLOCK)
        INNER JOIN IV00112 t2 (NOLOCK)
            ON t1.itemnmbr = t2.itemnmbr
            AND t1.bin = t2.bin
            AND t1.bin = 'MU7I336A80'
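    A C# sketch of a possible LINQ to SQL equivalent is shown below (the VB.NET version is analogous). It assumes a generated data context whose IV00200/IV00112 entity sets and property names mirror the table and column names; those names are illustrative, not confirmed by the question.

        // Hedged sketch: "db" is an assumed LINQ to SQL DataContext generated
        // from the two tables; entity set and property names are illustrative.
        var query = from t1 in db.IV00200s
                    join t2 in db.IV00112s
                        on new { t1.Itemnmbr, t1.Bin }
                        equals new { t2.Itemnmbr, t2.Bin }
                    where t1.Bin == "MU7I336A80"
                    select new { t1.Itemnmbr, t1.Locncode, t1.Bin, t2.Total };

    Note that LINQ to SQL has no direct equivalent of the NOLOCK hint; the usual workaround is to run the query inside a TransactionScope with IsolationLevel.ReadUncommitted.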

    Read the article

  • Select into from one sql server into another?

    - by blasto
    I want to copy data from one table (T1, in database DB1) on one server (Data.Old.S1) into another table (T2, in database DB2) on another server (Data.Latest.S2). How can I do this? Please note the way the servers are named; the query should take care of that too, so that SQL Server is not confused by the fully qualified table names. For example, a name like Data.Old.S1.DB1.dbo.T1 could confuse SQL Server.
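    One common approach, sketched below, is to register the remote server as a linked server and bracket the dotted server name so the four-part name parses correctly. The server, database, and table names follow the question; the column list is illustrative.

        -- Register the source server as a linked server (run on Data.Latest.S2)
        EXEC sp_addlinkedserver @server = N'Data.Old.S1', @srvproduct = N'SQL Server';

        -- Brackets keep the dotted server name from being parsed as extra name parts
        INSERT INTO DB2.dbo.T2 (Col1, Col2)
        SELECT Col1, Col2
        FROM [Data.Old.S1].DB1.dbo.T1;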

    Read the article

  • Error mounting CloudDrive snapshot in Azure

    - by Dave
    I've been running a cloud drive snapshot in dev for a while now with no problems. I'm now trying to get this working in Azure, and I can't for the life of me get it to work. This is my latest error:

        Microsoft.WindowsAzure.Storage.CloudDriveException: Unknown Error HRESULT=D000000D
        ---> Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException:
             Exception of type 'Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException' was thrown.
           at ThrowIfFailed(UInt32 hr)
           at Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDrive.Mount(String url, SignatureCallBack sign, String mount, Int32 cacheSize, UInt32 flags)
           at Microsoft.WindowsAzure.StorageClient.CloudDrive.Mount(Int32 cacheSize, DriveMountOptions options)

    Any idea what is causing this? I'm running both the WorkerRole and Storage in Azure, so it's nothing to do with the dev simulation environment disconnect. This is my code to mount the snapshot:

        CloudDrive.InitializeCache(localPath.TrimEnd('\\'), size);

        var container = _blobStorage.GetContainerReference(containerName);
        var blob = container.GetPageBlobReference(driveName);
        CloudDrive cloudDrive = _cloudStorageAccount.CreateCloudDrive(blob.Uri.AbsoluteUri);

        string snapshotUri;
        try
        {
            snapshotUri = cloudDrive.Snapshot().AbsoluteUri;
            Log.Info("CloudDrive Snapshot = '{0}'", snapshotUri);
        }
        catch (Exception ex)
        {
            throw new InvalidCloudDriveException(string.Format(
                "An exception has been thrown trying to create the CloudDrive '{0}'. This may be because it doesn't exist.",
                cloudDrive.Uri.AbsoluteUri), ex);
        }

        cloudDrive = _cloudStorageAccount.CreateCloudDrive(snapshotUri);
        Log.Info("CloudDrive created: {0}", snapshotUri, cloudDrive);

        string driveLetter = cloudDrive.Mount(size, DriveMountOptions.None);

    The .Mount() method at the end is what's now failing. Please help, as this has me royally stumped! Thanks in advance. Dave

    Read the article

  • Azure Mobile Services with persistent authentication

    - by akshay2000
    I am trying to implement authentication with Windows Azure Mobile Services in my Windows Phone app. I have followed the official tutorials and the authentication works fine. The issue is that whenever the app is closed and started again, the user has to enter a username and password. Since the services only use authentication tokens, the 'Remember me' option on the login page is not likely to work. The official documentation for Windows Azure shows the possibility of single sign-on with a Microsoft account using the Live SDK. The Live SDK provides the authentication token as a string; however, even this token expires in about 24 hours, and it is restricted to Microsoft accounts only. What are my options if I want to cache the user's identity and enable automatic login? I have already gone through the article here; the user will still have to log in again once the token expires. I have seen apps which require the user to sign in only once!
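    One pattern worth sketching here (my suggestion, not from the question, and hedged: storage APIs vary by Windows Phone version, and the token still expires server-side) is to cache the MobileServiceUser's user ID and token after a successful login and replay them on startup:

        // Minimal sketch: cache the Mobile Services auth token after login and
        // restore it on startup. IsolatedStorageSettings is used for brevity;
        // a production app should protect the token (e.g. with ProtectedData).
        var settings = IsolatedStorageSettings.ApplicationSettings;

        // After a successful MobileServiceClient.LoginAsync(...):
        settings["userId"] = client.CurrentUser.UserId;
        settings["token"] = client.CurrentUser.MobileServiceAuthenticationToken;
        settings.Save();

        // On startup, before calling the service again:
        if (settings.Contains("userId") && settings.Contains("token"))
        {
            client.CurrentUser = new MobileServiceUser((string)settings["userId"])
            {
                MobileServiceAuthenticationToken = (string)settings["token"]
            };
        }

    When the cached token eventually expires, service calls will start failing with 401s, so the app still needs a fallback path that prompts for a fresh login.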

    Read the article

  • SQL Server to sql server linked server setup

    - by ScottStonehouse
    Please explain what is required to set up a SQL Server linked server. Server A is SQL Server 2005, Windows logins only; Server B is the same (SQL Server 2005, Windows logins only). Server A runs Windows XP; Server B runs Windows Server 2003. Both SQL Server services are running under the same domain account, and I am logged into my workstation with a domain account that has administrative rights on both SQL Servers. Note that these are both SQL Server 2005 SP2 - I've had old hotfixes pointed out to me, but those are already applied. The issue I am having is this error: "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. (Microsoft SQL Server, Error: 18456)"
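    For reference, a basic linked-server setup looks something like the sketch below (server names are placeholders). The ANONYMOUS LOGON error typically means the Windows credentials are not being forwarded to the second server - the classic Kerberos double-hop problem - so the self-mapping line is the part that depends on delegation/SPNs being configured correctly.

        -- Run on Server A to link Server B (names are illustrative)
        EXEC sp_addlinkedserver @server = N'SERVERB', @srvproduct = N'SQL Server';

        -- Map the caller's Windows credentials through to the remote server
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'SERVERB',
            @useself = N'True',    -- pass through the current Windows login
            @locallogin = NULL;

        -- Quick test
        SELECT TOP 1 * FROM SERVERB.master.sys.databases;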

    Read the article

  • Using SQL Server specific code in Access linked to SQL Server database

    - by Brennan Vincent
    Hi. I have an Access file that is linked (through an ODBC connection) to a SQL Server 2008 database, and I am trying to write some reports against this database. However, Access chokes when the report's SELECT query uses SQL syntax that is specific to SQL Server and doesn't exist in Access. Shouldn't this work, since it's the SQL Server engine running the queries and just sending the data back to Access to display? Is there any way to get this to work? I need this to work on any combination of Access 2007 and 2010, and SQL Server 2005 and 2008. Edit: note that I cannot create a SQL Server stored procedure or function, or otherwise modify the original (SQL Server) schema in any way.

    Read the article

  • Windows Azure Evolution &ndash; Welcome to VS2012

    - by Shaun
    When Microsoft released the first preview versions of Windows 8 and Visual Studio 2012, many people in the community asked whether the Windows Azure tools were available for them. The answer was "NO": the Windows Azure tools at the time supported only Visual Studio 2010, though Microsoft promised they would work once Visual Studio 2012 was finally released. Now, along with the newly published Windows Azure platform, we have the latest Windows Azure SDK 1.7, which is compatible with Visual Studio 2012 RC.

    You can retrieve the latest version of the Windows Azure SDK through the Web Platform Installer, which I think is the easiest and simplest way to download and install it, since besides the SDK itself it also pulls in some other required components. To download the latest Windows Azure SDK from the Web Platform Installer, just go to the Windows Azure website, click Develop, then .NET, and click the blue "install" button. Then you need to select which version of Visual Studio you want to use, Visual Studio 2010 or Visual Studio 2012 RC. After selecting the version you will download an EXE file. This file will lead you through installing Web Platform Installer 4.0 (if you haven't installed it) and the latest Windows Azure SDK; you can see the version name is June 2012, 1.7. Finally the WebPI will detect the dependent components you need to download and begin to install them. But if you want to challenge yourself, you can download the components and install them manually; the standalone installers are listed on this page with instructions on how to install them and their prerequisites.

    Once you have finished the installation, you can open Visual Studio 2012 RC which, as usual, needs to be run as administrator. If you click the New Project link from the start page and navigate to the Cloud category, you will find that there is no project template available. Is there anything wrong? No: if you change the target framework from the default .NET 4.5 to .NET 4, you will see the Azure project template. This is because, currently, Windows Azure instances do not support .NET 4.5. After clicking OK you will see the role creation window, which is similar to what you have seen before, but there are some new role templates in this SDK. First, there is an ASP.NET MVC 4 web role available, which means you can create ASP.NET MVC 4 applications for internet, intranet, mobile and WebAPI in the cloud. Then there are two new worker role templates, "Cache Worker Role" and "Worker Role with Service Bus Queue". "Worker Role with Service Bus Queue" is a worker role that has the references needed to access a Windows Azure Service Bus queue already added; it also has some basic sample code in the worker role class which reads messages from the queue when started. The "Cache Worker Role" is a worker role with the in-memory distributed cache feature enabled by default. This feature is different from Windows Azure Caching: it allows role instances to use their own memory as an in-memory distributed cache cluster. Using this feature you can have one or more worker roles act as dedicated cache clusters; alternatively, you can contribute part of your web roles' and worker roles' memory to the cache cluster as well.
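    As a side note, the sample code generated by the "Worker Role with Service Bus Queue" template follows a receive loop roughly along these lines. This is a hedged sketch from memory of the SDK 1.7 template, not the exact generated code; the queue name and configuration-setting key are illustrative.

        // Rough sketch of the template's queue-reading loop.
        // Requires Microsoft.ServiceBus.Messaging and the
        // Microsoft.WindowsAzure CloudConfigurationManager.
        QueueClient client = QueueClient.CreateFromConnectionString(
            CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString"),
            "ProcessingQueue");

        while (true)
        {
            // Blocks until a message arrives or the receive timeout elapses
            BrokeredMessage message = client.Receive();
            if (message != null)
            {
                try
                {
                    Trace.WriteLine("Processing " + message.SequenceNumber);
                    message.Complete();   // remove the message from the queue on success
                }
                catch (Exception)
                {
                    message.Abandon();    // make the message visible again for a retry
                }
            }
        }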
    Let's just create an ASP.NET MVC 4 Web Role and press F5 to run it under the local emulator. If you have been working with Azure for a while, you will know that on a fresh SDK installation the local storage emulator needs to be set up before running locally. In this version, though, when the Azure project starts, Visual Studio checks whether the storage emulator has been initialized and, if not, runs the initializer automatically. As you can see, in this version the storage emulator relies on the SQL Server 2012 LocalDB feature: it creates the emulator database and tables in the default local database. You can set the storage emulator to use a standard SQL Server default instance with the command "dsinit /instance:.". The "dsinit" tool is now located at %PROGRAM FILES%\Microsoft SDKs\Windows Azure\Emulator\devstore.

    After Visual Studio compiles and deploys the package, our website is shown in the browser. This is the MVC 4 Web Role home page on my Windows 8 machine in IE10. Another thing you might notice is that in this version the compute emulator hosts web roles in IIS Express instead of the full IIS. You can add breakpoints in the code and debug, and you can use the local storage emulator to test your code's access to the storage service; all of this works the same as on SDK 1.6. You can switch back to using IIS to run your web role in the local emulator: open the Windows Azure project property window and, on the Web page, select "Use IIS Web Server". For more information about this, please have a look at Nuno's blog post.

    In the role property page in Visual Studio there are no major changes; you can configure role settings such as endpoints, certificates and local storage, etc. One addition is the Caching tab, where you can enable the caching feature and specify how much memory you want to use as the cache cluster. I will introduce more details about it in future posts. The publish and package features are also unchanged: you can publish your project to Azure directly through Visual Studio 2012, or create the package and upload it manually. Below is the SDK version of my deployment, which is 1.7.30602.1703, in the developer portal.

    Summary

    In this post I introduced the new Windows Azure SDK 1.7, especially how it works with the latest Visual Studio 2012 RC. There are no significant changes in the Visual Studio tooling in this version, but there are some smaller enhancements, such as ASP.NET MVC 4 support, the Cache Worker Role, and the use of SQL Server 2012 LocalDB and IIS Express.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • FTP Publishing with the new Windows Azure Release

    - by Harish Ranganathan
    There is a good chance you might have stumbled upon the new Windows Azure release that we made on June 6th. Scott Guthrie's post summarizes the overall new features well. One of my favorite features is Windows Azure Websites and the ability to publish files to Azure using your FTP client. Windows Azure Websites offers low-cost web hosting (free for up to 10 websites) where you can quickly deploy any website that can run on IIS 7.0. The earlier releases of the Azure SDKs and the Azure platform supported .NET 3.5 and above for running your applications. This was a constraint for many, since there are (or were) a lot of ASP.NET 2.0 applications built over time, and many of you were skeptical about migrating them to .NET 4 simply to put them on Azure. Windows Azure Websites offers the flexibility of running any IIS 7.0-supported .NET version, which means you can run .NET 1.1, 2.0, 3.5 and .NET 4. Not just that! You can also run classic ASP applications. Windows Azure Websites doesn't require you to go through the complexity of adding the cloud project template and then publishing the configuration files. Let's take a step-by-step look at Websites and publishing using FTP.

    I downloaded the Club Website Starter Kit from http://www.asp.net/downloads/starter-kits/club It also requires a database, so I downloaded the SQL scripts and created a SQL Server database called Club. The kit installs a Web Site project template. Note that I am running Windows 8 Release Preview and Visual Studio 2012 RC. After installing the template, select File - New - Website, and don't forget to choose the Framework version as .NET 2.0. You can see the "Club Website Starter Kit"; once you select it, the website gets created. You will encounter a warning indicating that the Club Website Starter Kit uses SQL Express and the recommended database is LocalDB Express. Click OK to continue. Once the website is created, open Web.config and locate the "ClubSiteDB" connection string. By default it points to a SQL Express database; configure it instead to use your local SQL Server. Also, open Global.asax and comment out the following line:

        if (!Roles.RoleExists("Administrators")) Roles.CreateRole("Administrators");

    There seems to be an issue in the code in that it doesn't create the role. After that, hit CTRL+F5 and you should see the website running, as below.

    So now we have the Club Starter Kit site up and running locally.

    Moving to Azure

    Visit http://manage.windowsazure.com/ and sign up for a trial account. This allows you to host up to 10 websites for free, along with a host of other benefits, and the free websites can be extended to a year without any charge. Once you have signed up, sign in to the portal using the Live ID used for sign-up. After signing in, you will be presented with the "All Items" listing page, which lists websites, cloud services, databases, etc. If this is the first time, you won't find anything. Click on the "Websites" link in the left menu, then click "New" at the bottom and a dialog should appear. In it, select Website, click "Quick Create", specify "MyFirstDemo" in the URL textbox, and click the "Create Web Site" link below. It should take a few seconds to create the website. Once the website is created, click on the listing and it will open the dashboard. Since we haven't done anything yet, there shouldn't be any statistics. Click on the "Download publish profile" link at the bottom right. This file has the FTP publishing settings.
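    For reference, the downloaded publish profile is a small XML document along these lines. This is a hedged sketch: the attribute values here are placeholders, and the real file contains your site's generated URL and credentials.

        <publishData>
          <!-- Illustrative only; values are generated per site -->
          <publishProfile profileName="MyFirstDemo - FTP"
                          publishMethod="FTP"
                          publishUrl="ftp://waws-xxxx-xxx-xxxx.ftp.azurewebsites.windows.net/site/wwwroot"
                          userName="MyFirstDemo\$MyFirstDemo"
                          userPWD="(generated password)"
                          destinationAppUrl="http://MyFirstDemo.azurewebsites.net" />
        </publishData>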
    Also, if you scroll down, you can see the FTP URL for this site; it should typically start with ftp://waws-xxxx-xxx-xxxx. In the downloaded publish profile file you can also find the FTP URL. Pick the following from this file: publishUrl (the second one, the one that appears after publishMethod="FTP"), and the userName and userPWD that follow. Note that we now have everything required to publish the files. But since the Club Starter Kit uses a database, we need to have the database running on SQL Azure. Go back to the main menu and click "New" at the bottom, but this time select "SQL Database" and provide "Club" as the database name for "Quick Create". If this is the first time, a server will be created; otherwise it will pick up the existing server name. Once the database is created, you can use the SQL Azure Migration Wizard (http://sqlazuremw.codeplex.com/) and provide the credentials to connect to the local database and then to the SQL Azure database, to migrate the "Club" database. The migration wizard UI hasn't changed much and is the same as explained in one of my earlier posts: http://geekswithblogs.net/ranganh/archive/2009/09/29/taking-your-northwind-database-to-sql-azure-and-binding-it.aspx Once the database is migrated, come back to the main screen and click on the database in the Azure management portal. It opens the database's dashboard. Click on "Show connection strings" and a list of connection string formats will pop up. Choose the ADO.NET connection string and, after filling in the password that you provided when creating the database server in the Azure portal, paste it into the config file of the Club Starter Kit website. Just to reiterate, the connection string key is ClubSiteDB. Run the website once to ensure that the application, though running locally, can connect to the SQL database running on Azure. Once you are able to run the website successfully, we are all set to do the FTP publishing.

    Download your favorite FTP tool; I use http://filezilla-project.org/ In the Host textbox, paste the FTP URL that you picked up from the publish profile file, and also paste the username and password. Click "Quickconnect". If everything is fine, you should be able to connect to the remote server; once successfully connected, you can see the wwwroot folder of the website running in Azure. Make sure that under "Local site" on the left you choose the path to your website's folder. Expand the website folder on the left so that it lists all the files and folders inside, select all of them, and click "Upload" (or simply drag and drop all the files to the root folder listed above). Once the publishing is done, you should be able to hit the site URL, which you can find on the dashboard page of the website. In our case it would be http://MyFirstDemo.azurewebsites.net That's it: we have now done FTP publishing to Azure, and what's more, we are running a .NET 2.0 website on Azure. Cheers !!!

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay is doing the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating with ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers.

    Part 1: Exposing the on-premise service

    In theory, this is ultra-straightforward. In practice, on a dev laptop it is - but in a corporate network with firewalls and proxies it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal, which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1.

    Configuring Azure Service Bus

    Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you're in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service.

    Authenticating and authorizing to ACS

    When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions, so a dedicated service account is better (Neil Mackenzie has a good post, On Not Using owner with the Azure AppFabric Service Bus, with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description. Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly. The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, the output will have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group.
    Edit the rule group and click Add to add this new rule. The values to use are:

        Issuer: Access Control Service
        Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
        Input claim value: serviceProvider
        Output claim type: net.windows.servicebus.action
        Output claim value: Listen

    When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.:

        <azure namespace="sixeyed-ipasbr">
          <!-- ACS credentials for the listening service (Part1): -->
          <service identityName="serviceProvider"
                   symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
        </azure>

    Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:

        <behavior name="SharedSecret">
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="serviceProvider"
                            issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>

    and your service namespace in the Azure endpoint:

        <!-- Azure Service Bus endpoints -->
        <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
                  binding="netTcpRelayBinding"
                  contract="Sixeyed.Ipasbr.Services.IFormatService"
                  behaviorConfiguration="SharedSecret">
        </endpoint>

    The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure.

    Testing the service locally

    As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:

        <!-- local REST endpoint for internal use -->
        <endpoint address="rest"
                  binding="webHttpBinding"
                  behaviorConfiguration="RESTBehavior"
                  contract="Sixeyed.Ipasbr.Services.IFormatService" />

    Build the service, then navigate to http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 - and you should see the reversed string response. If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? On to the next post for consuming the service with the netTcpRelayBinding.

    Setting up network access to Azure

    But if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, the relay will run through them. If not, the binding steps down to standard HTTP and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay.
    If your network security guys are doing their job, the first option will be blocked by the firewall, and the second option will be blocked by the proxy, so you'll get this error:

        System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443)

    - and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark:

        System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol

    - this means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can't make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints).

        System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline.

    - by this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft's ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to:

        www.public-trust.com
        mscrl.microsoft.com

    We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy:

        netsh winhttp set proxy proxy-server="http://proxy.x.y.z"

    - and you might need to run this to clear out your credential cache:

        certutil -urlcache * delete

    If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that, you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.

    Read the article

  • Need help limiting a join in Transact-sql

    - by MsLis
    I'm somewhat new to SQL and need help with query syntax. My issue involves 2 tables within a larger multi-table join under Transact-SQL (MS SQL Server 2000 Query Analyzer). I have ACCOUNTS and LOGINS, which are joined on 2 fields: Site & Subset. Both tables may have multiple rows for each Site/Subset combination.

        ACCOUNTS:                              LOGINS:
        SITE   SUBSET   FIELD FIELD FIELD      SITE   SUBSET   USERID    PASSWD
        alpha  bravo    blah  blah  blah       alpha  bravo    foo       bar
        alpha  charlie  blah  blah  blah       alpha  bravo    bar       foo
        alpha  charlie  bleh  bleh  blue       alpha  charlie  id        ego
        delta  bravo    blah  blah  blah       delta  bravo    john      welcome
        delta  foxtrot  blah  blah  blah       delta  bravo    jane      welcome
                                               delta  bravo    ken       welcome
                                               delta  bravo    barbara   welcome

    I want to select all rows in ACCOUNTS which have LOGINS entries, but only 1 login per account.

        DESIRED RESULT:
        SITE   SUBSET   FIELD FIELD FIELD   USERID   PASSWD
        alpha  bravo    blah  blah  blah    foo      bar
        alpha  charlie  blah  blah  blah    id       ego
        alpha  charlie  bleh  bleh  blue    id       ego
        delta  bravo    blah  blah  blah    jane     welcome

    I don't really care which row from the LOGINS table I get, but the UserID and Password have to correspond [don't return invalid combinations like foo/foo or bar/bar]. MS Access has a handy FIRST function which can do this, but I haven't found an equivalent in T-SQL. Also, if it makes a difference, other tables are joined to ACCOUNTS, but this is the only use of LOGINS in the structure. Thank you very much for any assistance.
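    One common SQL Server 2000-compatible approach is sketched below (untested against the real schema; the FIELD1..FIELD3 column names are placeholders). It picks a deterministic login per Site/Subset - the minimum USERID - and, because the outer join row carries both columns, USERID and PASSWD stay correctly paired.

        SELECT a.SITE, a.SUBSET, a.FIELD1, a.FIELD2, a.FIELD3,
               l.USERID, l.PASSWD
        FROM ACCOUNTS a
        INNER JOIN LOGINS l
            ON  l.SITE   = a.SITE
            AND l.SUBSET = a.SUBSET
        WHERE l.USERID = (SELECT MIN(l2.USERID)
                          FROM LOGINS l2
                          WHERE l2.SITE = a.SITE
                            AND l2.SUBSET = a.SUBSET)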

    Read the article

  • Schema objects not visible in SQL Server Management Studio 2008

    - by Germ
    I'm experiencing a weird problem with a SQL login. When I connect to the server in Microsoft SQL Server Management Studio (2008) using this account, I cannot see any of the tables, stored procedures, etc. that this account should have access to in a particular database. When I connect to the same server within Visual Studio (2008) with the same account, everything is there. When I connect with the same account on a virtual machine, everything is there. I've also had a co-worker connect to the server using the same login, and he's able to view everything as well. I use Microsoft SQL Server Management Studio all day connecting to different servers and databases, and I've never experienced this problem. Does anyone have any suggestions on how I can diagnose this? I've checked to make sure I don't have any table filters, etc. There are several databases on this server, and I'm able to see the correct tables that this account has access to in the other databases just fine. Running this query lists the tables I'm expecting to see:

        SELECT * FROM INFORMATION_SCHEMA.TABLES
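    As a diagnostic aid (my suggestion, not part of the question): what Object Explorer displays depends on the login's effective metadata permissions, so comparing what the engine reports against what SSMS shows can help narrow things down. The table name below is a placeholder.

        -- Run while connected as the problem login, in the problem database
        SELECT * FROM fn_my_permissions(NULL, 'DATABASE');           -- database-level permissions
        SELECT * FROM fn_my_permissions('dbo.SomeTable', 'OBJECT');  -- per-object check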

    Read the article

  • Sql Server Compact 2005 on Visual Studio 2008

    - by Tim
    I'm working on a Windows Forms application that interacts with a SQL Compact database file created by SQL Server 2005. This application was originally developed in Visual Studio 2005 but was recently converted to a Visual Studio 2008 solution. With regard to SQL Compact, we made sure the references were all still set to the assemblies that handle the 2005 version of SQL Compact rather than SQL Compact 3.5. Having done this, the application still runs just as it should: it will still interact with the Compact database, perform synchronization operations, etc. However, I just discovered today that Visual Studio tools such as the DataSet Designer do not play well with a SQL Compact database file of an older version than 3.5. If I go to the New Connection... wizard, the only SQL Compact data source / data provider entries are for SQL Compact 3.5. I assume that Visual Studio 2008 just doesn't include the data provider for the older version of SQL Compact by default. Is there a way to add the old version of SQL Compact to the list of "Data Sources" for the connection wizard? To see exactly what I'm referring to, click on the Tools menu of Visual Studio 2008 and click Connect to Database... In the window that comes up, click Change... next to the Data source setting. From this dialog there is no way I can select the earlier version of SQL Compact - only 3.5 is available. Maybe I need to add an assembly reference somewhere? Or copy some file(s) from my Visual Studio 2005 directory over to 2008? I would think there would have to be a way for Visual Studio 2008 to interact with a SQL Compact database from SQL Server 2005. To provide one more bit of detail, I discovered this problem when I went to my DataSet, right-clicked, and tried to add a TableAdapter. The first screen that comes up says, "Choose Your Data Connection". If I leave it set to the SQL Compact connection that we've always used, I now get the following error when clicking the Next button:

        Failed to open a connection to the database. "The selected database was created with an earlier version of SQL Server Compact and needs to be upgraded to SQL Server Compact 3.5 before the connection can be opened or tested. Upgrade the database by creating a new data connection and completing the Add Connection dialog box." Check the connection and try again.

    The only problem here is that we still use SQL Server 2005, and if my understanding is correct, it does not produce subscription files that are compatible with SQL Compact 3.5. If I am wrong in this assumption, please correct me. Any help you can provide is greatly appreciated. Thank you.

    Read the article

  • SQL Agent Logon - What is going on?

    - by James Wiseman
    I have a DTSX package that is called from a SQL Agent job. The DTSX package references a file at a fixed location (e.g. e:\mssql\myfile.txt). On most machines this location exists, but on some I have to map it manually (which is not a problem - I know a better solution would be to use package configurations to pull the file location dynamically, but this is not an option here, and anyway I'd like to understand what is going on). I have set up the Agent service to run as a specific user (e.g. myuser). When I log on as this user, map the directory, and then run the DTSX package directly, all goes well. When I run the package through a SQL Agent job, the file cannot be found. If I add a command-line job step to the Agent job to map the drive:

        net use e: \\svr\location

    then all works fine as well. So what is going on in the background? How come the SQL Agent user requires the drive mapping even when I am logged in as this user?

    Read the article

  • Windows Azure Platform, latest version?

    - by Vimvq1987
    I searched the internet but found nothing. The whitepapers for the Windows Azure Platform say things like: "In its first release, the maximum size of a single database in SQL Azure Database is 10 gigabytes" and "A few things are omitted in the technology's first release, however, such as the SQL Common Language Runtime (CLR) and support for spatial data. (Microsoft says that both will be available in a future version.)" I want to know whether Microsoft has since updated the Windows Azure Platform and removed these limits. I decided to post this question here instead of on Serverfault.com because it's more related to programming than administration. Thank you

    Read the article

  • Need details about applications that are running on Windows Azure

    - by veda
    I have an application which requires a large amount of data storage (say, some petabytes) and computing resources. Instead of going with clusters, I am planning to propose using the Windows Azure cloud for this application. I have gone through the Windows Azure whitepapers and collected some details about Azure, but I feel that is not substantial enough. I need to do a case study of applications that run on Azure and use Azure storage efficiently. I looked for research papers related to the performance of applications on Windows Azure, but as Azure was quite new, I wasn't able to find any. Now I am looking for whitepapers or details about applications that use Azure storage, to substantiate my proposal. I also need to understand the Windows Azure storage architecture and the virtual machine architecture. Does anyone know of research papers, details, or blogs related to these topics?

    Read the article

  • Write a sql for updating data based on time

    - by Lu Lu
    Hello everyone. Because I am new to SQL Server and T-SQL, I will need your help. I have 2 tables: Realtime and EOD. To illustrate my question, here is example data for the 2 tables:

        --- Realtime table ---
        Symbol  Date                 Value
        ABC     1/3/2009 03:05:01    327   // this day does not exist in EOD -> insert
        BBC     1/3/2009 03:05:01    458   // this day does not exist in EOD -> insert
        ABC     1/2/2009 03:05:01    326   // this row is newer -> update
        BBC     1/2/2009 03:05:01    454   // this row is newer -> update
        ABC     1/2/2009 02:05:01    323
        BBC     1/2/2009 02:05:01    453
        ABC     1/2/2009 01:05:01    313
        BBC     1/2/2009 01:05:01    423

        --- EOD table ---
        Symbol  Date                 Value
        ABC     1/2/2009 02:05:01    323
        BBC     1/2/2009 02:05:01    453

    I need to create a stored procedure to update the values of the symbols. If the data for a symbol on a given day is newer (comparing Realtime to EOD), it should update the value and date in EOD for that day if a row exists, and otherwise insert one. The stored procedure should leave the EOD table with this new data:

        --- EOD table ---
        Symbol  Date                 Value
        ABC     1/3/2009 03:05:01    327
        BBC     1/3/2009 03:05:01    458
        ABC     1/2/2009 03:05:01    326
        BBC     1/2/2009 03:05:01    454

    P/S: I use SQL Server 2005. I have a similar answered question here: http://stackoverflow.com/questions/2726369/help-to-the-way-to-write-a-query-for-the-requirement Please help me. Thanks.
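    A rough SQL Server 2005-compatible sketch of such an upsert is shown below (MERGE only arrived in 2008, so it is done as separate UPDATE and INSERT steps; table/column names follow the question, and it assumes one latest row per symbol per day is wanted). Wrapped in a stored procedure, it could look like this:

        -- Work from the latest Realtime row per symbol per calendar day
        -- (CONVERT(varchar(8), Date, 112) yields 'yyyymmdd')
        SELECT r.Symbol, r.Date, r.Value
        INTO #Latest
        FROM Realtime r
        JOIN (SELECT Symbol, CONVERT(varchar(8), Date, 112) AS Day, MAX(Date) AS MaxDate
              FROM Realtime
              GROUP BY Symbol, CONVERT(varchar(8), Date, 112)) m
          ON m.Symbol = r.Symbol AND m.MaxDate = r.Date;

        -- Update EOD rows that exist for the same symbol/day but are older
        UPDATE e
        SET e.Value = l.Value, e.Date = l.Date
        FROM EOD e
        JOIN #Latest l
          ON  l.Symbol = e.Symbol
          AND CONVERT(varchar(8), l.Date, 112) = CONVERT(varchar(8), e.Date, 112)
        WHERE l.Date > e.Date;

        -- Insert symbol/days that EOD does not have yet
        INSERT INTO EOD (Symbol, Date, Value)
        SELECT l.Symbol, l.Date, l.Value
        FROM #Latest l
        WHERE NOT EXISTS (SELECT 1 FROM EOD e
                          WHERE e.Symbol = l.Symbol
                            AND CONVERT(varchar(8), e.Date, 112) = CONVERT(varchar(8), l.Date, 112));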

    Read the article

  • Windows Azure Table Storage LINQ Operators

    - by Ryan Elkins
    Currently, Table Storage supports the From, Where, Take, and First LINQ operators. Are there plans to support any of the other 29 operators? If we have to code these ourselves, how much of a performance difference are we looking at compared to doing something similar via SQL and SQL Server? Do you see it being somewhat comparable, or will it be far, far slower if I need to do a Count, a Sum, or a Group By over a gigantic dataset? I like the Azure platform and the idea of cloud-based storage. I like Windows Azure for the amount of data it can store and the schema-less nature of table storage. SQL Azure just won't work for us due to the high cost of storage space.
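    For illustration (hedged: this uses the v1-era StorageClient TableServiceContext API, and the entity and table names are made up), the unsupported operators have to run client-side after the supported ones have filtered server-side - which is where the cost comes from, since every aggregated row crosses the wire:

        // Assumed illustrative entity; TableServiceEntity supplies the keys
        public class OrderEntity : TableServiceEntity
        {
            public double Amount { get; set; }
        }

        // Server-side: only the supported operators (Where/Take) are translated
        // into the REST query sent to Table Storage.
        var query = tableContext.CreateQuery<OrderEntity>("Orders")
            .Where(o => o.PartitionKey == "2010-04")
            .AsTableServiceQuery();   // CloudTableQuery handles continuation tokens

        // Client-side: Execute() streams all matching rows down, then LINQ to
        // Objects does the aggregation locally.
        var rows = query.Execute().ToList();
        int count = rows.Count;
        double total = rows.Sum(o => o.Amount);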

    Read the article

  • Return type from DAL class (Sql ce, Linq to Sql)

    - by bretddog
    Hi. I'm using VS2008 and SQL CE 3.5, preferably with LINQ to SQL. I'm learning database development and am unsure about the return types of DAL methods, and how/where to map the data over to my business objects. I don't want direct UI binding. A business object class UserData, and a class UserDataList (Inherits List(Of UserData)), are represented in the database by the table "Users". I use SQL Compact and run SqlMetal, which creates the dbml/designer.vb file. This gives me a class with a TableAttribute:

        <Table()> _
        Partial Public Class Users

    I'm unsure how to use this class. Should my business object know about this class, so that the DAL can return the type Users, or List(Of Users)? So, for example, the UserDataService class is part of the DAL and would have, for example, the functions GetAll and GetById. Would this be correct?

        Public Class UserDataService

            Public Function GetAll() As List(Of Users)
                Dim ctx As New MyDB(connection)
                Dim q As List(Of Users) = From n In ctx.Users Select n
                Return q
            End Function

            Public Function GetById(ByVal id As Integer) As Users
                Dim ctx As New MyDB(connection)
                Dim q As Users = (From n In ctx.Users Where n.UserID = id Select n).Single
                Return q
            End Function

        End Class

    And then, would I perhaps have a method, say in the UserDataList class, like:

        Public Class UserDataList
            Inherits List(Of UserData)

            Public Sub LoadFromDatabase()
                Me.Clear()
                Dim database As New UserDataService
                Dim users As List(Of Users)
                users = database.GetAll()
                For Each u In users
                    Dim newUser As New UserData
                    newUser.Id = u.Id
                    newUser.Name = u.Name
                    Me.Add(newUser)
                Next
            End Sub

        End Class

    Is this a sensible approach? I would appreciate any suggestions/alternatives, as this is my first attempt at a database DAL. Cheers!

    Read the article

  • Strange Sql Server 2005 behavior

    - by Justin C
    Background: I have a site built in ASP.NET with SQL Server 2005 as its database. The site is the only site on a Windows Server 2003 box sitting in my client's server room. The client is a local school district, so for data security reasons there is no remote desktop access and no remote SQL Server connection; if I have to service the database, I have to be at the terminal. I do have FTP access to update ASP code. Problem: I was contacted yesterday about an issue with the system. When I looked into it, it seems a bug that I had solved nearly a year ago had returned. I have a stored procedure that used to take an int as a parameter, but a year ago we changed the structure of the system and updated the stored procedure to take an nvarchar(10). The stored procedure somehow changed back to taking an int instead of an nvarchar. There is an external hard drive connected to the server that copies data periodically and has the ability to restore the server in case of failure. I would have assumed that somehow an older version of the database had been restored, but data that I know was inserted 7 days and 1 day before the bug occurred is still in the database. Question: Is there any way that the structure of a SQL Server 2005 database can revert to a previous version, or be restored to a previous version, without touching the actual data? No one else should have access to the server, so I'm going a little insane trying to figure out how this even happened. Any ideas?

    Read the article

  • Update SQL Server 2000 to SQL Server 2008: Benefits please?

    - by Ciaran Archer
    Hi there. I'm looking for the benefits of upgrading from SQL Server 2000 to 2008. I was wondering:

    - What database features can we leverage with 2008 that we can't now?
    - What new T-SQL features can we look forward to using?
    - What performance benefits can we expect to see?
    - What else will make management go for it?

    And the converse:

    - What problems can we expect to encounter?
    - What other problems have people found when migrating?
    - Why fix something that isn't (technically) broken?

    We work in a Java shop, so any .NET/CLR stuff won't rock our world. We also use Eclipse for our main development, so any integration with Visual Studio won't be a plus. We do use SQL Server Management Studio, however. Some background: our main database machine is a 32-bit Dell Intel Xeon MP CPU 2.0GHz, 40MB of RAM with Physical Address Extension, running Windows Server 2003 Enterprise Edition. We will not be changing our hardware. Our databases in total are under a TB, with some having more than 200 tables, but they are busy, and during busy times we see 60-80% CPU utilisation. Apart from the fact that SQL Server 2000 is coming close to end of life, why should we upgrade? Any and all contributions are appreciated!

    Read the article

  • Azure SDK + Django + Visual Studio 2012 - Publish to Azure succeeds, but I get 500 error

    - by hume
    I followed the instructions here: https://www.windowsazure.com/en-us/develop/python/tutorials/django-with-visual-studio/ However, whenever I try to open the URL of my web app in the cloud, I get a 500 error. The tutorial doesn't mention setting up the TEMPLATE_DIRS setting in the Django application, or doing any work on the cloud service machine to install Python/Django. Could these be the problem?

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while, using those features inside .NET applications hasn't been as straightforward as it could be, because .NET doesn't natively support spatial types. There are workarounds for this with a few custom projects like SharpMap, or a hack using the SQL Server-specific geo types found in the Microsoft.SqlServer.Types assembly that ships with SQL Server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0, running on .NET 4.5, the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with a single location and distance calculations, which is probably the most common usage scenario.

    SQL Server Transact-SQL Syntax for Spatial Data

    Before we look at how things work with Entity Framework, let's take a look at how SQL Server allows you to use spatial data, to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:

        CREATE TABLE [dbo].[Geo](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [Location] [geography] NULL,
            [Long] [float] NOT NULL,
            [Lat] [float] NOT NULL
        )

    Now, using plain SQL, you can insert data into the table using the geography::STGeomFromText SQL CLR function:

        insert into Geo(Location, long, lat) values
            (geography::STGeomFromText('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113)
        insert into Geo(Location, long, lat) values
            (geography::STGeomFromText('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240)
        insert into Geo(Location, long, lat) values
            (geography::STGeomFromText('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825)

    The STGeomFromText function accepts a string that describes a geometric item (a point here, but it can also be a line, path, polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier), which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326, as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in a binary format. Once the location data is in the database, you can query it and do simple distance computations very easily. For example, calculating the distance of each of the values in the database to another spatial point is very easy. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database.
    Using the Location field, the SQL looks like this:

        -- create a source point
        DECLARE @s geography
        SET @s = geography::STGeomFromText('POINT(-121.527200 45.712113)', 4326);

        -- return the ids
        select ID,
               Location as Geo,
               Location.ToString() as Point,
               @s.STDistance(Location) as distance
        from Geo
        order by distance

    The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference, for which you can use the geography::STDistance function. This query produces one row per location with its distance from the source point. The STDistance function returns the straight-line distance between the passed-in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point, so the difference is 0. The other two points are two points here in town in Hood River, a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closest to furthest away. This is great for searches of points of interest near a central location (YOU, typically!). These geolocation functions are also available to you if you don't use the Geography/Geometry types but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field and produces the same result as the previous query:

        --- using float fields
        select ID,
               geography::STGeomFromText('POINT(' + STR(long, 15, 7) + ' ' + STR(lat, 15, 7) + ')', 4326),
               geography::STGeomFromText('POINT(' + STR(long, 15, 7) + ' ' + STR(lat, 15, 7) + ')', 4326).ToString(),
               @s.STDistance(geography::STGeomFromText('POINT(' + STR(long, 15, 7) + ' ' + STR(lat, 15, 7) + ')', 4326)) as distance
        from geo
        order by distance

    Spatial Data in the Entity Framework

    Prior to Entity Framework 5.0 on .NET 4.5, consuming the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5, however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions, which in turn can be used to do common spatial queries like the ones I described in the SQL syntax above. The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude.

    Creating a Model with Spatial Data

    Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography:

        public class GeoLocationContext : DbContext
        {
            public DbSet<GeoLocation> Locations { get; set; }
        }

        public class GeoLocation
        {
            public int Id { get; set; }
            public DbGeography Location { get; set; }
            public string Address { get; set; }
        }

    That's all there's to it. When you run this against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier.

    Adding Spatial Data to the Database

    Next let's add some data to the table that includes some latitude and longitude data.
    An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right-click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data, create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:

        [TestMethod]
        public void AddLocationsToDataBase()
        {
            var context = new GeoLocationContext();

            // remove all
            context.Locations.ToList().ForEach(loc => context.Locations.Remove(loc));
            context.SaveChanges();

            var location = new GeoLocation()
            {
                // Create a point using the native DbGeography factory method
                Location = DbGeography.PointFromText(
                    string.Format("POINT({0} {1})", -121.527200, 45.712113), 4326),
                Address = "301 15th Street, Hood River"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                Location = CreatePoint(45.714240, -121.517265),
                Address = "The Hatchery, Bingen"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                // Create a point using a helper function (lat/long)
                Location = CreatePoint(45.708457, -121.514432),
                Address = "Kaze Sushi, Hood River"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                Location = CreatePoint(45.722780, -120.209227),
                Address = "Arlington, OR"
            };
            context.Locations.Add(location);

            context.SaveChanges();
        }

    As promised, a DbGeography object has to be created with one of the static factory methods provided on the type, as the Location.Longitude and Location.Latitude properties are read-only. Here I'm using PointFromText(), which uses a "Well Known Text" format to specify spatial data. In the first example I'm creating a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of points easier, to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:

        public static DbGeography CreatePoint(double latitude, double longitude)
        {
            var text = string.Format(CultureInfo.InvariantCulture.NumberFormat,
                                     "POINT({0} {1})", longitude, latitude);

            // 4326 is the most common coordinate system used by GPS/Maps
            return DbGeography.PointFromText(text, 4326);
        }

    Using the helper, the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around, because latitude and longitude is the common ordering I've seen with mapping libraries (especially the Google mapping/geolocation APIs with their LatLng type). When the context's changes are saved, the data is written into the database using the SQL Geography type, which looks the same as in the earlier SQL examples shown.

    Querying

    Once you have some location data in the database, it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location - ordered by distance.
    Using LINQ to Entities, a query like this is easy to construct:

        [TestMethod]
        public void QueryLocationsTest()
        {
            var sourcePoint = CreatePoint(45.712113, -121.527200);
            var context = new GeoLocationContext();

            // find any locations within 5 kilometers ordered by distance
            var matches = context.Locations
                .Where(loc => loc.Location.Distance(sourcePoint) < 5000)
                .OrderBy(loc => loc.Location.Distance(sourcePoint))
                .Select(loc => new
                {
                    Address = loc.Address,
                    Distance = loc.Location.Distance(sourcePoint)
                });

            Assert.IsTrue(matches.Count() > 0);

            foreach (var location in matches)
            {
                Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance);
            }
        }

    This example produces:

        301 15th Street, Hood River (0 meters)
        The Hatchery, Bingen (809 meters)
        Kaze Sushi, Hood River (1,074 meters)

    The first point in the database is the same as the source point I'm comparing against, so the distance is 0. The other two are within the 5km radius, while the Arlington location, which is 65 miles or so out, is not returned. The result is ordered by distance from closest to furthest away. In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another DbGeography property in the database (say, when you have two points in the same database, for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away, which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this.

    Meters vs. Miles

    As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (they can be found in GeoUtils.cs of the sample project):

        /// <summary>
        /// Convert meters to miles
        /// </summary>
        public static double MetersToMiles(double? meters)
        {
            if (meters == null)
                return 0F;

            return meters.Value * 0.000621371192;
        }

        /// <summary>
        /// Convert miles to meters
        /// </summary>
        public static double MilesToMeters(double? miles)
        {
            if (miles == null)
                return 0;

            return miles.Value * 1609.344;
        }

    Using these two helpers, you can query on miles like this:

        [TestMethod]
        public void QueryLocationsMilesTest()
        {
            var sourcePoint = CreatePoint(45.712113, -121.527200);
            var context = new GeoLocationContext();

            // find any locations within 5 miles ordered by distance
            var fiveMiles = GeoUtils.MilesToMeters(5);
            var matches = context.Locations
                .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles)
                .OrderBy(loc => loc.Location.Distance(sourcePoint))
                .Select(loc => new
                {
                    Address = loc.Address,
                    Distance = loc.Location.Distance(sourcePoint)
                });

            Assert.IsTrue(matches.Count() > 0);

            foreach (var location in matches)
            {
                Console.WriteLine("{0} ({1:n1} miles)",
                    location.Address, GeoUtils.MetersToMiles(location.Distance));
            }
        }

    which produces:

        301 15th Street, Hood River (0.0 miles)
        The Hatchery, Bingen (0.5 miles)
        Kaze Sushi, Hood River (0.7 miles)

    Nice 'n simple.
    .NET 4.5 Only

    Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4, which ships in the same NuGet package/installer) and require .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity, which is a CLR assembly and is only updated by major versions of .NET. Why the decision was made to add these types to System.Data.Entity, rather than to the frequently updated EntityFramework assembly, which would possibly have made this work in .NET 4.0, is beyond me, especially given that there are no native .NET framework spatial types to begin with. I also find it odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live in those assemblies. They will also work for general-purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services, which in turn doesn't work with Entity Framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as I may be, for EF-specific code the Entity Framework-specific types are easy to use and work well.

    Working with pre-.NET 4.5 Entity Framework and Spatial Data

    If you can't go to .NET 4.5 just yet, you can still use spatial features in Entity Framework, but it's a lot more work, as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same T-SQL syntax I showed earlier, via Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:

        [TestMethod]
        public void RawSqlEfAddTest()
        {
            string sqlFormat =
                @"insert into GeoLocations( Location, Address)
                  values ( geography::STGeomFromText('POINT({0} {1})', 4326), @p0 )";

            var sql = string.Format(sqlFormat, -121.527200, 45.712113);
            Console.WriteLine(sql);

            var context = new GeoLocationContext();
            Assert.IsTrue(context.Database.ExecuteSqlCommand(sql, "301 N. 15th Street") > 0);
        }

    Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice, but it is required here: I was unable to use ExecuteSqlCommand() and its named parameter syntax, as the longitude and latitude parameters are embedded into a string. Rest assured it's required, as the following does not work:

        string sqlFormat =
            @"insert into GeoLocations( Location, Address)
              values ( geography::STGeomFromText('POINT(@p0 @p1)', 4326), @p2 )";

        context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street");

    Explicitly assigning the point value with string.Format works, however. There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get latitude and longitude) and you can return calculated values like distance.
There are a number of ways to query location data. You can't retrieve the location value directly, but you can retrieve the point string (which can then be parsed to get latitude and longitude) and you can return calculated values like distance. Here's an example of how to retrieve some geo data into a result set using EF's SqlQuery() method:

[TestMethod]
public void RawSqlEfQueryTest()
{
    var sqlFormat =
        @"DECLARE @s geography
          SET @s = geography::STGeomFromText('POINT({0} {1})', 4326);

          SELECT Address,
                 Location.ToString() as GeoString,
                 @s.STDistance(Location) as Distance
          FROM GeoLocations
          ORDER BY Distance";

    var sql = string.Format(sqlFormat, -121.527200, 45.712113);

    var context = new GeoLocationContext();
    var locations = context.Database.SqlQuery<ResultData>(sql);

    Assert.IsTrue(locations.Count() > 0);

    foreach (var location in locations)
    {
        Console.WriteLine(location.Address + " " +
                          location.GeoString + " " +
                          location.Distance);
    }
}

public class ResultData
{
    public string GeoString { get; set; }
    public double Distance { get; set; }
    public string Address { get; set; }
}

Hopefully you don't have to resort to this approach, as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving only the PKs and then running a second query with just those PKs to fetch the actual underlying DbContext entities. That was inefficient and tedious, but it did work.

Summary

For the current project I'm working on, we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app relies heavily on spatial queries, and it was worth taking a chance on pre-release code to get this ease of integration rather than manually falling back to stored procedures or raw SQL string queries for the spatial-specific queries. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-)

Resources

Download Sample Project

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ADO.NET, SQL Server, .NET

    Read the article

  • The embarrassingly obvious about SQL Server CE

    - by Edward Boyle
I have been working with SQL servers in one form or another for almost two decades now, but I am new to SQL Server Compact Edition. In the past weeks I have been working with SQL Server CE a lot. The SQL is not a problem, but the engine itself is very new to me. One of the issues I ran into was a simple SQL statement taking excessive amounts of time — by excessive, I mean over one second. I wrote a little code to time the method. Sometimes it took under one second, other times as long as three seconds. But it was a simple update statement! As embarrassing as it is, why it was slow eluded me. I posted my issue to MSDN and got a reply from ErikEJ (MS MVP), who runs the blog "Everything SQL Server Compact". I knew little to nothing about SQL Server Compact; this guy is very well versed in CE. If you spend any time in the MSDN forums, it seems he single-handedly has the answer for every CE question that comes up. Anyway, he said: "Opening a connection to a SQL Server Compact database file is a costly operation, keep one connection open per thread (incl. your UI thread) in your app, the one on the UI thread should live for the duration of your app." It hit me: all databases have some connection overhead, but SQL Server CE is not a database engine running as a service, drinking Jolt Cola, waiting for someone to talk to it so it can spring into action and show off its quarter-mile sprint capabilities. Imagine if you had to start the SQL Server process every time you needed to make a database connection. In principle, that is what you are doing with SQL Server CE. Having worked a lot with enterprise-level SQL Servers, I had to settle on the mental image that my open connection to SQL Server CE is basically starting a service — my own private service — and that by closing the connection, I am shutting down my little private service. After making the changes in my code, I lost any reservations I had about using CE. At present, my data access layer class has a constructor that opens my connection; I also have OpenConnection and CloseConnection methods, and I implemented IDisposable to clean up any connections in Dispose() — see the sketch below. I am still finalizing how this assembly will function, but that's beside the point. All I'm trying to say is: "Opening a connection to a SQL Server Compact database file is a costly operation."
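To make that concrete, here's a minimal sketch of the keep-one-connection-open pattern described above. The class shape and names are illustrative (mine, not from the original post), assuming the System.Data.SqlServerCe provider:

using System;
using System.Data.SqlServerCe;

public class DataAccessLayer : IDisposable
{
    private readonly SqlCeConnection _connection;

    public DataAccessLayer(string connectionString)
    {
        // Opening a SQL Server CE connection effectively spins up the
        // engine in-process, so pay that cost once per thread and keep
        // the connection alive for the life of this object.
        _connection = new SqlCeConnection(connectionString);
        _connection.Open();
    }

    public int ExecuteNonQuery(string sql)
    {
        // Commands are cheap to create; it's the connection open that costs.
        using (var cmd = new SqlCeCommand(sql, _connection))
        {
            return cmd.ExecuteNonQuery();
        }
    }

    public void Dispose()
    {
        // Shuts my little private "service" back down.
        _connection.Dispose();
    }
}

Keep one long-lived instance of something like this per thread (including the UI thread) and the per-statement overhead disappears.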

    Read the article

  • T-SQL in SQL Azure

    - by kaleidoscope
The following summarizes the Transact-SQL support provided by SQL Azure Database as of PDC 2009:

Supported Transact-SQL features:
- Constants
- Constraints
- Cursors
- Index management and rebuilding indexes
- Local temporary tables
- Reserved keywords
- Stored procedures
- Statistics management
- Transactions
- Triggers
- Tables, joins, and table variables
- Transact-SQL language elements such as Create/drop databases, Create/alter/drop tables, Create/alter/drop users and logins
- User-defined functions
- Views, including the sys.synonyms view

Unsupported Transact-SQL features:
- Common Language Runtime (CLR)
- Database file placement
- Database mirroring
- Distributed queries
- Distributed transactions
- Filegroup management
- Global temporary tables
- Spatial data and indexes
- SQL Server configuration options
- SQL Server Service Broker
- System tables
- Trace flags

Amit, S

    Read the article

  • Custom forms in Sharepoint with MS SQL Server as Backend. Is it possible?

    - by Kaan
We're evaluating SharePoint 2010 as our project management tool. Specifically, the system needs to satisfy the following:

1. Discussion groups
2. Project management (simple issue tracking, no complex workflows or VCS integrations)
3. News feed for the project(s)
4. File sharing based on authorization/user roles
5. Custom homepage
6. Custom forms using MS SQL Server as a backend, with the contents of old forms searchable from the user interface

Now, I think [1-5] are possible using SharePoint (comments are always welcome :)). I'm not sure about [6]. Is it possible? For instance, can an admin or a user of the SharePoint portal create a custom form (without any programming) that uses MS SQL Server as a backend, and publish it to the portal so that other users can also perform data entry? If it can be done (be it with or without some programming), can users perform text search on the form data using the SharePoint interface?

    Read the article

< Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >