Search Results

Search found 28297 results on 1132 pages for 'sql azure'.

  • Azure News at TechEd 2010

    - by guybarrette
    The Azure team made a few announcements today at TechEd US 2010: the June 2010 Azure Tools SDK, with support for VS 2010 RTM and the .NET Framework 4; the production launch of the Azure CDN; and the OS auto-upgrade feature. Read all about it here.

  • Windows Azure: 'Privacy is a fundamental issue in the public cloud', an interview with the head of Azure for Microsoft France

    Windows Azure: 'Privacy is a fundamental issue in the public cloud.' Three questions for Julien Lesaicherre, platform lead, Microsoft France. Azure, the cloud platform aimed at developers, has seen significant changes and major improvements this June. Developpez.com therefore sat down with Julien Lesaicherre, who heads the offering at Microsoft France, to take stock of three key topics: why choose Azure today, privacy, and the success (or otherwise) of the offering among developer...

  • Joins in LINQ to SQL

    - by rajbk
    The following post shows how to write different types of joins in LINQ to SQL. I am using the Northwind database and LINQ to SQL for these examples.

        NorthwindDataContext dataContext = new NorthwindDataContext();

    Inner join:

        var q1 = from c in dataContext.Customers
                 join o in dataContext.Orders on c.CustomerID equals o.CustomerID
                 select new { c.CustomerID, c.ContactName, o.OrderID, o.OrderDate };

    Generated SQL:

        SELECT [t0].[CustomerID], [t0].[ContactName], [t1].[OrderID], [t1].[OrderDate]
        FROM [dbo].[Customers] AS [t0]
        INNER JOIN [dbo].[Orders] AS [t1] ON [t0].[CustomerID] = [t1].[CustomerID]

    Left join:

        var q2 = from c in dataContext.Customers
                 join o in dataContext.Orders on c.CustomerID equals o.CustomerID into g
                 from a in g.DefaultIfEmpty()
                 select new { c.CustomerID, c.ContactName, a.OrderID, a.OrderDate };

    Generated SQL:

        SELECT [t0].[CustomerID], [t0].[ContactName], [t1].[OrderID] AS [OrderID], [t1].[OrderDate] AS [OrderDate]
        FROM [dbo].[Customers] AS [t0]
        LEFT OUTER JOIN [dbo].[Orders] AS [t1] ON [t0].[CustomerID] = [t1].[CustomerID]

    Inner join on multiple columns:

        // We name our anonymous type properties a and b; otherwise
        // we get the compiler error "Type inference failed in the call to 'Join'".
        var q3 = from c in dataContext.Customers
                 join o in dataContext.Orders
                     on new { a = c.CustomerID, b = c.Country }
                     equals new { a = o.CustomerID, b = "USA" }
                 select new { c.CustomerID, c.ContactName, o.OrderID, o.OrderDate };

    Generated SQL:

        SELECT [t0].[CustomerID], [t0].[ContactName], [t1].[OrderID], [t1].[OrderDate]
        FROM [dbo].[Customers] AS [t0]
        INNER JOIN [dbo].[Orders] AS [t1] ON ([t0].[CustomerID] = [t1].[CustomerID]) AND ([t0].[Country] = @p0)

    Inner join on multiple columns with an 'OR' clause:

        var q4 = from c in dataContext.Customers
                 from o in dataContext.Orders.Where(a => a.CustomerID == c.CustomerID || c.Country == "USA")
                 select new { c.CustomerID, c.ContactName, o.OrderID, o.OrderDate };

    Generated SQL:

        SELECT [t0].[CustomerID], [t0].[ContactName], [t1].[OrderID], [t1].[OrderDate]
        FROM [dbo].[Customers] AS [t0], [dbo].[Orders] AS [t1]
        WHERE ([t1].[CustomerID] = [t0].[CustomerID]) OR ([t0].[Country] = @p0)

    Left join on multiple columns with an 'OR' clause:

        var q5 = from c in dataContext.Customers
                 from o in dataContext.Orders.Where(a => a.CustomerID == c.CustomerID || c.Country == "USA").DefaultIfEmpty()
                 select new { c.CustomerID, c.ContactName, o.OrderID, o.OrderDate };

    Generated SQL:

        SELECT [t0].[CustomerID], [t0].[ContactName], [t1].[OrderID] AS [OrderID], [t1].[OrderDate] AS [OrderDate]
        FROM [dbo].[Customers] AS [t0]
        LEFT OUTER JOIN [dbo].[Orders] AS [t1] ON ([t1].[CustomerID] = [t0].[CustomerID]) OR ([t0].[Country] = @p0)

  • Trace Flag 610 – When should you use it?

    - by simonsabin
    Thanks to Marcel van der Holst for providing this great information on the use of Trace Flag 610. This trace flag can be used to get minimal logging for inserts into a B-tree (i.e. a clustered table or an index on a heap) that already has data. It is a trace flag because in testing they found some scenarios where it didn’t perform as well. Marcel explains why below. “ TF610 can be used to get minimal logging in a non-empty B-Tree. The idea is that when you insert a large amount of data, you don't want to...(read more)
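
    A minimal sketch of the pattern Marcel describes, assuming a SQL Server 2008 instance where you are allowed to set global trace flags; the table name and file path are hypothetical:

        DBCC TRACEON (610, -1);   -- enable minimal logging into non-empty B-trees, instance-wide

        -- Bulk-load into an existing clustered table; with TF610 on, rows that land
        -- on newly allocated pages can be minimally logged.
        BULK INSERT dbo.SalesHistory
        FROM 'C:\loads\sales_2010.dat'
        WITH (TABLOCK, BATCHSIZE = 100000);

        DBCC TRACEOFF (610, -1);  -- switch it back off once the load is done

    As always with minimal logging, verify the behaviour on your own workload first; this is exactly the kind of scenario Marcel warns can regress.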

  • MDX Studio download #mdx #ssas

    - by Marco Russo (SQLBI)
    Short version: the latest available version of MDX Studio can be downloaded from http://www.sqlbi.com/tools/mdx-studio/ Long version: last week Stacia Misner tweeted that the online version of MDX Studio was no longer available. It was hosted on http://mdx.mosha.com. It was sad news, and it is also not good that nobody is maintaining the desktop version of MDX Studio. The latest release is 0.4.14 and, as I write, it is still available on a SkyDrive link provided by Mosha Pasumansky, who wrote MDX Studio. Mosha no longer works at Microsoft, and the entire BI community hopes that somebody will continue his work on this product. Unfortunately, it cannot be published on CodePlex because of some IP restrictions. Only bad news? Well, I hope not. The first piece of good news is that MDX Studio also works with Analysis Services 2012 in Multidimensional mode. The second is that, after having checked that we can do so, we created a web page on the SQLBI web site to download the latest available release of MDX Studio. I hope it will be necessary to update it in the future; for now it is just a way to simplify finding and downloading this precious tool, and to ensure that it will not disappear if the SkyDrive folder currently hosting the download is discontinued, as happened to the MDX Studio online version. Now a question for the BI community: I know that there was some tutorial content available for MDX Studio. I’d like to gather it and put it all in a single place. If you have such content, please contact me directly by writing to marco (dot) russo (at) sqlbi [dot] com. Thanks!

  • Formatting Keywords to UPPERCASE In Oracle SQL Developer

    - by thatjeffsmith
    I received this question from a customer today, and it took me more than a few minutes to remember where this preference was located in SQL Developer. That tells me the topic is ripe for blogging. How do I go FROM:

        select * from scott.emp where ename like '%JEFF%'

    TO:

        SELECT * FROM scott.emp WHERE ename LIKE '%JEFF%'

    It’s all in the formatting. You need to access the formatting preferences under the Tools menu. It takes a bit of navigating to get there, so bear with me: Tools > Database > SQL Formatter > Oracle Formatting > click ‘Edit’ on the profile > Other > Case change: ‘Keywords Uppercase’. It’s easy to find once you know where to look. You can tell it to leave the case alone, uppercase everything, uppercase only the keywords, or lowercase everything. Accessing the Formatter Options: we allow separate formatting options for different RDBMSs, so you need to make sure you’re accessing the ‘Oracle Formatting’ page in the preferences. You can then choose to edit the default options, or you can do what I have done and save the defaults as a new set of options. I’ve called my profile ‘JeffCustom.’ I can now switch back and forth between different sets of formatting options. You need to hit the ‘Edit’ button to get to the formatting options editor; a good number of people seem to miss this. Select your profile, then hit the ‘Edit’ button.

  • Connecting to SQL database using SQLCMD

    - by kaleidoscope
    As we all know, there are a number of ways you can connect to your SQL Azure database. One of the quickest options is the SQLCMD utility. To start SQLCMD and connect to a named instance of SQL Server, open a Command Prompt window and type sqlcmd -S myServer\instanceName, replacing myServer\instanceName with the name of the computer and the instance of SQL Server that you want to connect to, then press ENTER. The sqlcmd prompt (1>) indicates that you are connected to the specified instance of SQL Server. SQL Server Management Studio offers the facility to use SQLCMD from within SQL scripts by means of SQLCMD Mode. To enable SQLCMD Mode in the Transact-SQL editor (for how to start the editor, see How to: Start the Transact-SQL Editor):

    To toggle SQLCMD Mode from the Data menu: 1. Open the query in the Transact-SQL editor. 2. On the Data menu, point to Transact-SQL Editor, and click SQLCMD Mode.
    To toggle SQLCMD Mode from the toolbar: 1. Open the query in the Transact-SQL editor. 2. On the Transact-SQL Editor toolbar, click SQLCMD Mode.
    To toggle SQLCMD Mode from the shortcut menu: 1. Open the query in the Transact-SQL editor. 2. Right-click anywhere in the editor window, and then click SQLCMD Mode.

    For more information, see http://msdn.microsoft.com/en-us/library/ms170207.aspx

    Geeta, G
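
    As a rough sketch, a SQLCMD Mode script can even make the connection itself with the :CONNECT command; the server name and credentials below are hypothetical placeholders (note that SQL Azure expects the username@servername form for the login):

        :CONNECT myserver.database.windows.net -U myuser@myserver -P myPassword
        SELECT name FROM sys.databases;  -- quick sanity check that the connection works
        GO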

  • Upgrade to 2008 R2

    - by DavidWimbush
    I don't like it, Carruthers. It's just too quiet. Well, I've done the pre-production server, the main live server and the Reporting/BI server with remarkably little trouble. Pre-production and live were rebuilds. I failed live over to our log-shipping standby for the duration, which has a gotcha I blogged about before. When I failed back to the primary live server again, it was very quick to bring the databases online. I understand the databases don't actually get upgraded until you recover them, but there was no noticeable delay. It's gone from 2005 Workgroup - limited to 4GB of memory - to 2008 R2 Standard, so it can now use nearly all of the 30GB in the server. It's so much faster. The Reporting/BI server I upgraded in situ. This took a while but, again, went smoothly. Just watch out, because the master database was left at compatibility level 90. Also, the upgrade decided to use the reporting service's credentials for database access when running reports; it didn't preserve the existing credentials, and I had to go into the Reporting Services Configuration Manager to put them back in. Make sure you know what credentials your server is using before you upgrade. All things considered, a fairly painless experience. Now I just have to upgrade and reset our log-shipping standby server again!
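
    If you want to check for the same compatibility-level gotcha after an in-place upgrade, a quick sketch follows; the database name is hypothetical:

        -- List every database and its compatibility level (90 = SQL Server 2005)
        SELECT name, compatibility_level FROM sys.databases;

        -- Raise a user database that was left behind to the 2008/2008 R2 level
        ALTER DATABASE ReportingDB SET COMPATIBILITY_LEVEL = 100;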

  • Windows Azure Boot Camp coming to KC in April

    - by John Alexander
    Interested in getting up to speed on Windows Azure? Then check out this FREE boot camp, coming to cities all across the US. What is a Windows Azure Boot Camp? Windows Azure Boot Camp is a two-day deep-dive class to get you up to speed on developing for Windows Azure. The class includes a trainer with deep real-world experience with Azure, as well as a series of labs so you can practice what you just learned. ABC is more than just a class, it is also an event in a box. If you don't see a class near you, then throw your own. We provide all of the materials and training you need to host your own class. This can be for your company, your customers, your friends, or even your family. Please let us know so we can give you all of the details. Awesome. How much does it cost? Thanks to all of our fantabulous sponsors, this two-day training event is FREE! We will provide drinks and snacks, but you will be on your own for lunch on both days. This is a training class after all. How do I attend one? You can click here to register for the Kansas City event on April 8th and 9th, or click here to see where else ABC will be… WHAT TO BRING – important!!!

  • SQL Server Windows Auth Login sees Domain as untrusted...

    - by Mr Shoubs
    I've had someone set up a domain controller on Windows 2008 on one server, and SQL Server 2008 on another. The domain seems to be working fine; I'm logged on as a domain user on both servers, and nothing seems to be a problem there. However, when I try to add a domain user/group to SQL Server security (e.g. clicking OK from the Create Login screen), it says it can't find it (even though I used the search to find the correct account in the first place). When I try to log on (even though I haven't added the login yet), it says something about the account being part of an untrusted domain, instead of saying I don't have permission to log on. Does anyone have any ideas about what is set up incorrectly?
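
    A hedged suggestion: try creating the login in T-SQL rather than through the GUI; the domain and group names below are hypothetical. If the trust really is broken, this should fail with error 15401 ("Windows NT user or group ... not found"), which at least confirms the problem lies between the servers rather than in SSMS:

        -- Attempt to create the Windows login directly
        CREATE LOGIN [MYDOMAIN\SqlAdmins] FROM WINDOWS;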

  • Steps to deploying on Windows Azure

    - by Vincent Grondin
    Alright, these steps might be a little detailed, and a few might not be necessary, but it's still a pretty accurate road map to deploying on Azure...

    1. Open your solution.
    2. Rebuild ALL.
    3. Right-click on your Azure project and click "Publish".
    4. It should open a Windows Explorer window with your package to be uploaded (.cspkg) and its associated configuration (.cscfg) to be uploaded too. Keep it open, you'll need that path later on...
    5. It should also open a browser asking you to log in to your passport account, please do so.
    6. After this you will be redirected to the Azure Portal, where you will see your Azure project name below the "Project Name" section. Click on it.
    7. You should then be redirected to a detailed view of your account on Azure, where you will create a new service by clicking the hyperlink in the top right corner.
    8. Choose the right service type for you, most likely the "Hosted Service" type.
    9. Choose a "Label" name and click "Next".
    10. Choose a name for your service and validate that the name is available in the cloud by clicking the "Check Availability" button.
    11. At the bottom of this same page, you can choose to create a group for your service, use no group, or join an existing group. Creating a group means that all applications that belong to the same group see no cost for exchanging data with other applications in the same group. Most of the time, when you create a single application, creating a group is not necessary. You should choose a region that's close to your own.
    12. On the next window, you should see a "Production" environment and a "Staging" environment. Beware, because "Staging" and "Production" are two different environments in the cloud, and applications in "Staging", even when not running, continue to rack up charges... Choose an environment and click "Deploy".
    13. In the following window, browse to the path where your .cspkg resides and then do the same thing with your .cscfg file. Choose a name for your label and click "Deploy"...
    14. From now on, the clock is ticking, and unless you have free Azure hours, your credit card is being billed...
    15. Click on the "Run" button to start your application.
    16. Be patient.... be very patient...
    17. Once your application has finished starting, you should see a GREEN circle on the left side of the screen indicating that your application is READY. Click the URL to test your application, and remember that if your application is a service, you have to hit the "svc" class behind the link you see there, something like http://testvince2.cloudapp.net/service1.svc (this is a fictional link).
    18. Hopefully your application will show up, or, in the case of a service, you will see your service's WSDL, meaning that everything is working fine.

    Happy cloud computing all!

  • Connecting MS SQL using freetds and unixodbc: isql - no default driver specified

    - by Dejan
    I am trying to connect to an MS SQL database using FreeTDS and unixODBC. I have read various guides on how to do it, but none of them works for me. When I try to connect to the database using the isql tool, I get the following error:

        $ isql -v TS username password
        [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
        [ISQL]ERROR: Could not SQLConnect

    Has anybody already successfully established a connection to an MS SQL database using FreeTDS and unixODBC on Ubuntu 12.04? I would really appreciate some help. Below is the procedure I used to configure FreeTDS and unixODBC. Thanks for your help in advance!

    Procedure: first, I installed the following packages:

        sudo apt-get install unixodbc unixodbc-dev freetds-dev tdsodbc

    and configured FreeTDS as follows:

        --- /etc/freetds/freetds.conf ---
        [TS]
        host = SERVER
        port = 1433
        tds version = 7.0
        client charset = UTF-8

    Using the tsql tool I can successfully connect to the database by executing:

        tsql -S TS -U username -P password

    As I need an ODBC connection, I configured odbcinst.ini as follows:

        --- /etc/odbcinst.ini ---
        [FreeTDS]
        Description = FreeTDS
        Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
        Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
        FileUsage = 1
        CPTimeout =
        CPReuse =
        client charset = utf-8

    and odbc.ini as follows:

        --- /etc/odbc.ini ---
        [TS]
        Description = "test"
        Driver = FreeTDS
        Servername = SERVER
        Server = SERVER
        Port = 1433
        Database = DBNAME
        Trace = No

    Trying to connect to the database using the isql tool with this configuration results in the following error:

        $ isql -v TS username password
        [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
        [ISQL]ERROR: Could not SQLConnect

  • SQL in the City (Charlotte) Wrap Up

    - by drsql
    Ok, it has been quite a while since the event, two weeks and a day to be exact, but I needed a rest before hitting Windows Live Writer again. Speaking is exhausting, traveling is exhausting, and well, I replaced my laptop and had to get all of my software back together. (Between Windows 8.1 sync features, Dropbox and Skydrive, it has never been easier…but I digress.) There are plenty of great vendors out there, but one of my favorites has always been Red-Gate. I have written half of a book with them,...(read more)

  • SSMS Tools Pack now supports Denali CTP1

    - by AaronBertrand
    Earlier today, Mladen Prajdic ( blog | twitter ) released an updated version of his SSMS Tools Pack (v.1.9.4), a free add-in for Management Studio that provides a ton of helpful functionality that isn't available with the native tools. I'm really glad this happened, because I've installed Denali on all of my VMs and have been using it for most of my work, and I've been missing some of the little things the tool adds. In addition to adding Denali support, Mladen also fixed a handful of minor bugs...(read more)

  • New Silverlight-based Portal for Windows Azure AppFabric Labs

    - by Shaun
    Wade Wegner shared some good news about the Windows Azure platform: the new Silverlight-based portal for the Windows Azure AppFabric Labs has been launched. You can have a look here. As we know, the new Silverlight-based Windows Azure portal was published in November of last year, but the AppFabric part had still not changed (clicking the AppFabric link would redirect you to the old portal). Now the Silverlight-based AppFabric portal is available for Labs. For more information about this new portal and the new features in the latest Windows Azure AppFabric CTP, please refer to Wade’s blog post.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – prior to SSDT you would need to define a DEFAULT constraint, however it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
            CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column. I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
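
    To round that off, here is a minimal sketch of the kind of Post-Deployment step Jamie mentions; the table, column and replacement value are hypothetical:

        -- Post-Deployment script: replace the arbitrary smart default (0)
        -- with the value the business actually wants for pre-existing rows
        UPDATE [dbo].[T1]
        SET [C1] = 42
        WHERE [C1] = 0;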

  • T-SQL Tuesday #005: On Technical Reporting

    - by Adam Machanic
    Reports. They're supposed to look nice. They're supposed to be a method by which people can get vital information into their heads. And that's obvious, right? So obvious that you're undoubtedly getting ready to close this tab and go find something better to do with your life. "Why is Adam wasting my time with this garbage?" Because apparently, it's not obvious. In the world of reporting we have a number of different types of reports: business reports, status reports, analytical reports, dashboards,...(read more)

  • SQL Saturday Birmingham #328 Database Design Precon In One Week

    - by drsql
    On September 22, I will be doing my "How to Design a Relational Database" pre-conference session in Birmingham, Alabama. You can see the abstract here if you are interested, and you can sign up there too, naturally. At just $100, which includes a free ebook copy of my database design book, it is a great bargain and I totally promise it will be a little over 7 hours of talking about and designing databases, which will certainly be better than what you do on a normal work day, even a Friday....(read more)

  • Always use dtexec.exe to test performance of your dataflows. No exceptions.

    - by jamiet
    Earlier this evening I posted a blog post entitled Investigation: Can different combinations of components effect Dataflow performance? where I compared the performance of three different dataflows all working towards the same overall goal. I wanted to make one last point related to the results, but I thought it warranted a blog post all of its own. Here is a screenshot of one of the dataflows that I was testing: Pretty complicated, I’m sure you’ll agree. Now, when I executed this dataflow in the test it executed in ~19 seconds; however, in that case I was executing using the command-line tool dtexec. I also tried executing inside the BIDS development environment, and in that case it took much longer: 139 seconds. That’s more than seven times as long. The point I want to make is very simple. If you are testing your dataflows for performance, please use dtexec. Nothing else will suffice. @Jamiet
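
    For anyone who hasn't used it, a minimal sketch of timing a package with dtexec follows; the package path is hypothetical, and the call is wrapped in xp_cmdshell purely so the example stays in T-SQL (running the same dtexec string directly at a command prompt is equivalent, and the wrapper assumes xp_cmdshell is enabled):

        -- Execute the package from the command line; dtexec reports the elapsed time on completion
        EXEC master..xp_cmdshell 'dtexec /F "C:\packages\MyDataflow.dtsx"';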

  • Does multiple files in SQL Server when using RAID help reduce conflicts in growth and file-locking?

    - by Dr Giles M
    I've been reading around and get the impression that if you are using RAID then using multiple SQL Server files within a filegroup won't yield any further improvement, and the benefits are purely administrative (if you started to run out of space, or wanted to partition data off into manageable chunks for backups or for balancing the data around your big server room). However, being a reasonably savvy software person, I find it not unthinkable to hypothesise that, even for smaller databases, SQL Server will perform growth and locking operations (for writes) on a LOGICAL file basis, so even if you are using RAID it seems to make sense to have multiple files in a filegroup to balance I/O. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefit of reduced locking? I'm also aware that the behaviour and benefits may differ for tables, indexes and logs. Is there a good site that sets out the benefits of multiple files when RAID is already in place?
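
    For anyone who wants to experiment, a minimal sketch of spreading a filegroup across several files follows; the database name, filegroup name and paths are hypothetical. SQL Server stripes new allocations across the files in a filegroup in proportion to their free space:

        ALTER DATABASE MyDb ADD FILEGROUP [DataFG];

        ALTER DATABASE MyDb
        ADD FILE (NAME = DataFG_1, FILENAME = 'D:\data\MyDb_DataFG_1.ndf', SIZE = 4GB),
                 (NAME = DataFG_2, FILENAME = 'E:\data\MyDb_DataFG_2.ndf', SIZE = 4GB)
        TO FILEGROUP [DataFG];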

  • Creating a Simple PHP Blog in Azure

    - by Josh Holmes
    In this post, I want to walk through creating a simple Azure application that will show a few pages, leverage Blob storage and Table storage, and generally get you started doing PHP on Azure development. In short, we are going to write a very simple PHP blog engine for Azure. To be very clear, this is not a pro blog engine and I don’t recommend using it in production. It’s a ...(read more)

  • Should we have a database independent SQL like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database-independent and converts to database-specific SQL queries. Once things start getting complicated, it is preferred to write raw SQL queries for better efficiency. When you write raw SQL queries, your code gets tied to the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: until I use any database-specific feature, why should I be tied to one database? For instance: we have a query with multiple joins and we decided to write a raw SQL query. Now that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some fake SQL language which can translate to any database's SQL query. Even Django's ORM could be built on top of it, so that if you go outside the ORM but avoid database-specific features, you can still remain database-independent. I asked the same question of Jacob Kaplan-Moss (in person): he advised me to stay with the database that I like and use its whole power, to which I agree. But my point was not that we should be database-independent. My point is that we should be database-independent until we use a database-specific feature. Please explain: why should there be a fake SQL layer over the actual SQL?
