Search Results

Search found 9929 results on 398 pages for 'azure tables'.


  • What is the technological basis for Azure DocumentDB?

    - by user1703840
    Microsoft announced the availability of Azure DocumentDB as follows... A fully-managed, highly-scalable, NoSQL document database service:
    - Rich query over a schema-free JSON data model
    - Transactional execution of JavaScript logic
    - Scalable storage and throughput
    - Tunable consistency
    - Rapid development with familiar technologies
    - Blazingly fast and write optimized database service
    I really like the "transactional execution of JavaScript logic". It sounds like an approach similar to PostgreSQL's NoSQL features. Does anyone know what the technological basis for the Azure DocumentDB service is? SQL Server?

    Read the article

  • Using Windows Azure in Europe and the Middle East

    - by user126015
    Hi all, I've built my application with .NET and SQL Server 2008. While looking for a hosting solution I stumbled upon Windows Azure. I saw that it is currently only available in the US. Can I use the service if I live outside of the US? If I upload my website there and people try to visit it, will visitors from outside of the US be blocked? Sorry for posting a question that isn't strictly programming-related; I am not receiving an answer anywhere else, and I can see that there are several questions regarding Azure here which are not program-related. Thank you!

    Read the article

  • Azure Service Bus Scalability

    - by phebbar
    I am trying to understand how I can make an Azure Service Bus Topic scale to handle 10,000 requests/second from more than 50 different clients. I found this article from Microsoft - http://msdn.microsoft.com/en-us/library/windowsazure/hh528527.aspx. It provides a lot of good input on scaling Azure Service Bus, such as creating multiple messaging factories, sending and receiving asynchronously, and doing batched sends/receives. But all of this input is from the publisher and subscriber client perspective. What if the node running the Topic cannot handle the huge number of transactions? How do I monitor that? How do I have the Topic running on multiple nodes? Any input on that would be helpful. I am also wondering if anyone has done capacity testing with Topics/Queues; I am eager to see those results... Thanks, Prasanna

    Read the article

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys, we have a MySQL database table for products. We are utilizing a cache layer to reduce database load, but we think it's a good idea to minimize the actual data that needs to be stored in the cache layer, to speed up the application further. All the products in the database that are visible to visitors have a price attached to them. The prices are stored in a different table, called prices. There are multiple price categories, depending on which discount level each visitor (customer) qualifies for. From time to time there are campaigns, which means that a special price for each product is available. The special prices are stored in a table called specials. Is it bad to make a temp table that binds the tables together? It would only hold the necessary information and would of course be cached.

      productId | hasPrice | hasSpecial
      ----------|----------|-----------
      1         | 1        | 0
      2         | 1        | 1

    By doing this, it would be super easy to know whether a specific product really has a price, without having to iterate through the complete prices or specials table each time a product is listed or presented. Are temp tables a common thing for web applications, or is this just bad design?
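
    For reference, a minimal sketch of how such a flag table could be built in MySQL, assuming a products table with an id column and prices/specials tables keyed by product_id (those column names are assumptions, not from the question):

      -- Build a per-connection summary of price flags from the source tables.
      CREATE TEMPORARY TABLE product_price_flags AS
      SELECT p.id AS productId,
             EXISTS (SELECT 1 FROM prices   pr WHERE pr.product_id = p.id) AS hasPrice,
             EXISTS (SELECT 1 FROM specials sp WHERE sp.product_id = p.id) AS hasSpecial
      FROM products p;

    Note that MySQL temporary tables are visible only to the connection that created them, so for a cache shared across web requests a regular summary table (or the external cache layer itself) would be the more usual home for this data.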

    Read the article

  • Windows Azure Evolution – TFS Integration (WAWS Part 2)

    - by Shaun
    So this is the fourth blog post about the new features of Windows Azure and the second part on Windows Azure Web Sites. But it is not focused only on WAWS, since the feature I'm going to introduce is available in both Windows Azure Web Sites and Windows Azure Cloud Services (a.k.a. hosted services). In the previous post I talked about Windows Azure Web Sites and how to use its gallery to build a WordPress personal blog without coding. Besides the gallery, we can create an empty web site and upload our website in various ways. And one of the highlighted features here is that we can integrate our web site with a source control service, such as TFS or Git, so that it will be deployed automatically once a new commit or build is available.

    Create New Empty Web Site

    In the developer portal, when creating a new web site we can select the QUICK CREATE item. This will create an empty web site with only one shared instance and no database associated. Let's specify the URL, region and subscription and click OK. After a few seconds our website will be ready, and we can click the BROWSE button to open this empty website. As you can see, there is a welcome page available on my website even though I didn't upload or deploy anything. This also means the website is charged even before anything is deployed, similar to a cloud service (hosted service). That is because once we create a website, the Windows Azure platform arranges a hosting process (w3wp.exe) on its group of virtual machines.

    Create Project in TFS Preview Service and Set Up the Link

    Currently Windows Azure Web Sites can integrate with TFS and Git as deployment sources, and for TFS it only supports the Microsoft TFS Preview Service for now. I will not go deep into how to use the TFS Preview Service in this post, but once we click into the website we just created and then click "Set up TFS publishing", a dialog appears to help us connect to the service. If you don't have an account you can click the link shown below to request one. Assuming we already have a TFS service account, we first need to create a new project. Go to your TFS service website and create a new project, giving the project name, description and process template. Then go back to the developer portal and click the "Set up TFS publishing" link. In the pop-up window I provide my TFS service URL and click the "Authorize now" link, then click the "Accept" button to allow Windows Azure to connect to my TFS service. We are then taken back to the developer portal, which lists all projects in my account. Just select the one we just created and click OK. Our website is now linked to the TFS project I specified, and finally it shows something like the screenshot below. This means the web site has been linked to TFS successfully.

    Work with TFS Preview Service in VS2010

    In the figure above there are some links that guide us through connecting to the TFS server from Visual Studio 2010 and 2012 RC. If you are using Visual Studio 2012 RC you don't need any extension, but if you are using Visual Studio 2010 you must have SP1 and KB2581206 installed. To connect to my TFS service, just open Visual Studio and, in the Team Explorer, add a new TFS server and paste the URL of the TFS service from the developer portal. Then select the project we just created and it will be listed in Team Explorer. Now let's start to build our website.
    Since the website we are going to build will be deployed to WAWS, it is NOT a cloud service and NOT a web role. So in this case we need to create a normal ASP.NET web application, for example an ASP.NET MVC 3 web application. Next, right-click on the solution, select "Add Solution to Source Control", choose the project we just created, and check the code in. Once the check-in has finished we can see that there is a build running on the TFS server. And if we go back to the developer portal, we will see on our web site's deployment page that a deployment is running. In fact, once we linked our web site to TFS, a new build definition was created in our TFS project. It is triggered by each check-in and deploys to the linked web site automatically, so when our code has been compiled it is published to our web site from the TFS server. Once the build and deployment have finished we can see it is now active in the developer portal. Now we can see the web site that was created in Visual Studio and deployed by TFS.

    Continuous Deployment through VS and TFS

    A big benefit of using TFS publishing is continuous deployment. Now if I change some code in Visual Studio, for example updating some text on the home page, and check in my changes, it triggers a new build and deploys to my WAWS automatically. Even better, if we want to roll back to a previous version we can just select an existing deployment listed in the portal and click REDEPLOY at the bottom.

    Q&A: Can a Web Site Use Storage and Work with a Worker Role?

    Stacy asked a question in my previous post: "Can a web site use Windows Azure Storage and, furthermore, work with a worker role?" Since the web site is deployed on Windows Azure virtual machines in the data center, it can use all Windows Azure features such as storage, SQL databases, CDN, etc. But because with a web site we normally have a standard ASP.NET web application, PHP website or Node.js application, the Windows Azure SDK is not referenced by default. We can add it ourselves. In our sample project, let's right-click on the MVC project and click "Manage NuGet packages". In the dialog I search for Windows Azure packages and select "Windows Azure Storage" to install. Then we have the assemblies to access Windows Azure storage, such as tables, queues and blobs. Since I already have a storage account, let's have a quick demo that just lists all blobs in a container. The code looks like this.
      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using System.Web.Mvc;
      using Microsoft.WindowsAzure;
      using Microsoft.WindowsAzure.StorageClient;

      namespace WAASTFSDemo.Controllers
      {
          public class HomeController : Controller
          {
              public ActionResult Index()
              {
                  ViewBag.Message = "Welcome to Windows Azure!";

                  var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
                  var account = new CloudStorageAccount(credentials, false);
                  var client = account.CreateCloudBlobClient();
                  var container = client.GetContainerReference("shared");
                  ViewBag.Blobs = container.ListBlobs().Select(b => b.Uri.AbsoluteUri);

                  return View();
              }

              public ActionResult About()
              {
                  return View();
              }
          }
      }

      @{
          ViewBag.Title = "Home Page";
      }

      <h2>@ViewBag.Message</h2>
      <p>
          To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
      </p>
      <div>
          <ul>
          @foreach (var blob in ViewBag.Blobs)
          {
              <li>@blob</li>
          }
          </ul>
      </div>

    Then just check in the code and it will be deployed to my web site. Finally we can see the blobs from my storage account.

    This is just an example, but it proves that web sites can connect to storage (tables, blobs and queues) as well. So the answer to Stacy is "yes": the web site can use queue storage to work with a worker role.

    Summary

    In this post I demonstrated how to integrate Windows Azure Web Sites with TFS. You can see that our website can be built, uploaded and deployed automatically by the TFS service; all we need to do is provide the TFS name and select the project. And it's not only Windows Azure Web Sites: in this upgrade, Windows Azure Cloud Services (hosted services) can be published through TFS as well, very similar to what we have shown above. Currently only the Microsoft TFS Preview Service can be integrated with Windows Azure, but I think in the future we will be able to link on-premises TFS servers and third-party TFS hosts such as CodePlex to Windows Azure.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Is it possible to back up a SQL database hosted on an Azure VM to our internal DPM 2012?

    - by Florent Courtay
    I've got a SQL database on an Azure VM (non-domain) that I'd like to back up to our internal DPM 2012 server. I've installed the DPM agent on the Azure VM, set up DCOM to use only ports 5000 to 5025 on both the VM and the DPM server, and created endpoints for 135, 5000-5025, 5718 and 5719 on Azure and in the VM's firewall. When trying to add this agent to the DPM server, I end up with an error: "Unable to contact the protection agent on server .cloudapp.net". I know there is some sort of connection between them, as using a wrong password gives me an Invalid Credentials error. The error seems to be DCOM related: when trying to connect to the Azure VM from the DPM server using WBEMTest, I get the error "0x800706ba The RPC server is unavailable" (but access is denied when using wrong credentials). What am I missing? Has someone been able to achieve this kind of setup? Thanks for your help!

    Read the article

  • How do you live-migrate Hyper-V to Azure?

    - by TopHat
    I have a new install of Windows Server 2012 with the Hyper-V role set up and a couple of VMs running along fat, dumb, and happy. I want to play with Azure hosting for VMs for a couple of stand-alone boxes. Is there anything special that I need to wire up to be able to live-migrate to Azure? I have the 90-day Azure trial account right now. Any special plumbing required? I have not found a lot of documentation about this yet; everything I found points to manually copying the VHDs via the command line and the Azure 2012 SDK.

    Read the article

  • LaTeX: Listing all figures (tables, algorithms) once again at the end of the document

    - by Zlatko
    Hi all, I have been writing a rather large document with LaTeX. Now I would like to list all the figures / tables / algorithms once again at the end of the file so that I can check that they all look the same, for example, that every algorithm uses the same notation. How can I do this? I know about \listofalgorithms and \listoffigures, but they only list the names of the algorithms or figures and the pages where they are. Thanks.

    Read the article

  • MySQL insert into 2 tables

    - by Spidfire
    I want to make an insert into 2 tables.

    visits: visit_id int | card_id int
    registration: registration_id int | type enum('in','out') | timestamp int | visit_id int

    I want something like:

      INSERT INTO `visits` as v, `registration` as r
        (v.`visit_id`, v.`card_id`, r.`registration_id`, r.`type`, r.`timestamp`, r.`visit_id`)
      VALUES (NULL, 12131141, NULL, UNIX_TIMESTAMP(), v.`visit_id`);

    I wonder if it's possible.
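
    A single INSERT cannot target two tables in MySQL; the usual workaround is two INSERTs in one transaction, carrying the generated key across with LAST_INSERT_ID(). A minimal sketch using the tables from the question (assuming visit_id is AUTO_INCREMENT and both tables use a transactional engine such as InnoDB):

      START TRANSACTION;

      -- Create the visit first; its visit_id is generated by AUTO_INCREMENT.
      INSERT INTO `visits` (`card_id`) VALUES (12131141);

      -- Reuse that generated visit_id for the registration row.
      INSERT INTO `registration` (`type`, `timestamp`, `visit_id`)
      VALUES ('in', UNIX_TIMESTAMP(), LAST_INSERT_ID());

      COMMIT;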

    Read the article

  • Azure git deployment - missing references in 2nd assembly

    - by Dan
    I'm trying to set up Bitbucket deployment to an Azure website. I successfully have Bitbucket and Azure linked, but when I push to Bitbucket, the deployment fails on the Azure site. If I click on 'View Log', it shows the following compile errors:

      D:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1578,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]
      D:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1578,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "WebMatrix.WebData, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]
      CustomMembershipProvider.cs(5,7): error CS0246: The type or namespace name 'WebMatrix' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]
      CustomMembershipProvider.cs(9,38): error CS0246: The type or namespace name 'ExtendedMembershipProvider' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]
      Models\AccountModels.cs(3,18): error CS0234: The type or namespace name 'Mvc' does not exist in the namespace 'System.Web' (are you missing an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]
      CustomMembershipProvider.cs(198,37): error CS0246: The type or namespace name 'OAuthAccountData' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]
      Models\AccountModels.cs(40,10): error CS0246: The type or namespace name 'Compare' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]
      Models\AccountModels.cs(40,10): error CS0246: The type or namespace name 'CompareAttribute' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]
      Models\AccountModels.cs(73,10): error CS0246: The type or namespace name 'Compare' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]
      Models\AccountModels.cs(73,10): error CS0246: The type or namespace name 'CompareAttribute' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

    Note that these compile errors are against another assembly in my project (the assembly where I put the business logic). When Googling, the only mention I found was about having to set the "Copy Local" flag to true for those references. I've tried this, but still got the same errors. This all compiles fine locally. Any ideas?

    Read the article

  • Getting code-first Entity Framework to build tables on SQL Azure

    - by NER1808
    I am new to code-first Entity Framework. I have tried a few things now, but can't get EF to construct any tables in my SQL Azure database. Can anyone advise on steps and settings I should check? The membership provider has no problem creating its tables. I have added PersistSecurityInfo=True in the connection string, and the connection string is using the main user account for the server. When I implement the tables in the database using SQL, everything works fine. I have the following in WebRole.cs:

      // Initialize the database
      Database.SetInitializer<ReykerSCPContext>(new DbInitializer());

    My DbInitializer (which does not appear to get run) is below. I get "Invalid object name 'dbo.ClientAccountIFAs'." when I try to access the table for the first time, some time after startup.

      public class DbInitializer : DropCreateDatabaseIfModelChanges<ReykerSCPContext>
      {
          protected override void Seed(ReykerSCPContext context)
          {
              using (context)
              {
                  // Add Doc Types
                  context.DocTypes.Add(new DocType() { DocTypeId = 1, Description = "Statement" });
                  context.DocTypes.Add(new DocType() { DocTypeId = 2, Description = "Contract note" });
                  context.DocTypes.Add(new DocType() { DocTypeId = 3, Description = "Notification" });
                  context.DocTypes.Add(new DocType() { DocTypeId = 4, Description = "Invoice" });
                  context.DocTypes.Add(new DocType() { DocTypeId = 5, Description = "Document" });
                  context.DocTypes.Add(new DocType() { DocTypeId = 6, Description = "Newsletter" });
                  context.DocTypes.Add(new DocType() { DocTypeId = 7, Description = "Terms and Conditions" });

                  // Add Reyker account types
                  context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 1, Description = "ISA" });
                  context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 2, Description = "Trading" });
                  context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 3, Description = "SIPP" });
                  context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 4, Description = "CTF" });
                  context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 5, Description = "JISA" });
                  context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 6, Description = "Direct" });
                  context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 7, Description = "ISA & Direct" });

                  // Save the changes
                  context.SaveChanges();
              }
          }
      }

    My DbContext class looks like:

      public class ReykerSCPContext : DbContext
      {
          // Set the connection explicitly
          public ReykerSCPContext() : base("ReykerSCPContext") { }

          // Define tables
          public DbSet<ClientAccountIFA> ClientAccountIFAs { get; set; }
          public DbSet<Document> Documents { get; set; }
          public DbSet<DocType> DocTypes { get; set; }
          public DbSet<ReykerAccountType> ReykerAccountTypes { get; set; }

          protected override void OnModelCreating(DbModelBuilder modelBuilder)
          {
              // Runs when creating the model. Can be used to define special relationships, such as many-to-many.
          }
      }

    The code used to access the table is:

      public List<ClientAccountIFA> GetAllClientAccountIFAs()
      {
          using (DataContext)
          {
              var caiCollection = from c in DataContext.ClientAccountIFAs
                                  select c;
              return caiCollection.ToList();
          }
      }

    and it errors on the last line. Help!

    Read the article

  • WCF Web Service Gives 404 error in Azure

    - by landyman
    I'm new to using WCF and Azure, but I have a WCF web service that works correctly when debugging in Visual Studio. When I set the startup project to Azure, I get 404 errors for any URL I try related to the service. Here is what I think is the relevant code.

    From IWebService.cs:

      [OperationContract]
      [WebGet(UriTemplate = "GetData/Xml?value={value}", ResponseFormat = WebMessageFormat.Xml)]
      string GetDataXml(string value);

    and from Web.config:

      <system.serviceModel>
        <services>
          <service name="WebService" behaviorConfiguration="WebServiceBehavior">
            <!-- Service Endpoints -->
            <endpoint address="" binding="webHttpBinding" contract="IWebService" behaviorConfiguration="WebEndpointBehavior"></endpoint>
            <endpoint address="ws" binding="wsHttpBinding" contract="IWebService"/>
            <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior name="WebServiceBehavior">
              <serviceMetadata httpGetEnabled="true"/>
              <serviceDebug includeExceptionDetailInFaults="true"/>
            </behavior>
          </serviceBehaviors>
          <endpointBehaviors>
            <behavior name="WebEndpointBehavior">
              <webHttp/>
            </behavior>
          </endpointBehaviors>
        </behaviors>
      </system.serviceModel>

    I have tried changing the binding to 'basicHttpBinding', but had no luck. Thanks in advance for any help!

    Read the article

  • Microsoft SQL Server 2005: Inserting into tables from a child procedure which returns multiple tables

    - by Kevin
    I've got a child procedure which returns more than one table.

    Child:

      PROCEDURE KevinGetTwoTables
      AS
      BEGIN
          SELECT 'ABC' Alpha, '123' Numeric
          SELECT 'BBB' Alpha, '123' Numeric1, '555' Numeric2
      END

    Example:

      PROCEDURE KevinTesting
      AS
      BEGIN
          DECLARE @Table1 TABLE
          (
              Alpha varchar(50),
              Numeric1 int
          )
          DECLARE @Table2 TABLE
          (
              Alpha varchar(50),
              Numeric1 int,
              Numeric2 int
          )

          --INSERT INTO @Table1 EXEC KevinGetTwoTables
      END
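
    For context (this note is not from the original post): INSERT ... EXEC is the usual way to capture a procedure's output, but it loads every result set the procedure returns and fails when those result sets have different column structures, as they do here. A hedged sketch reusing the names above:

      DECLARE @Results TABLE
      (
          Alpha    varchar(50),
          Numeric1 int
      )

      -- Captures ALL result sets from the child procedure, so this only works
      -- when every result set matches the target's column structure; with the
      -- two differently shaped SELECTs above it raises an error instead.
      INSERT INTO @Results (Alpha, Numeric1)
      EXEC KevinGetTwoTables

      SELECT * FROM @Results

    A common workaround is to split the child procedure into one procedure per result set, or to have the child write into temp tables that the caller then reads.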

    Read the article

  • SQL Server, temporary tables with truncate vs table variable with delete

    - by Richard
    I have a stored procedure inside which I create a temporary table that typically contains between 1 and 10 rows. This table is truncated and refilled many times during the stored procedure; it is truncated because that is faster than DELETE. Do I get any performance increase by replacing this temporary table with a table variable, given that I then suffer a penalty for using DELETE (TRUNCATE does not work on table variables)? Whilst table variables are mainly in memory and are generally faster than temp tables, do I lose any benefit by having to delete rather than truncate?
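
    For reference, a minimal sketch of the two patterns being compared (illustrative names, not from the post):

      -- Pattern 1: temp table, cleared with TRUNCATE between refills.
      CREATE TABLE #work (id int, val varchar(50))
      INSERT INTO #work VALUES (1, 'a')
      TRUNCATE TABLE #work        -- allowed on a temp table

      -- Pattern 2: table variable, cleared with DELETE (TRUNCATE is not allowed).
      DECLARE @work TABLE (id int, val varchar(50))
      INSERT INTO @work VALUES (1, 'a')
      DELETE FROM @work           -- the only way to empty a table variable

    With only 1-10 rows per fill, the cost difference between DELETE and TRUNCATE is likely to be small compared with the rest of the procedure, so measuring both variants is the safest way to decide.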

    Read the article

  • Join performance on MyISAM and InnoDB tables

    - by j0nes
    I am thinking about converting some tables from MyISAM to InnoDB on my MySQL server. The tables will certainly benefit from the change because a lot of write requests go to these tables, while there are also quite a lot of read requests at the same time. However, they are often joined with tables that get almost no writes. Is there a performance penalty when joining MyISAM and InnoDB tables together, or should everything work fine? Second question: during backups at night, I copy data from the InnoDB tables to MyISAM tables for archiving purposes. In these backups a lot of write requests happen, but there are almost no reads from these archive tables. Would these tables also benefit from using InnoDB, or is that just a waste of space and RAM?

    Read the article

  • Concatenating rows from different tables into one field

    - by Markus
    Hi! In a project using a MSSQL 2005 database we are required to log all data-manipulating actions in a logging table. One field in that table is supposed to contain the row as it was before it was changed. We have a lot of tables, so I was trying to write a stored procedure that would gather up all the fields in one row of a given table, concatenate them somehow, and then write a new log entry with that information. I already tried using FOR XML PATH and it worked, but the client doesn't like the XML notation; they want a CSV field. Here's what I had with FOR XML PATH:

      DECLARE @foo varchar(max);
      SET @foo = (SELECT * FROM table WHERE id = 5775 FOR XML PATH(''));

    The values for "table", "id" and the actual id (here: 5775) would later be passed in via the call to the stored procedure. Is there any way to do this without getting XML notation and without knowing in advance which fields are going to be returned by the SELECT statement?
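
    One possible direction (a hedged sketch, not from the original question) is to keep the column list out of the procedure by building it dynamically from INFORMATION_SCHEMA.COLUMNS and concatenating the values with a separator. The table name, the assumption that the key column is literally called id, and the ';' separator are all illustrative:

      DECLARE @tableName sysname
      DECLARE @id int
      DECLARE @cols nvarchar(max)
      DECLARE @sql nvarchar(max)
      DECLARE @csv nvarchar(max)

      SET @tableName = N'MyTable'   -- hypothetical table name
      SET @id = 5775

      -- Build "ISNULL(CAST([col] AS nvarchar(max)), N'') + ';' + ..." for every column.
      SELECT @cols = ISNULL(@cols + N' + '';'' + ', N'')
                   + N'ISNULL(CAST(' + QUOTENAME(COLUMN_NAME) + N' AS nvarchar(max)), N'''')'
      FROM INFORMATION_SCHEMA.COLUMNS
      WHERE TABLE_NAME = @tableName
      ORDER BY ORDINAL_POSITION

      SET @sql = N'SELECT @csv = ' + @cols
               + N' FROM ' + QUOTENAME(@tableName)
               + N' WHERE id = @id'   -- assumes the key column is named id

      EXEC sp_executesql @sql, N'@csv nvarchar(max) OUTPUT, @id int',
           @csv = @csv OUTPUT, @id = @id

      SELECT @csv

    Casting every column to nvarchar(max) is crude (dates and binary columns fall back to default conversion formats), but it keeps the logging procedure independent of any particular table layout.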

    Read the article

  • Is there a certain IIS configuration required to allow a functioning .Net 4.0 ASP.Net MVC 2 Azure ap

    - by erg39
    I just installed the Azure 1.2 tools update and would like to get to work on an Azure project running locally using ASP.NET MVC and .NET 4, but I cannot get the MVC pages to load. If I just create a new Azure project in VS 2010, add an ASP.NET MVC web role, and run the application, pages never load. It appears that routing is somehow at fault, as controller actions never get called, but if I add other pages to the project (like .htm or .aspx) they will load in the browser. It all works fine with a new .NET 3.5 MVC 2 project in the Azure development environment; it all works fine with a .NET 4.0 MVC 2 project that is not running in Azure; only the combination does not work. The environment is Win 7 x64 (IIS 7.5), VS 2010, Azure tools 1.2. Is there some magic IIS setting I need to change or something? Any ideas?

    Read the article

  • PHP/MySQL - updating 2 tables in one request

    - by Phil Jackson
    Morning, I want to learn more about SQL and I'm wanting to update two tables:

      $query3 = "INSERT INTO `$table1`, `$table2`
                 ($table1.DISPLAY_NAME, $table1.EMAIL_ACCOUNT, $table2.DISPLAY_NAME, $table2.EMAIL_ACCOUNT)
                 values ('" . DISPLAY_NAME . "', '" . EMAIL_ADDRESS . "', '" . $get['rn'] . "', '" . $email . "')";

    Could someone point me in the right direction on how I would go about this? The current error is:

      You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' contacts_ACT_Web_Designs (contacts_E_Jackson.DISPLAY_NAME, contacts_E_Jackson' at line 1

    Regards, Phil

    Read the article

  • MySQL – Beginning Temporary Tables in MySQL

    - by Pinal Dave
    MySQL supports temporary tables to store result sets temporarily for a given connection. Temporary tables are created with the TEMPORARY keyword along with the CREATE TABLE statement. Let us create a temporary table named Temp:

      CREATE TEMPORARY TABLE TEMP (id INT);

    Now you can find out the column names using the DESC command:

      DESC TEMP;

    This table can be accessed only from the current connection; it can be used like a permanent table and is automatically dropped when the connection is closed. However, you cannot find temporary tables using the INFORMATION_SCHEMA.TABLES system view; it only lists permanent tables. MySQL usually stores the data of temporary tables in memory, processed by the MEMORY storage engine. But if the data size is too large, MySQL automatically converts it to an on-disk table and uses the MyISAM engine. You can also create a permanent table with the same name as a temporary table in the same connection; however, the structure of the permanent table is visible only once the temporary table with the same name is dropped. Let us create a permanent table with the same name, Temp:

      CREATE TABLE TEMP (id INT, names VARCHAR(100));

    Running the following command still gives you the structure of the temporary table Temp created earlier:

      DESC TEMP;

    You can drop the temporary table using the DROP TEMPORARY TABLE command:

      DROP TEMPORARY TABLE TEMP;

    After you have dropped the temporary table, run the following command again:

      DESC TEMP;

    Now you will see the structure of the permanent table named Temp. In summary: if there is a temporary table in MySQL, it gets priority over the permanent table of the same name within the session.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL

    Read the article

  • Search multiple tables

    - by gilden
    I have developed a web application that is used mainly for archiving all sorts of textual material (documents, references to articles, books, magazines, etc.). There can be any number of archive tables in my system, each with its own schema. The schema can be changed by a moderator through the application (imagine something similar to a really dumbed-down version of phpMyAdmin). Users can search for anything across all of the tables. By using FULLTEXT indexes together with substring searching (for fields which do not support FULLTEXT indexing), the script inserts the results of a search into a single table, and by ordering these results by the similarity measure I can fairly easily return the paginated results. However, this approach has a few problems:

    - substring searching can only count exact results
    - the 50% rule applies to each table separately, and thus MySQL may not return important matches or may too naively discard common words
    - it is quite expensive in terms of query count and execution time (not an issue right now, as there is not a lot of data in the tables yet)
    - normalized data is not even searched (I have different tables for categories, languages and file attachments)

    My planned solution: create a single table with columns similar to id, table_id, row_id, data. Every time a new row is created/modified/deleted in any of the data tables, this central table also gets updated, with the data column containing a concatenation of all the fields in the row. I could then create a single index for Sphinx and use it for searches instead. Are there any more efficient solutions or best practices for how to approach this?
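
    A minimal sketch of that planned central table (the id/table_id/row_id/data columns come from the question; the engine choice, keys and upkeep statement are assumptions):

      -- One row per source row, holding its text as a single concatenated blob.
      CREATE TABLE search_index (
          id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          table_id INT UNSIGNED NOT NULL,
          row_id   INT UNSIGNED NOT NULL,
          data     TEXT NOT NULL,
          UNIQUE KEY uq_source (table_id, row_id)
      ) ENGINE=MyISAM;  -- assuming MyISAM so a FULLTEXT index is available on this MySQL version

      -- Optional FULLTEXT index for plain MySQL searches alongside the Sphinx index.
      CREATE FULLTEXT INDEX ft_data ON search_index (data);

      -- Hypothetical upkeep for one changed row (the application layer, or a
      -- trigger per archive table, would run something like this on every change).
      INSERT INTO search_index (table_id, row_id, data)
      VALUES (1, 42, CONCAT_WS(' ', 'title text', 'author name', 'abstract ...'))
      ON DUPLICATE KEY UPDATE data = VALUES(data);

    Sphinx can then index search_index with a single source query, and the table_id/row_id pair maps each hit back to the original record.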

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here.

    Team Foundation Service is a great platform for .NET developers who are used to working with TFS on-premises. I've been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than in the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn't know I needed. Not to say that I wasn't previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted – and a one-click setup – well, that's just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work… sort of.

    This will be the first in a three-part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a "Hello World" example.

    Part 1: Good Enough Is Not Great
    Part 2: A Model That Works: Branching and Multiple Deployment Environments
    Part 3: Other Considerations: SQL, Custom Tasks, Etc

    Good Enough Is Not Great

    There. I've said it. I certainly hope no one on the TFS team is offended, but it's the truth. Let's take a look under the hood and understand how it works, and then why it's not enough to handle real-world CD as-is.

    How it works (note that I've skipped a couple of steps; I already have my accounts set up and something deployed to Azure): The first step is to establish some OAuth magic between your Azure management portal and your TFS instance. You do this via the management portal. Once it's done, you have a new build process template in your TFS instance. (Image lifted from the documentation.) From here, you'll get the usual prompts for security, allowing access, etc. But you'll also get to pick which solution in your source control to build. Here's what the bulk of the build definition looks like. All I've had to do is add in the solution to build (notice that mine is from a specific branch – Release – more on that later) and change the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it's all in the cloud and I'm not waiting for my machine to compile and upload the package. (I also had to enable the build definition first – by default it is created in a disabled state, probably a good thing since it will trigger on every.single.checkin by default.) I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You'll notice that this build definition also automatically put my code in the Staging slot of my Azure deployment – more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and "production" and "staging" in Azure. More on that later too.)

    That's it. That's the default out-of-box experience. Easy, right? But it's full of room for improvement, so let's get into that….

    The Problems

    Nothing is perfect (except my code – it's always perfect), and neither is Continuous Deployment without a bit of work to help it fit your dev team's process. So what are the issues?

    Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model:

    - If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap).
    - If I truly want to promote between environments (i.e. Nightly Build –> Stable QA –> Production), I likely have configuration changes between each environment, such as database connection strings, and this process (and the VIP swap) doesn't account for this. Yet.

    Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch – Release – of my code. For the purposes of this example, I have adopted the "basic" branching strategy as defined by the ALM Rangers. This basically establishes a "Main" trunk where you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don't want to roll to production automatically when you merge to the Release branch and check in (unless you like the thrill of it, and in that case, I like your style, cowboy….). Rather, you have nightly build and QA environments, or if you've adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution-to-Azure-deployment target.

    Issue 3: SQL and other custom tasks. Let's be honest and address the elephant in the room: I need to get some sleep, because I see an elephant in the room. But seriously, I can't think of an application I have touched in the last 10 years that doesn't need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they've added Data Tier Applications. But let's be honest with ourselves again: no one really uses it, and it's not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn't fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it's completed. Developers – not just DBAs – also like to work with SQL in SQL tools, not in Visual Studio. I'm really picking on SQL because that's generally the biggest concern I hear, but we need to account for any custom tasks as well in the build process.

    The Solutions… ?

    We've taken a look at how this all works and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I've overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app – any version, any time, to any environment.

    Read the article

  • Running Jetty under Windows Azure Using RoleEntryPoint in a Worker Role

    - by Shawn Cicoria
    This post builds upon the work of Mario Kosmiskas and David C. Chou's prior postings, from here:

    http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx
    http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx

    As Mario points out in his post, when you need more control over the process that gets started, it is generally better handled through a RoleEntryPoint, which as of now requires a CLR-based assembly deployed as part of the package to Azure. There were things I especially liked about Mario's post – specifically, the ability to pull down the JRE and Jetty runtimes at role startup and instantiate the process using the extracted bits. The way Mario initialized the Java process (and Jetty) was to take advantage of a role startup task configured as part of the service definition. This is a great, quick way to kick off processes or tasks prior to your role entry point. However, if you need access to service configuration values or role events, that's where RoleEntryPoint comes in. For this PoC sample I moved the logic for retrieving the JRE and Jetty bits to the worker role's OnStart, in addition to moving the process kickoff to the OnStart method. The Run method at this point just loops and reports the status of the Java process. Beyond making things more parameterized, both Mario's and David's articles still form the essence of the approach.

    The solution that accompanies this post provides the necessary .NET-based Visual Studio project. In addition, you'll need:

    1. Jetty 7 runtime http://www.eclipse.org/jetty/downloads.php
    2. JRE http://www.oracle.com/technetwork/java/javase/downloads/index.html

    Once you have these, the first step is to create archives (zips) of the distributions. For this PoC, the root of each archive needs to look as follows: JRE6.zip, jetty---.zip. Upload the contents to a storage container (block blob); for this example I used /archives as the location. The service configuration has several settings that provide these values via Azure's native configuration support in a worker role, which is the advantage of using RoleEntryPoint.

    Storage Explorer

    You can use development storage for testing this out – the zipped version of the solution is configured for development storage. When you're ready to deploy, you update the two settings: one for diagnostics and the other for the storage container where the /archives are going to be stored.
      <?xml version="1.0" encoding="utf-8"?>
      <ServiceConfiguration serviceName="HostedJetty" osFamily="2" osVersion="*">
        <Role name="JettyWorker">
          <Instances count="1" />
          <ConfigurationSettings>
            <!--<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />-->
            <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
            <Setting name="JettyArchive" value="jetty-distribution-7.3.0.v20110203b.zip" />
            <Setting name="StartRole" value="true" />
            <Setting name="BlobContainer" value="archives" />
            <Setting name="JreArchive" value="jre6.zip" />
            <!--<Setting name="StorageCredentials" value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>"/>-->
            <Setting name="StorageCredentials" value="UseDevelopmentStorage=true" />

    For interacting with Storage you can use several tools; one tool that I like is from the Windows Azure CAT team, located here: http://appfabriccat.com/2011/02/exploring-windows-azure-storage-apis-by-building-a-storage-explorer-application/ and shown in the prior picture.

    At runtime, during role initialization and startup, Azure will call into your RoleEntryPoint. At that time the code dynamically pulls the 2 archives and extracts them, using the Sharp Zip Lib <link> as Mario demonstrated in his sample. At this point, once the 2 zips are extracted, the Role's file system looks as follows:

    Worker Role approot

    From there, the OnStart method (which also does the download and unzip using a simple StorageHelper class) kicks off the Java path, and now you have Java!

    Task Manager

    Jetty Sample Page

    A couple of things I'm working on to enhance this: extracting the JRE and Jetty bits not to the appRoot but to a resource location defined as part of the service definition.

    ServiceDefinition.csdef

      <?xml version="1.0" encoding="utf-8"?>
      <ServiceDefinition name="HostedJetty" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
        <WorkerRole name="JettyWorker">
          <Imports>
            <Import moduleName="Diagnostics" />
            <Import moduleName="RemoteAccess" />
            <Import moduleName="RemoteForwarder" />
          </Imports>
          <Endpoints>
            <InputEndpoint name="JettyPort" protocol="tcp" port="80" localPort="8080" />
          </Endpoints>
          <LocalResources>
            <LocalStorage name="Archives" cleanOnRoleRecycle="false" sizeInMB="100" />
          </LocalResources>

    As the concept matures a bit, being able to dynamically update the content or jar files as part of a running Java solution becomes possible through continued enhancement of this simple model.

    The Visual Studio 2010 Solution is located here: HostingJavaSln_NDA.zip

    Read the article

  • Free online Windows AzureConf this Wednesday

    - by ScottGu
    This Wednesday, November 14th, we'll be hosting Windows AzureConf – a free online event for and by the Windows Azure community. It will be streamed online from 8:30 AM to 5:00 PM PST via Channel 9, and you can watch it all for free. I'll be kicking off the event with a Windows Azure overview in the morning (a great way to learn more about Windows Azure if you haven't used it yet!), and following my talk the rest of the day will be full of excellent presentations from members of the Windows Azure community. You can ask them questions live, and I think you'll find the day an excellent way to learn more about Windows Azure, as well as to hear directly from developers building solutions on it today. Click here to learn more about the event and register for free to watch it live. Hope to see you there! Scott P.S. We will also make the presentations available for download after the event in case you miss them.

    Read the article
