Search Results

Search found 27294 results on 1092 pages for 'site deployment'.

Page 18/1092 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • How do you manage your sqlserver database projects for new builds and migrations?

    - by Rory
    How do you manage your sql server database build/deploy/migrate for visual studio projects? We have a product that includes a reasonable database part (~100 tables, ~500 procs/functions/views), so we need to be able to deploy new databases of the current version as well as upgrade older databases up to the current version. Currently we maintain separate scripts for creation of new databases and migration between versions. Clearly not ideal, but how is anyone else dealing with this? This is complicated for us by having many customers who each have their own db instance, rather than say just having dev/test/live instances on our own web servers, but the processes around managing dev/test/live for others must be similar.
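    One pattern often used for exactly this situation -- offered here purely as an illustration, not as the poster's own approach -- is to keep a schema-version table in every customer database and wrap each upgrade step in a version check, so a single script can both create a new database and bring an older one forward. A minimal T-SQL sketch, with hypothetical table, column and version numbers:

        IF OBJECT_ID('dbo.SchemaVersion') IS NULL
        BEGIN
            -- brand new database: start the version history at 0
            CREATE TABLE dbo.SchemaVersion (Version int NOT NULL, AppliedOn datetime NOT NULL DEFAULT GETDATE());
            INSERT INTO dbo.SchemaVersion (Version) VALUES (0);
        END

        DECLARE @version int;
        SELECT @version = MAX(Version) FROM dbo.SchemaVersion;

        IF @version < 1
        BEGIN
            -- migration step 1 (illustrative): add a nullable column
            ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;
            INSERT INTO dbo.SchemaVersion (Version) VALUES (1);
        END

    Run against a customer's instance, a script like this applies only the steps that instance is missing, which is one way to collapse the separate "create" and "migrate" script sets into one.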

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js on Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, and then used Git for Windows to commit my changes and sync them to the WAWS git repository. WAWS then triggered a new deployment to host my Node.js application. Someone may notice that an NPM package can contain many files and can be fairly large. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6MB. Another popular package, "express", which is a rich MVC framework for Node.js, is about 1MB. When I first push my code to Windows Azure, all of these files must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post I will introduce how to make WAWS install all required packages for us when deploying.

    Let's Start with a Demo

    A demo is the most straightforward way to show this. Let's create a new WAWS and clone it to my local disk, then drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder. Let's say I want to install "express". Then create a new Node.js file named "server.js" and paste in the code below.

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    If we switch to Git for Windows right now we will find that it has detected the changes we made, which include "server.js" and all files under the "node_modules" folder. What we need to upload should only be our source code, but the huge package files would have to be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud.

    First we need to add a special file named ".gitignore". This seems impossible to do directly from the file explorer, since the file name consists only of an extension, so we need to do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore". If the command window asks for input, just press Enter.

        echo > .gitignore

    Now open this file, copy the content below into it and save.

        node_modules

    If we switch to Git for Windows we will find that the packages under "node_modules" are no longer in the change list. So if we commit and push now, the "express" package will not be uploaded to Windows Azure.

    Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into it and save.

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*"
            }
        }

    Now back in Git for Windows, commit our changes and push them to WAWS. If we then open the WAWS in the developer portal, we will see that a new deployment has finished. Click the arrow on the right side of this deployment to see how WAWS handled it. In particular, we can see that WAWS executed NPM. And if we open the log we can review which command WAWS executed to install the packages, along with the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I don't need to upload the whole package to Azure.

    Open the website and we can see the result, which proves that "express" was installed successfully.

    What Happened Under the Hood

    Now let's explain a bit about what ".gitignore" and "package.json" mean. The ".gitignore" file is an ignore configuration file for a git repository. All files and folders listed in ".gitignore" will be skipped by git push. In the example above I copied "node_modules" into this file in my local repository. This means: do not track or upload any files under the "node_modules" folder. So by using ".gitignore" I excluded all packages from being uploaded to Windows Azure. ".gitignore" can contain files and folders. It can also list files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to include the SQL package.

    The "package.json" file is the package definition file for a Node.js application. We can define the application name, version, description, author and other information in it, in JSON format. We can also list the dependent packages, to indicate which packages this Node.js application needs. In WAWS, the name and version are required. When a deployment happens, WAWS looks into this file, finds the dependent packages, and executes the NPM command to install them one by one. So in the demo above I copied "express" into this file so that WAWS would install it for me automatically.

    I updated the dependencies section of the "package.json" file manually, but this can be done partially automatically. If we have a valid "package.json" in our local repository, then when we install a package we can specify the "--save" parameter on the "npm install" command so that NPM will update the dependencies section for us. For example, when I wanted to install the "azure" package I would execute the command below. Note the "--save" at the end.

        npm install azure --save

    Once it finishes, my "package.json" is updated automatically, with each dependent package listed. The JSON key is the package name while the value is the version range. Below is a brief list of the version range formats. For more information about "package.json" please refer here.

        version    - must match the version exactly, e.g. "azure": "0.6.7"
        >=version  - must be equal to or greater than the version, e.g. "azure": ">=0.6.0"
        1.2.x      - the version must start with the supplied digits; any digit may be used in place of the x, e.g. "azure": "0.6.x"
        ~version   - the version must be at least as high as the range, and less than the next major revision above the range, e.g. "azure": "~0.6.7"
        *          - matches any version, e.g. "azure": "*"

    WAWS will install the proper version of each package based on what you define here. This is how the WAWS git deployment and NPM installation process flows.

    But Some Packages…

    As we know, when we specify dependencies in "package.json", WAWS will download and install them on the cloud. For most packages this works very well, but some special packages may not. If the package installation needs a special environment, it may fail. For example, the SQL Server Driver for Node.js package needs "node-gyp", Python and C++ 2010 installed on the target machine during the NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail since there is no "node-gyp", Python or C++ 2010 on the WAWS virtual machine. For example, the "server.js" file:

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        var sql = require("msnodesql");
        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
        app.get("/sql", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    The "package.json" file:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*",
                "msnodesql": "*"
            }
        }

    This failed to deploy to WAWS. From the NPM log we can see it's because "msnodesql" cannot be installed on WAWS. The solution is to ignore all packages in the ".gitignore" file except "msnodesql", and upload that package ourselves. This can be done with the content below. We first un-ignore the "node_modules" folder, then ignore all of its sub-folders (while still letting git check each sub-folder), and finally un-ignore the one sub-folder named "msnodesql", which is the SQL Server Node.js driver.

        !node_modules/

        node_modules/*
        !node_modules/msnodesql

    For more information about the syntax of ".gitignore" please refer to this thread. Now if we go to Git for Windows we will find that "msnodesql" is included in the uncommitted set while "express" is not. I also need to remove the "msnodesql" dependency from "package.json". Commit and push to WAWS, and we can see the deployment complete successfully. We can then use Windows Azure SQL Database from our Node.js application through the "msnodesql" package we uploaded.

    Summary

    In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the ".gitignore" and "package.json" files we can exclude the dependent packages from our push and let Windows Azure Web Site download and install them during deployment. Special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", can be put into the publish payload instead. The combination of Windows Azure Web Site, Node.js and NPM makes it even easier and quicker for us to develop and deploy Node.js applications to the cloud.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
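    To make the version-range formats above concrete, here is a small hypothetical "package.json" that mixes several of them; the package names and version numbers are only examples, not taken from the original demo:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "3.0.x",
                "azure": "~0.6.7",
                "underscore": ">=1.3.0"
            }
        }

    With a file like this committed, the npm install run by WAWS resolves each range at deployment time and installs the newest matching version, so a later redeployment can pick up patch releases without editing the file.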

    Read the article

  • Automated SSRS deployment with the RS utility

    - by Stacy Vicknair
    If you're familiar with SSRS development you are probably aware of the SSRS web services. The RS utility is a tool that comes with SSRS that allows scripts to be executed against the SSRS web service without needing to create an application to consume the service. One of the bigger benefits of using this format rather than writing an application is that the script can be modified by others who might be involved in the creation and addition of scripts or in the management of the SSRS environment.

    Reporting Services Scripter

    Jasper Smith from http://www.sqldbatips.com created Reporting Services Scripter to assist with the creation of a batch process to deploy an entire SSRS environment. The helper scripts below were created by modifying his generated scripts. Why not just use this tool? You certainly can. For me, the volume of scripts generated seems less maintainable than extracting some common methods from those scripts and creating the deployment in a single script file. I would, however, recommend it as a product if you do not think your environment will change drastically, or if you do not need a higher level of control over the deployment. If you just need to replicate, this tool works great.

    Executing with RS.exe

    Executing a script against rs.exe is fairly simple.

    The Script

    Half the battle is having a starting point. For the scripting I needed to do, the script below is the starter. A few notes: this script assumes integrated security, and it assumes your reports have one data source each. Both of those assumptions are just what made sense for my scenario and are definitely modifiable to accommodate your needs. If you are unsure how to change the scripts to your needs, I recommend Reporting Services Scripter to help you understand the differences. The script has three main methods: CreateFolder, CreateDataSource and CreateReport. Scripting the server deployment is just a process of recreating all of the elements that you need through calls to these methods. If there are additional elements that you need to deploy that aren't covered by these methods, again I suggest using Reporting Services Scripter to get the code you would need, convert it to a repeatable method and add it to this script!
    Public Sub Main()
        CreateFolder("/", "Data Sources")
        CreateFolder("/", "My Reports")
        CreateDataSource("/Data Sources", "myDataSource", _
            "Data Source=server\instance;Initial Catalog=myDatabase")
        CreateReport("/My Reports", _
            "MyReport", _
            "C:\myreport.rdl", _
            True, _
            "/Data Sources", _
            "myDataSource")
    End Sub

    Public Sub CreateFolder(parent As String, name As String)
        Dim fullpath As String = GetFullPath(parent, name)
        Try
            RS.CreateFolder(name, parent, GetCommonProperties())
            Console.WriteLine("Folder created: {0}", name)
        Catch e As SoapException
            If e.Detail.Item("ErrorCode").InnerText = "rsItemAlreadyExists" Then
                Console.WriteLine("Folder {0} already exists and cannot be overwritten", fullpath)
            Else
                Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
            End If
        End Try
    End Sub

    Public Sub CreateDataSource(parent As String, name As String, connectionString As String)
        Try
            RS.CreateDataSource(name, parent, False, GetDataSourceDefinition(connectionString), GetCommonProperties())
            Console.WriteLine("DataSource {0} created successfully", name)
        Catch e As SoapException
            Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
        End Try
    End Sub

    Public Sub CreateReport(parent As String, name As String, location As String, overwrite As Boolean, dataSourcePath As String, dataSourceName As String)
        Dim reportContents As Byte() = Nothing
        Dim warnings As Warning() = Nothing
        Dim fullpath As String = GetFullPath(parent, name)

        'Read RDL definition from disk
        Try
            Dim stream As FileStream = File.OpenRead(location)
            reportContents = New [Byte](stream.Length-1) {}
            stream.Read(reportContents, 0, CInt(stream.Length))
            stream.Close()

            warnings = RS.CreateReport(name, parent, overwrite, reportContents, GetCommonProperties())

            If Not (warnings Is Nothing) Then
                Dim warning As Warning
                For Each warning In warnings
                    Console.WriteLine(warning.Message)
                Next warning
            Else
                Console.WriteLine("Report: {0} published successfully with no warnings", name)
            End If

            'Set report DataSource references
            Dim dataSources(0) As DataSource

            Dim dsr0 As New DataSourceReference
            dsr0.Reference = dataSourcePath
            Dim ds0 As New DataSource
            ds0.Item = CType(dsr0, DataSourceDefinitionOrReference)
            ds0.Name = dataSourceName
            dataSources(0) = ds0

            RS.SetItemDataSources(fullpath, dataSources)

            Console.WriteLine("Report DataSources set successfully")

        Catch e As IOException
            Console.WriteLine(e.Message)
        Catch e As SoapException
            Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
        End Try
    End Sub

    Public Function GetCommonProperties() As [Property]()
        'Common CatalogItem properties
        Dim descprop As New [Property]
        descprop.Name = "Description"
        descprop.Value = ""
        Dim hiddenprop As New [Property]
        hiddenprop.Name = "Hidden"
        hiddenprop.Value = "False"

        Dim props(1) As [Property]
        props(0) = descprop
        props(1) = hiddenprop
        Return props
    End Function

    Public Function GetDataSourceDefinition(connectionString As String)
        Dim definition As New DataSourceDefinition
        definition.CredentialRetrieval = CredentialRetrievalEnum.Integrated
        definition.ConnectString = connectionString
        definition.Enabled = True
        definition.EnabledSpecified = True
        definition.Extension = "SQL"
        definition.ImpersonateUser = False
        definition.ImpersonateUserSpecified = True
        definition.Prompt = "Enter a user name and password to access the data source:"
        definition.WindowsCredentials = False
        definition.OriginalConnectStringExpressionBased = False
        definition.UseOriginalConnectString = False
        Return definition
    End Function

    Private Function GetFullPath(parent As String, name As String) As String
        If parent = "/" Then
            Return parent + name
        Else
            Return parent + "/" + name
        End If
    End Function
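    For reference, a script like the one above is typically saved with an .rss extension and passed to the utility from a command prompt; a minimal invocation might look like the following, where the file name and server URL are placeholders rather than values from this post:

        rs.exe -i DeploySsrsEnvironment.rss -s http://myserver/ReportServer

    The -i switch names the input script and -s points at the target report server; credentials default to the current Windows user, which fits the integrated-security assumption above.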

    Read the article

  • A mechanism to include site title in every page, but not in <title> element

    - by Saeed Neamati
    Each site can have a name. For example, site x. Each page also can have a name (or a title) that should appear in <title> tag in the header. However, many websites out there use the combination site name - page name to provide the value for <title> tag. I find it a little far from being semantic. On the other hand, if you only include page title in <title> tag, search engines won't find your site by its name. For example, if your site's name is Thought Results and you don't include it in page titles, then if you search for Thought Results, you won't find your site in SERPs. Thus I'm searching for a mechanism to both include site title (not page title) in every page, and also only include page title in <title> tag to get more semantic results. Is there any way to achieve this?
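    One possible compromise, sketched here as an idea rather than a guaranteed SEO recipe, is to keep only the page name in <title> and expose the site name elsewhere in every page: in a visible site-wide header and in machine-readable metadata such as the Open Graph og:site_name property. The names below are illustrative:

        <head>
            <title>How we test our widgets</title>
            <meta property="og:site_name" content="Thought Results">
        </head>
        <body>
            <header><a href="/" rel="home">Thought Results</a></header>
            ...
        </body>

    The brand name then appears in the markup of every page without padding the <title> element, though how much weight search engines give it compared to a "Site Name - Page Name" title is not something this sketch can promise.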

    Read the article

  • ASP.NET website deployment [on hold]

    - by Rei Brazilva
    I am getting my feet wet with ASP.NET and have been following the tutorials. I deployed the site to Azure and it worked great. Today I started actually designing the site, and when I published, it looks as if it doesn't read any of the files I just updated, added, and modified. It works on my localhost, but not on Azure. I thought that when you publish, everything goes up, including the new files. I don't have enough reputation to add a picture, so you'll forgive me. So, basically, how do I get my entire site uploaded? In case anyone does stop by, I was able to pull this out just recently:

    CA0058 Error Running Code Analysis CA0058 : The referenced assembly 'DotNetOpenAuth.AspNet, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246' could not be found. This assembly is required for analysis and was referenced by: C:\Users\lotusms\Desktop\LOTUS MARKETING\ASP.NET\WebsiteManager\WebsiteManager\bin\WebsiteManager.dll, C:\Users\lotusms\Desktop\LOTUS MARKETING\ASP.NET\WebsiteManager\packages\Microsoft.AspNet.WebPages.OAuth.2.0.20710.0\lib\net40\Microsoft.Web.WebPages.OAuth.dll. [Errors and Warnings] (Global)

    CA0001 Error Running Code Analysis CA0001 : The following error was encountered while reading module 'Microsoft.Web.WebPages.OAuth': Assembly reference cannot be resolved: DotNetOpenAuth.AspNet, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246. [Errors and Warnings] (Global)

    Could this have something to do with the problem?

    Read the article

  • Web Host for Small Rails-based CMS site [closed]

    - by clem
    Possible Duplicate: How to find web hosting that meets my requirements? I am building a site for someone that uses a Rails-based content management system that I built myself. All of the Rails deployment experience I have so far has been over small intranets. I'm looking at web hosts like rackspace, because it seems like they're well-suited for Rails deployment. However, for a site that's not going to have more than a couple of hundred hits a month (if even that), I'm not sure it's necessary. I've also used Dreamhost's Phusion Passenger deployment for small projects before, but it seems barely functional and not well-supported, and I've also used Heroku for deployment, but I think a regular web host may do a little bit better, as they'll need things like Google Apps for Gmail set up. If anyone could provide some guidance on this, I'd greatly appreciate it. I get confused when I see things on rackspace like "1.5c/hour", because I'm not sure how that gets computed.

    Read the article

  • WiX, MSDeploy and an appealing configuration/deployment paradigm

    - by alexhildyard
    I do a lot of application and server configuration; I've done this for many years and have tended to view the complexity of this strictly in terms of the complexity of the ultimate configuration to be deployed. For example, specific APIs aside, I would tend to regard installing a server certificate as a more complex activity than, say, copying a file or adding a Registry entry.

    My prejudice revolved around the idea of a sequential deployment script that not only had the explicit prescription to apply a specific server configuration, but also made the implicit presumption that the server in question was in a good known state. Scripts like this fail for hundreds of reasons -- the Default Website didn't exist; the application had already been deployed; the application had already been partially deployed and failed to roll back fully, and so on. And so the problem is that the more complex the configuration activity, the more scope for error in any individual part of that activity, and therefore the greater the chance the server in question will not end up at exactly the desired configuration level.

    Recently I was introduced to a completely different mindset, which, for want of a better turn of phrase, I will call the "make it so" mindset. It's extremely simple both to explain and to implement. In place of the head-down, imperative script you used to use, you substitute a set of checks -- much like exception handlers -- around each configuration activity, starting with a check of the current system state. Thus the configuration logic becomes: "IF these services aren't started then start them, and IF XYZ website doesn't exist then create it, and IF these shares don't exist then create them, and IF these shares aren't permissioned in some particular way, then permission them so." This works. Really well, in my experience.

    Scenario 1: You want to get a system into a good known state; it's already in a good known state; you quickly realise there is nothing to do.

    Scenario 2: You want to get the system into a good known state; your script is flawed or the system is bust; it cannot be put into that state. You know exactly where (at least part of) the problem is and why.

    Scenario 3: You want to get the system into a good known state; people are fiddling around with the system just now. That's fine. You do what you can, and later you come back and try it again.

    Scenario 4: No one wants to deploy anything; they want you to prove that the previous deployment was successful. So you re-run the deployment script with the "-WhatIf" flag. It reports that there was nothing to change. There's your proof.

    I mentioned two technologies in the title -- MSI and MSDeploy. I am thinking specifically of the conversation that took place here. Having worked with both technologies, I think Rob Mensching's response is appropriately nuanced, and in essence the difference is this: sometimes your target is either to achieve a specific new server state, or to roll back to a known good one. Then again, your target may be to configure what you can, and to understand what you can't. Implicitly MSDeploy's "rollback" is simply to redeploy the previous version, whereas a well-crafted MSI will actively put your system into that state without further intervention. Either way, if all goes well it will leave you with a system in one of two states, whereas MSDeploy could leave your system in one of many states.
    The key is that MSDeploy and MSI are complementary technologies; which suits you best depends as much on operational guidance as on your configuration remit.

    What I wanted to say is that I have always been for atomic, transaction-based configuration, but having worked with the "make it so" paradigm, I have been favourably impressed by the actual results. I'm tempted to put a more technical post up on this in due course.
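    To make the "make it so" idea concrete, here is a minimal PowerShell sketch of one idempotent check-then-configure step. The service and site names are hypothetical and it assumes the WebAdministration module is available; it illustrates the pattern rather than reproducing any real deployment script:

        Import-Module WebAdministration

        # Start the dependent service only if it is not already running
        $svc = Get-Service -Name 'W3SVC'
        if ($svc.Status -ne 'Running') {
            Start-Service -Name 'W3SVC'
        }

        # Create the website only if it does not already exist
        if (-not (Get-Website -Name 'XYZ')) {
            New-Website -Name 'XYZ' -Port 8080 -PhysicalPath 'C:\inetpub\xyz'
        }

    Wrapping steps like these in functions that declare SupportsShouldProcess is what makes the "-WhatIf" dry run described in Scenario 4 possible.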

    Read the article

  • Subdomain takes the position of main site in Google search result

    - by user3578586
    We have one domain and one sub-domain. Until last week both of them appeared on the first page of Google search results for a very important keyword. Unfortunately Google has dropped our main domain from the search results, even though our main site had been on the first page for 5 years! About one year ago we built this sub-domain. It simply redirected to one of the pages of the main domain. To solve the problem we uploaded an independent site to the sub-domain, because we guessed that Google thought it was the main page of our site. But the problem was not solved. What should we do? Our main site offers our main services and we want it to be on the first page. Shut down the sub-domain? Redirect it to the main site? Put a link to our main site on the sub-domain? (About one year ago we put a link to this sub-domain on our main site. Google indexed it and continuously brings it to the top.) Change something in robots.txt?

    Read the article

  • SQL 2008 R2 3rd Party Peer-to-Peer Replication, Global Site Distribution

    - by gombala
    We are looking at hosting 3 globally distributed SQL Server installations at different data centers. The intent is that Site A will serve web traffic and data for a specific region, and the same goes for Sites B and C. In the case that the Site A data center goes down, loses connectivity, etc., the users of Site A will fail over to Site B or C (depending on which is up). Also, if a user from Site A travels to Site C they should be able to access their data as it was on Site A. My question is: what SQL replication technology (SQL Replication or 3rd party) can support this scenario? We are using SQL 2008 R2 Enterprise at each site, and each site runs on top of VMware with a NetApp filer. Would something like distributed caching help in this scenario as well? We have looked at and tested Peer-to-Peer replication but encountered issues with conflicts during our testing. I imagine there are other global data centers that have encountered and solved this issue.

    Read the article

  • Webcast: Best Practices for Speeding Virtual Infrastructure Deployment with Oracle VM

    - by Honglin Su
    We announced Oracle VM Blade Cluster Reference Configuration last month, see the blog. The new Oracle VM blade cluster reference configuration can help reduce the time to deploy virtual infrastructure by up to 98 percent when compared to multi-vendor configurations. Customers and partners have shown lots of interests. Join Oracle's experts to learn the best practices for speeding virtual infrastructure deployment with Oracle VM, register the webcast (1/25/2011) here.   Virtualization has already been widely accepted as a means to increase IT flexibility and help IT services align better with changing business needs. The flexibility of a virtualized IT infrastructure enables new applications to be rapidly deployed, capacity to be easily scaled, and IT resources to be quickly redirected. The net result is that IT can bring greater value to the business, making virtualization an obvious win from a business perspective. However, building a virtualized infrastructure typically requires assembling and integrating multiple components (e.g. servers, storage, network, virtualization, and operating systems). This infrastructure must be deployed and tested before applications can even be installed. It can take weeks or months to plan, architect, configure, troubleshoot, and deploy a virtualized infrastructure. The process is not only time-consuming, but also error-prone, making it hard to achieve a timely and profitable return on investment.  Oracle is the only vendor that can offer a fully integrated virtualization infrastructure with all of the necessary hardware and software components. The Oracle VM blade cluster reference configuration is a single-vendor solution that addresses every layer of the virtualization stack with Oracle hardware and software components, see the figure below. It enables quick and easy deployment of the virtualized infrastructure using components that have been tested together and are all supported together by Oracle. To learn more about Oracle's virtualization offerings, visit http://oracle.com/virtualization.

    Read the article

  • SQLAuthority News – Deployment guide for Microsoft SharePoint Foundation 2010

    - by pinaldave
    SharePoint and SQL Server go together hand in hand. SharePoint installation is very interesting: at various organizations the installation is very different and has various needs. SQL Server installation with SharePoint is equally important, and I have often seen it being neglected. Microsoft has published the Deployment Guide for SharePoint Foundation, which talks about various database aspects as well. For an optimal SharePoint installation, the required version of SQL Server, including service packs and cumulative updates, must be installed on the database server. The installation must include any additional features, such as SQL Analysis Services, and the appropriate SharePoint Foundation logins have to be added and configured. The database server must be hardened and, if required, databases must be created by the DBA. For more information, see: Hardware and software requirements (SharePoint Foundation 2010); Harden SQL Server for SharePoint environments (SharePoint Foundation 2010); Deploy by using DBA-created databases (SharePoint Foundation 2010); Deployment guide for Microsoft SharePoint Foundation 2010. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SharePoint

    Read the article

  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all, I'm collecting some thoughts about creating a multi-site CMS. In my opinion there are two major approaches: all data is stored in one database, giving me the advantage of a single point of updates; or separate databases, so each client has its own database, giving me the advantage of being able to measure bandwidth per client. Option 1 has the disadvantage of making bandwidth hard to measure, while option 2 has the disadvantage of losing the single-point-of-update structure. Are there any generic approaches for creating a sort of update system, so my clients can download a small package (maybe a zip with a conf file to tell the update script where to put all the files and how to extend the database)? Do you guys have some thoughts about the best solution for a situation like this? I have my own web server with full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you and I surely appreciate any effort you make to help me further! Greets from Holland, Ben Fransen
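    As a thought-starter for the update-package idea above, here is one hypothetical shape such a package's conf file could take; the format and field names are invented for illustration, not an existing standard:

        {
            "package": "cms-core",
            "version": "1.4.0",
            "requires_version": "1.3.x",
            "files": [
                { "source": "lib/render.php", "target": "system/lib/render.php" }
            ],
            "migrations": [
                "sql/001_add_page_cache_table.sql"
            ]
        }

    An update script shipped with each client install could check its current version against "requires_version", copy the listed files into place, and run the SQL migrations in order; the same zip then works whether the clients share one database or each have their own.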

    Read the article

  • Financial institutions build predictive models using Oracle R Enterprise to speed model deployment

    - by Mark Hornick
    See the Oracle press release, Financial Institutions Leverage Metadata Driven Modeling Capability Built on the Oracle R Enterprise Platform to Accelerate Model Deployment and Streamline Governance for a description where a "unified environment for analytics data management and model lifecycle management brings the power and flexibility of the open source R statistical platform, delivered via the in-database Oracle R Enterprise engine to support open standards compliance." Through its integration with Oracle R Enterprise, Oracle Financial Services Analytical Applications provides "productivity, management, and governance benefits to financial institutions, including the ability to: Centrally manage and control models in a single, enterprise model repository, allowing for consistent management and application of security and IT governance policies across enterprise assets Reuse models and rapidly integrate with applications by exposing models as services Accelerate development with seeded models and common modeling and statistical techniques available out-of-the-box Cut risk and speed model deployment by testing and tuning models with production data while working within a safe sandbox Support compliance with regulatory requirements by carrying out comprehensive stress testing, which captures the effects of adverse risk events that are not estimated by standard statistical and business models. This approach supplements the modeling process and supports compliance with the Pillar I and the Internal Capital Adequacy Assessment Process stress testing requirements of the Basel II Accord Improve performance by deploying and running models co-resident with data. Oracle R Enterprise engines run in database, virtually eliminating the need to move data to and from client machines, thereby reducing latency and improving security"

    Read the article

  • Speed up ADF Mobile Deployment to Android with Keystore

    - by Shay Shmeltzer
    As you might have noticed from my latest ADF Mobile entries, I'm doing most of my ADF Mobile development on a Windows machine and testing on an Android device. Unfortunately the Android/Windows experience is not as fast as the iOS/Mac one. However, there is one thing I learned today that can make this a bit less painful in terms of the speed to deploy and test your application - and this is to use the "Release" mode when deploying your application instead of the "Debug" mode. To do this you'll first need to define a keystore, but as Joe from our Mobile team showed me today, this is quite easy. Here are the steps. Open a command line in your JDK bin directory (I just used the JDK that comes with the JDeveloper install) and issue the following command:

        keytool -genkey -v -keystore <Keystore Name>.keystore -alias <Alias Name> -keyalg RSA -keysize 2048 -validity 10000

    Both the keystore name and the alias are strings that you decide on. The keytool utility will then prompt you with various questions that you'll need to answer. Once this is done, the next step is to configure your JDeveloper Preferences -> ADF Mobile to add this keystore under the Release tab. Then, for your application-specific deployment profile, switch the build mode from debug to release. The end result is a much smaller mobile application (for example, from 60MB to 21MB) and a much faster deployment cycle (for me it is about twice as fast as before).

    Read the article

  • Feature (de)activation error “The web or site was not found” and Application Pool

    - by panjkov
    I am using the Microsoft IW Demo VM (2010-10A) for my experiments related to SharePoint, in all cases when I don't have time (read: when I'm lazy) to create a complete SharePoint dev environment.

    Problem: This particular time I was playing around with site-scoped features and a newly created site collection. So here is my workflow: create a feature with a feature receiver; deploy to the site collection from Visual Studio using the "No Activation" deployment profile; activate the feature from the "Site Collection Features" interface...(read more)

    Read the article

  • obiee 10g teradata Solaris deployment

    - by user554629
    I have 3-4 years worth of notes on proper Teradata deployment across multiple operating systems.   The topic that is too large to cover succinctly in a blog entry.   I'm trying something new:  document a specific situation, consolidate the facts, document diagnostic procedures and then clone the structure to cover other obiee deployments (11g and other operating systems). Until the icon below is removed, this blog entry may be revised frequently.  No construction between June 6th through June 25th. Getting started obiee 10g certification:  pg 24-25 Teradata V2R5.1.x - V2R6.2, Client 13.10, certified 10.1.3.4.1obiee 10g documentation: Deployment Guide, Server Administration, Install/Config Guideobiee overview: teradata connectivity downloads: ( requires registration )solaris odbc drivers: sparc 13.10:  Choose 13.10.00.04  ( ReadMe ) sparc 14.00: probably would work, but not certified by Oracle on 10g I assume you have obiee 10.1.3.4.1 installed; 10.1.3.4.2 would be a better choice. Teradata odbc install requires root for Solaris pkgadd Only 1 version of Teradata odbc can be installed.symbolic links to the current version are created in /usr/lib at install obiee implementation background database access has two types of implementation:  native and odbcnative drivers use DB vendor client interfaces for accessodbc drivers are provided by the DB vendor for DB accessTeradata is an odbc interface Database. odbc drivers require an ODBC Driver Managerobiee uses Merant Data Direct driver manager obiee servers communicate with one another using odbc.The internal odbc driver is implemented by the obiee team and requires Merant Driver Manager. Teradata supplies a Driver Manager, which is built by Merant, but should not be used in obiee. The nqsserver shared library deployment looks like this  OBIEE Server<->DataDirect Manager<->Teradata Driver<->Teradata Database nqsserver startup $ cd $BI/setup$ . ./sa-init64.sh$ run-sa.sh autorestart64 The following files are referenced from setup:  .variant.sh  user.sh  NQSConfig.INI  DBFeatures.INI  $ODBCINI ( odbc.ini )  sqlnet.ora How does nqsserver connect to Teradata? A teradata DSN is created in the RPD. ( TD71 )setup/odbc.ini contains: [ODBC Data Sources] TD71=tdata.so[TD71]Driver=/opt/tdodbc/odbc/drivers/tdata.soDescription=Teradata V7.1.0DBCName=###.##.##.### LastUser=Username=northwindPassword=northwindDatabase=DefaultDatabase=northwind setup/user.sh contains LIBPATH\=/opt/tdicu/lib_64\:/usr/odbc/lib\:/usr/odbc/drivers\:/usr/lpp/tdodbc/odbc/drivers\:$LIBPATHexport LIBPATH   setup/.variant.sh contains if [ "$ANA_SERVER_64" = "1" ]; then  ANA_BIN_DIR=${SAROOTDIR}/server/Bin64  ANA_WEB_DIR=${SAROOTDIR}/web/bin64  ANA_ODBC_DIR=${SAROOTDIR}/odbc/lib64         setup/sa-run.sh  contains . ${ANA_INSTALL_DIR}/setup/.variant.sh. ${ANA_INSTALL_DIR}/setup/user.sh logfile="${SAROOTDIR}/server/Log/nqsserver.out.log"${ANA_BIN_DIR}/nqsserver -quiet >> ${logfile} 2>&1 &   nqsserver is running: nqsserver produces $BI/server/nqsserver.logAt startup, the native database drivers connect and record DB versions.tdata.so is not loaded until a Teradata DB connection is attempted.    Teradata odbc client installation Accept all the defaults for pkgadd.   Install in /opt. $ mkdir odbc$ cd odbc$ gzip -dc ../tdodbc__solaris_sparc.13.10.00.04.tar.gz | tar -xf - $ sudo su# pkgadd -d . TeraGSS# pkgadd -d . tdicu1310# pkgadd -d . tdodbc1310   Directory Notes: /opt/teradata/client/13.10/odbc_64/lib/tdata.soThe 64-bit obiee library loaded by nqsserver. 
/opt/teradata/client/13.10/odbc_64/lib is not needed in LD_LIBRARY_PATH /opt/teradata/client/13.10/tdicu/lib64is needed in LD_LIBRARY_PATH /usr/odbc should not be referenced;  it is a link to 32-bit libraries LD_LIBRARY_PATH_64 should not be used.     Useful bash functions and aliases export SAROOTDIR=/export/home/dw_adm/OracleBIexport TERA_HOME=/opt/teradata/client/13.10 export ORACLE_HOME=/export/home/oracle/product/10.2.0/clientexport ODBCINI=$SAROOTDIR/setup/odbc.iniexport TD_ICU_DATA=$TERA_HOME/tdicu/lib64alias cds="alias | grep '^alias cd' | sed 's/^alias //' | sort"alias cdtd="cd $TERA_HOME; ls" alias cdtdodbc="cd $TERA_HOME/odbc_64; ls -l"alias cdtdicu="cd $TERA_HOME/tdicu/lib64; ls -l"alias cdbi="cd $SAROOTDIR; ls"alias cdbiodbc="cd $SAROOTDIR/odbc; ls -l"alias cdsetup="cd $SAROOTDIR/setup; ls -ltr"alias cdsvr="cd $SAROOTDIR/server; ls"alias cdrep="cd $SAROOTDIR/server/Repository; ls -ltr"alias cdsvrcfg="cd $SAROOTDIR/server/Config; ls -ltr"alias cdsvrlog="cd $SAROOTDIR/server/Log; ls -ltr"alias cdweb="cd $SAROOTDIR/web; ls"alias cdwebconfig="cd $SAROOTDIR/web/config; ls -ltr"alias cdoci="cd $ORACLE_HOME; ls"pkgfiles() { pkgchk -l $1 | awk  '/^Pathname/ {print $2}'; }pkgfind()  { pkginfo | egrep -i $1 ; } Examples: $ pkgfind td$ pkgfiles tdodbc1310 | grep 64$ cds$ cdtdodbc$ cdsetup$ cdsvrlog$ cdweblog

    Read the article

  • How to create sitemap for my shopping site?

    - by John Sanjay
    I have one shopping site related to home goods and I need to create and submit the sitemap of my site in Google Webmaster Tools. I know there are several online tools to generate an XML sitemap, but someone told me that shopping sites' sitemaps are different from other sites', which means we have to submit sitemaps in two formats: one is a static page sitemap and the other is a dynamic product page sitemap. Is that true? If so, how do I create sitemaps in these two formats?
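    Whichever way you split the content, one common way to submit several sitemaps at once is a single sitemap index file that points to each of them; the URLs below are placeholders for illustration:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
            <sitemap>
                <loc>http://www.example.com/sitemap-static.xml</loc>
            </sitemap>
            <sitemap>
                <loc>http://www.example.com/sitemap-products.xml</loc>
            </sitemap>
        </sitemapindex>

    The static sitemap can be generated once by hand or with an online tool, while the product sitemap is typically regenerated from the product catalog whenever items change, and the one index file is what you submit in Google Webmaster Tools.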

    Read the article

  • SEO to ensure visibility for a narrow, non-competitive, non-commercial site

    - by hen3ry
    I'm webmaster of a non-commercial site in English. A non-native-English speaker asked me why our site doesn't produce hits in Google searches she conducts for relevant keywords in her native language. I asked her for a list of keywords in her native language, and I naively tried inserting those into the META info in the page headers and waited a couple of weeks. No help. A little searching informed me that Google doesn't use the META info, and has not done so for a very long time. D'oh! To be entirely concrete, suppose the StackExchange folks want Russian speakers to find this site, Pro Webmasters. The direct translation in Russian of "webmaster" --thanks, Google Translator-- is: "вебмастер". (Not sure this will render properly, but that's not essential to my question.) Assuming Pro Webmasters has a common template for all pages it generates, inserting "вебмастер" into the Keywords META for that template won't help, it seems. What could StackExchange do to make this site visible to users searching with the Russian keyword "вебмастер"? Pretty much all the advice I've seen boils down to this, if I understand correctly: use the desired search term often (but not too often) in the site content, and the problem will be solved. That's great, but I don't think sprinkling "вебмастер" visibly all over Pro Webmasters is going to fly. Just for completeness, crawlers must be long immune to invisible-to-visitors schemes, e.g., formatting "вебмастер" in a tiny text size in a color the same as an existing background, e.g. white-over-white, or putting that text inside a div styled: ' style="visibility: hidden" '. Probably some other equivalents. I can only think of one slightly effective method, along these lines: place an unobtrusive link on the common template to a page titled "for international users", and on that page list desired synonyms for "webmaster" in various languages. A test case --admittedly, just one-- using my site implies that a Google search for "international users" вебмастер will produce a hit for this page, and thus make the site minimally visible, despite the fact that the page will almost never be visited. At the moment, anyway. Note: All the SEO discussions I have found so far are about competitive and --almost certainly-- commercial sites. To repeat: my site is non-commercial, and it is about an obscure, narrow topic that is of interest to only a small number of people worldwide. This isn't about clawing our way to the top of competitive rankings, just making this content minimally visible to interested non-native-English speakers. Ideas? TIA

    Read the article

  • SEO effects of intermix of WP blog, custom PHP site and FB app game

    - by melbournetechlover
    We're a melbourne tech company in the process of building a custom site in PHP. We plan to launch a "pre-launch" page which is also custom coded (CSS3 on twitter bootstrap framework + HTML5 front end and PHP back end). On that site will be a link to a blog - the idea behind this is to build up ranking for a variety of relevant keywords prior to the full site going live (given the majority of the site is a member only community anyway so the blog is really the main way we'll be able to execute on-site SEO. Ideally, we would like to install wordpress in a subdirectory on our servers and just customise the header to look the same as the landing page of the website. But some questions and concerns... Is there any detrimental effect on SEO efforts in having two separate systems (one custom PHP, the other an installation of wordpress) to manage the blog vs the rest of the site? Are there any benefits or detriments to installing on a sub domain such as blog.sitename.com vs. sitename.com/blog. My preference would be sitename.com/blog as it feels neater - but open to suggestions based on knowledge of Google preferences. Separately, we are building a Facebook app which is under another site name. Again because we are launching this app first, from an SEO perspective, would it actually be better to run it from a sub domain on the main site - e.g. gamename.mainsitename.com instead of on app.gamename.com? Currently we have it on app.gamename.com, but if there are SEO benefits to moving it to the other domain and server then we'll do it. Basically we don't want to have our SEO efforts divided - will Google algorithms prefer two sites heavily referring traffic, or is it better to focus our efforts on one. I guess that's the crux of the issue. But the other one is - does Google care about traffic accessing a page built for the Facebook app iFrame - does that count toward rankings? Sorry I hope these questions aren't too complex - but we're in the tech world every day and still can't seem to find a good answer to these ones...hence I'm taking to the forums!! Free beer for whoever can give me a solid answer!

    Read the article

  • Cloud Deployment Models

    - by B R Clouse
    As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective. This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes.

    Most cloud deployments today are either private or public. It is also possible to connect a private cloud and a public cloud to form a hybrid cloud. A private cloud is for the exclusive use of an organization. Enterprises, universities and government agencies throughout the world are using private clouds. Some have designed, built and now manage their private clouds. Others use a private cloud that was built by and is now managed by a provider, hosted either onsite or at the provider's datacenter. Because private clouds are for exclusive use, they are usually the option chosen by organizations with concerns about data security and guaranteed performance.

    Public clouds are open to anyone with an Internet connection. Because they require no capital investment from their users, they are particularly attractive to companies with limited resources in less regulated environments and for temporary workloads such as development and test environments. Public clouds offer a range of products, from end-user software packages to more basic services such as databases or operating environments. Public clouds may also offer cloud services such as a disaster recovery for a private cloud, or the ability to "cloudburst" a temporary workload spike from a private cloud to a public cloud. These are examples of a hybrid cloud. These are most feasible when the private and public clouds are built with similar technologies. Usually people think of a public cloud in terms of a user role, e.g., "Which public cloud should I consider using?" But someone needs to own and manage that public cloud. The company who owns and operates a public cloud is known as a public cloud provider. Oracle Database Cloud Service, Amazon RDS, database.com and Savvis Symphony Database are examples of public cloud database services.

    When evaluating deployment models, be aware that you can use any or all of the available options. Some workloads may be best-suited for a private cloud, some for a public or hybrid cloud. And you might deploy multiple private clouds in your organization. If you are going to combine multiple clouds, then you want to make sure that each cloud is based on a consistent technology portfolio and architecture. This simplifies management and gives you the greatest flexibility in moving resources and workloads among your different clouds. Oracle's portfolio of cloud products and services enables both deployment models. Oracle can manage either model.
    Universities, government agencies and companies in all types of business everywhere in the world are using clouds built with the Oracle portfolio. By employing a consistent portfolio, these customers are able to run all of their workloads -- from test and development to the most mission-critical -- in a consistent manner: One Enterprise Cloud, powered by Oracle.

    Read the article

  • Affiliate programs offering redirects back to my site after successful conversion

    - by Andy
    I am building a website where members are rewarded for actions completed. For some of these actions within my site (such as uploading a photo), I would like to offer my members the ability to participate in affiliate sites as follows: Once they complete the required action, they are rewarded on my site. Ideally, I would pass in the redirect url when sending the member to their site. Are there affiliate programs that offer to redirect back to your site?

    Read the article
