Search Results

Search found 45175 results on 1807 pages for 'web deployment'.


  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently”, I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.

    This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database names, which are then passed to sqlcompare.exe as target parameters. In the example the source is a scripts folder – a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

      -- Create a DeploymentTargets database and a Servers table
      CREATE DATABASE DeploymentTargets
      GO
      USE DeploymentTargets
      GO
      CREATE TABLE [dbo].[Servers](
          [id] [int] IDENTITY(1,1) NOT NULL,
          [serverName] [nvarchar](50) NULL,
          [environment] [nvarchar](50) NULL,
          [databaseName] [nvarchar](50) NULL,
          CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
      )
      GO
      -- Now insert your target server and database details
      INSERT INTO dbo.Servers ( serverName, environment, databaseName )
      VALUES ( N'myserverinstance', N'myenvironment1', N'mydb1' )
      INSERT INTO dbo.Servers ( serverName, environment, databaseName )
      VALUES ( N'myserverinstance', N'myenvironment2', N'mydb2' )

    Here’s the PowerShell script you can adapt for yourself as well.

      # We're holding the server names and database names that we want to deploy to in a database table.
      # We need to connect to that server to read these details
      $serverName = ""
      $databaseName = "DeploymentTargets"
      $authentication = "Integrated Security=SSPI"
      #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

      # Path to the scripts folder we want to deploy to the databases
      $scriptsPath = "SimpleTalk"

      # Path to SQLCompare.exe
      $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

      # Create SQL connection string, and connection
      $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
      $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

      # Create a Dataset to hold the DataTable
      $dataSet = new-object "System.Data.DataSet" "ServerList"

      # Create a query
      $query = "SET NOCOUNT ON;"
      $query += "SELECT serverName, environment, databaseName "
      $query += "FROM dbo.Servers; "

      # Create a DataAdapter to populate the DataSet with the results
      $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
      $dataAdapter.Fill($dataSet) | Out-Null

      # Close the connection
      $ServerConnection.Close()

      # Populate the DataTable
      $dataTable = new-object "System.Data.DataTable" "Servers"
      $dataTable = $dataSet.Tables[0]

      # For every row in the DataTable
      $dataTable | FOREACH-OBJECT {
          "Server Name: $($_.serverName)"
          "Database Name: $($_.databaseName)"
          "Environment: $($_.environment)"

          # Compare the scripts folder to the database and synchronize the database to match
          # NB. Have set SQL Compare to abort on medium level warnings.
          $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync") # Commented out the 'sync' parameter for safety
          write-host $arguments
          & $SQLComparePath $arguments
          "Exit Code: $LASTEXITCODE"

          # Some interesting variations.
          # Check that every database matches a folder.
          # For example this might be a pre-deployment step to validate everything is at the same baseline state,
          # or a post-deployment step to validate the deployment worked.
          # An exit code of 0 means the databases are identical.
          #
          # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

          # Generate a report of the difference between the folder and each database, and a SQL update script for each database.
          # For example use this after the above to generate upgrade scripts for each database.
          # Examine the warnings and the HTML diff report to understand how the script will change objects.
          #
          # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")
      }

    It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run it en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings – see the second commented-out example in the PowerShell above.

    There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is output instead of executing the deployment script – see the first commented-out example above.

    Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment. You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.

    You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free. (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions.)
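
    To make the drift-check idea concrete, here is a rough sketch of how the /Assertidentical exit code could gate a deployment inside the same per-server loop – illustrative only, reusing $SQLComparePath, $scriptsPath and the loop's pipeline object from the script above, with exit code 79 signalling a difference as described:

      # Check for drift before deploying; assumes the variables from the script above.
      $driftArgs = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")
      & $SQLComparePath $driftArgs

      if ($LASTEXITCODE -eq 0) {
          # Schemas match the expected state: safe to run the pre-generated deployment script here.
          "No drift on $($_.serverName) / $($_.databaseName) - OK to deploy"
      }
      elseif ($LASTEXITCODE -eq 79) {
          # Differences found: write a drift report instead of deploying.
          $reportArgs = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/report:drift_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive")
          & $SQLComparePath $reportArgs
          "Drift detected on $($_.serverName) / $($_.databaseName) - deployment skipped"
      }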

    Read the article

  • OAuth Consumer request for token from ServiceProvider returns InternalServerError

    - by chridam
    I'm playing around with DevDefined.OAuth – an OAuth consumer and provider implementation for .NET (http://code.google.com/p/devdefined-tools/wiki/OAuth) – and on launching the ExampleConsumerSite project after configuring the service endpoints on my IIS 7 web server, I'm receiving the following error:

      Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

      Exception Details: System.Exception: Request for uri: http://localhost%3A8080/RequestToken.aspx?oauth%5Fcallback=oob&oauth%5Fnonce=94efde0b-dd45-4cee-8253-7496cef0b877&oauth%5Fconsumer%5Fkey=key&oauth%5Fsignature%5Fmethod=PLAINTEXT&oauth%5Ftimestamp=1252512419&oauth%5Fversion=1.0&oauth%5Ftoken=&oauth%5Fsignature=secret%2526 failed. status code: InternalServerError

      An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.

      Source Error:
      [HttpException]: 'RequestToken' is not allowed here because it does not extend class 'System.Web.UI.Page'.
         at System.Web.UI.TemplateParser.ProcessError(String message)
         at System.Web.UI.TemplateParser.ProcessInheritsAttribute(String baseTypeName, String codeFileBaseTypeName, String src, Assembly assembly)
         at System.Web.UI.TemplateParser.PostProcessMainDirectiveAttributes(IDictionary parseData)
      [HttpParseException]: 'RequestToken' is not allowed here because it does not extend class 'System.Web.UI.Page'.
         at System.Web.UI.TemplateParser.ProcessException(Exception ex)
         at System.Web.UI.TemplateParser.ParseStringInternal(String text, Encoding fileEncoding)
         at System.Web.UI.TemplateParser.ParseString(String text, VirtualPath virtualPath, Encoding fileEncoding)
      [HttpParseException]: 'RequestToken' is not allowed here because it does not extend class 'System.Web.UI.Page'.
         at System.Web.UI.TemplateParser.ParseString(String text, VirtualPath virtualPath, Encoding fileEncoding)
         at System.Web.UI.TemplateParser.ParseReader(StreamReader reader, VirtualPath virtualPath)
         at System.Web.UI.TemplateParser.ParseFile(String physicalPath, VirtualPath virtualPath)
         at System.Web.UI.TemplateParser.ParseInternal()
         at System.Web.UI.TemplateParser.Parse()
         at System.Web.UI.TemplateParser.Parse(ICollection referencedAssemblies, VirtualPath virtualPath)
         at System.Web.Compilation.BaseTemplateBuildProvider.get_CodeCompilerType()
         at System.Web.Compilation.BuildProvider.GetCompilerTypeFromBuildProvider(BuildProvider buildProvider)
         at System.Web.Compilation.BuildProvidersCompiler.ProcessBuildProviders()
         at System.Web.Compilation.BuildProvidersCompiler.PerformBuild()
         at System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath)
         at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile)
         at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile)
         at System.Web.Compilation.BuildManager.GetVirtualPathObjectFactory(VirtualPath virtualPath, HttpContext context, Boolean allowCrossApp, Boolean noAssert)
         at System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath(VirtualPath virtualPath, Type requiredBaseType, HttpContext context, Boolean allowCrossApp, Boolean noAssert)
         at System.Web.UI.PageHandlerFactory.GetHandlerHelper(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath)
         at System.Web.UI.PageHandlerFactory.System.Web.IHttpHandlerFactory2.GetHandler(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath)
         at System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig)
         at System.Web.HttpApplication.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
         at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    I've noticed the oauth_token GET parameter is empty. On tracing this, the error source is line 12 of the Default.aspx.cs page – IToken requestToken = session.GetRequestToken(); – in this handler:

      protected void oauthRequest_Click(object sender, EventArgs e)
      {
          OAuthSession session = CreateSession();
          IToken requestToken = session.GetRequestToken();

          if (string.IsNullOrEmpty(requestToken.Token))
          {
              throw new Exception("The request token was null or empty");
          }

          Session[requestToken.Token] = requestToken;

          string callBackUrl = "http://localhost:" + HttpContext.Current.Request.Url.Port + "/Callback.aspx";
          string authorizationUrl = session.GetUserAuthorizationUrlForToken(requestToken, callBackUrl);

          Response.Redirect(authorizationUrl, true);
      }

    I'm not sure if this has to do with how the service endpoints are configured; I'm running the consumer project from VS2008 and hosting the service on IIS. Please advise.

    Read the article

  • msbuild target package not found

    - by Andrew Davey
    I want to package my VS2010 web application project ready for deployment with msdeploy. On my development machine I can do this using:

      MSBuild.exe "C:\path\to\WebApp.csproj" /target:package

    But on my build server I get this error:

      error MSB4057: The target "package" does not exist in the project.

    What am I missing on the build server?
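
    From what I've gathered, the package target comes from the web publishing targets that ship with Visual Studio 2010 (imported via Microsoft.WebApplication.targets), so presumably the build server needs those targets present before the command can work. For reference, the fuller shape of the invocation I'm aiming for – the paths and property values here are illustrative:

      # Package the web application for msdeploy (paths and properties illustrative).
      & "C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" `
          "C:\path\to\WebApp.csproj" `
          /target:Package `
          /p:Configuration=Release `
          /p:PackageLocation=C:\drops\WebApp.zip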

    Read the article

  • "Official" Way to deploy Assemblies into the GAC?

    - by Michael Stum
    I just wonder – if I need to deploy an assembly into the GAC, what is the "official" way of doing it? Currently we either manually drag and drop into the C:\Windows\assembly folder or we use gacutil.exe. The first way is obviously not a good one (it's a manual process, after all), and gacutil is part of the SDK and not available by default on production servers. Are there any Microsoft deployment guidelines?
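
    For context, the two scripted routes I know of (paths and assembly names are illustrative; my understanding is that Microsoft's guidance for end-user machines is to install GAC assemblies via a Windows Installer package rather than by hand):

      # Development machine with the SDK installed: install/uninstall with gacutil.
      & "C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\gacutil.exe" /i "C:\build\MyCompany.Utilities.dll"
      & "C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\gacutil.exe" /u "MyCompany.Utilities"

      # Without the SDK, System.EnterpriseServices.Internal.Publish can install to the GAC,
      # though it is intended for use by setup programs rather than ad hoc administration.
      Add-Type -AssemblyName System.EnterpriseServices
      $publish = New-Object System.EnterpriseServices.Internal.Publish
      $publish.GacInstall("C:\build\MyCompany.Utilities.dll")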

    Read the article

  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following this resource http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site:

      while read ref
      do
          #echo "Ref updated:"
          #echo $ref -- would print something like example at top of file
          result=`echo $ref | gawk -F' ' '{ print $3 }'`
          if [ $result != "" ]; then
              echo "Branch found: "
              echo $result
              case $result in
                  refs/heads/master )
                      git --work-tree=c:/temp/BLAH checkout -f master
                      echo "Updated master"
                      ;;
                  refs/heads/testbranch )
                      git --work-tree=c:/temp/BLAH2 checkout -f testbranch
                      echo "Updated testbranch"
                      ;;
                  * )
                      echo "No update known for $result"
                      ;;
              esac
          fi
      done
      echo "Post-receive updates complete"

    However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the currently checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions:

    - IS this safe?
    - Would a better approach be to have my base repository be the test site repository (with corresponding working directory), and then have that repository push changes to a new live site repository, which has a corresponding working directory to the live site base? This would also allow me to move production to a different server and keep the deployment chain intact.
    - Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites?
    - As an additional note in light of Vi's answer, is there a good way to do this that would handle deletions without mucking with the file system much?

    Thank you, -Walt

    PS – The script I came up with for the multiple repos (and am using unless I hear better) is as follows:

      sitename=`basename \`pwd\``

      while read ref
      do
          #echo "Ref updated:"
          #echo $ref -- would print something like example at top of file
          result=`echo $ref | gawk -F' ' '{ print $3 }'`
          if [ $result != "" ]; then
              echo "Branch found: "
              echo $result
              case $result in
                  refs/heads/master )
                      git checkout -q -f master
                      if [ $? -eq 0 ]; then
                          echo "Test Site checked out properly"
                      else
                          echo "Failed to checkout test site!"
                      fi
                      ;;
                  refs/heads/live-site )
                      git push -q ../Live/$sitename live-site:master
                      if [ $? -eq 0 ]; then
                          echo "Live Site received updates properly"
                      else
                          echo "Failed to push updates to Live Site"
                      fi
                      ;;
                  * )
                      echo "No update known for $result"
                      ;;
              esac
          fi
      done
      echo "Post-receive updates complete"

    And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive:

      git checkout -f
      if [ $? -eq 0 ]; then
          echo "Live site `basename \`pwd\`` checked out successfully"
      else
          echo "Live site failed to checkout"
      fi

    Read the article

  • MS Runtime required for app to run – how can I include it with the setup?

    - by JPJedi
    My application requires Microsoft Access, or at least the MS Access runtime, to be installed on the computer. Is there a way for the setup to check for that component/resource and install it if it isn't present? Or would it just be easier to include a link telling the user where to get the runtime if the error happens? I am using Visual Studio 2008, and the Windows Forms app is written in VB.NET. I am currently using ClickOnce for the deployment. Thanks

    Read the article

  • Exploded (unpacked) EAR vs. Packaged EAR file?

    - by Adam
    In my office we use exploded EARs (and inside them exploded WAR directories) for our test environments, and then a packaged one for production. I've yet to find a good explanation of the reasoning behind this, though. I understand it's easier from a deployment perspective to push out a single file during builds, but it prevents us from doing things like property file changes without doing complete rebuilds (we could skip the compiles, but our environment currently binds the compile and jar processes together). What are the major advantages/disadvantages between these two configurations?

    Read the article

  • How to handle environment-specific application configuration organization-wide?

    - by Stuart Lange
    Problem: Your organization has many separate applications, some of which interact with each other (to form "systems"). You need to deploy these applications to separate environments to facilitate staged testing (for example, DEV, QA, UAT, PROD). A given application needs to be configured slightly differently in each environment (each environment has a separate database, for example). You want this re-configuration to be handled by some sort of automated mechanism so that your release managers don't have to manually configure each application every time it is deployed to a different environment.

    Desired features: I would like to design an organization-wide configuration solution with the following properties (ideally):

    - Supports "one click" deployments (only the environment needs to be specified, and no manual re-configuration during/after deployment should be necessary).
    - There should be a single "system of record" where a shared environment-dependent property is specified (such as a database connection string that is shared by many applications).
    - Supports re-configuration of deployed applications (in the event that an environment-specific property needs to change), ideally without requiring a re-deployment of the application.
    - Allows an application to be run on the same machine, but in different environments (run a PROD instance and a DEV instance simultaneously).

    Possible solutions: I see two basic directions in which a solution could go:

    1. Make all applications "environment aware". You would pass the environment name (DEV, QA, etc.) at the command line to the app, and then the app is "smart" enough to figure out the environment-specific configuration values at run-time. The app could fetch the values from flat files deployed along with the app, or from a central configuration service.
    2. Applications are not "smart" as they are in #1, and simply fetch configuration by property name from config files deployed with the app. The values of these properties are injected into the config files at deploy-time by the install program/script. That install script takes the environment name and fetches all relevant configuration values from a central configuration service. (A rough sketch of this idea follows below.)

    Question: How would/have you achieved a configuration solution that solves these problems and supports these desired features? Am I on target with the two possible solutions? Do you have a preference between those solutions? Also, please feel free to tell me that I'm thinking about the problem all wrong. Any feedback would be greatly appreciated.
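
    As a rough sketch of what I mean by solution #2 (every name, URL and token format here is hypothetical – an illustration of the idea, not a recommended implementation): a deploy-time step pulls all properties for the target environment from a central configuration service and injects them into a tokenised config template.

      # Deploy-time token injection sketch. Assumes a central configuration
      # service reachable over HTTP and a template using @TOKEN@ placeholders;
      # every name and URL here is hypothetical.
      param(
          [string]$Environment  = "DEV",
          [string]$TemplatePath = ".\App.config.template",
          [string]$OutputPath   = ".\App.config"
      )

      # Fetch all properties for this environment from the central service.
      $props = Invoke-RestMethod -Uri "http://config.example.corp/api/properties?env=$Environment"

      $content = Get-Content $TemplatePath -Raw
      foreach ($p in $props) {
          # Replace e.g. @DB_CONNECTION_STRING@ with the environment-specific value.
          $content = $content.Replace("@$($p.name)@", $p.value)
      }
      Set-Content -Path $OutputPath -Value $content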

    Read the article

  • Windows 7 installer window takes priority – how can I bring my dialog back in front during installation using C#?

    - by shahjapan
    I have a custom action in the deployment project of a .NET application, which shows a custom dialog box for entering certain parameters. On invalid parameters I call MessageBox.Show, but the message box is hidden behind the installer window. I tried Windows Forms too, with Activate, TopMost, Focus, BringToFront, etc. – several options – but by default the form still comes up behind the Windows Installer window. Because of this the user cannot tell why the install process is not finishing: it is actually waiting for the user to read the MessageBox and press OK. I tried implementing IWin32Window with the handle of the MsiExec process and showing the MessageBox against that owner, but it still doesn't work. Does anyone have an idea? Here is my installer.cs function definition: public override void Install(IDictionary stateSaver)

    Read the article

  • Same project...multiple apps?

    - by greypoint
    We have an iPhone app project that we wish to deploy multiple times under different client names. The individual apps will be very similar but will have different resources (icon, images, etc.) and config settings stored in plists (server names, options, etc.). What is the preferred means to manage this in Xcode? Obviously we really don't want a different Xcode project for each app deployment, since it's 90% shared code.

    Read the article

  • Why does FastCGI not work well with Ruby on Rails?

    - by Jian Lin
    It is said that FastCGI doesn't work well with Ruby on Rails deployment. Why is that? In my previous experience, something either works quite well or it is fundamentally wrong. So if FastCGI is a viable solution, why is it not reliable with RoR? Does FastCGI work well with most other languages/frameworks?

    Read the article

  • Deploying to Heroku with sensitive setting information

    - by TK
    I'm using GitHub for code and Heroku as the deployment platform for my Rails app. I don't want to have sensitive data under Git. Such data include database settings (database.yml) and some other files that hold secret API keys. When I deploy to Heroku, how can I deal with files that are not under revision control? When I use Capistrano, I can write some hook methods, but I don't know what to do with Heroku.
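
    From what I've read so far, Heroku's usual answer to this is config vars – environment variables attached to the app and read at runtime, so the secrets never enter Git. A sketch using the Heroku CLI (the key names and app name are illustrative, and I gather older toolbelt versions used heroku config:add rather than config:set):

      # Set secrets as config vars instead of committing them (names illustrative).
      heroku config:set S3_KEY=xxxx S3_SECRET=xxxx --app my-rails-app

      # List what is currently set on the app.
      heroku config --app my-rails-app

    The app then reads the values from its environment at runtime (ENV['S3_KEY'] in Ruby), so database.yml and the API-key files can be generated from, or replaced by, environment lookups.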

    Read the article

  • Please help me to deploy merge modules

    - by Dabblernl
    I need to deploy some Crystal Reports XI DLLs (craxdrt.dll, crviewer.dll) to client computers. Craxdrt.dll has many dependencies. I found out that the easiest way to go about this is to use the supplied merge modules. Having always relied on ClickOnce deployment, I am at a total loss as to how to do this. If it matters: the .exe is written in VB6, but I have Visual Studio 2010 to make setup projects. Thanks!

    Read the article

  • C#: Installing a new version over old version

    - by sbenderli
    I have a deployment project which will not let me install over an older version. The msi file says to uninstall the program first from Add/Remove programs. This is not a good user experience. How can I do it so that the installer will simply remove the software first and then install the new version?

    Read the article

  • Install Dll in the GAC for .Net

    - by tkg
    Hi all, I am a beginner in .NET development, and I need to ask about .NET WinForms deployment. If I have one .exe file, some class libraries, and third-party components (DevExpress), should I install these DLLs (the class libraries and third-party components) in the GAC (Global Assembly Cache) on client computers? Thank you for the explanations.

    Read the article

  • Android Device Management

    - by Jon Hopkins
    I'm looking at the possibility of using Android as a secure corporate mobile platform. One of the pre-requisites for this will be a way of managing multiple devices, security policies, software deployment, that sort of thing - essentially the things the BlackBerry Enterprise Server handles for BlackBerry or MDM (or something 3rd party like SOTI) handles for Windows Mobile. Does such a thing exist for Android? It's a platform we're interested in but without this right now (and we're not in a position to build it ourselves) it's a non-starter.

    Read the article

  • Create Registry Value In Local Machine Using C#

    - by Robert
    I'm trying to save an install path to the registry so my Windows service will know where my other application was installed. I'm using Visual Studio's deployment project to create a registry value in HKEY_CURRENT_USER, but my Windows service, which runs under LocalMachine, doesn't have access to that. I then made the installer create a registry value in HKEY_LOCAL_MACHINE, but when I view the registry after the install it appears it never made the value. Any ideas?
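
    One thing worth checking (an assumption about the cause, since the post doesn't say whether the OS is 64-bit): a 32-bit installer writing to HKEY_LOCAL_MACHINE\SOFTWARE on 64-bit Windows is redirected to the Wow6432Node view, which is easy to miss when browsing Regedit. A quick PowerShell check of both views – the key path and value name are illustrative:

      # Look for the install-path value in both the native and the 32-bit (WOW64) registry views.
      $paths = @(
          "HKLM:\SOFTWARE\MyCompany\MyApp",             # 64-bit view (illustrative path)
          "HKLM:\SOFTWARE\Wow6432Node\MyCompany\MyApp"  # where a 32-bit installer's writes land on x64
      )
      foreach ($p in $paths) {
          if (Test-Path $p) {
              "$p -> $((Get-ItemProperty -Path $p).InstallPath)"
          } else {
              "$p -> not found"
          }
      }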

    Read the article

  • 5 reasons why you should upgrade to the new iPad (3rd generation)

    - by Gopinath
    Apple released the new iPad, 3rd generation, a couple of days ago, and it will be available in stores from March 16 onwards. It's the best tablet available on the market, and for first-time buyers it's a no-brainer to choose it. But what about current iPad owners? Should they upgrade their iPad 2 to the new iPad? This is the question on the lips of most iPad owners. In this post we will give you 5 reasons why you should upgrade; if more than two of them are convincing, then you should move to the new iPad.

    Retina display – the best display ever made for a mobile device, a game changer. The new iPad comes with a Retina display with a screen resolution of 2048 x 1536, which is twice the resolution of the iPad 2. Undoubtedly the new iPad's display is the best display ever made for a mobile device, and it's a game changer. With the better resolution of the new iPad: eBook reading is going to be a pleasure, with clear and crisp text; watching HD movies is going to be unbelievably good; games targeted at the Retina display are going to be more realistic, and the pleasure of playing such games needs no explanation; and graphic artists and photo editors get professional on-screen rendering support to create beautiful graphics.

    2x faster & 2x memory – better games and more powerful apps. The new iPad is more powerful, with 2x faster graphics and 2x more memory. Apple claims that the A5X processor of the new iPad is 2x faster than the iPad 2 and 4x faster than the best graphics chips available from other vendors. The RAM of the new iPad is upgraded to 1 GB, compared with 512 MB in the iPad 2. With the faster processor and more memory, apps and games are going to be blazing fast.

    4G internet – browse the web at speeds of up to 42 Mbps. Half of iPad owners are frequent commuters who access the internet over cellular networks, and the new iPad's 4G LTE is going to be a big boon for their high data-access needs. With the new iPad's 4G LTE connectivity you can browse the web at up to 42 Mbps, which means you can watch an HD video without buffering issues. The iPad 2 comes with 3G network support, and its browsing speeds are way below the new iPad's.

    5MP camera – HD movie recording and gorgeous photography. The iPad 2 has a 0.7-megapixel camera; the new iPad comes with a 5-megapixel camera. That is a huge boost for hobbyist photographers and videographers. With the new iPad you can shoot gorgeous photos and 1080p HD video. The iSight camera of the new iPad uses advanced optics with features like auto exposure, auto focus and face detection for up to 10 faces.

    Amazon pays up to $300 for an old iPad 2 16 GB Wi-Fi, and more for other models. Did you know that you can trade in your iPad 2 16 GB Wi-Fi for up to $300? Amazon has an excellent trade-in program for selling your used iPad 2. Depending on the condition of the iPad 2, Amazon offers $234, $270 and $300 for 16 GB Wi-Fi versions in Acceptable, Good and Like New conditions respectively. The higher iPad 2 models fetch you more money. With this deal from Amazon, the extra money you need to spend for the new iPad is almost half its price. Visit Amazon Trade-In's website or read Amazon's brilliant plan to pay you crazy money for your iPad 2 for more details.

    Related: New iPad vs. iPad 2 – side-by-side comparison of hardware specifications [Infographic]

    Read the article

  • System.Net.WebException: The remote server returned an error: (405) Method Not Allowed .exception occurred during the execution of the web request

    - by user88
    When I run my web application code I get an error on this line:

      using (HttpWebResponse response = (HttpWebResponse)request.GetResponse()) {}

    When I open the URL directly in the browser it returns the proper output, but when I request it from code it throws the exception. Here my code is:

      string service = "http://api.ean.com/ean-services/rs/hotel/";
      string version = "v3/";
      string method = "info/";
      string hotelId1 = "188603";
      int hotelId = Convert.ToInt32(hotelId1);
      string otherElemntsStr = "&cid=411931&minorRev=[12]&customerUserAgent=[hotel]&locale=en_US&currencyCode=INR";
      string apiKey = "tzyw4x2zspckjayrbjekb397";
      string sig = "a6f828b696ae6a9f7c742b34538259b0";

      string url = service + version + method + "?&type=xml" + "&apiKey=" + apiKey + "&sig=" + sig + otherElemntsStr + "&hotelId=" + hotelId;

      HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
      request.Method = "POST";
      request.ContentType = "text/xml";
      request.ContentLength = 0;

      XmlDocument xmldoc = new XmlDocument();

      using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
      {
          StreamReader responsereader = new StreamReader(response.GetResponseStream());
          var responsedata = responsereader.ReadToEnd();
          xmldoc = (XmlDocument)JsonConvert.DeserializeXmlNode(responsedata);
          xmldoc.Save(@"D:\FlightSearch\myfile.xml");
          xmldoc.Load(@"D:\FlightSearch\myfile.xml");
          DataSet ds = new DataSet();
          ds.ReadXml(Request.PhysicalApplicationPath + "myfile.xml");
          GridView1.DataSource = ds.Tables["HotelSummary"];
          GridView1.DataBind();
      }
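
    One detail I noticed while debugging (an observation, not a confirmed fix): the browser request that works is a GET, while the code above sends a POST, and 405 Method Not Allowed is exactly what a server returns when the verb is wrong for the resource. A quick way to compare the two verbs against the same URL – a PowerShell sketch, with the URL left as a placeholder:

      # Probe the same endpoint with GET and POST and compare the results.
      # $url stands in for the full request URL built in the code above.
      $url = "http://api.ean.com/ean-services/rs/hotel/v3/info/?..."

      try {
          $r = Invoke-WebRequest -Uri $url -Method Get
          "GET  -> $($r.StatusCode)"
      } catch { "GET  -> $($_.Exception.Message)" }

      try {
          $r = Invoke-WebRequest -Uri $url -Method Post -ContentType "text/xml"
          "POST -> $($r.StatusCode)"
      } catch { "POST -> $($_.Exception.Message)" }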

    Read the article

  • Emacs frustration with web development – any working dotfiles?

    - by Tony Cruise
    I really like the flexibility of Emacs, but it is really annoying to make it work. I want to use it for web development: HTML, CSS, JavaScript, PHP. I first tried emacs-starter-kit. It didn't include nXhtml. Also, the C-g key binding does not work (they call it a starter kit, but a basic key command does not work) – I think it is mapped for git control. That's a frustration for a beginner. Then I replaced emacs-starter-kit with nXhtml. At least C-g works. But code completion sucks and M-tab does not work; I tried code completion from the nXhtml menu with no success. Also, nXhtml mode didn't colorize my file when CSS is mixed with HTML. Isn't it recommended for mixed HTML, CSS and PHP files? So why doesn't it work? Why are the Emacs folks not aware of convention over configuration? Damn it, ship something that works! Please help me before I go crazy. I use Ubuntu 10.04 and emacs-snapshot-gtk 23.1.50-1. Please guide me step by step with your working dotfile URL. Even though I accept I am a dummy, it is really annoying and frustrating to use Emacs.

    Read the article

  • Page output caching for dynamic web applications

    - by Mike Ellis
    I am currently working on a web application where the user steps (forward or back) through a series of pages with "Next" and "Previous" buttons, entering data until they reach a page with the "Finish" button. Until finished, all data is stored in Session state, then sent to the mainframe database via web services at the end of the process. Some of the pages display data from previous pages in order to collect additional information. These pages can never be cached because they are different for every user. For pages that don't display this dynamic data, they can be cached, but only the first time they load. After that, the data that was previously entered needs to be displayed. This requires Page_Load to fire, which means the page can't be cached at that point.

    A couple of weeks ago, I knew almost nothing about implementing page caching. Now I still don't know much, but I know a little bit, and here is the solution that I developed with the help of others on my team and a lot of reading and trial-and-error.

    We have a base page class defined from which all pages inherit. In this class I have defined a method that sets the caching settings programmatically. For pages that can be cached, they call this base page method in their Page_Load event within an if(!IsPostBack) block, which ensures that only the page itself gets cached, not the data on the page.

      if(!IsPostBack)
      {
          ...
          SetCacheSettings();
          ...
      }

      protected void SetCacheSettings()
      {
          Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(Validate), null);
          Response.Cache.SetExpires(DateTime.Now.AddHours(1));
          Response.Cache.SetSlidingExpiration(true);
          Response.Cache.SetValidUntilExpires(true);
          Response.Cache.SetCacheability(HttpCacheability.ServerAndNoCache);
      }

    The AddValidationCallback sets up an HttpCacheValidateHandler method called Validate which runs logic when a cached page is requested. The Validate method signature is standard for this method type.

      public static void Validate(HttpContext context, Object data, ref HttpValidationStatus status)
      {
          string visited = context.Request.QueryString["v"];
          if (visited != null && "1".Equals(visited))
          {
              status = HttpValidationStatus.IgnoreThisRequest; // force a page load
          }
          else
          {
              status = HttpValidationStatus.Valid; // load from cache
          }
      }

    I am using the HttpValidationStatus values IgnoreThisRequest or Valid, which force the Page_Load event method to run or allow the page to load from cache, respectively. Which one is set depends on the value in the querystring. The value in the querystring is set up on each page in the "Next" and "Previous" button click event methods, based on whether the page that the button click is taking the user to has any data on it or not.

      bool hasData = HasPageBeenVisited(url);
      if (hasData)
      {
          url += VISITED;
      }
      Response.Redirect(url);

    The HasPageBeenVisited method determines whether the destination page has any data on it by checking one of its required data fields. (I won't include it here because it is very system-dependent.) VISITED is a string constant containing "?v=1" and gets appended to the url if the destination page has been visited.

    The reason this logic is within the "Next" and "Previous" button click event methods is because 1) the Validate method is static, which doesn't allow it to access non-static data such as the data fields for a particular page, and 2) at the time at which the Validate method runs, either the data has not yet been deserialized from Session state or is not available (different AppDomain?), because anytime I accessed the Session state information from the Validate method, it was always empty.

    Read the article

  • Chrome Web Browser Messages: Some Observations

    - by ultan o'broin
    I'm always on the lookout for how different apps handle errors and what kinds of messages are shown (I probably need to get out more). I use this 'research' to reflect on our own application error message patterns and guidelines, and how we might make things better for our users in future. Users are influenced by all sorts of things, but their everyday experiences of technology, and especially what they encounter on the internet, increasingly set their expectations for the enterprise user experience too. I recently came across a couple of examples from Google's Chrome web browser that got me thinking.

    In the first case, we have a Chrome error about not being able to find a web page. I like how simple, straightforward messaging language is used, along with an optional ability to explore things a bit further – for those users who want to. The 'more information' option shows the error encountered by the browser (or 'original' error) in technical terms, along with an error number. Contrasting the two messages about essentially the same problem reveals what's useful to users and what's not. Everyone can use the first message, but the technical version of the message has to be explicitly disclosed for any more advanced user to pursue further. More technical users might search for a resolution using that Error 324 number, but I imagine most users who see the message will try again later or check their URL again. Seems reasonable that such an approach be adopted in the enterprise space too, right? Maybe. Generally, end users don't go searching for solutions based on those error numbers, and help desk folks generally prefer they don't do so. That's because of the more critical nature of enterprise data, or the fact that end users may not have the necessary privileges to make any fixes anyway. What might be more useful here is a link to a trusted source of additional help provided by the help desk or a reputable community instead.

    This takes me on to the second case, this time more closely related to the language used in messaging situations. Here, I first noticed the use of the "(s)" approach to convey the possibility of there being one or more pages at the heart of the problem. This approach is a no-no in Oracle style terms (the plural would be used), and it can create translation issues (though it is not a show-stopper). I think Google could have gone with the plural too. However, of more interest is the use of the verb "kill", shown in the message text and as an action button label. For many writers, words like "kill" and "abort" are to be avoided as they can give offense. I am not so sure about that judgment, as really their use cannot be separated from the context. Certainly, for more technical users they're fine, and they have been in use for years, so I see no reason to avoid these terms if the audience has accepted them. Most end users too, I think, would find the idea of "kill" usable and may even use the term in everyday speech. Others might disagree – Apple uses the concept of Force Quit, for example. Ultimately, the only way to really know how to proceed is to research these matters by asking users of differing roles and expertise to perform some tasks, encounter these messages, and then make recommendations based on those findings for our designs. Something to do in 2011!

    Read the article

  • ORACLE is WEB 2.0

    - by anca.rosu
    You never know what to expect in life, where it can take you and what kind of fulfillment it can offer you. It's just like an amazing lottery with millions of winning tickets. My name is Paula, I am an Online Marketing Specialist at Oracle University, and this is my story.

    Having graduated from a technical college, it seemed almost normal to follow the same career path. But I said no. I wanted to try something else, so I took an Advertising Masters Program and I really fell in love with this entire industry. Advertising and the new impact of the internet through social networking is my current fascination. I knew I had to work to incorporate both my skills into one dream job. I want to believe that I have come to work at Oracle as part of a great plan that life has for me. It's not the most glamorous job in advertising or in the fashion industry, but it's everything you need to start investing in your development and to build relationships.

    A normal day at work begins at 9.30 at our Oracle office in Bucharest. After a short chit-chat, coffee and some conference calls, marketing gets to work! Some of the members of my team work beside me, but others are based all over Europe. This is extremely useful when coordinating the EMEA marketing for Oracle University, because this way it's easier to keep an eye on these various locations. Even though it's a team play, you need to speak up and make your mark. I am the kind of person that never stands by and waits to be given directions; I am curious and intuitive. This makes things easier. In Oracle you really need to find your own way and to discover how to organize your time and how to get involved with people. People to people, this is the focus. But everything is up to you, and it strongly depends on the type of personality that you have. I try to get involved in various activities, participate in Oracle Days events, and interact and meet all kinds of people. For those who are new graduates or interns, Oracle has lots of trainings and webcasts you can attend to help you shape your career and to understand better the way the business works. You can also be rewarded for ideas and for setting the trends, so that makes it worth it.

    What I like most about my job is the fact that I can come up with ideas and bring them to life. For example, Oracle University has a special seminar program called "Celebrity Seminars" where top industry speakers teach 1-day or 2-day condensed seminars. We thought of creating something exclusive, and a video was the best idea. So my colleague and I became reporters for a day and interviewed this well-known speaker about his seminar. I think this is a good way to market this business. Live footage is a very good marketing tool, so we are planning to use the video to target our online audiences via Facebook, Twitter or LinkedIn. This can even go in the newsletters that marketing sends regarding the Celebrity Seminars. This is what I meant when I said Oracle is a free-spirited organization and you can surely find your place here among us.

    The best way to describe my job is WEB 2.0. The modern online approach comes to life while we are trying to sell our business. We need to be out there, and we are responsible for spreading the buzz regarding our training offerings and our official courseware materials. There are so many new ways to interact with the target audience nowadays, and I am so eager to discover the best online techniques!

    If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com.

    Technorati Tags: WEB 2.0, Online Marketing, Oracle University, Bucharest, events, graduates, interns, training, webcast, seminar, newsletters, business, Facebook, Twitter, LinkedIn

    Read the article
