Search Results

Search found 108959 results on 4359 pages for 'ado net data services'.


  • Web Matrix released

    - by TATWORTH
    Microsoft has now released Web Matrix (and ASP.NET MVC 3, if you are so inclined!). One significant utility is IIS Express, which will replace Cassini. It is worth noting that SP1 for VS2010 should be out in Q1. Links: http://www.hanselman.com/blog/ASPNETMVC3WebMatrixNuGetIISExpressAndOrchardReleasedTheMicrosoftJanuaryWebReleaseInContext.aspx http://www.hanselman.com/blog/LinkRollupNewDocumentationAndTutorialsFromWebPlatformAndTools.aspx http://arstechnica.com/microsoft/news/2011/01/microsoft-releases-free-webmatrix-web-development-tool.ars I am impressed by the copious tutorials on MVC, which I include below: Intro to ASP.NET MVC 3 onboarding series. A Scott Hanselman and Rick Anderson collaboration, with Mike Pope (Editor). Both C# and VB versions: Intro to ASP.NET MVC 3 Adding a Controller Adding a View Entity Framework Code-First Development Accessing your Model's Data from a Controller Adding a Create Method and Create View Adding Validation to the Model Adding a New Field to the Movie Model and Table Implementing Edit, Details and Delete Source code for this series MVC 3 Updated and new tutorials / API Reference on MSDN Rick Anderson (Lead Programming Writer), Keith Newman and Mike Pope (Editor) ASP.NET MVC 3 Content Map ASP.NET MVC Overview MVC Framework and Application Structure Understanding MVC Application Execution Compatibility of ASP.NET Web Forms and MVC Walkthrough: Creating a Basic ASP.NET MVC Project Walkthrough: Using Forms Authentication in ASP.NET MVC Controllers and Action Methods in ASP.NET MVC Applications Using an Asynchronous Controller in ASP.NET MVC Views and UI Rendering in ASP.NET MVC Applications Rendering a Form Using HTML Helpers Passing Data in an ASP.NET MVC Application Walkthrough: Using Templated Helpers to Display Data in ASP.NET MVC Creating an ASP.NET MVC View by Calling Multiple Actions Models and Validation in ASP.NET MVC How to: Validate Model Data Using DataAnnotations Attributes Walkthrough: Using MVC View Templates How to: Implement Remote Validation in ASP.NET MVC Walkthrough: Adding AJAX Scripting Walkthrough: Organizing an Application using Areas Filtering in ASP.NET MVC Creating Custom Action Filters How to: Create a Custom Action Filter Unit Testing in ASP.NET MVC Applications Walkthrough: Using TDD with ASP.NET MVC How to: Add a Custom ASP.NET MVC Test Framework in Visual Studio ASP.NET MVC 3 Reference System.Web.Mvc System.Web.Mvc.Ajax System.Web.Mvc.Async System.Web.Mvc.Html System.Web.Mvc.Razor

    Read the article

  • DevConnections Slides and Samples Posted

    - by Rick Strahl
    I’ve posted the slides and samples to my DevConnections sessions for anyone interested. I had a lot of fun with my sessions this time around, mainly because the sessions picked were a little off the beaten track (well, the handlers/modules and e-commerce sessions anyway). For those of you that attended, I hope you found the sessions useful. For the rest of you – you can check out the slides and samples if you like. Here’s what was covered: Introduction to jQuery with ASP.NET This session covered mostly the client side of jQuery, demonstrated on a small sample page with a variety of incrementally built-up examples of selection and page manipulation. This session also introduces some of the basics of AJAX communication talking to ASP.NET. When I do this session it never turns out exactly the same way, and this time around the examples were on the more basic side and purely done as hands-on demonstrations rather than walkthroughs of more complex examples. Alas, this session always feels like it needs another half hour to get through the full assortment of functionality. The slides and samples cover a wider variety of topics, and there are many examples that demonstrate more advanced operations like interacting with WCF REST services, using client templating and building rich client-only windowed interfaces. Download Low Level ASP.NET: Handlers and Modules This session was a look at the ASP.NET pipeline, discussing some of the ASP.NET base architecture and key components from HttpRuntime on up through the various modules and handlers that make up the ASP.NET/IIS pipeline. This session is fun as there are a number of cool examples that demonstrate the power and flexibility of ASP.NET, but some of the examples were external and interfaced with other technologies, so they’re not actually included in the downloadable samples. However, there are still a few cool ones in there – there’s an image resizing handler, an image overlay module that stamps images with "Sample" if loaded from a certain folder, an OpenID authentication module (which failed during the demo due to the crappy internet connection at DevConnections this year :-}), response filtering using a generic filter stream component, a generic error handler and a few others. The slides cover a lot of the ASP.NET pipeline flow and various HttpRuntime components. Download Electronic Payment Processing in ASP.NET Applications This session covered the business end and integration of electronic credit card processing and PayPal. A good part of this session deals with what’s involved in payment processing, getting signed up, and who you have to deal with for your merchant account. We then took a look at integration of credit card processing via some generic components provided with the session that allow processing using a unified class interface, with specific implementations for several of the most common gateway providers including Authorize.NET, PayFlowPro, LinkPoint, BluePay etc. We also briefly looked at the PayPal Classic implementation, which provides a quick and cheap, if not quite as professional, mechanism for taking payments online. The samples provide the credit card processing wrappers for the various gateway providers, as well as a PayPal helper class to generate the PayPal redirect URLs plus helper code for dealing with IPN callbacks. Download Hope some of you will find the material useful. Enjoy. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET
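    To give a flavor of what the handlers/modules session covers, here is a minimal sketch (not one of Rick's downloadable samples) of the kind of image-resizing IHttpHandler described above; the query-string parameters are invented, and caching, error handling, and input validation are deliberately omitted.

```csharp
// A minimal image-resizing IHttpHandler sketch, assuming an .ashx-style
// registration. Query parameters, caching, and validation are simplified.
using System.Drawing;
using System.Drawing.Imaging;
using System.Web;

public class ImageResizeHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // e.g. /ImageResize.ashx?file=photo.jpg&width=200
        string path = context.Server.MapPath("~/images/" + context.Request["file"]);
        int width = int.Parse(context.Request["width"]);

        using (Image original = Image.FromFile(path))
        using (Bitmap resized = new Bitmap(original,
                   width, original.Height * width / original.Width))
        {
            context.Response.ContentType = "image/jpeg";
            resized.Save(context.Response.OutputStream, ImageFormat.Jpeg);
        }
    }
}
```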

    Read the article

  • Uploading documents to WSS (Windows Sharepoint Services) using SSIS

    - by Randy Aldrich Paulo
    Recently I was tasked to create an SSIS application that will query a database, split the results according to certain criteria, create a CSV file for every result, and upload the files to a SharePoint Document Library site. I've searched the web and compiled the steps I've taken to build the solution. Summary: A) Create a proxy class of the WSS Copy.asmx web service. B) Create a wrapper class for the proxy class and add a mechanism to check whether a file already exists, plus a delete method. C) Create an SSIS package and call the wrapper class to transfer the files.   A) Creating the Proxy Class 1) Go to the Visual Studio Command Prompt and type wsdl http://[sharepoint site]/_vti_bin/Copy.asmx; this will generate the proxy class (Copy.cs) to be added to the solution. 2) Add Copy.cs to the solution and create another constructor for Copy() that will accept the additional parameters url, userName, password and domain.   public Copy(string url, string userName, string password, string domain) { this.Url = url; this.Credentials = new System.Net.NetworkCredential(userName, password, domain); } 3) Add a namespace.     B) Wrapper Class Create a new C# library that references the proxy class (a minimal sketch follows below).         C) Create the SSIS Package The SSIS solution is composed of:   1) An Execute SQL Task, which returns single-column rows containing the criteria. 2) A Foreach Loop Container, which loops over each result from the query (SQL Task) and creates a CSV file in a certain folder. 3) A Script Task, which calls the wrapper class to upload the CSV files located in that folder to the target WSS Document Library. Note: I've created another overload of CopyFiles that accepts a DirectoryInfo instead of a file location and loops through the contents of the folder. (Screenshots: Designer View, Variable View)
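    As a rough illustration of step B, here is a minimal wrapper sketch. The class and method names are my own invention, not from the post; it assumes the wsdl.exe-generated proxy class Copy with the standard Copy.asmx CopyIntoItems signature, plus the extra constructor added in step A-2.

```csharp
// Hedged sketch of the step-B wrapper; "Copy", "FieldInformation",
// "CopyResult" and "CopyErrorCode" come from the wsdl.exe-generated proxy.
using System.IO;

public class WssDocumentUploader
{
    private readonly Copy _proxy;

    public WssDocumentUploader(string url, string userName, string password, string domain)
    {
        // Uses the extra constructor added to the proxy in step A-2.
        _proxy = new Copy(url, userName, password, domain);
    }

    // Uploads a single local file into the target document library folder.
    public bool UploadFile(string localPath, string libraryFolderUrl)
    {
        byte[] content = File.ReadAllBytes(localPath);
        string destinationUrl = libraryFolderUrl.TrimEnd('/') + "/" + Path.GetFileName(localPath);

        CopyResult[] results;
        _proxy.CopyIntoItems(destinationUrl, new[] { destinationUrl },
                             new FieldInformation[0], content, out results);
        return results != null && results[0].ErrorCode == CopyErrorCode.Success;
    }

    // The overload mentioned in the post: upload every CSV in a folder.
    public void UploadFolder(DirectoryInfo folder, string libraryFolderUrl)
    {
        foreach (FileInfo file in folder.GetFiles("*.csv"))
            UploadFile(file.FullName, libraryFolderUrl);
    }
}
```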

    Read the article

  • Data Web Controls Enhancements in ASP.NET 4.0

    Traditionally, developers using Web controls enjoyed increased productivity, but at the cost of control over the rendered markup. For instance, many ASP.NET controls automatically wrap their content in a <table> for layout or styling purposes. This behavior runs counter to the web standards that have evolved over the past several years, which favor cleaner, terser HTML; sparing use of tables; and Cascading Style Sheets (CSS) for layout and styling. Furthermore, the <table> elements and other automatically-added content make it harder both to style the Web controls using CSS and to work with the controls from client-side script. One of the aims of ASP.NET version 4.0 is to give Web Forms developers greater control over the markup rendered by Web controls. Last week's article, Take Control Of Web Control ClientID Values in ASP.NET 4.0, highlighted how new properties in ASP.NET 4.0 give the developer more say over how a Web control's ID property is translated into a client-side id attribute. In addition to these ClientID-related properties, many Web controls in ASP.NET 4.0 include properties that allow the page developer to instruct the control not to emit extraneous markup, or to use an HTML element other than <table>. This article explores a number of enhancements made to the data Web controls in ASP.NET 4.0. As you'll see, most of these enhancements give the developer greater control over the rendered markup. Read on to learn more! Read More >
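    For a taste of the kind of properties involved, here is a small hedged sample with hypothetical control names, showing two of the 4.0-era rendering knobs set from the code-behind: suppressing a FormView's outer <table> and rendering a RadioButtonList as an unordered list.

```csharp
// Illustrative sketch only; productFormView and shippingOptions are
// hypothetical controls declared in the .aspx page.
using System;
using System.Web.UI.WebControls;

public partial class ProductPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        productFormView.RenderOuterTable = false;                  // no wrapping <table>
        shippingOptions.RepeatLayout = RepeatLayout.UnorderedList; // <ul>/<li> instead of <table>
    }
}
```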

    Read the article

  • Using Web Services from an XNA 4.0 WP7 Game

    - by Michael Cummings
    Now that the Windows Phone 7 development tools have been out for a while, let's talk about how you can use them. Windows Phone 7 (WP7) has two application types that you can create, either Silverlight or XNA, and you can't really mix the two together. The development environment for WP7 is a special edition of Visual Studio 2010 called Visual Studio 2010 Express for Windows Phone. This edition will be installed with the WP7 tools, even if you have a full edition of VS2010 already installed. While you can use your full edition of VS2010 to do WP7 development, this astute developer has noticed that there are a few things that you can only do in the Express for Windows Phone edition. So let's start by discussing WP7 networking. On the WP7 platform, the only networking available is through web services using WCF, or, if you're really masochistic, through the WebClient class doing raw HTTP. In Silverlight, it's fairly easy to wire up a WCF proxy to call a web service and get some data. In the XNA projects, not so much. Create the WCF Service First, we'll create our service that will return some information that we need in our game. Open Visual Studio 2010 and create a new WCF Web Service project. We'll use the default implementation, as we only need to see how to use a service; we are not interested in creating a really cool service at this point. However, you may want to follow the instructions in the comments of Service1.svc.cs to change the name to something better; I used DataService, and IDataService for the interface. You should now be able to run the project, and the WCF Test Client will load and properly enumerate your service. At this point we have a functional service that can be consumed by our XNA game. Consume the WCF Service Open Visual Studio 2010 Express for Windows Phone and create a new XNA Game Studio 4.0 Windows Phone Game project. Now if you try to add a service reference to the project, you'll notice that the option is not available. However, if you add a Silverlight application to your solution, you'll notice that you can create a service reference there. So using the Silverlight project, we can create the service reference. Unfortunately you can't reference the Silverlight project from the XNA Game project, so using Windows Explorer copy the Service References folder from the Silverlight project directory to the XNA Game project directory, then add the folder to your XNA Game project. You'll need to set the Build Action property to None for all the files, except for Reference.cs, which should be Compile. Truly, we only need Reference.cs, but I find it easier to copy the whole folder. If you try to compile at this point, you'll notice that we are missing a couple of references: System.Runtime.Serialization, System.Net and System.ServiceModel. Add these to the XNA Game project and you should build successfully. You'll also need to copy the ServiceReferences.ClientConfig file and add it to your project. The WCF infrastructure looks for this file and will complain if it can't find it. You'll need to set its Copy to Output Directory property to Copy if Newer. We now need to add the code to call the service and display the results on the screen. Go ahead and add a SpriteFont resource to the Content project and load it in the Game project. There's nothing here that's changed much from 3.1, other than your Content project now being under the Solution node and not the Project node. While you're at it, add a string field to store the result of the service call, and initialize it to string.Empty.
Then in the Draw method, write the string out to the screen, but only if it does not equal string.Empty. Now to wrap this up, let's create a new field of type DataServiceClient. In the Initialize method, create a new instance of this type using its default constructor; then in LoadContent we can call the service. Since we can only call the GetData method of our service asynchronously, we need to set up a Completed event handler first. Thankfully, Visual Studio helps out a lot there: just create, using the Tab key, whatever VS suggests. In the GetDataCompleted event handler, assign the service result (e.Result) to your string field. A sketch of the whole thing follows below. If you run your game, you should get something like this: Enjoy!
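    Pulling those steps together, a sketch of the client code, as it would sit in the generated Game1 class, might look like the following. DataServiceClient, GetDataAsync, GetDataCompleted and the "GameFont" asset are the names the Add Service Reference step and a default project would produce, so treat them as assumptions.

```csharp
// Hedged sketch of the calls described above, inside the XNA Game1 class.
SpriteBatch spriteBatch;
SpriteFont font;
DataServiceClient client;
string serviceResult = string.Empty;

protected override void Initialize()
{
    client = new DataServiceClient(); // reads ServiceReferences.ClientConfig
    base.Initialize();
}

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    font = Content.Load<SpriteFont>("GameFont"); // hypothetical asset name

    // The proxy is async-only: hook the Completed event, then call.
    client.GetDataCompleted += (s, e) => serviceResult = e.Result;
    client.GetDataAsync(42); // default template echoes "You entered: 42"
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);
    if (serviceResult != string.Empty)
    {
        spriteBatch.Begin();
        spriteBatch.DrawString(font, serviceResult, Vector2.Zero, Color.White);
        spriteBatch.End();
    }
    base.Draw(gameTime);
}
```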

    Read the article

  • Take Control Of Web Control ClientID Values in ASP.NET 4.0

    Each server-side Web control in an ASP.NET Web Forms application has an ID property that identifies the Web control and is the name by which the Web control is accessed in the code-behind class. When rendered into HTML, the Web control turns its server-side ID value into a client-side id attribute. Ideally, there would be a one-to-one correspondence between the value of the server-side ID property and the generated client-side id, but in reality things aren't so simple. By default, the rendered client-side id is formed by taking the Web control's ID property and prefixing it with the ID properties of its naming containers. In short, a Web control with an ID of txtName can get rendered into an HTML element with a client-side id like ctl00_MainContent_txtName. This default translation from the server-side ID property value to the rendered client-side id attribute can introduce challenges when trying to access an HTML element via JavaScript, which is typically done by id, as the page developer building the web page and writing the JavaScript does not know what the id value of the rendered Web control will be at design time. (The client-side id value can be determined at runtime via the Web control's ClientID property.) ASP.NET 4.0 affords page developers much greater flexibility in how Web controls render their ID property into a client-side id. This article starts with an explanation as to why and how ASP.NET translates the server-side ID value into the client-side id value and then shows how to take control of this process using ASP.NET 4.0. Read on to learn more! Read More >
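    Until you adopt the 4.0 properties, the usual workaround is to emit the ClientID into your script from the code-behind. A minimal sketch, with txtName as a hypothetical TextBox:

```csharp
// Pre-4.0 workaround sketch: bridge the server-side ID to client script
// via the ClientID property (txtName is a hypothetical TextBox).
using System;

public partial class CustomerPage : System.Web.UI.Page
{
    protected void Page_PreRender(object sender, EventArgs e)
    {
        // ClientID resolves to the full rendered id, e.g. "ctl00_MainContent_txtName".
        string script = string.Format(
            "var nameBox = document.getElementById('{0}');", txtName.ClientID);
        ClientScript.RegisterStartupScript(GetType(), "nameBoxRef", script, true);
    }
}
```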

    Read the article

  • Stored Procedures with SSRS? Hmm… not so much

    - by Rob Farley
    Little Bobby Tables’ mother says you should always sanitise your data input. Except that I think she’s wrong. The SQL Injection aspect is for another post, where I’ll show you why I think SQL Injection is the same kind of attack as many other attacks, such as the old buffer overflow, but here I want to have a bit of a whinge about the way that some people sanitise data input, and even have a whinge about people who insist on using stored procedures for SSRS reports. Let me say that again, in case you missed it the first time: I want to have a whinge about people who insist on using stored procedures for SSRS reports. Let’s look at the data input sanitisation aspect – except that I’m going to call it ‘parameter validation’. I’m talking about code that looks like this: create procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     /* First check that @eomdate is a valid date */     if isdate(@eomdate) != 1     begin         select 'Please enter a valid date' as ErrorMessage;         return;     end     /* Then check that time has passed since @eomdate */     if datediff(day,@eomdate,sysdatetime()) < 5     begin         select 'Sorry - EOM is not complete yet' as ErrorMessage;         return;     end         /* If those checks have succeeded, return the data */     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales     from Sales.SalesOrderHeader     where OrderDate >= dateadd(month,-1,@eomdate)         and OrderDate < @eomdate     group by SalesPersonID     order by SalesPersonID; end Notice that the code checks that a date has been entered. Seriously??!! This must only be to check for NULL values being passed in, because anything else would have to be a valid datetime to avoid an error. The other check is maybe fair enough, but I still don’t like it. The two problems I have with this stored procedure are the result sets and the small fact that the stored procedure even exists in the first place. But let’s consider the first one of these problems for starters. I’ll get to the second one in a moment. If you read Jes Borland (@grrl_geek)’s recent post about returning multiple result sets in Reporting Services, you’ll be aware that Reporting Services doesn’t support multiple results sets from a single query. And when it says ‘single query’, it includes ‘stored procedure call’. It’ll only handle the first result set that comes back. But that’s okay – we have RETURN statements, so our stored procedure will only ever return a single result set.  Sometimes that result set might contain a single field called ErrorMessage, but it’s still only one result set. Except that it’s not okay, because Reporting Services needs to know what fields to expect. Your report needs to hook into your fields, so SSRS needs to have a way to get that information. For stored procs, it uses an option called FMTONLY. When Reporting Services tries to figure out what fields are going to be returned by a query (or stored procedure call), it doesn’t want to have to run the whole thing. That could take ages. (Maybe it’s seen some of the stored procedures I’ve had to deal with over the years!) So it turns on FMTONLY before it makes the call (and turns it off again afterwards). FMTONLY is designed to be able to figure out the shape of the output, without actually running the contents. It’s very useful, you might think. 
set fmtonly on exec dbo.GetMonthSummaryPerSalesPerson '20030401'; set fmtonly off Without the FMTONLY lines, this stored procedure returns a result set that has three columns and fourteen rows. But with FMTONLY turned on, those rows don’t come back. But what I do get back hurts Reporting Services. It doesn’t run the stored procedure at all. It just looks for anything that could be returned and pushes out a result set in that shape. Despite the fact that I’ve made sure that the logic will only ever return a single result set, the FMTONLY option kills me by returning three of them. It would have been much better to push these checks down into the query itself. alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     order by SalesPersonID; end Now if we run it with FMTONLY turned on, we get the single result set back. But let’s consider the execution plan when we pass in an invalid date. First let’s look at one that returns data. I’ve got a semi-useful index in place on OrderDate, which includes the SalesPersonID and TotalDue fields. It does the job, despite a hefty Sort operation. …compared to one that uses a future date: You might notice that the estimated costs are similar – the Index Seek is still 28%, the Sort is still 71%. But the size of that arrow coming out of the Index Seek is a whole bunch smaller. The coolest thing here is what’s going on with that Index Seek. Let’s look at some of the properties of it. Glance down it with me… Estimated CPU cost of 0.0005728, 387 estimated rows, estimated subtree cost of 0.0044385, ForceSeek false, Number of Executions 0. That’s right – it doesn’t run. So much for reading plans right-to-left... The key is the Filter on the left of it. It has a Startup Expression Predicate in it, which means that it doesn’t call anything further down the plan (to the right) if the predicate evaluates to false. Using this method, we can make sure that our stored procedure contains a single query, and therefore avoid any problems with multiple result sets. If we wanted, we could always use UNION ALL to make sure that we can return an appropriate error message. alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, /*Placeholder: */ '' as ErrorMessage     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     /* Now include the error messages */     union all     select 0, 0, 0, 'Please enter a valid date' as ErrorMessage     where isdate(@eomdate) != 1     union all     select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage     where datediff(day,@eomdate,sysdatetime()) < 5     order by SalesPersonID; end But still I don’t like it, because it’s now a stored procedure with a single query. 
And I don’t like stored procedures that should be functions. That’s right – I think this should be a function, and SSRS should call the function. And I apologise to those of you who are now planning a bonfire for me. Guy Fawkes’ night has already passed this year, so I think you miss out. (And I’m not going to remind you about when the PASS Summit is in 2012.) create function dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) returns table as return (     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, '' as ErrorMessage     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     union all     select 0, 0, 0, 'Please enter a valid date' as ErrorMessage     where isdate(@eomdate) != 1     union all     select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage     where datediff(day,@eomdate,sysdatetime()) < 5 ); We’ve had to lose the ORDER BY – but that’s fine, as that’s a client thing anyway. We can have our reports leverage this stored query still, but we’re recognising that it’s a query, not a procedure. A procedure is designed to DO stuff, not just return data. We even get entries in sys.columns that confirm what the shape of the result set actually is, which makes sense, because a table-valued function is the right mechanism to return data. And we get so much more flexibility with this. If you haven’t seen the simplification stuff that I’ve preached on before, jump over to http://bit.ly/SimpleRob and watch the video of when I broke a microphone and nearly fell off the stage in Wales. You’ll see the impact of being able to have a simplifiable query. You can also read the procedural functions post I wrote recently, if you didn’t follow the link from a few paragraphs ago. So if we want the list of SalesPeople that made any kind of sales in a given month, we can do something like: select SalesPersonID from dbo.GetMonthSummaryPerSalesPerson(@eomonth) order by SalesPersonID; This doesn’t need to look up the TotalDue field, which makes a simpler plan. select * from dbo.GetMonthSummaryPerSalesPerson(@eomonth) where SalesPersonID is not null order by SalesPersonID; This one can avoid having to do the work on the rows that don’t have a SalesPersonID value, pushing the predicate into the Index Seek rather than filtering the results that come back to the report. If we had joins involved, we might see some of those being simplified out. We also get the ability to include query hints in individual reports. We shift from having a single-use stored procedure to having a reusable stored query – and isn’t that one of the main points of modularisation? Stored procedures in Reporting Services are just a bit limited for my liking. They’re useful in plenty of ways, but if you insist on using stored procedures all the time rather than queries that use functions – that’s rubbish. @rob_farley

    Read the article

  • Mega Trends 4 Financial Services, May 21, 2014

    - by Claudia Caramelli-Oracle
    Oracle sponsored this event dedicated to banks and the insurance world. The main theme was exploring the future in order to improve customer engagement and innovation in this market. Oracle had the opportunity to meet the general managers and CxOs of the most important Italian and international banks and insurers on four different occasions: 1. An executive dinner on May 20 2. The plenary session 3. A parallel session on the theme Social & Digital Engagement 4. CRM & Big Data Intelligence The hashtag #mt4financialservices saw heavy traffic on Twitter: this shows that the topics discussed during the event do find real resonance in their target market. There is interest, and above all the market is just waiting to be engaged in these ways! For more information, write to Silvia Valgoi

    Read the article

  • Webinar: NoSQL - Data Center Centric Application Enablement

    - by Charles Lamb
    NoSQL - Data Center Centric Application Enablement AUGUST 6 WEBINAR About the Webinar The growth of data center infrastructure is trending out of bounds, along with the pace of user activity and data generation in this digital era. However, the nature of the typical application deployment within the data center is changing to accommodate new business needs. Those changes introduce complexities in application deployment architecture and design, which cascade into requirements for a new generation of database technology (NoSQL) destined to ease that complexity. This webcast will discuss the modern data center's data-centric applications, the complexities that must be dealt with, and common architectures found to describe and prescribe new data-center-aware services. We'll look at the practical issues in implementation and give an overview of the current state of the art in NoSQL database technology for solving the problems of data center awareness in application development. REGISTER NOW>> MORE INFORMATION >> NOTE! All attendees will be entered to win a guest pass to the NoSQL Now! 2013 Conference & Expo. About the Speaker Robert Greene, Oracle NoSQL Product Management Robert Greene is a principal product manager / strategist for Oracle's NoSQL Database technology. Prior to Oracle he was the V.P. of Technology for a NoSQL database company, Versant Corporation, where he set the strategy for alignment with big data technology trends, resulting in the acquisition of the company by Actian Corp in 2012. Robert has been an active member of both commercial and open source initiatives in the NoSQL and object-relational mapping spaces for the past 18 years, developing software, leading project teams, authoring articles and presenting at major conferences on these topics. In a previous life, Robert was an electronics engineer developing first-generation wireless, spread-spectrum-based security systems.

    Read the article

  • Semi-blocking Transformations in SQL Server Integration Services SSIS

    In an SSIS data flow, there are multiple types of transformations. On one hand you have synchronous and asynchronous transformations; on the other hand you have non-blocking, semi-blocking and fully-blocking components. In this tip, Koen Verbeeck takes a closer look at the performance impact of semi-blocking transformations in SSIS.

    Read the article

  • Remote Data connection in iphone app

    - by Tariq- iPHONE Programmer
    Hello, I am working on a social networking iPhone app which requires a remote data connection. So I hired a PHP developer to provide me with RESTful services. But when I started working with him, he argued that he will not create stored procedures or web services. Instead, he suggested that I pass the query as a parameter. Suppose I have to call a search service: he told me to send a POST request with 3 parameters: Query="select * from users", username=abd and password = 123. And I think there is no such architecture for using remote data. Then he said it is possible through socket programming. And I am 100% sure this is not an appropriate way to access remote data. This is simply illogical. Thousands of iPhone applications use REST/SOAP services to make remote data connections, yet he declined to provide RESTful services. Please, it is my hearty advice to all developers to post your own views over here, so that I can show that developer that these are the views of developers worldwide.

    Read the article

  • Windows Azure ASP.NET MVC 2 Role with Silverlight

    - by GeekAgilistMercenary
    I was working through some scenarios recently with Azure and Silverlight. I immediately decided a quick walkthrough for setting up a Silverlight application running in an ASP.NET MVC 2 application would be a cool project. For this walkthrough I have Visual Studio 2010, Silverlight 4, and the Azure SDK all installed. If you need to download any of those, go get 'em now. Launch Visual Studio 2010 and start a new project. Click on the section for cloud templates as shown below. After you name the project, the dialog asking what type of Windows Azure Cloud Service Role you want will display. I selected ASP.NET MVC 2 Web Role, which adds the MvcWebRole1 project to the cloud service solution. Since I selected the ASP.NET MVC 2 project type, it immediately prompts for a unit test project. Because I just want to get everything running first, I will probably be unit testing the Silverlight and just using the MVC project as a host for the Silverlight for now, and because I would prefer to just add the unit test project later, I am going to select no here. Once you've created the ASP.NET MVC 2 project to host the Silverlight, create another new project. Select the Silverlight section under Installed Templates in the Add New Project dialog, then select Silverlight Application. The next dialog that comes up will inquire about using the existing ASP.NET MVC application I just created, which I do want it to use, so I leave it checked. In the options section, however, I do not want to check RIA Web Services, I do not want a test page added to the project, and I do want Silverlight debugging enabled, so I leave that checked. Once those options are appropriately set, just click on OK and the Silverlight project will be added to the overall solution. The next step is to get the Silverlight object appropriately embedded in the web page. First open up the Site.Master file in the ASP.NET MVC 2 project, located under the Views/Shared/ location. After you open the file, review the content of the <head></head> section. In that section add another <asp:ContentPlaceHolder></asp:ContentPlaceHolder> tag as shown in the code snippet below. <head runat="server"> <title> <asp:ContentPlaceHolder ID="TitleContent" runat="server" /> </title> <link href="../../Content/Site.css" rel="stylesheet" type="text/css" /> <asp:ContentPlaceHolder ID="HeaderContent" runat="server" /> </head> I usually put it toward the bottom of the head section. It just seems the <title></title> should be at the top of the section and I like to keep it that way. Now open up the Index.aspx page under the ASP.NET MVC 2 project, located in the Views/Home/ directory. When you open up that file, add an <asp:Content></asp:Content> tag as shown in the next snippet. <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Home Page </asp:Content>   <asp:Content ID="headerContent" ContentPlaceHolderID="HeaderContent" runat="server">   </asp:Content>   <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2><%= Html.Encode(ViewData["Message"]) %></h2> <p> To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>. </p> </asp:Content> In that center tag, I am now going to add what is needed to appropriately embed the Silverlight object into the page. The first thing I need is a reference to the Silverlight.js file. <script type="text/javascript" src="Silverlight.js"></script> After that comes a bit of nitty-gritty JavaScript.
I create another script tag (for those in the know, this is exactly like the generated code that is dumped into the *.html test page generated with any Silverlight project if you select "add a test page that references the application"). The complete JavaScript is below. function onSilverlightError(sender, args) { var appSource = ""; if (sender != null && sender != 0) { appSource = sender.getHost().Source; }   var errorType = args.ErrorType; var iErrorCode = args.ErrorCode;   if (errorType == "ImageError" || errorType == "MediaError") { return; }   var errMsg = "Unhandled Error in Silverlight Application " + appSource + "\n";   errMsg += "Code: " + iErrorCode + " \n"; errMsg += "Category: " + errorType + " \n"; errMsg += "Message: " + args.ErrorMessage + " \n";   if (errorType == "ParserError") { errMsg += "File: " + args.xamlFile + " \n"; errMsg += "Line: " + args.lineNumber + " \n"; errMsg += "Position: " + args.charPosition + " \n"; } else if (errorType == "RuntimeError") { if (args.lineNumber != 0) { errMsg += "Line: " + args.lineNumber + " \n"; errMsg += "Position: " + args.charPosition + " \n"; } errMsg += "MethodName: " + args.methodName + " \n"; }   throw new Error(errMsg); } I literally just use what is populated in the automatically generated page, since it seems to work fine. After getting the appropriate JavaScript into place, I put the actual Silverlight object embed code into the HTML itself. So that I know the positioning, and for final verification when running the application, I insert the embed code just below the Index.aspx page message, as shown below. <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2> <%= Html.Encode(ViewData["Message"]) %></h2> <p> To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website"> http://asp.net/mvc</a>. </p> <div id="silverlightControlHost"> <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <param name="source" value="ClientBin/CloudySilverlight.xap" /> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50401.0" /> <param name="autoUpgrade" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration: none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style: none" /> </a> </object> <iframe id="_sl_historyFrame" style="visibility: hidden; height: 0px; width: 0px; border: 0px"></iframe> </div> </asp:Content> I then open up the Silverlight project's MainPage.xaml. Just to make it visibly obvious that the Silverlight application is running in the page, I added a button, as shown below. <UserControl x:Class="CloudySilverlight.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400">   <Grid x:Name="LayoutRoot" Background="White"> <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="48,40,0,0" Name="button1" VerticalAlignment="Top" Width="75" Click="button1_Click" /> </Grid> </UserControl> Just for kicks, I added a message box that pops up, just to show code executing as well.
private void button1_Click(object sender, RoutedEventArgs e) { MessageBox.Show("It runs in the cloud!"); } I then executed the ASP.NET MVC 2 application and could see the Silverlight application in the page. With a quick click of the button, I got a message box. Success! Now the next step is getting the ASP.NET MVC 2 project and the Silverlight published to the cloud. As of Visual Studio 2010, Silverlight 4, and the latest Azure SDK, this is actually a ridiculously easy process. Navigate to the Azure Cloud Services web site. Once that is open, go back into Visual Studio, right-click on the cloud project and select Publish. This will publish two files into a directory. Copy that directory path so you can easily paste it into the Azure Cloud Services web site. You'll have to click on the application role in the cloud (I will have another blog entry soon about where, how, and best practices in the cloud). Select the application package file and the configuration file and place them in the appropriate text boxes. This is the part where it comes in handy to have copied the directory path of the file location: when you click on Browse you can just paste it in, then hit Enter. The two files will be listed and you can select the appropriate file. Once that is done, name the service deployment, then click on Publish. After a minute or so you will see the following screen. Now click on Run. Once the MvcWebRole1 goes green (the little light symbol to the left of the status), click on the Web Site URL. Be patient during this process too; it could take a minute or two. The Silverlight application should again come up just like it ran on your local machine. Once staging is up and running, click on the circular icon with two arrows to move staging to production. Once you are done, make sure the light is again green for the production deployment, then click on the Web Site URL to verify the site is working. At this point I had a successful development, staging, and production deployment. Thanks for reading, hope this was helpful. I have more Windows Azure and other cloud-related material coming, so stay tuned. Original Entry

    Read the article

  • EPM 11.1.2.2 Architecture: Essbase

    - by Marc Schumacher
    Since a lot of components exist to access or administer Essbase, there are also a number of client tools available. End users typically use the Excel Add-In or SmartView nowadays. While the Excel Add-In talks to the Essbase server directly using various ports, SmartView connects to Essbase through Provider Services using the HTTP protocol. The ability to communicate using a single port is one of the major advantages of SmartView over the Excel Add-In. If you consider using the Excel Add-In going forward, please make sure you are aware of the Statement of Direction for this component. The Administration Services Console, Integration Services Console and Essbase Studio are clients which are mainly used by Essbase administrators or application designers. While Integration Services and Essbase Studio are used to set up Essbase applications by loading metadata, or simply for data loads, Administration Services is utilized for all kinds of Essbase administration. All clients use only one or two ports to talk to their server counterparts, which makes them work through firewalls easily. Although the clients for Provider Services (SmartView) and Administration Services (Administration Services Console) use only a single port to communicate with their backend services, the backend services themselves need the configured Essbase port range to talk to the Essbase server. Any communication to repository databases is done using JDBC connections. Essbase Studio and Integration Services use different technologies to talk to the Essbase server: Integration Services uses CAPI, Essbase Studio uses JAPI. However, both use the configured port range on the Essbase server to talk to Essbase. Connections to data sources are based on either ODBC (Integration Services, Essbase) or JDBC (Essbase Studio). As for all other components discussed previously, when setting up firewall rules, be aware of the fact that all services may need to talk to the external authentication sources; this is not only needed for Shared Services.

    Read the article

  • Internal Data Masking

    - by ACShorten
    By default, the data in the product is unmasked for authorized users. If particular data within an object is considered a candidate for data masking, then the masking capabilities of the product can be used to mask the data in an appropriate fashion. The inbuilt data masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements: An algorithm, of type F1-MASK, is specified to configure the elements of the data masking, including the masking character, the number of suffix characters left unmasked, characters to ignore in the string, and the application service, security type and authorization levels applicable to the mask. A Data Masking Feature Configuration is created to define where the algorithm applies. The specification of the feature allows you to define the fields to mask using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field and even a child record (such as an identifier). The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to the user group or not. For example, say there is a field called CCNBR in the product which holds the credit card details. I would create an algorithm, say CMformatCC, to mask the credit card number with the last few digits unmasked (as the standard in most systems dictates). I would specify on the field mask the following: field="CCNBR", alg="CMformatCC" On the algorithm CMformatCC, I would specify the mask, application service, security type and the authorization level at which users would see the credit card unmasked. To finish the configuration off and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and appropriate authorization level for that group. Whenever a user accesses the CCNBR field on any of the maintenance screens, searches and other screens that use the CCNBR metadata definition, it would then be masked according to the user group the user is a member of. Refer to the documentation supplied with the F1-MASK algorithm type entry for more examples of what is possible.

    Read the article

  • Cross-Platform Migration using Heterogeneous Data Guard

    - by Roy F. Swonger
    Most people think of Data Guard as a disaster recovery solution, and it certainly excels in that role. However, did you know that you can also use Data Guard for platform migration under some conditions? While you would normally have your primary and standby Data Guard systems running on the same OS and hardware platform, there are some heterogeneous combinations of primary and standby system that are supported by Data Guard Physical Standby. One example of heterogeneous Data Guard support is the ability to go between Linux and Windows on many processor architectures. Another is the support for environments that are running HP-UX on both PA-RISC and Itanium hardware. Brand new in 11.2.0.2 is the ability to have both SPARC Solaris and IBM AIX on Power Systems in the same Data Guard environment. See My Oracle Support note 413484.1 for all the details about supported platform combinations. So, why mention this in an upgrade blog? Simple: much of the time required for a platform migration is usually spent copying files from one system to another. If you are moving between systems that are supported by heterogeneous Data Guard, then you can reduce that migration downtime to a matter of minutes. This can be a big win when downtime is at a premium (and isn't downtime always at a premium?). In addition, you get the benefit of being able to keep the old and new environments synchronized until you are sure the migration is successful! A great case study of using Data Guard for a technology refresh is located on this OTN page. The case study showing CERN's methodology isn't highlighted as a link on the overview page, but it is clickable. As always, make sure you are fully versed on the details and restrictions by reading the available documentation and MOS notes. Happy migrating!

    Read the article

  • SFTP permission denied on files owned by www-data

    - by Charles Roper
    I have a pretty standard server set up running Apache and PHP. An app I am running creates files and these are owned by the Apache user www-data. Files that I upload via SFTP are owned by my own user charlesr. All files are part of the www-data group. My problem is that I cannot modify or overwrite any of the files via SFTP which are owned by www-data, even though charlesr is part of the www-data group. I can modify the files no problem via an SSH session. So I'm not sure what to do. How do I give my SFTP session permissions to modify www-data-owned files? For a bit of background, these are the notes I wrote for myself when setting up the server: Now set up permissions on `/var/www` where your files are served from by default: $ sudo adduser $USER www-data $ sudo chgrp -R www-data /var/www $ sudo chmod -R g+rw /var/www $ sudo chmod -R g+s /var/www Now log out and log in again to make the changes take hold. The previous set of commands does the following: 1. adds the current user ($USER) to the `www-data` group; 2. changes `/var/www` to belong to the `www-data` group; 3. adds read/write permissions to the group that `/var/www` belongs to; 4. sets the SGID bit on `/var/www`; this final point bears some explaining. And then I go on to explain to myself what setting the SGID bit means (i.e. all files created in /var/www become part of the www-data group automatically). Btw, nothing feels sweeter than going back and reading your own detailed notes on the what, how and why of your own server set up when trying to troubleshoot like this - I recommend it highly to all beginners like myself :-)

    Read the article

  • How do I find the JavaScript that is invoked when I click on a button or a link in a web-page (part of a data mining project)?

    - by aste123
    I tried to use the 'inspect element' feature of the Firebug add-on for Firefox, but it doesn't give me any link to the JavaScript. For example, I got this from the Firebug add-on: <a href="javascript:">text of the link</a> But there is no link to the actual JavaScript, or anything that I can use to go directly to it. How do I accomplish this? I need this as part of a personal data mining project that I am doing.

    Read the article

  • Getting Started with Amazon Web Services in NetBeans IDE

    - by Geertjan
    When you need to connect to Amazon Web Services, NetBeans IDE gives you a nice start. You can drag and drop the "itemSearch" service into a Java source file and then various Amazon files are generated for you. From there, you need to do a little bit of work because the request to Amazon needs to be signed before it can be used. Here are some references and places that got me started: http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html https://affiliate-program.amazon.com/gp/flex/advertising/api/sign-in.html You definitely need to sign up to the Amazon Associates program and register/create an Access Key ID, which will also get you a Secret Key. Here's a simple Main class that I created that hooks into the generated RestConnection/RestResponse code created by NetBeans IDE: public static void main(String[] args) {    try {        String searchIndex = "Books";        String keywords = "Romeo and Juliet";        RestResponse result = AmazonAssociatesService.itemSearch(searchIndex, keywords);        String dataAsString = result.getDataAsString();        int start = dataAsString.indexOf("<Author>")+8;        int end = dataAsString.indexOf("</Author>");        System.out.println(dataAsString.substring(start,end));    } catch (Exception ex) {        ex.printStackTrace();    }} Then I deleted the generated properties file and the authenticator and changed the generated AmazonAssociatesService.java file to the following: public class AmazonAssociatesService {    private static void sleep(long millis) {        try {            Thread.sleep(millis);        } catch (Throwable th) {        }    }    public static RestResponse itemSearch(String searchIndex, String keywords) throws IOException {        SignedRequestsHelper helper;        RestConnection conn = null;        Map queryMap = new HashMap();        queryMap.put("Service", "AWSECommerceService");        queryMap.put("AssociateTag", "myAssociateTag");        queryMap.put("AWSAccessKeyId", "myAccessKeyId");        queryMap.put("Operation", "ItemSearch");        queryMap.put("SearchIndex", searchIndex);        queryMap.put("Keywords", keywords);        try {            helper = SignedRequestsHelper.getInstance(                    "ecs.amazonaws.com",                    "myAccessKeyId",                    "mySecretKey");            String sign = helper.sign(queryMap);            conn = new RestConnection(sign);        } catch (IllegalArgumentException | UnsupportedEncodingException | NoSuchAlgorithmException | InvalidKeyException ex) {        }        sleep(1000);        return conn.get(null);    }} Finally, I copied this class into my application, which you can see is referred to above: http://code.google.com/p/amazon-product-advertising-api-sample/source/browse/src/com/amazon/advertising/api/sample/SignedRequestsHelper.java Here's the completed app, mostly generated via the drag/drop shown at the start, but slightly edited as shown above. That's all; now everything works as you'd expect.

    Read the article

  • Displaying a Grid of Data in ASP.NET MVC

    One of the most common tasks we face as web developers is displaying data in a grid. In its simplest incarnation, a grid merely displays information about a set of records - the orders placed by a particular customer, perhaps; however, most grids offer features like sorting, paging, and filtering to present the data in a more useful and readable manner. In ASP.NET WebForms the GridView control offers a quick and easy way to display a set of records in a grid, and offers features like sorting, paging, editing, and deleting with just a little extra work. On page load, the GridView automatically renders as an HTML <table> element, freeing you from having to write any markup and letting you focus instead on retrieving and binding the data to display to the GridView. In an ASP.NET MVC application, however, developers are on the hook for generating the markup rendered by each view. This task can be a bit daunting for developers new to ASP.NET MVC, especially those who have a background in WebForms. This is the first in a series of articles that explore how to display grids in an ASP.NET MVC application. This installment starts with a walkthrough of creating the ASP.NET MVC application and data access code used throughout this series. Next, it shows how to display a set of records in a simple grid. Future installments examine how to create richer grids that include sorting, paging, filtering, and client-side enhancements. We'll also look at pre-built grid solutions, like the Grid component in the MvcContrib project and JavaScript-based grids like jqGrid. But first things first - let's create an ASP.NET MVC application and see how to display database records in a web page. Read on to learn more! Read More >
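    To make the division of labor concrete, here is a bare-bones sketch of the "simple grid" pattern the series builds toward: the controller hands records to the view, and the view emits the <table> itself. Product and IProductRepository are hypothetical stand-ins for the series' actual data access code.

```csharp
// Hedged sketch: the controller supplies the model; the markup lives in the view.
using System.Collections.Generic;
using System.Web.Mvc;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public interface IProductRepository
{
    IEnumerable<Product> GetAll();
}

public class ProductsController : Controller
{
    private readonly IProductRepository repository;

    public ProductsController(IProductRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Index()
    {
        IEnumerable<Product> products = repository.GetAll();
        // The strongly-typed view loops over the model, writing one <tr>
        // per product; the markup is entirely in the developer's hands.
        return View(products);
    }
}
```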

    Read the article

  • A Cost Effective Solution to Securing Retail Data

    - by MichaelM-Oracle
    By Mike Wion, Director, Security Solutions, Oracle Consulting Services As so many noticed last holiday season, data breaches, especially those at major retailers, are now a significant risk that requires advance preparation. The need to secure data at all access points is now driven by an expanding privacy and regulatory environment, coupled with an increasingly dangerous world of hackers, insider threats, organized crime, and other groups intent on stealing valuable data. The newly released Oracle whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c outlines a powerful story related to a defense-in-depth, multi-layered security model that includes preventive, detective, and administrative controls for data security. At Oracle Consulting Services (OCS), we help to alleviate the fear of a massive data breach by providing expert services to assist our clients with the planning and deployment of Oracle's Database Security solutions. With our deep expertise in Oracle Database security, Oracle Consulting can help clients protect data with the security solutions they need to succeed, through architecture/planning, implementation, and expert services, which, in turn, provide faster adoption of and return on investment in Oracle solutions. On June 10th at 10:00 AM PST, Larry Ellison will present an exclusive webcast entitled "The Future of Database Begins Soon". In this webcast, Larry will launch the highly anticipated Oracle Database In-Memory technology that will make it possible to perform true real-time, ad-hoc analytic queries on your organization's business data as it exists at that moment, and receive the results immediately. Imagine real-time analytics available across your existing Oracle applications! Click here to download the whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c.

    Read the article

  • Spreading incoming batched data into a real-time stream

    - by pr1001
    I would like to display some events in 'real time'. However, I must fetch the data from another source. I can request the last X minutes, though the source is updated approximately every 5 minutes. This means that there will be a delay between the most recent data retrieved and the point in time that I make the request. Second, because I will be receiving a batch of data, I don't want to just fire all the events down a socket once my fetcher has retrieved them: I would like to spread out the events so that they are both accurately spaced amongst each other and in sync with their original occurrences (e.g. an event is always displayed 6 minutes after it actually happened). My thought is to fetch the data every 5 minutes from the source, knowing that I won't get the very latest data. The original data would then be queued to be sent down the socket 7.5 minutes after its original timestamp – that is, at least ~2.5 minutes from when its batch was fetched and at most 7.5 minutes since then. My question is this: is this the best way to approach the problem? Does this problem have any standard approaches or associated literature related to implementation best practices and edge cases? I am a bit worried that the frequency of my fetches and the frequency at which the source is updated will get out of sync, leading to points where no data will be retrieved from the source. However, since my socket delay is greater than my fetch frequency, the subsequent fetch should retrieve newer data before the socket queue is empty. Is that correct? Am I missing something? Thanks!
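    That delayed-replay scheme can be expressed compactly. Below is a minimal sketch (not the poster's code) that emits each event a fixed offset after its original timestamp, preserving the original spacing; it assumes batches arrive roughly in timestamp order.

```csharp
// Minimal delayed-replay sketch: a producer enqueues fetched batches,
// a single consumer sleeps until each event's scheduled emit time.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

static class DelayedReplayer
{
    static readonly TimeSpan Delay = TimeSpan.FromMinutes(7.5);
    static readonly BlockingCollection<(DateTime Timestamp, string Payload)> queue =
        new BlockingCollection<(DateTime, string)>();

    // The fetcher calls this every ~5 minutes with the newly retrieved batch.
    public static void EnqueueBatch(IEnumerable<(DateTime Timestamp, string Payload)> batch)
    {
        foreach (var evt in batch) queue.Add(evt);
    }

    // Single consumer thread: wait out each event's remaining delay, then emit.
    public static void Run(Action<string> emit)
    {
        foreach (var evt in queue.GetConsumingEnumerable())
        {
            TimeSpan wait = evt.Timestamp + Delay - DateTime.UtcNow;
            if (wait > TimeSpan.Zero) Thread.Sleep(wait);
            emit(evt.Payload); // e.g. push the event down the socket
        }
    }
}
```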

    Read the article

  • Fiddler - A useful free tool for checking Web Services (and web site) traffic

    - by TATWORTH
    Recently I had reason to be very glad that I had Fiddler. I was able to record some web service traffic and identify a problem, as Fiddler can record both the call to a web service and the response from it. By seeing the actual data traffic, I was able to resolve a problem found in testing in less time than it has taken me to write this blog entry! This tool is also useful for studying general web site traffic. Fiddler is available from http://www.fiddler2.com/fiddler2/ There are training videos available on the above site.

    Read the article

  • Using R on your Oracle Data Warehouse

    - by jean-pierre.dijcks
    Since Predictive Analytics World is in our backyard (or are we San Francisco's backyard…?), I figured it is well worth the time to dust off some old but important news. With big data (should we start calling it "any data analytics" instead?) being the buzzword and analytics the key operative goal, not moving data around is becoming more and more critical to business users. Why? Because instead of spending time on moving data into your next analytics server, you should be running analytics on those CPUs. You could always do this with Oracle Data Mining within the Oracle Database. But a lot of folks want to leverage R as their main tool. Well, this article describes how you can do this, since 2010… As Casimir Saternos concludes in the article: "There is a growing awareness of the need to effectively analyze astronomical amounts of data, much of which is stored in Oracle databases. Statistics and modeling techniques are used to improve a wide variety of business functions. ODM accessed using the R language increases the value of your data by uncovering additional information. RODM is a powerful tool to enable your organization to make predictions, classify data, and create visualizations that maximize effectiveness and efficiencies." Happy Analysis!

    Read the article

  • Can you/should you develop components for ASP.NET MVC?

    - by Vilx-
    Following on from the previous question, I've started to wonder - is it possible to implement "components" in ASP.NET MVC (latest version)? And should you? Let's clarify what I mean by a "component". With that I mean a "control" (aka "widget"), similar to those that ASP.NET WebForms is built upon. A GridView might be a good example. In WebForms I can place on my form a datasource component (one line of code), a gridview component (another line of code) and bind them together (specify an attribute on the gridview). In the code-behind file I fill the datasource with data (a few lines of DB-querying code), and I'm all set. At this point the gridview is a fully functional standalone component. I can open the form, and I'll see all the data. I can sort it by clicking on the column headers; it is split into several pages; I can drag the column headers around and rearrange columns; I can turn on "grouping" mode; etc. And I don't need to write another line of code for any of it. The gridview, as a component, already has all the code tucked away in its classes and assemblies. I just place it on the form, initialize it, and it Just Works. At times (like sorting or navigation to a different page) it will also perform AJAX callbacks to the server, but those too will be handled internally, with my code having no knowledge at all about it. And then there are also events that I can attach to if I want to get notified when something happens. In MVC I cannot see a way of doing this cleanly. Sure, there are partial views, but those only handle half of the problem - they render the initial HTML. Some more can be achieved with client-side JavaScript (like column rearranging), but when the grid needs to do an AJAX callback (say, to fetch the next page of data), my code will have to get involved and process that request. At best I guess I can provide some helper methods to process it, but I'll have to write the code that calls them, and also provide a controller method with a signature matching the arguments of that callback. I guess that I could make some hacks with global events or special routes or something, but that just seems... hackish. Inelegant. Perhaps this is not the MVC way? Although I've completed one project in it, I'm still far from being an MVC expert. But then what is? In the intranet application that we're building there are dozens upon dozens of such grids. Naturally I want them all to have a unified look & behavior, and I don't want to repeat the same code all over the place. So what's the "MVC" approach to this problem?
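    For what it's worth, the rendering half can at least be centralized with a plain HtmlHelper extension. This is only a hedged sketch with invented names (GridColumn, Grid), not a framework API, and it deliberately leaves the harder half of the problem, routing the sorting/paging callbacks, unsolved; that is exactly the gap described above.

```csharp
// Hedged sketch: one shared grid renderer for ASP.NET MVC 2 views,
// invoked as Html.Grid(Model, columns...) from a strongly-typed view.
using System;
using System.Collections.Generic;
using System.Text;
using System.Web.Mvc;

public class GridColumn<T>
{
    public string Header;
    public Func<T, object> Cell;
}

public static class GridExtensions
{
    public static MvcHtmlString Grid<T>(this HtmlHelper html,
        IEnumerable<T> rows, params GridColumn<T>[] columns)
    {
        var sb = new StringBuilder("<table class=\"grid\"><tr>");
        foreach (var col in columns)
            sb.Append("<th>").Append(html.Encode(col.Header)).Append("</th>");
        sb.Append("</tr>");

        foreach (T row in rows)
        {
            sb.Append("<tr>");
            foreach (var col in columns)
                sb.Append("<td>").Append(html.Encode(col.Cell(row))).Append("</td>");
            sb.Append("</tr>");
        }

        return MvcHtmlString.Create(sb.Append("</table>").ToString());
    }
}
```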

    Read the article

  • How Web Optimization Services Work to Increase Your Online Reputation

    SEO stands for search engine optimization, and it is the key to success for an enterprise. No site has meaning if it isn't properly promoted. Whenever a web surfer is in search of a particular product, service or piece of information, he uses the easiest method of searching: a search engine. It is the habit of many individuals to look only at the top five or six results for their goal. No one has time to look through 100 pages of search engine results, and there is no need to when he finds what he wants in the top pages.

    Read the article
