Search Results

Search found 47679 results on 1908 pages for 'web admin'.

Page 1107 of 1908

  • ASP.NET MVC: what mechanic returns ViewModel objects?

    - by Dr. Zim
    As I understand it, Domain Models are classes that only describe the data (aggregate roots). They are POCOs and do not reference outside libraries (nothing special). View models, on the other hand, are classes that contain domain model objects as well as all the interface-specific objects like SelectList; a ViewModel includes using System.Web.Mvc;. A repository pulls data out of a database and feeds it to us through domain model objects.

    What mechanic or device creates the view model objects, populating them from a database? Would it be a factory that has database access? Would you bleed view-specific classes like System.Web.Mvc into the Repository? Something else?

    For example, if you have a drop-down list of cities, you would reference a SelectList object in the root of your View Model object, right next to your DomainModel reference:

        public class CustomerForm
        {
            public CustomerAddress address { get; set; }
            public SelectList cities { get; set; }
        }

    The cities should come from a database and be in the form of a SelectList object. The hope is that you don't create a special Repository method to extract out just the distinct cities, then create a redundant second SelectList object only so you have the right data types.
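    One way this is commonly handled (a sketch only; the repository interfaces below are hypothetical) is a dedicated view-model builder that lives alongside the controllers: it pulls domain objects from the repositories, and it is the only place allowed to construct MVC-specific types like SelectList, so nothing from System.Web.Mvc bleeds into the Repository.

        using System.Web.Mvc;

        public class CustomerFormBuilder
        {
            private readonly ICustomerRepository _customers;   // hypothetical
            private readonly ICityRepository _cities;          // hypothetical

            public CustomerFormBuilder(ICustomerRepository customers, ICityRepository cities)
            {
                _customers = customers;
                _cities = cities;
            }

            public CustomerForm Build(int customerId)
            {
                // Domain data comes from the repositories...
                CustomerAddress address = _customers.GetAddress(customerId);
                var cityNames = _cities.GetCityNames();

                // ...and the view-specific SelectList is created here, not in the repository.
                return new CustomerForm
                {
                    address = address,
                    cities = new SelectList(cityNames)
                };
            }
        }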

    Read the article

  • Rendering a control generates security exception in .Net 4

    - by Jason Short
    I am having a problem with code that worked fine in .Net 2 giving this error under .Net 4:

        Build (web): Inheritance security rules violated while overriding member:
        'Controls.RelatedPosts.RenderControl(System.Web.UI.HtmlTextWriter)'.
        Security accessibility of the overriding method must match the security
        accessibility of the method being overriden.

    This is in DotNetBlogEngine. There were several other security demands in the code that .Net 4 didn't seem to like. I followed some of the advice I found on blogs (and here) and got rid of all the other errors, but this one still eludes me. The main BlogEngine core dll no longer makes security demands and is compiled for .Net 4 as well. This error is in the website side attempting to use the dll. There are controls that call a RenderControl method taking an HtmlTextWriter. Apparently the text writer now has some sort of security attributes set on it. Each of the controls implements a custom interface (public interface ICustomFilter); there are no security permissions present or demands. The site is running full trust on my local dev machine.
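    Two workarounds commonly suggested for this kind of .NET 4 migration error are sketched below; the class and base type are assumptions taken from the error text, not a confirmed fix for BlogEngine.

        // Option 1: keep the .NET 2.0 (level-1) transparency rules for the whole
        // assembly. This line goes in AssemblyInfo.cs of the project that defines
        // the control.
        [assembly: System.Security.SecurityRules(System.Security.SecurityRuleSet.Level1)]

        // Option 2: give the override a security accessibility that matches the
        // base method, e.g.:
        public class RelatedPosts : System.Web.UI.UserControl, ICustomFilter
        {
            [System.Security.SecuritySafeCritical]
            public override void RenderControl(System.Web.UI.HtmlTextWriter writer)
            {
                base.RenderControl(writer);
            }
        }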

    Read the article

  • RESTful WCF Data Service Authentication

    - by Adrian Grigore
    Hi,

    I'd like to implement a REST API for an existing ASP.NET MVC website. I've managed to set up WCF Data Services so that I can browse my data, but now the question is how to handle authentication. Right now the data service is secured via the site's built-in forms authentication, and that's OK when accessing the service from AJAX forms. However, it's not ideal for a RESTful API.

    What I would like as an alternative to forms authentication is for the users to simply embed the user name and password into the URL to the web service, or as request parameters. For example, if my web service is usually accessible as http://localhost:1234/api.svc, I'd like to be able to access it using the URL http://localhost:1234/api.svc/{login}/{password}.

    So, my questions are as follows: Is this a sane approach? If yes, how can I implement this? It seems trivial to redirect GET requests so that the login and password are attached as GET parameters. I also know how to inspect the HTTP context and use those parameters to filter the results. But I am not sure if / how the same approach could be applied to POST, PUT and DELETE requests.

    Thanks,
    Adrian
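    For illustration only (not an endorsement of credentials in the URL), the check itself can be done against the site's existing membership store once the values are pulled off the request; the parameter names below are assumptions:

        using System.Web;
        using System.Web.Security;

        public static class RequestAuthenticator
        {
            // Returns true if the login/password carried on the request are valid
            // according to the site's existing membership provider.
            public static bool IsAuthorized(HttpRequest request)
            {
                string login = request.QueryString["login"];
                string password = request.QueryString["password"];
                return !string.IsNullOrEmpty(login)
                    && Membership.ValidateUser(login, password);
            }
        }

    The usual caveat with this scheme is that credentials embedded in URLs end up in proxy and server logs, which is why HTTP Basic authentication over HTTPS is the more common answer for covering GET, POST, PUT and DELETE uniformly.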

    Read the article

  • UX question: is it better to have "serious delete" or have "trash"

    - by ftrotter
    I am developing an application that allows a user to manage some individual data points. One of the things that my users will want to do is "delete", but what should that mean? For a web application, is it better to present a user with the option of a serious delete, or to use a "trash" system?

    Under "serious delete" (would love to know if there is a better name for this...) you click "delete" and then the user is warned "this is a final and tragic action. Once you do this you will not be able to get -insert data point name here- back, even if you are crying..." Then if they click delete... well it truly is gone forever.

    Under the "trash" model, you never trust that the user really wants to delete... instead you remove the data point from the "main display" and put it into a bucket called "the trash". This gets it out of the user's way, which is what they usually want, but they can get it back if they make a mistake. Obviously this is the way most operating systems have gone.

    The advantages of "serious delete" are:
      - Easy to implement
      - Easy to explain to users

    The disadvantages of "serious delete" are:
      - it can be tragically final
      - sometimes, cats walk on keyboards

    The advantages of the "trash" system are:
      - user is safe from themselves
      - bulk methods like "delete a bunch at once" make more sense
      - saves support headaches

    The disadvantages of the "trash" system are:
      - For sensitive data, you create an illusion of destruction: users think something is gone, but it is not.
      - Lots of subtle distinctions make implementation more difficult
      - Do you "eventually" delete the contents of the trash?

    My question is: which one is the right design pattern for modern web applications? With enough discussion to justify your answer... Would love to be pointed towards some relevant research.

    -FT
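    To make the implementation-difficulty trade-off concrete, here is a minimal sketch of how the "trash" model is often implemented server-side (entity and field names are hypothetical): delete sets a timestamp, normal queries filter it out, and a later purge job can make removal final for sensitive data.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class DataPoint
        {
            public int Id { get; set; }
            public string Value { get; set; }
            public DateTime? DeletedAt { get; set; }   // null = visible, non-null = in the trash
        }

        public static class DataPointQueries
        {
            public static IEnumerable<DataPoint> Visible(IEnumerable<DataPoint> all)
            {
                return all.Where(p => p.DeletedAt == null);
            }

            public static IEnumerable<DataPoint> Trash(IEnumerable<DataPoint> all)
            {
                return all.Where(p => p.DeletedAt != null);
            }
        }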

    Read the article

  • iis7 compress dynamic content from custom handler

    - by Malloc
    I am having trouble getting dynamic content coming from a custom handler to be compressed by IIS 7. Our handler emits JSON data (Content-Type: application/json; charset=utf-8) and responds to URLs that look like:

        domain.com/example.mal/OperationName?Param1=Val1&Param2=Val2

    In IIS 6, all we had to do was edit the MetaBase.xml and, in the IIsCompressionScheme element, make sure that the HcScriptFileExtensions attribute had the custom extension 'mal' included in it. Static and dynamic compression are turned on at the server and website level. I can confirm that normal .aspx pages are compressed correctly. The only content I cannot get compressed is the content coming from the custom handler. I have tried the following configs with no success:

        <handlers>
          <add name="MyJsonService" verb="GET,POST" path="*.mal"
               type="Library.Web.HttpHandlers.MyJsonServiceHandlerFactory, Library.Web" />
        </handlers>
        <httpCompression>
          <dynamicTypes>
            <add mimeType="application/json" enabled="true" />
          </dynamicTypes>
        </httpCompression>

        <httpCompression>
          <dynamicTypes>
            <add mimeType="application/*" enabled="true" />
          </dynamicTypes>
        </httpCompression>

        <staticContent>
          <mimeMap fileExtension=".mal" mimeType="application/json" />
        </staticContent>
        <httpCompression>
          <dynamicTypes>
            <add mimeType="application/*" enabled="true" />
          </dynamicTypes>
        </httpCompression>

    Thanks in advance for the help.

    Read the article

  • DataTableReader is invalid for current DataTable 'TempTable'

    - by Sk93
    Hi,

    I'm getting the following error whenever my code creates a DataTableReader from a valid DataTable object:

        "DataTableReader is invalid for current DataTable 'TempTable'."

    The thing is, if I reboot my machine, it works fine for an undetermined amount of time, then dies with the above. The code that throws this error could have been working fine for hours and then: bang, you get this error. It's not limited to one line either; it's every single location that a DataTableReader is used. Also, this error does NOT occur on the production web server - ever.

    This is an example of one of the lines where it falls over (screenshot omitted). If I step over this line, I get the exception; however, if I run the same call in the immediate window, I get no problems. Same goes if I actually use that line in the code.

    This has been driving me nuts for the best part of a week, and I've failed to find anything on Google that could help (as I'm pretty positive this isn't a coding issue).

    Some technical info:

    DEV Box:
      - Vista 32bit (with all current windows updates)
      - Visual Studio 2008 v9.0.30729.1 SP
      - dotNet Framework 3.5 SP1

    SQL Server:
      - Microsoft SQL Server 2005 Standard Edition - 9.00.4035.00 (X64)
      - Windows 2003 64bit (with all current windows updates)

    Web Server:
      - Windows 2003 64bit (with all current windows updates)

    Any help, ideas, or advice would be greatly appreciated!

    Cheers,
    Ian
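    For context, since the screenshots are not available, the pattern in question is simply reading a filled DataTable through the reader it creates; a minimal sketch (table and column usage are illustrative):

        using System.Data;

        public static class Example
        {
            public static void ReadRows(DataTable tempTable)
            {
                using (DataTableReader reader = tempTable.CreateDataReader())
                {
                    while (reader.Read())
                    {
                        // read columns here, e.g. reader[0]
                    }
                }
            }
        }

    A DataTableReader can also become invalid if the underlying DataTable is modified or cleared while the reader is open, which may be worth checking in this situation.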

    Read the article

  • Dynamically add event to custom control (Confirm Message Box)

    - by Nyein Nyein Chan Chan
    I have created a custom confirm message box control and I created an event like this:

        [Category("Action")]
        [Description("Raised when the user clicks the button (ok)")]
        public event EventHandler Submit;

        protected virtual void OnSubmit(EventArgs e)
        {
            if (Submit != null)
                Submit(this, e);
        }

    The OnSubmit event occurs when the user clicks the OK button on the confirm box.

        void IPostBackEventHandler.RaisePostBackEvent(string eventArgument)
        {
            OnSubmit(e);
        }

    Now I am adding this OnSubmit event dynamically like this - in the aspx:

        <my:ConfirmMessageBox ID="cfmTest" runat="server"></my:ConfirmMessageBox>
        <asp:Button ID="btnCallMsg" runat="server" onclick="btnCallMsg_Click" />
        <asp:TextBox ID="txtResult" runat="server"></asp:TextBox>

    In the code-behind:

        protected void btnCallMsg_Click(object sender, EventArgs e)
        {
            cfmTest.Submit += cfmTest_Submit; // dynamically add the event handler
            cfmTest.ShowConfirm("Are you sure to Save Data?"); // show the confirm message using the custom control
        }

        protected void cfmTest_Submit(object sender, EventArgs e)
        {
            txtResult.Text = "User Confirmed"; // I set the text to "User Confirmed" but it's not displayed
            txtResult.Focus(); // I focus the textbox but I get an error
        }

    The error I get is:

        System.InvalidOperationException was unhandled by user code
        Message="SetFocus can only be called before and during PreRender."
        Source="System.Web"

    So, when I dynamically add and fire the custom control's event, there is an error in the web control. If I add the event in the aspx file like this, there is no error and it works fine:

        <my:ConfirmMessageBox ID="cfmTest" runat="server" OnSubmit="cfmTest_Submit"></my:ConfirmMessageBox>

    Can anybody help me to add the event dynamically to the custom control? Thanks.

    Read the article

  • FOR BOUNTY: "QFontEngine(Win) GetTextMetrics failed ()" error on 64-bit Windows

    - by David Murdoch
    I'll add a large bounty to this when Stack Overflow lets me.

    I'm using wkhtmltopdf to convert HTML web pages to PDFs. This works perfectly on my 32-bit dev server [unfortunately, I can't ship my machine :-p ]. However, when I deploy to the web application's 64-bit server the following errors are displayed:

        C:\>wkhtmltopdf http://www.google.com google.pdf
        Loading pages (1/5)
        QFontEngine::loadEngine: GetTextMetrics failed ()    ] 10%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()    ] 36%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        // ...etc....

    and the PDF is created and saved... just WITHOUT text. All form-fields, images, borders, tables, divs, spans, ps, etc are rendered accurately... just void of any text at all.

    Server information:
      - Windows edition: Windows Server Standard Service Pack 2
      - Processor: Intel Xeon E5410 @ 2.33GHz
      - Memory: 8.00 GB
      - System type: 64-bit Operating System

    Can anyone give me a clue as to what is happening and how I can fix this? Also, I wasn't sure what to tag/title this question with... so if you can think of better tags/title, comment them or edit the question. :-)

    Read the article

  • Html.LabelFor and Html.TextBoxFor generate empty HTML code

    - by Ceridan
    I'm writing my first ASP.NET MVC application and there is one big problem for me. I want to make a control that will represent a form, but when I try to generate labels and textboxes it returns an empty page. So, this is my model file (MyModel.cs):

        namespace MyNamespace.Models
        {
            public class MyModel
            {
                [Required(ErrorMessage = "You have to fill this field")]
                [DisplayName("Input name")]
                public string Name { get; set; }
            }
        }

    This is the MyFormControlView.ascx file with my control:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<MyNamespace.Models.MyModel>"%>
        <div>
        <% using (Html.BeginForm())
           {
               Html.LabelFor(m => m.Name);
               Html.TextBoxFor(m => m.Name);
               Html.ValidationMessageFor(m => m.Name);
           } %>
        </div>

    And this is my Index.aspx file where I render the control:

        <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Main.Master" Inherits="System.Web.Mvc.ViewPage<System.Collections.IEnumerable>" %>
        <asp:Content runat="server" ID="MainContent" ContentPlaceHolderID="MainContent">
            This is my control test!
            <% Html.RenderPartial("MyFormControlView", new MyNamespace.Models.MyModel { Name = "MyTestName" }); %>
        </asp:Content>

    So, when I run my application the result is the lonely caption "This is my control test!", and there is no label or textbox on the generated page. If I inspect the source code of the generated page I can see my block, but its inner text is empty. Please, could you help me?
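    A note that may explain the symptom: in ASPX views these helpers return their markup rather than writing it to the response, so calling them inside a plain <% %> block discards the output. Emitting them with output expressions would look like this (a sketch of the same control, not a guaranteed fix for every setup):

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<MyNamespace.Models.MyModel>"%>
        <div>
        <% using (Html.BeginForm()) { %>
            <%= Html.LabelFor(m => m.Name) %>
            <%= Html.TextBoxFor(m => m.Name) %>
            <%= Html.ValidationMessageFor(m => m.Name) %>
        <% } %>
        </div>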

    Read the article

  • IIS8 Asp.net State service remote connection failure

    - by maxisam
    Recently we upgraded our web server to Windows Server 2012 with IIS 8. We have an issue when users try to connect to the ASP.NET State Service on this web server remotely. It always pops up:

        Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.

    In IIS 7 / 7.5 we use the same approach and it works fine. As long as the state service is running and the firewall is set properly, we don't have any problem. However, in IIS 8 it doesn't work. (We even turned off the firewall to test it.) Thanks for helping.
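    For reference, a sketch of the client-side configuration being described (server name is a placeholder; 42424 is the default State Service port). The AllowRemoteConnection registry value named in the error must be set to 1 on the remote box, with the service restarted afterwards, for remote requests to be accepted.

        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=stateserver.example.com:42424"
                        timeout="20" />
        </system.web>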

    Read the article

  • Invalid security validation exception inside a SharePoint workflow

    - by Dan Revell
    I'm having a strange security problem with a SharePoint workflow. Particular calls seem to result in the following exception:

        Microsoft.SharePoint.SPException: The security validation for this page is invalid.

    I've come across this error before and the simple fix is:

        web.AllowUnsafeUpdates = true;
        ...
        web.AllowUnsafeUpdates = false;

    However, I've never once encountered this problem inside a workflow before, since a workflow runs as system. I first got this error in a code activity where I set the value of a column on the list item. Wrapping the item.Update in AllowUnsafeUpdates fixed it. After the code activity I have a CreateTask activity. This also causes the same error, but only after running the code inside the activity's MethodInvoking. In both cases there's a SPListItem.UpdateItem involved within the stack trace. This call is failing a security check. I don't know anything about how this check works so I don't know where to look next.

    This is a strange one, because this SharePoint dev machine has been working fine for some time. No other projects or workflows exhibit this behaviour, so that rules out an installation problem. There's just something about this particular workflow.

    [UPDATE] I've gotten around the issue by just creating a new project and building it up again. I still have the broken one and I'd still like to figure out the problem with it. I'd appreciate any suggestions of what it might be.
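    For completeness, the workaround mentioned above is usually written so the flag is restored even if the update throws; a minimal sketch (column name and value are illustrative):

        using Microsoft.SharePoint;

        public static class WorkflowHelpers
        {
            public static void UpdateItemWithUnsafeUpdates(SPListItem item, object value)
            {
                SPWeb web = item.Web;
                bool previous = web.AllowUnsafeUpdates;
                try
                {
                    web.AllowUnsafeUpdates = true;
                    item["Status"] = value;
                    item.Update();
                }
                finally
                {
                    // Always restore the previous setting.
                    web.AllowUnsafeUpdates = previous;
                }
            }
        }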

    Read the article

  • Solutions for working with multiple branches in ASP.Net

    - by Corey McKinnon
    At work, we are often working on multiple branches of our product at one time. For example, right now, we have a maintenance branch, a branch with code just going to QA, and a branch for a new major initiative that won't be merged for some time now. Our web project is set up to use IIS, so every time we switch to a different branch, we have to go into IIS Admin and change the path on the virtual directory, then reset IIS, and sometimes even restart Visual Studio, to avoid getting build errors. Is there any way to simplify this, other than not having our web project set up as a virtual directory? I'm not sure we want to make that change at this point. What do you do to make this easier, assuming you do this?

    Corey

    @RedWolves, virtual machines would definitely work, but I'm not sure it would be any simpler, especially for some of the other developers on my team, which is partly why I'm looking for more simplicity.

    @Dan, we're not able to change source control providers, unfortunately.

    @pix0r, that's something I'll try when I get back to work. Thanks for the suggestion.

    @Haacked, I'll have to give that a try too, but I think we have some issues with why that won't work (I can't remember exactly why right now; this application was originally written in .Net 1.1, pre-Cassini, and I can't remember if we tried it when we upgraded to 2.0 or not).

    Thanks all for the responses so far.

    Read the article

  • deserialize Json using JavaScriptSerializer Custom Types

    - by Dave
    I have a RESTful WCF service returning a .NET custom type as a JSON string. I am using the .NET 3.5 framework and I am using JavaScriptSerializer for deserializing it into my custom type. Serialization to JSON is handled by WCF.

        using (WebResponse resp = req.GetResponse())
        {
            using (System.IO.StreamReader sreader = new System.IO.StreamReader(resp.GetResponseStream()))
            {
                string jsonString = sreader.ReadToEnd();
                CustomType myType = new CustomType();
                System.Web.Script.Serialization.JavaScriptSerializer serializer =
                    new System.Web.Script.Serialization.JavaScriptSerializer();
                myType = serializer.Deserialize<CustomType>(jsonString);
                return myType;
            }
        }

    The problem is that I have a property (or element) of the CustomType which is a generic list of another custom type, ChildObject. They are in different namespaces. When I deserialize using the code above, I get all the properties of the CustomType myType, including a list of ChildObject. But all the properties of the ChildObject are null. The string returned from the stream reader has all the values for the ChildObject; the values are lost when deserializing. Can anyone help me with this please?
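    A sketch of the shape JavaScriptSerializer can populate, offered as a likely line of investigation rather than a confirmed diagnosis (property names below are hypothetical): it fills public read/write properties (or public fields) whose names match the JSON keys on a type with a parameterless constructor, so non-public setters, read-only members, or mismatched names on ChildObject would produce exactly this "parent fine, children null" symptom.

        using System.Collections.Generic;

        public class ChildObject
        {
            public string Name { get; set; }       // name must match the JSON key
            public int Quantity { get; set; }
        }

        public class CustomType
        {
            public string Id { get; set; }
            public List<ChildObject> Children { get; set; }
        }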

    Read the article

  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation...

    I will have a Windows application and a web application (presentation layers); these will both access a common business layer. The business layer will look at a configuration file to find the name of the dll (data layer) which it will create a reference to at runtime (is this the best approach?). The reason for creating the reference to the data access layer at runtime is that the application will interface with a different 3rd party accounting system depending on what the client is using. So I would have a separate data access layer to support each accounting system. These could be separate setup projects; each client would use one or the other, they wouldn't need to switch between the two.

    Projects:
      - MyCompany.Common.dll - contains interfaces; all other projects have a reference to this one.
      - MyCompany.Windows.dll - Windows Forms project, references MyCompany.Business.dll
      - MyCompany.Web.dll - website project, references MyCompany.Business.dll
      - MyCompany.Business.dll - business layer, references MyCompany.Data.* (at runtime)
      - MyCompany.Data.AccountingSys1.dll - data layer for accounting system 1
      - MyCompany.Data.AccountingSys2.dll - data layer for accounting system 2

    The project MyCompany.Common.dll would contain all the interfaces; each other project would have a reference to this one.

        Public Interface ICompany
            ReadOnly Property Id() As Integer
            Property Name() As String
            Sub Save()
        End Interface

        Public Interface ICompanyFactory
            Function CreateCompany() As ICompany
        End Interface

    The projects MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain classes like the following:

        Public Class Company
            Implements ICompany

            Protected _id As Integer
            Protected _name As String

            Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id
                Get
                    Return _id
                End Get
            End Property

            Public Property Name As String Implements MyCompany.Common.ICompany.Name
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Public Sub Save() Implements MyCompany.Common.ICompany.Save
                Throw New NotImplementedException()
            End Sub
        End Class

        Public Class CompanyFactory
            Implements ICompanyFactory

            Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany
                Return New Company()
            End Function
        End Class

    The project MyCompany.Business.dll would provide the business rules and retrieve data from the data layer:

        Public Class Companies
            Public Shared Function CreateCompany() As ICompany
                Dim factory As New MyCompany.Data.CompanyFactory
                Return factory.CreateCompany()
            End Function
        End Class

    Any opinions/suggestions would be greatly appreciated.
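    One hedged sketch of the runtime-binding step described above (shown in C# for brevity; the setting name and type names are placeholders): the business layer reads an assembly-qualified type name from configuration and creates the factory through the shared interface, so it never takes a compile-time reference to either accounting-system assembly.

        using System;
        using System.Configuration;

        public static class DataLayerLoader
        {
            public static ICompanyFactory CreateCompanyFactory()
            {
                // e.g. "MyCompany.Data.AccountingSys1.CompanyFactory, MyCompany.Data.AccountingSys1"
                string typeName = ConfigurationManager.AppSettings["CompanyFactoryType"];
                Type factoryType = Type.GetType(typeName, true);   // throws if the type cannot be loaded
                return (ICompanyFactory)Activator.CreateInstance(factoryType);
            }
        }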

    Read the article

  • Take screenshot with Selenium: WaitForPageToLoad does not wait long enough

    - by OregonGhost
    I'm trying to get screenshots from a web page with multiple browsers. Just experimenting with Selenium RC, I wrote code like this:

        var sel = new DefaultSelenium(server, 4444, target, url);
        sel.Start();
        sel.Open(url);
        sel.WaitForPageToLoad("30000");
        var imageString = sel.CaptureScreenshotToString();

    This basically works, but in most cases the screenshot is of a blank browser window, because the page is not yet ready for display. It kind of works if I add a sleep just after the WaitForPageToLoad, but that slows down the fast browsers and/or may be too short for the slower browsers (or under load).

    A typical solution for this seems to be to wait for the presence of a certain element. However, this is meant as a simple generic solution to get a screenshot of a local web page with as many browsers as possible (to test the layout), and I don't want to have to enter certain element names or whatever. It's a simple tool where you just enter the Selenium Server URL and the URL you want to test, and get the screenshots back. Any advice?
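    One generic (if imperfect) workaround sometimes suggested is to poll the browser's document.readyState instead of sleeping for a fixed time; a sketch continuing the snippet above, which still won't catch late AJAX updates or slow-loading images:

        sel.WaitForPageToLoad("30000");
        sel.WaitForCondition(
            "selenium.browserbot.getCurrentWindow().document.readyState == 'complete'",
            "30000");
        var imageString = sel.CaptureScreenshotToString();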

    Read the article

  • Will JSON replace XML as a data format?

    - by 13ren
    When I first saw XML, I thought it was basically a representation of trees. Then I thought: the important thing isn't that it's a particularly good representation of trees, but that it is one that everyone agrees on. Just like ASCII. And once established, it's hard to displace due to network effects. The new alternative would have to be much better (maybe 10 times better) to displace it. Of course, ASCII has been (mostly) replaced by Unicode, for internationalization.

    According to Google Trends, XML has a 43x lead, but is declining - while JSON grows. Will JSON replace XML as a data format? (edited) For which tasks? For which programmers/industries?

    NOTES:
      - S-expressions (from Lisp) are another representation of trees, but one that has not gained mainstream adoption. There are many, many other proposals, such as YAML and Protocol Buffers (for binary formats).
      - I can see JSON dominating the space of communicating with client-side AJAX (AJAJ?), and this possibly could back-spread into other systems transitively.
      - XML, being based on SGML, is better than JSON as a document format. I'm interested in XML as a data format.
      - XML has an established ecosystem that JSON lacks, especially ways of defining formats (XML Schema) and transforming them (XSLT). XML also has many other standards, esp. for web services - but their weight and complexity can arguably count against XML, and make people want a fresh start (similar to "web services" beginning as a fresh start over CORBA).
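    For concreteness, here is the same small tree in both formats (the data is illustrative):

        <person>
          <name>Ada</name>
          <languages>
            <language>English</language>
            <language>French</language>
          </languages>
        </person>

        { "person": { "name": "Ada", "languages": ["English", "French"] } }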

    Read the article

  • SQL Cache Dependency not working with Stored Procedure

    - by pjacko
    Hello, I can't get SqlCacheDependency to work with a simple stored proc (SQL Server 2008):

        create proc dbo.spGetPeteTest
        as
            set ANSI_NULLS ON
            set ANSI_PADDING ON
            set ANSI_WARNINGS ON
            set CONCAT_NULL_YIELDS_NULL ON
            set QUOTED_IDENTIFIER ON
            set NUMERIC_ROUNDABORT OFF
            set ARITHABORT ON

            select Id, Artist, Album from dbo.PeteTest

    And here's my ASP.NET code (3.5 framework):

        -- global.asax
        protected void Application_Start(object sender, EventArgs e)
        {
            string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["MyConn"].ConnectionString;
            System.Data.SqlClient.SqlDependency.Start(connectionString);
        }

        -- Code-Behind
        private DataTable GetAlbums()
        {
            string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["UnigoConnection"].ConnectionString;

            DataTable dtAlbums = new DataTable();
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                // Works using select statement, but NOT SP with same text
                //SqlCommand command = new SqlCommand(
                //    "select Id, Artist, Album from dbo.PeteTest", connection);
                SqlCommand command = new SqlCommand();
                command.Connection = connection;
                command.CommandType = CommandType.StoredProcedure;
                command.CommandText = "dbo.spGetPeteTest";

                System.Web.Caching.SqlCacheDependency new_dependency =
                    new System.Web.Caching.SqlCacheDependency(command);

                SqlDataAdapter DA1 = new SqlDataAdapter();
                DA1.SelectCommand = command;

                DataSet DS1 = new DataSet();
                DA1.Fill(DS1);
                dtAlbums = DS1.Tables[0];

                Cache.Insert("Albums", dtAlbums, new_dependency);
            }
            return dtAlbums;
        }

    Anyone have any luck with getting this to work with SPs? Thanks!

    Read the article

  • Make JAXWS-based webservice implement interface and unmarshall to known POJOs

    - by John K
    Given a Java SE 6 client, I would like to provide a configurable back-end: either directly to a database or through a web service which connects to a centralized DB. To that end, I've created some JPA- and JAXB-annotated entity classes and a DAO interface in a POJO library, like the following:

        public interface MyDaoInterface {
            public MyEntity doSomething();
        }

        @javax.persistence.Entity
        @javax.xml.bind.annotation.XmlRootElement
        public class MyEntity {
            private int a;
            ....
        }

    Now, I would like to have my auto-generated web service stubs implement that interface and interact with my defined entity classes, rather than the generated classes provided via the JAXB unmarshaller. So, the client-side pseudo code would be something like:

        MyDaoInterface dao;
        if (usingWebservice)
            dao = new WebserviceDao();
        else
            dao = new JpaDao();
        MyEntity e = dao.doSomething();

    Is this possible with JPA, JAXB, JAXWS? Is this even advisable? Currently we achieve this through a slow manual process of massaging code, copying generated classes, and doing other things that seem just plain wrong to me.

    Read the article

  • How to avoid an HttpException due to timeout

    - by Dan
    I'm working on a website powered by .NET ASP/C# code. The clients require that sessions have a 25 minute timeout. However, sometimes the site is in use and a user stays connected for long periods of time (longer than 25 mins). Session_End is triggered:

        protected void Session_End(Object sender, EventArgs e)
        {
            Hashtable trackingInformation = (Hashtable)Application["trackingInformation"];
            trackingInformation.Remove(Session["trackingID"]);
        }

    The user returns some time later, but when they interact with the website, they get an error, and we get this email notification:

        User: Unauthenticated User
        Error: System.Web.HttpException
        Description: Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request...

    The telling part of the stack trace is System.Web.UI.Control.AddedControl. Apparently, the server has thrown away the session data and is sending new data, but the client is trying to deal with old data. Hence the error that "the control tree into which viewstate is being loaded [doesn't] match the control tree that was used to save the viewstate during the previous request."

    So here's the question: how can I instruct the user's browser to redirect to a "you're logged out" screen when the connection times out? (Is it something I should add to the Session_End method?)
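    Session_End runs on the server with no request or response in flight, so it cannot reach the browser. A hedged sketch of the usual alternative (the cookie name is the ASP.NET default, the base class and redirect target are placeholders) is to detect, on the user's next request, that a session cookie was presented for a session that no longer exists and redirect then, before the page tries to reload its control tree:

        using System;
        using System.Web.UI;

        public class TimeoutAwarePage : Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);

                if (Session != null && Session.IsNewSession)
                {
                    string cookieHeader = Request.Headers["Cookie"];
                    if (cookieHeader != null && cookieHeader.IndexOf("ASP.NET_SessionId") >= 0)
                    {
                        // The browser sent a session cookie, but the server started a
                        // brand-new session: the old one timed out.
                        Response.Redirect("~/LoggedOut.aspx");
                    }
                }
            }
        }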

    Read the article

  • Create 2 connection pools using c3p0 in Jetty

    - by Mike
    Hello, I'm trying to set up a Maven web project that runs Jetty. In this project I need two JNDI resources, so my plan is to configure two connection pools using c3p0 in Jetty. I created WEB-INF/jetty-env.xml with the following:

        <Configure class="org.mortbay.jetty.webapp.WebAppContext">
          <New id="ds1" class="org.mortbay.jetty.plus.naming.Resource">
            <Arg>jdbc/ds1</Arg>
            <Arg>
              <New class="com.mchange.v2.c3p0.ComboPooledDataSource">
                // ... JTDS to SQL Server - omitted for brevity
              </New>
            </Arg>
          </New>
          <New id="ds2" class="org.mortbay.jetty.plus.naming.Resource">
            <Arg>jdbc/ds2</Arg>
            <Arg>
              <New class="com.mchange.v2.c3p0.ComboPooledDataSource">
                // ... JTDS to Sybase - omitted for brevity
              </New>
            </Arg>
          </New>
        </Configure>

    When I run Jetty, I get this exception:

        May 14, 2010 1:16:56 PM com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource getPoolManager
        INFO: Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> ... ... ...
        Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0" java.lang.LinkageError: net.sourceforge.jtds.jdbc.DefaultProperties
            at java.lang.ClassLoader.defineClassImpl(Native Method)
            at java.lang.ClassLoader.defineClass(ClassLoader.java:258)

    It seems to me that I can't create two connection pools using c3p0. If I remove either one of the connection pools, it works. What am I doing wrong? How do I create two connection pools in Jetty? Thanks much.

    Read the article

  • wsdl return an array of complex types

    - by Anand
    Hi, I have defined a web service that will return data from my MySQL database. I have written the web service in PHP. I have defined a complex type as follows:

        $server->wsdl->addComplexType(
            'Category',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'),
                'category_child_id'  => array('name' => 'category_child_id',  'type' => 'xsd:int'),
                'category_list'      => array('name' => 'category_list',      'type' => 'xsd:int')
            )
        );

    The above complex type is a row in a table in my database. Now my function must send an array of these rows, so how do I achieve the same? My code is as follows:

        require_once('./nusoap/nusoap.php');
        $server = new soap_server;
        $server->configureWSDL('productwsdl', 'urn:productwsdl');

        // Register the data structures used by the service
        $server->wsdl->addComplexType(
            'Category',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'),
                'category_child_id'  => array('name' => 'category_child_id',  'type' => 'xsd:int'),
                'category_list'      => array('name' => 'category_list',      'type' => 'xsd:int')
            )
        );

        $server->register('getaproduct',                            // method name
            array(),                                                // input parameters
            //array('return' => array('result' => 'tns:Category')), // output parameters
            array('return' => 'tns:Category'),                      // output parameters
            'urn:productwsdl',                                      // namespace
            'urn:productwsdl#getaproduct',                          // soapaction
            'rpc',                                                  // style
            'encoded',                                              // use
            'Get the product categories'                            // documentation
        );

        function getaproduct()
        {
            $conn = mysql_connect('localhost', 'root', '');
            mysql_select_db('sssl', $conn);

            $sql = "SELECT * FROM jos_vm_category_xref";
            $q = mysql_query($sql);
            while ($r = mysql_fetch_array($q)) {
                $items[] = array(
                    'category_parent_id' => $r['category_parent_id'],
                    'category_child_id'  => $r['category_child_id'],
                    'category_list'      => $r['category_list']
                );
            }
            return $items;
        }

        // Use the request to (try to) invoke the service
        $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : '';
        $server->service($HTTP_RAW_POST_DATA);

    Read the article

  • Could not load ConfigurationSection class - type

    - by nCdy
    In web.config:

        <section name="FlowWebDataProviders" type="FlowWebProvidersSection" requirePermission="false"/>

        <FlowWebDataProviders peopleProviderName="sqlProvider" IzmListProviderName="sqlProvider">
          <PeopleProviders>
            <add name="sqlProvider" type="SqlPeopleProvider" connectionStringName="FlowServerConnectionString"/>
            <add name="xmlProvider" type="XmlPeopleProvider" schemaFile="People.xsd" dataFile="People.xml"/>
          </PeopleProviders>
          <IzmListProviders>
            <add name="sqlProvider" type="SqlIzmListProvider" connectionStringName="FlowServerConnectionString"/>
          </IzmListProviders>
        </FlowWebDataProviders>

    and:

        public class FlowWebProvidersSection : ConfigurationSection
        {
            [ConfigurationProperty("peopleProviderName", IsRequired = true)]
            public PeopleProviderName : string
            {
                get { this["peopleProviderName"] :> string }
                set { this["peopleProviderName"] = value; }
            }

            [ConfigurationProperty("IzmListProviderName", IsRequired = true)]
            public IzmListProviderName : string
            {
                get { (this["IzmListProviderName"] :> string) }
                set { this["IzmListProviderName"] = value; }
            }

            [ConfigurationProperty("PeopleProviders")]
            [ConfigurationValidatorAttribute(typeof(ProviderSettingsValidation))]
            public PeopleProviders : ProviderSettingsCollection
            {
                get { this["PeopleProviders"] :> ProviderSettingsCollection }
            }

            [ConfigurationProperty("IzmListProviders")]
            [ConfigurationValidatorAttribute(typeof(ProviderSettingsValidation))]
            public IzmListProviders : ProviderSettingsCollection
            {
                get { this["IzmListProviders"] :> ProviderSettingsCollection }
            }
        }

    and:

        public class ProviderSettingsValidation : ConfigurationValidatorBase
        {
            public override CanValidate(typex : Type) : bool
            {
                if (typex : object == typeof(ProviderSettingsCollection)) true else false
            }

            /// <summary>
            /// validate the provider section
            /// </summary>
            public override Validate(value : object) : void
            {
                mutable providerCollection : ProviderSettingsCollection = match (value)
                {
                    | x is ProviderSettingsCollection => x
                    | _ => null
                }

                unless (providerCollection == null)
                {
                    foreach (_provider is ProviderSettings in providerCollection)
                    {
                        when (String.IsNullOrEmpty(_provider.Type))
                        {
                            throw ConfigurationErrorsException("Type was not defined in the provider");
                        }

                        mutable dataAccessType : Type = Type.GetType(_provider.Type);
                        when (dataAccessType == null)
                        {
                            throw (InvalidOperationException("Provider's Type could not be found"));
                        }
                    }
                }
            }
        }

    The project is a Web Application. I need to find the cause of this error first. The parser error message is:

        Error creating configuration section handler for FlowWebDataProviders: Could not load type 'FlowWebProvidersSection'.

    By the way, the syntax of Nemerle (the language used here) is very similar to C#, so don't be afraid to read the code. Thank you.

    Read the article

  • running bash scripts in php

    - by HDawg
    I have two computers. On the first computer I have Apache running with all my web code. On the second computer I have large amounts of data stored, with a retrieval script (the script usually takes hours to run). I am essentially creating a web UI to access this data without any time delay.

    So I call:

        exec("bash initial.bash");

    This is a driver script that sits in my Apache folder. It calls the script on the other computer by running:

        ssh otherMachine temp.bash &

    which invokes the data retrieval script on the second computer. If I call initial.bash in the terminal, everything works smoothly and successfully, but if I call it in my PHP file, then all my commands in initial.bash run, with the exception of "ssh otherMachine temp.bash &". I put the & at the end of that so that temp.bash will run in the background, since it does take a few hours to complete. I am not sure why the nested script is not running when invoked by Apache.

    Is there a better alternative than using exec or shell_exec to call a script which ultimately calls another script? The reason I don't call a script on the second machine directly is the time it takes the program to run: shell_exec does not render the PHP page until the script is complete.

    Read the article

  • Using Apache Camel how do I unmarshal my deserialized object that comes in through a CXF Endpoint?

    - by ScArcher2
    I have a very simple Camel route. It starts with a CXF endpoint exposed as a web service. I then want to convert it to XML and call a method on a bean. Currently I'm getting a CXF-specific object after the web service call. How do I take my serialized object out of the CXF MessageContentsList and use it going forward?

    My route:

        <camel:route>
          <camel:from uri="cxf:bean:helloEndpoint" />
          <camel:marshal ref="xstream-utf8" />
          <camel:to uri="bean:hello?method=hello"/>
        </camel:route>

    The XML serialized message:

        <?xml version='1.0' encoding='UTF-8'?>
        <org.apache.cxf.message.MessageContentsList serialization="custom">
          <unserializable-parents />
          <list>
            <default>
              <size>1</size>
            </default>
            <int>6</int>
            <com.whatever.Person>
              <firstName>Joe</firstName>
              <middleName></middleName>
              <lastName>Buddah</lastName>
              <dateOfBirth>2010-04-13 12:09:00.137 CDT</dateOfBirth>
            </com.whatever.Person>
          </list>
        </org.apache.cxf.message.MessageContentsList>

    I would expect the XML to be more like this:

        <com.whatever.Person>
          <firstName>Joe</firstName>
          <middleName></middleName>
          <lastName>Buddah</lastName>
          <dateOfBirth>2010-04-13 12:09:00.137 CDT</dateOfBirth>
        </com.whatever.Person>

    Read the article

  • java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, EHCache, etc. I'm wondering which, if any, is appropriate for my use. First, I'm not doing web applications (as most of these projects seem to be geared towards web apps). I write servers (Order Management Systems, actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination, while possibly transforming some of these messages. I am looking at these products to solve the following problems:

      - Safe repository of the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
      - This repository should be distributed: in case one of these data stores crashes, one or two more should be up and I should be able to switch to them seamlessly.
      - This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. This system needs to have high throughput and low latency. By sending my data outside the process, I am necessarily slowing performance, but I am trying to balance absolute raw speed and absolute protection of data.
      - This repository should be safe. Similar to the point about several on-line backups, this system needs to write data to disk (potentially more than one disk).

    I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, EHCache, etc.? Thanks

    Read the article
