Search Results

Search found 2402 results on 97 pages for 'metadata discovery'.

Page 38/97

  • Windows Azure : Storage Client Exception Unhandled

    - by veda
    I am writing code to upload large files into blobs using blocks. When I tested it, it gave me a StorageClientException stating: "One of the request inputs is out of range." I got this exception on this line of the code:

        blob.PutBlock(block, ms, null);

    Here is my code:

        protected void ButUploadBlocks_click(object sender, EventArgs e)
        {
            // store the uploaded file as a blob
            if (uplFileUpload.HasFile)
            {
                name = uplFileUpload.FileName;
                byte[] byteArray = uplFileUpload.FileBytes;
                Int64 contentLength = byteArray.Length;
                int numBytesPerBlock = 250 * 1024; // 250 KB per block
                int blocksCount = (int)Math.Ceiling((double)contentLength / numBytesPerBlock); // number of blocks
                MemoryStream ms;
                List<string> BlockIds = new List<string>();
                string block;
                int offset = 0;

                // get a reference to the cloud blob container
                CloudBlobContainer blobContainer = cloudBlobClient.GetContainerReference("documents");

                // set the name for the uploaded file
                string UploadDocName = name;

                // get the blob reference and set the metadata properties
                CloudBlockBlob blob = blobContainer.GetBlockBlobReference(UploadDocName);
                blob.Properties.ContentType = uplFileUpload.PostedFile.ContentType;

                for (int i = 0; i < blocksCount; i++, offset = offset + numBytesPerBlock)
                {
                    block = Convert.ToBase64String(BitConverter.GetBytes(i));
                    ms = new MemoryStream();
                    ms.Write(byteArray, offset, numBytesPerBlock);
                    blob.PutBlock(block, ms, null);
                    BlockIds.Add(block);
                }
                blob.PutBlockList(BlockIds);
                blob.Metadata["FILETYPE"] = "text";
            }
        }

    Can anyone tell me how to solve it?
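
    Two likely culprits, offered as a guess rather than taken from the original post: the loop always writes numBytesPerBlock bytes, so the final iteration reads past the end of byteArray, and the MemoryStream is never rewound before PutBlock, so the service receives a stream positioned at its end. A minimal sketch of the loop with both issues addressed, reusing the same variables as above:

        for (int i = 0; i < blocksCount; i++, offset += numBytesPerBlock)
        {
            // never read past the end of the source array on the final block
            int bytesInBlock = (int)Math.Min(numBytesPerBlock, contentLength - offset);
            block = Convert.ToBase64String(BitConverter.GetBytes(i));
            ms = new MemoryStream();
            ms.Write(byteArray, offset, bytesInBlock);
            ms.Position = 0; // rewind so PutBlock reads what was just written
            blob.PutBlock(block, ms, null);
            BlockIds.Add(block);
        }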


  • Eclipselink problem in a javase project

    - by arinte
    I am getting this error when I run my EclipseLink project:

        [EL Warning]: 2008.12.05 11:47:08.056--java.lang.NoClassDefFoundError: org/jboss/resource/adapter/jdbc/ValidConnectionChecker was thrown on attempt of PersistenceLoadProcessor to load class com.mysql.jdbc.integration.jboss.MysqlValidConnectionChecker. The class is ignored.
        Exception in thread "main" java.lang.NoClassDefFoundError: org/jboss/resource/adapter/jdbc/ValidConnectionChecker
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClass(Unknown Source)
            at java.security.SecureClassLoader.defineClass(Unknown Source)
            at java.net.URLClassLoader.defineClass(Unknown Source)
            at java.net.URLClassLoader.access$000(Unknown Source)
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at org.eclipse.persistence.internal.jpa.deployment.PersistenceUnitProcessor.loadClass(PersistenceUnitProcessor.java:261)
            at org.eclipse.persistence.internal.jpa.metadata.MetadataProcessor.initPersistenceUnitClasses(MetadataProcessor.java:247)
            at org.eclipse.persistence.internal.jpa.metadata.MetadataProcessor.processEntityMappings(MetadataProcessor.java:422)
            at org.eclipse.persistence.internal.jpa.deployment.PersistenceUnitProcessor.processORMetadata(PersistenceUnitProcessor.java:299)
            at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:830)
            at org.eclipse.persistence.internal.jpa.deployment.JPAInitializer.callPredeploy(JPAInitializer.java:101)
            at org.eclipse.persistence.internal.jpa.deployment.JPAInitializer.initPersistenceUnits(JPAInitializer.java:149)
            at org.eclipse.persistence.internal.jpa.deployment.JPAInitializer.initialize(JPAInitializer.java:135)
            at org.eclipse.persistence.jpa.PersistenceProvider.createEntityManagerFactory(PersistenceProvider.java:104)
            at org.eclipse.persistence.jpa.PersistenceProvider.createEntityManagerFactory(PersistenceProvider.java:64)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:60)
            at Main.main(Main.java:32)
        Caused by: java.lang.ClassNotFoundException: org.jboss.resource.adapter.jdbc.ValidConnectionChecker
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClassInternal(Unknown Source)
            ... 24 more

    Any ideas? Why would EclipseLink need something from JBoss, and which JBoss jar would I need? I would use OpenJPA, but for some reason, after quite a few persists, my app starts throwing a bunch of StackOverflowErrors.
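
    One hedged reading of the warning, not from the original post: EclipseLink appears to be scanning the classpath for entity classes and stumbling over com.mysql.jdbc.integration.jboss.MysqlValidConnectionChecker, a JBoss integration class shipped inside the MySQL Connector/J jar, which in turn references JBoss-only types. A commonly suggested remedy is to list the entities explicitly and turn off scanning; a sketch of persistence.xml under that assumption (the unit name and entity class below are placeholders):

        <persistence-unit name="myPU" transaction-type="RESOURCE_LOCAL">
            <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
            <!-- list each entity class explicitly -->
            <class>com.example.MyEntity</class>
            <!-- stop the provider from scanning jars (such as the MySQL
                 connector) for annotated classes -->
            <exclude-unlisted-classes>true</exclude-unlisted-classes>
        </persistence-unit>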


  • WCF Maximum Read Depth Exception

    - by Burt
    I get the following exception when trying to pass a DTO over WCF services:

        System.Xml.XmlException: The maximum read depth (32) has been exceeded because XML data being read has more levels of nesting than is allowed by the quota. This quota may be increased by changing the MaxDepth property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 5230.
           at System.Xml.XmlExceptionHelper.ThrowXmlException

    The app.config binding looks like this:

        <binding name="WSHttpBinding_IProjectWcfService" closeTimeout="00:10:00"
                 openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"
                 bypassProxyOnLocal="false" transactionFlow="false"
                 hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288"
                 maxReceivedMessageSize="10240000" messageEncoding="Text"
                 textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false">
            <readerQuotas maxDepth="200" maxStringContentLength="8192" maxArrayLength="16384"
                          maxBytesPerRead="4096" maxNameTableCharCount="16384" />
            <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
            <security mode="Message">
                <transport clientCredentialType="Windows" proxyCredentialType="None" realm="">
                    <extendedProtectionPolicy policyEnforcement="Never" />
                </transport>
                <message clientCredentialType="UserName" negotiateServiceCredential="true"
                         algorithmSuite="Default" establishSecurityContext="true" />
            </security>
        </binding>

    Web.config service behaviour:

        <behavior name="BehaviorConfiguration">
            <!-- To avoid disclosing metadata information, set the value below to false
                 and remove the metadata endpoint above before deployment -->
            <dataContractSerializer maxItemsInObjectGraph="6553600" />
            <serviceMetadata httpGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="true" />
            <serviceAuthorization principalPermissionMode="UseAspNetRoles"
                                  roleProviderName="SqlRoleProvider" />
            <serviceCredentials>
                <serviceCertificate findValue="localhost" x509FindType="FindBySubjectName"
                                    storeLocation="LocalMachine" storeName="My" />
                <userNameAuthentication userNamePasswordValidationMode="MembershipProvider"
                                        membershipProviderName="SqlMembershipProvider"/>
            </serviceCredentials>
        </behavior>

    And the DTO looks like this:

        [Serializable]
        [DataContract(IsReference=true)]
        public class MyDto {

    Any help would be appreciated as I am pulling my hair out with it.
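
    A hedged observation, not from the original post: the binding shown lives in app.config, i.e. on the client, while reader quotas apply per side, so the service may still be reading with the default MaxDepth of 32. A sketch of the matching server-side web.config fragment, assuming illustrative service and contract names:

        <bindings>
          <wsHttpBinding>
            <binding name="DeepGraphBinding">
              <readerQuotas maxDepth="200" />
            </binding>
          </wsHttpBinding>
        </bindings>
        <services>
          <service name="MyNamespace.ProjectWcfService" behaviorConfiguration="BehaviorConfiguration">
            <!-- bindingConfiguration must reference the binding with the raised quota -->
            <endpoint address="" binding="wsHttpBinding"
                      bindingConfiguration="DeepGraphBinding"
                      contract="MyNamespace.IProjectWcfService" />
          </service>
        </services>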


  • SQL Server Express 2005 Merge Replication using RMO causes Null Reference exception

    - by Craig Shearer
    I'm trying to use RMO to programmatically perform merge synchronization. I've basically copied the SQL Server example code, as follows:

        // Create a connection to the Subscriber.
        ServerConnection conn = new ServerConnection(subscriberName);
        MergePullSubscription subscription;

        try
        {
            // Connect to the Subscriber.
            conn.Connect();

            // Define the pull subscription.
            subscription = new MergePullSubscription(subscriptionDbName, publisherName,
                publicationDbName, publicationName, conn, false);

            // If the pull subscription exists, then start the synchronization.
            if (subscription.LoadProperties())
            {
                // Check that we have enough metadata to start the agent.
                if (subscription.PublisherSecurity != null || subscription.DistributorSecurity != null)
                {
                    subscription.SynchronizationAgent.Synchronize();
                }
                else
                {
                    throw new ApplicationException("There is insufficient metadata to " +
                        "synchronize the subscription. Recreate the subscription with " +
                        "the agent job or supply the required agent properties at run time.");
                }
            }
            else
            {
                // Do something here if the pull subscription does not exist.
                throw new ApplicationException(String.Format(
                    "A subscription to '{0}' does not exist on {1}",
                    publicationName, subscriberName));
            }
        }
        catch (Exception ex)
        {
            // Implement appropriate error handling here.
            throw new ApplicationException("The subscription could not be " +
                "synchronized. Verify that the subscription has " +
                "been defined correctly.", ex);
        }
        finally
        {
            conn.Disconnect();
        }

    I've got the server merge publication defined correctly, but when I run the above code, I get a null reference exception on the call to:

        subscription.SynchronizationAgent.Synchronize();

    The stack trace is as follows:

        at Microsoft.SqlServer.Replication.MergeSynchronizationAgent.StatusEventSinkMethod(String message, Int32 percent, Int32* returnValue)
        at Test.ConsoleTest.Program.SynchronizePullSubscription() in F:\Visual Studio Projects\Test\source\Test.ConsoleTest\Program.cs:line 124

    It seems, from the stack trace, to be something to do with the Status event, but I don't have a handler defined, and defining one makes no difference.


  • jboss 5.0 data source configuration in ear file. How can I run oracle 10g and 11g on the same server?

    - by Joe
    Currently my setup is, in my ear's META-INF/jboss-app.xml:

        <jboss-app>
            <module>
                <service>datasource-ds.xml</service>
            </module>
        </jboss-app>

    and datasource-ds.xml:

        <datasources>
            <local-tx-datasource>
                <jndi-name>jdbc/mydeployment</jndi-name>
                <connection-url>jdbc:oracle:thin:@eir:myport:mydbname</connection-url>
                <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
                <user-name>myuser</user-name>
                <password>mypassword</password>
                <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
                <metadata>
                    <type-mapping>Oracle9i</type-mapping>
                </metadata>
            </local-tx-datasource>
        </datasources>

    This works when ojdbc5.jar is in my servername/lib. How can I configure the Oracle driver information in my .ear file so that I can have two different ear deployments, one using Oracle 10g and one using Oracle 11g?


  • sqlalchemy date type in 0.6 migration using mssql

    - by nosklo
    I'm connecting to an MSSQL server through pyodbc, via the FreeTDS ODBC driver, on Ubuntu Linux 10.04. SQLAlchemy 0.5 uses DATETIME for sqlalchemy.Date() fields. Now SQLAlchemy 0.6 uses DATE, but SQL Server 2000 doesn't have a DATE type. How can I make DATETIME the default for sqlalchemy.Date() on the SQLAlchemy 0.6 mssql+pyodbc dialect? I'd like to keep it as clean as possible. Here's code to reproduce the issue:

        import sqlalchemy
        from sqlalchemy import Table, Column, MetaData, Date, Integer, create_engine

        engine = create_engine('mssql+pyodbc://sa:sa@myserver/mydb?driver=FreeTDS')
        m = MetaData(bind=engine)
        tb = sqlalchemy.Table('test_date', m,
            Column('id', Integer, primary_key=True),
            Column('dt', Date())
        )
        tb.create()

    And here is the traceback I'm getting:

        Traceback (most recent call last):
          File "/tmp/teste.py", line 15, in <module>
            tb.create()
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/schema.py", line 428, in create
            bind.create(self, checkfirst=checkfirst)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1647, in create
            connection=connection, **kwargs)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1682, in _run_visitor
            **kwargs).traverse_single(element)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/sql/visitors.py", line 77, in traverse_single
            return meth(obj, **kw)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/ddl.py", line 58, in visit_table
            self.connection.execute(schema.CreateTable(table))
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1157, in execute
            params)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1210, in _execute_ddl
            return self.__execute_context(context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1268, in __execute_context
            context.parameters[0], context=context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1367, in _cursor_execute
            context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1360, in _cursor_execute
            context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute
            cursor.execute(statement, parameters)
        sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('42000', '[42000] [FreeTDS][SQL Server]Column or parameter #2: Cannot find data type DATE. (2715) (SQLExecDirectW)') '\nCREATE TABLE test_date (\n\tid INTEGER NOT NULL IDENTITY(1,1), \n\tdt DATE NULL, \n\tPRIMARY KEY (id)\n)\n\n' ()
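
    One possible workaround, offered as a sketch rather than from the original post: wrap Date in a TypeDecorator so the emitted DDL falls back to DATETIME, which SQL Server 2000 understands. The class name is illustrative:

        from sqlalchemy.types import TypeDecorator, DATETIME

        class LegacyDate(TypeDecorator):
            """Emit DATETIME DDL for date columns on pre-2008 SQL Server."""
            impl = DATETIME

        # then in the table definition:
        # Column('dt', LegacyDate())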


  • ServiceRoute + WebServiceHostFactory kills WSDL generation? How to create extensionless WCF service

    - by Ethan J. Brown
    I'm trying to use extensionless / .svc-less WCF services. Can anyone else confirm or deny the issue I'm experiencing? I use routing in code, and do this in Application_Start of global.asax.cs:

        RouteTable.Routes.Add(new ServiceRoute("Data", new WebServiceHostFactory(), typeof(DataDips)));

    I have tested in both IIS 6 and IIS 7.5, and I can use the service just fine (i.e. my extensionless handler is correctly configured for ASP.NET). However, metadata generation is totally screwed up. I can hit my /mex endpoint with the WCF Test Client (and I presume svcutil.exe), but the ?wsdl generation you typically get with .svc is toast. I can't hit it with a browser (I get 400 Bad Request), I can't hit it with wsdl.exe, etc. Metadata generation is configured correctly in web.config. This is a problem, of course, because the service is exposed as basicHttpBinding so that an old-style ASMX client can get to it. But of course, the client can't generate the proxy without a WSDL description. If I instead use service activation routing in config like this, rather than registering a route in code:

        <serviceHostingEnvironment aspNetCompatibilityEnabled="true">
            <serviceActivations>
                <add relativeAddress="Data.svc" service="DataDips" />
            </serviceActivations>
        </serviceHostingEnvironment>

    Then voila... it works. But then I don't have a clean extensionless URL. If I change relativeAddress from Data.svc to Data, I get a configuration exception, as this is not supported by config (you must use an extension registered to WCF). I've also attempted to use this code in conjunction with the above config:

        RouteTable.Routes.MapPageRoute("", "Data/{*data}", "~/Data.svc/{*data}", false);

    My thinking is that I can just point the extensionless URL at the configured .svc URL. This doesn't work: /Data.svc continues to work, but /Data returns a 404. Anyone with any bright ideas?
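
    A hedged guess at the cause, not from the original post: WebServiceHostFactory builds a WebServiceHost, which is intended for webHttpBinding-style REST endpoints and does not expose the usual ?wsdl help page; a SOAP endpoint registered through routing would normally go through ServiceHostFactory instead. A minimal sketch:

        // in Application_Start; ServiceHostFactory (not WebServiceHostFactory)
        // preserves standard metadata behavior for SOAP bindings
        RouteTable.Routes.Add(new ServiceRoute("Data", new ServiceHostFactory(), typeof(DataDips)));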


  • How would you implement API key in WCF Data Service?

    - by rushonerok
    Is there a way to require an API key in the URL, or some other way of passing the service a private key, in order to grant access to the data? I have this right now:

        using System;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Collections.Generic;
        using System.Linq;
        using System.ServiceModel.Web;
        using Numina.Framework;
        using System.Web;
        using System.Configuration;

        [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
        public class odata : DataService
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                //config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }

            protected override void OnStartProcessingRequest(ProcessRequestArgs args)
            {
                HttpRequest Request = HttpContext.Current.Request;
                if (Request["apikey"] != ConfigurationManager.AppSettings["ApiKey"])
                    throw new DataServiceException("ApiKey needed");
                base.OnStartProcessingRequest(args);
            }
        }

    This works but it's not perfect because you cannot get at the metadata and discover the service through the Add Service Reference explorer. I could check if $metadata is in the url but it seems like a hack. Is there a better way?
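
    The $metadata check feels less like a hack when done against the parsed request URI instead of the raw URL; a sketch, assuming the same class as above:

        protected override void OnStartProcessingRequest(ProcessRequestArgs args)
        {
            // let metadata requests through so Add Service Reference still works
            bool isMetadata = args.RequestUri.AbsolutePath
                .EndsWith("$metadata", StringComparison.OrdinalIgnoreCase);
            HttpRequest request = HttpContext.Current.Request;
            if (!isMetadata && request["apikey"] != ConfigurationManager.AppSettings["ApiKey"])
                throw new DataServiceException(403, "ApiKey needed");
            base.OnStartProcessingRequest(args);
        }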


  • Validating DataAnnotations with Validator class

    - by Pablote
    I'm trying to validate a class decorated with data annotations using the Validator class. It works fine when the attributes are applied to the same class, but when I try to use a metadata class it doesn't work. Is there anything I should do with the Validator so that it uses the metadata class? Here's some code. This works:

        public class Persona
        {
            [Required(AllowEmptyStrings = false, ErrorMessage = "El nombre es obligatorio")]
            public string Nombre { get; set; }

            [Range(0, int.MaxValue, ErrorMessage = "La edad no puede ser negativa")]
            public int Edad { get; set; }
        }

    This doesn't work:

        [MetadataType(typeof(Persona_Validation))]
        public class Persona
        {
            public string Nombre { get; set; }
            public int Edad { get; set; }
        }

        public class Persona_Validation
        {
            [Required(AllowEmptyStrings = false, ErrorMessage = "El nombre es obligatorio")]
            public string Nombre { get; set; }

            [Range(0, int.MaxValue, ErrorMessage = "La edad no puede ser negativa")]
            public int Edad { get; set; }
        }

    This is how I validate the instances:

        ValidationContext context = new ValidationContext(p, null, null);
        List<ValidationResult> results = new List<ValidationResult>();
        bool valid = Validator.TryValidateObject(p, context, results, true);

    Thanks.
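
    A commonly cited workaround, offered here as a sketch rather than from the original post: Validator does not honor MetadataTypeAttribute on its own; registering an AssociatedMetadataTypeTypeDescriptionProvider first makes the buddy class visible to it:

        TypeDescriptor.AddProviderTransparent(
            new AssociatedMetadataTypeTypeDescriptionProvider(
                typeof(Persona), typeof(Persona_Validation)),
            typeof(Persona));

        // then validate as before
        ValidationContext context = new ValidationContext(p, null, null);
        List<ValidationResult> results = new List<ValidationResult>();
        bool valid = Validator.TryValidateObject(p, context, results, true);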


  • Error after generating Entity Framework classes with the EdmGen tool

    - by loviji
    Hello. First I read this question, but that knowledge did not help solve my problem. Initially I created an edmx file in Visual Studio, which generated files named uqsModel.Designer.cs and uqsModel.edmx. These files are located in the App_Code folder, and my web app worked normally. A connection string was generated automatically in Web.config:

        <add name="uqsEntities"
             connectionString="metadata=res://*/App_Code.uqsModel.csdl|res://*/App_Code.uqsModel.ssdl|res://*/App_Code.uqsModel.msl;
                 provider=System.Data.SqlClient;
                 provider connection string=&quot;Data Source=aemloviji\sqlexpress;
                 Initial Catalog=uqs;Integrated Security=True;MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    Then I had to generate classes with the EdmGen tool (full generation mode). It generated new files named uqsModel.cs, uqsModel.csdl, uqsModel.msl, uqsModel.ssdl, and uqsViews.cs, saved them to the folder where the edmx files had been, and removed the existing edmx files. Now, when the site redirects to any web page, the server-side code fails with this problem: "Unable to load the specified metadata resource." Some ideas, please.
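
    A hedged guess at the cause, not part of the original post: the res://*/App_Code.uqsModel.* URIs point at metadata resources embedded from the .edmx, and once EdmGen emits loose .csdl/.ssdl/.msl files and the .edmx is gone, those embedded resources no longer exist. The metadata keyword would then have to point at the files themselves, something along these lines (path resolution details vary; absolute paths are the safest first test):

        <add name="uqsEntities"
             connectionString="metadata=App_Code\uqsModel.csdl|App_Code\uqsModel.ssdl|App_Code\uqsModel.msl;
                 provider=System.Data.SqlClient;
                 provider connection string=&quot;Data Source=aemloviji\sqlexpress;
                 Initial Catalog=uqs;Integrated Security=True;MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />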


  • Wicket Authorization Using MetaDataKey

    - by JGirl
    I am trying to implement a simple authorization strategy for my Wicket application. I implemented my own AuthorizationStrategy (extending IAuthorizationStrategy).

    http://old.nabble.com/Authorization-strategy-help-td18948597.html

    After reading the above link, I figured it makes more sense to use metadata-driven authorization than one using annotations. So I have a simple RoleCheck class:

        public class RoleCheck {
            private String privilege;

            public RoleCheck(String priv) {
                this.privilege = priv;
            }

            public void setPrivilege(String privilege) {
                this.privilege = privilege;
            }

            public String getPrivilege() {
                return privilege;
            }
        }

    I add it to a component:

        public static MetaDataKey priv = new MetaDataKey() {};
        editLink.setMetaData(priv, new RoleCheck("Update"));

    And in my authorization strategy class, I try to get the metadata associated with the component:

        public boolean isActionAuthorized(Component component, Action action) {
            if (action.equals(Component.RENDER)) {
                RoleCheck privCheck = (RoleCheck) component.getMetaData(EditControlToolBar.priv);
                if (privCheck != null) {
                    ...
                }
            }

    However, getMetaData gives an error: "Bound mismatch: The generic method getMetaData(MetaDataKey) of type Component is not applicable for the arguments (MetaDataKey). The inferred type RoleCheck is not a valid substitute for the bounded parameter". Any help would be appreciated. Thank you.
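
    A plausible fix, offered as a sketch rather than from the original post: in recent Wicket versions MetaDataKey is generic, and the key's type parameter is what ties getMetaData to RoleCheck, so declaring the key raw produces exactly this kind of bound mismatch. Typed, it would look like:

        public static final MetaDataKey<RoleCheck> PRIV = new MetaDataKey<RoleCheck>() {};

        // storing and reading back, now without a cast
        editLink.setMetaData(PRIV, new RoleCheck("Update"));
        RoleCheck privCheck = component.getMetaData(PRIV);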


  • Service reference addition issue in visual studio 2010

    - by user293072
    I am currently working on an application that allows reverse geocoding using Silverlight + Bing Maps. The thing is that I want to add a reference to the reverse geocoding service documented on MSDN (http://msdn.microsoft.com/en-us/library/cc879136.aspx), i.e. http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl, but when I try to add the reference in VS2010, I get the following error:

        The document at the url http://dev.virtualearth.net/webservices/v1/metadata/geocodeservice/geocodeservice.wsdl was not recognized as a known document type. The error message from each known type may help you fix the problem:
        Report from 'XML Schema' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'.
        Report from 'DISCO Document' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'.
        Report from 'WSDL Document' is 'There is an error in XML document (1, 1).'. '', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.
        Metadata contains a reference that cannot be resolved: 'http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl'.
        Content Type application/soap+xml; charset=utf-8 was not supported by service http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl. The client and service bindings may be mismatched.
        The remote server returned an error: (415) Unsupported Media Type.
        If the service is defined in the current solution, try building the solution and adding the service reference again.

    It's worth mentioning that I can access the service URL from the browser (with a "no style information" warning). I am aware that there are other reverse geocoding services out there, but certain circumstances force me to use only Microsoft-related components/services. Please help :)


  • Running multiple instances of tomcat in eclipse WTP

    - by lisak
    Hey,

    Scenario: 10 CATALINA_BASEs, each with its own configuration (always the same port number, 8080, but 10 different IPs/hostnames on one host via virtual IPs). I created a server in WTP and picked the "Use the custom location" option in the server configuration in Eclipse. New configuration files are created in workspace/Server/server-name-config/. I set up the server path and deploy path for my catalina base (not the internal .metadata one).

    After I started it, the new configuration files overwrote the original catalina-base/conf files I had there. I was glad; it should be like this. But after I made changes in the Eclipse config files in workspace/Server/server-name-config/ and restarted the server, the changes didn't appear in the original files in CATALINA_BASE/conf. What the hell is that? So I set CATALINA_BASE/conf/server.xml to a faulty configuration, restarted Tomcat from Eclipse, and it worked! It took the configuration from /Server/server-name-config/server.xml. Then I deleted CATALINA_BASE/conf/server.xml, and it said that there is no server.xml in the catalina base! How is that possible?

    I don't understand why the Eclipse WTP developers made such tight integration. There should just be symbolic links in /Server/server-name-config/ pointing to CATALINA_BASE/conf/ ... now there is a weird system which is totally unpredictable. The changes in /Server/server-name-config/ are not reflected in CATALINA_BASE/conf, from which the standard bootstrap.jar and the other Catalina classloaders and classes build the server, engine, and other objects with their particular settings. Moreover, the CATALINA_BASEs could then be used outside Eclipse.

    The second problem: I'm setting up various things in CATALINA_BASE/bin/startup.sh and setenv.sh, which is easy because I can use bash for it. Is modifying VM parameters in the "Open launch configuration" settings the only way to do this in Eclipse?

    Sorry for such a huge pile of questions, but I'm annoyed by the fact that it may be much better not to use Eclipse WTP for this, because it is very poorly designed, and it's a shame, because this would spare me a lot of time. And using the internal .metadata/ instances is an even more terrifying way than the one I described.


  • Space in Directory Parameter of svcutil.exe

    - by Drew Frisk
    I'm attempting to download metadata for a WCF service using svcutil, but I'm running into issues with the /directory: parameter. The directory I want to save to has a space in it, C:\Service References\Logging, so when I execute /t:metadata I receive the following error:

        Error: The directory 'C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\References\Logging' could not be found. Verify that the directory exists and that you have the appropriate permissions to read it.

    It looks to me like the space in "Service References" is causing the issue. From my (very limited) understanding of the command shell, spaces act as argument delimiters for an executable. So I tried escaping the space with a caret (Service^ References) and surrounding the path in double quotes ("C:\Service References\Logging"), but neither of those seems to work, as the /directory: parameter doesn't recognize them as valid characters in the value. I haven't been able to find any guidance on this for svcutil, so I'm at a loss right now. I could download the files to a temp folder and then move them, but I would prefer not to take that approach. I would appreciate any direction on resolving this. Thanks in advance.
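
    For reference, a hedged sketch rather than a confirmed fix: double quotes are the right tool here, but .NET command-line parsing treats a backslash immediately before the closing quote as an escape, so a path ending in "\" swallows the quote and the argument splits at the space. Quoting the path without a trailing backslash should survive intact (the service URL below is a placeholder):

        svcutil /t:metadata /directory:"C:\Service References\Logging" http://localhost/MyService.svc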


  • How to get interpolated message in NHibernate.Validator

    - by SztupY
    Hi! I'm trying to integrate NHibernate.Validator with ASP.NET MVC client-side validation, and the only problem I've found is that I simply can't convert the non-interpolated message into a human-readable one. I thought this would be an easy task, but it has turned out to be the hardest part of the client-side validation. The main problem is that because it's not server-side, I actually only need the validation attributes that are being used, and I don't have an instance or anything else at hand. Here are some excerpts from what I've been trying:

        // Get the default message interpolator from the engine
        IMessageInterpolator interp = _engine.Interpolator;
        if (interp == null)
        {
            // It is null?? Oh, try to create a new one
            interp = new NHibernate.Validator.Interpolator.DefaultMessageInterpolator();
        }

        // We need an instance of the object that needs to be validated, so we have to create one
        object instance = Activator.CreateInstance(Metadata.ContainerType);

        // We enumerate all attributes of the property. For example, we have found a PatternAttribute
        var a = attr as PatternAttribute;

        // It seems that the default message interpolator doesn't work unless initialized
        if (interp is NHibernate.Validator.Interpolator.DefaultMessageInterpolator)
        {
            (interp as NHibernate.Validator.Interpolator.DefaultMessageInterpolator).Initialize(a);
        }

        // But even after it is initialized, the following will throw a NullReferenceException,
        // although all of the parameters are specified and they are not null (except for the
        // properties of the instance, which are all null, but this can't be changed)
        var message = interp.Interpolate(new InterpolationInfo(Metadata.ContainerType, instance, PropertyName, a, interp, a.Message));

    I know that the above is fairly complex code for a seemingly simple question, but I'm still stuck without a solution. Is there any way to get the interpolated string out of NHValidator?


  • Maven webapp with Eclipse and WTP plugin deploy files in stranges ways in Tomcat

    - by hokkos
    Hi, I use Eclipse J2EE 3.5 with Maven and Tomcat. To deploy my Maven webapp with WTP, I added a Dynamic Web Module facet and changed the project's "org.eclipse.wst.common.component" file, because the webapp is not in a WebContent directory. Here is the content of the file:

        <?xml version="1.0" encoding="UTF-8"?>
        <project-modules id="moduleCoreId" project-version="1.5.0">
            <wb-module deploy-name="tkey-ca-web">
                <wb-resource deploy-path="/" source-path="/src/main/webapp"/>
                <wb-resource deploy-path="/WEB-INF/classes" source-path="/src/main/java"/>
                <wb-resource deploy-path="/WEB-INF/classes" source-path="/src/main/resources"/>
                <property name="context-root" value="toto"/>
                <property name="java-output-path" value="/toto/target/classes"/>
            </wb-module>
        </project-modules>

    But it never deploys the content correctly. In "workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp1\wtpwebapps\toto\" the directory structure is correct, with WEB-INF and META-INF, but empty; the jsp, html, and css files end up in "workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp1\wtpwebapps\toto\WEB-INF\classes\" with another WEB-INF and META-INF structure, but with the files. I don't understand this at all. Thanks.


  • AMQP gem specifying a dead letter exchange

    - by JP.
    I've specified a queue on the RabbitMQ server called MyQueue. It is durable and has x-dead-letter-exchange set to MyQueue.DLX. (I also have an exchange called MyExchange bound to that queue, and another exchange called MyQueue.DLX, but I don't believe this is important to the question.) If I use ruby's amqp gem to subscribe to those messages, I would do it like this:

        # Doing this before, and in a new thread, has to do with how my code is
        # structured; shown here in case it has a bearing on the question
        Thread.new do
          AMQP.start('amqp://guest:[email protected]:5672')
        end

        EventMachine.next_tick do
          channel = AMQP::Channel.new(AMQP.connection)
          queue = channel.queue("MyQueue", :durable => true,
                                :'x-dead-letter-exchange' => "MyQueue.DLX")
          queue.subscribe(:ack => true) do |metadata, payload|
            p metadata
            p payload
          end
        end

    If I execute this code with the queues and exchanges already created and bound (as they need to be in my setup), then RabbitMQ throws the following error in its logs:

        =ERROR REPORT==== 19-Aug-2013::14:25:53 ===
        connection <0.19654.2>, channel 2 - soft error:
        {amqp_error,precondition_failed,
            "inequivalent arg 'x-dead-letter-exchange' for queue 'MyQueue' in vhost '/': received none but current is the value 'MyQueue.DLX' of type 'longstr'",
            'queue.declare'}

    This seems to be saying that I haven't specified the same dead letter exchange as the pre-existing queue, but I believe I have, with the queue = ... line. Any ideas?
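
    A likely fix, offered as a sketch rather than from the original post: the amqp gem does not treat :'x-dead-letter-exchange' as a top-level queue option, which is why the broker reports "received none". Broker-specific settings go in the :arguments hash, which is what queue.declare sends and what RabbitMQ compares against the existing queue:

        queue = channel.queue("MyQueue",
          :durable   => true,
          :arguments => { "x-dead-letter-exchange" => "MyQueue.DLX" })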


  • .net framework execution aborted while executing CLR sproc?

    - by Sean Ochoa
    I constructed a sproc that does the equivalent of FOR XML AUTO in SQL 2008. Now that I'm testing it, it gives me a really unhelpful error msg. Any idea what this error means?

        Msg 10329, Level 16, State 49, Procedure ForXML, Line 0
        .Net Framework execution was aborted. System.Threading.ThreadAbortException: Thread was being aborted.
        System.Threading.ThreadAbortException:
            at System.Runtime.InteropServices.Marshal.PtrToStringUni(IntPtr ptr, Int32 len)
            at System.Data.SqlServer.Internal.CXVariantBase.WSTRToString()
            at System.Data.SqlServer.Internal.SqlWSTRLimitedBuffer.GetString(SmiEventSink sink)
            at System.Data.SqlServer.Internal.RowData.GetString(SmiEventSink sink, Int32 i)
            at Microsoft.SqlServer.Server.ValueUtilsSmi.GetValue(SmiEventSink_Default sink, ITypedGettersV3 getters, Int32 ordinal, SmiMetaData metaData, SmiContext context)
            at Microsoft.SqlServer.Server.ValueUtilsSmi.GetValue200(SmiEventSink_Default sink, SmiTypedGetterSetter getters, Int32 ordinal, SmiMetaData metaData, SmiContext context)
            at System.Data.SqlClient.SqlDataReaderSmi.GetValue(Int32 ordinal)
            at System.Data.SqlClient.SqlDataReaderSmi.GetValues(Object[] values)
            at System.Data.ProviderBase.DataReaderContainer.CommonLanguageSubsetDataReader.GetValues(Object[] values)
            at System.Data.ProviderBase.SchemaMapping.LoadDataRow()
            at System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping)
            at System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
            at System.Data.Common.DataAdapter.Fill(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
            at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
            at System.Data.Common.DbDataAdapter.Fill(DataTable[] dataTables, Int32 startRecord, Int32 maxRecords, IDbCommand command, CommandBehavior behavior)
            at System.Data.Common.DbDataAdapter.Fill(DataTable dataTable)
            at ForXML.GetXML...


  • Convert ADO.Net EF Connection String To Be SQL Azure Cloud Connection String Compatible!?

    - by Goober
    The Scenario

    I have written a Silverlight 3 application that uses a SQL Server database. I'm moving the application onto the cloud (the Azure platform). In order to do this, I have had to set up my database on SQL Azure. I am using the ADO.NET Entity Framework to model my database. I have got the application running on the cloud, but I cannot get it to connect to the database. Below is the original localhost connection string, followed by the SQL Azure connection string that isn't working. The application itself runs fine, but it fails when trying to retrieve data.

    The Original Localhost Connection String

        <add name="InmZenEntities"
             connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;
                 provider=System.Data.SqlClient;
                 provider connection string=&quot;Data Source=localhost;
                 Initial Catalog=InmarsatZenith;
                 Integrated Security=True;
                 MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    The Converted SQL Azure Connection String

        <add name="InmZenEntities"
             connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;
                 provider=System.Data.SqlClient;
                 provider connection string=&quot;Server=tcp:MYSERVER.ctp.database.windows.net;
                 Database=InmarsatZenith;
                 UserID=MYUSERID;Password=MYPASSWORD;
                 Trusted_Connection=False;
                 MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    The Question

    Does anyone know if this connection string for SQL Azure is correct? Help greatly appreciated.
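
    Two details worth checking, offered as hedged suggestions rather than a confirmed fix: the SqlClient keyword is "User ID" with a space (UserID is not recognized), and SQL Azure logins are usually given in user@server form. A sketch of the inner provider connection string under those assumptions:

        provider connection string=&quot;Server=tcp:MYSERVER.ctp.database.windows.net;
            Database=InmarsatZenith;
            User ID=MYUSERID@MYSERVER;Password=MYPASSWORD;
            Encrypt=True;Trusted_Connection=False;
            MultipleActiveResultSets=True&quot;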


  • Importing fixtures with foreign keys and SQLAlchemy?

    - by Chris Reid
    I've been experimenting with using fixture to load test data sets into my Pylons / PostgreSQL app. This works great, except that it fails to properly create foreign keys if they reference an auto-increment id field. My fixture looks like this:

        class AuthorsData(DataSet):
            class frank_herbert:
                first_name = "Frank"
                last_name = "Herbert"

        class BooksData(DataSet):
            class dune:
                title = "Dune"
                author_id = AuthorsData.frank_herbert.ref('id')

    And the model:

        t_authors = sa.Table("authors", meta.metadata,
            sa.Column("id", sa.types.Integer, primary_key=True),
            sa.Column("first_name", sa.types.String(100)),
            sa.Column("last_name", sa.types.String(100)),
        )

        t_books = sa.Table("books", meta.metadata,
            sa.Column("id", sa.types.Integer, primary_key=True),
            sa.Column("title", sa.types.String(100)),
            sa.Column("author_id", sa.types.Integer, sa.ForeignKey('authors.id'))
        )

    When running "paster setup-app development.ini", SQLAlchemy reports the FK value as "None", so it's obviously not finding it:

        15:59:48,683 INFO [sqlalchemy.engine.base.Engine.0x...9eb0] INSERT INTO books (title, author_id) VALUES (%(title)s, %(author_id)s) RETURNING books.id
        15:59:48,683 INFO [sqlalchemy.engine.base.Engine.0x...9eb0] {'author_id': None, 'title': 'Dune'}

    The fixture docs actually warn that this might be a problem: "However, in some cases you may need to reference an attribute that does not have a value until it is loaded, like a serial ID column. (Note that this is not supported by the SQLAlchemy data layer when using sessions.)" http://farmdev.com/projects/fixture/using-dataset.html#referencing-foreign-dataset-classes

    Does this mean that this is just not supported with SQLAlchemy? Or is it possible to load the data without using SA "sessions"? How are other people handling this issue?
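
    If explicit keys are acceptable in test data, one workaround (a sketch, not taken from the fixture docs) is to sidestep the unsupported serial-ID reference by assigning the primary key yourself, so ref('id') has a concrete value before load time:

        class AuthorsData(DataSet):
            class frank_herbert:
                id = 1  # explicit PK instead of relying on the sequence
                first_name = "Frank"
                last_name = "Herbert"

        class BooksData(DataSet):
            class dune:
                title = "Dune"
                author_id = AuthorsData.frank_herbert.ref('id')  # now resolvable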


  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

    metadata.csv: contains an ID, followed by vendor name, a filename, etc.
    hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))
        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for,
                # print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line.

    P.S. The csv files are the popular HashKeeper format, and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)
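
    Why the fix works: `in stored_ids` on a list is a linear scan repeated for every one of the millions of lines in hashes.csv, so the original script most likely wasn't halting so much as slowing to a crawl. Any hash-based container gives constant-time lookups; a set, for instance, is equivalent to the dict above with less ceremony:

        stored_ids = set(row[0] for row in entries if row[2] == options.vendor)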


  • Adding iPod Support to (previously) iPhone Only App

    - by rjstelling
    When I started on my current project, there was already an app in the App Store. This app was iPhone-only, and my first task was to test and build a version that also ran on an iPod Touch. About 3 weeks ago, Apple removed the option on iTunes Connect to set the device requirements, and sent an email out to all developers:

        "The App Store requires that you provide metadata about your application before submitting it. While most of this metadata is specified using the iPhone Developer Program Portal, the process for selecting device-related dependencies in iTunes Connect is no longer available. Instead, if your app relies on features that are specific to a device, such as the compass on iPhone 3GS, add the UIRequiredDeviceCapabilities key to your app's Info.plist file to indicate the specific hardware feature required."

    When I compiled the iPod-compatible version, I set the device requirements (UIRequiredDeviceCapabilities) in Info.plist to:

    location-services (gps or skyhook)
    wi-fi (any device)

    However, as the app was originally uploaded with the "iPhone only" option set in iTunes Connect, this appears to be the default. The kicker is, because Apple has removed this feature, there is no way to change it! Has anyone come up against this problem? And how did you solve it? Is it possible I have incorrect values in UIRequiredDeviceCapabilities?

    UPDATE: The app will run fine on an iPod Touch if installed as a development version via Xcode. The problem is that on the App Store it is listed as iPhone-only, and when iPod Touch users search in the App Store, no results are returned.
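
    For comparison, a sketch of the Info.plist fragment rather than the plist from the app in question: the capability strings in Apple's documentation appear as "location-services" and "wifi" (no hyphen), so a "wi-fi" value may well be the incorrect entry suspected above:

        <key>UIRequiredDeviceCapabilities</key>
        <array>
            <string>location-services</string>
            <string>wifi</string>
        </array>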


  • Creating self-referential tables with polymorphism in SQLALchemy

    - by Jace
    I'm trying to create a db structure in which I have many types of content entities, of which one, a Comment, can be attached to any other. Consider the following:

        from datetime import datetime
        from sqlalchemy import create_engine
        from sqlalchemy import Column, ForeignKey
        from sqlalchemy import Unicode, Integer, DateTime
        from sqlalchemy.orm import relation, backref
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class Entity(Base):
            __tablename__ = 'entities'
            id = Column(Integer, primary_key=True)
            created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
            edited_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False)
            type = Column(Unicode(20), nullable=False)
            __mapper_args__ = {'polymorphic_on': type}

        # <...insert some models based on Entity...>

        class Comment(Entity):
            __tablename__ = 'comments'
            __mapper_args__ = {'polymorphic_identity': u'comment'}
            id = Column(None, ForeignKey('entities.id'), primary_key=True)
            _idref = relation(Entity, foreign_keys=id, primaryjoin=id == Entity.id)
            attached_to_id = Column(Integer, ForeignKey('entities.id'), nullable=False)
            #attached_to = relation(Entity, remote_side=[Entity.id])
            attached_to = relation(Entity, foreign_keys=attached_to_id,
                                   primaryjoin=attached_to_id == Entity.id,
                                   backref=backref('comments', cascade="all, delete-orphan"))
            text = Column(Unicode(255), nullable=False)

        engine = create_engine('sqlite://', echo=True)
        Base.metadata.bind = engine
        Base.metadata.create_all(engine)

    This seems about right, except SQLAlchemy doesn't like having two foreign keys pointing to the same parent. It says:

        ArgumentError: Can't determine join between 'entities' and 'comments'; tables have more than one foreign key constraint relationship between them. Please specify the 'onclause' of this join explicitly.

    How do I specify onclause?
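
    One detail worth a try, offered as a hedged sketch rather than from the original post: with two foreign keys from comments to entities, the joined-table inheritance itself is ambiguous too, and the mapper accepts an explicit inherit_condition for exactly this case:

        class Comment(Entity):
            __tablename__ = 'comments'
            id = Column(None, ForeignKey('entities.id'), primary_key=True)
            attached_to_id = Column(Integer, ForeignKey('entities.id'), nullable=False)
            # tell the mapper which FK joins the subclass table to entities
            __mapper_args__ = {
                'polymorphic_identity': u'comment',
                'inherit_condition': id == Entity.id,
            }
            attached_to = relation(Entity, foreign_keys=attached_to_id,
                                   primaryjoin=attached_to_id == Entity.id,
                                   backref=backref('comments', cascade="all, delete-orphan"))
            text = Column(Unicode(255), nullable=False)

    With the inheritance join pinned down this way, the helper _idref relation should no longer be necessary.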


  • Android getting XML values

    - by Nils
    Hello, I have the following XML code, which I got from a UPnP device, and I would like to get the res value (the RTSP URL), in this case rtsp://10.42.0.103:554/live.sdp. How can I do this? I heard that Android has some built-in support for reading XML. Is that true?

        <DIDL-Lite xmlns="urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/"
                   xmlns:upnp="urn:schemas-upnp-org:metadata-1-0/upnp/">
            <item id="11" parentID="1" restricted="1">
                <dc:title>Network Camera Stream 1</dc:title>
                <upnp:class>object.item.videoItem</upnp:class>
                <res protocolInfo="rtsp-rtp-udp:*:video/mpeg4-generic:*" resolution="640x480">rtsp://10.42.0.103:554/live.sdp</res>
            </item>
            <item id="12" parentID="1" restricted="1">
                <dc:title>Network Camera Stream 2</dc:title>
                <upnp:class>object.item.videoItem</upnp:class>
                <res protocolInfo="rtsp-rtp-udp:*:video/mpeg4-generic:*" resolution="176x144">rtsp://10.42.0.103:554/live2.sdp</res>
            </item>
        </DIDL-Lite>
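
    Android does ship an XML pull parser (android.util.Xml). A minimal sketch that collects every <res> value from a DIDL-Lite string; method and variable names are illustrative:

        import java.io.StringReader;
        import java.util.ArrayList;
        import java.util.List;
        import org.xmlpull.v1.XmlPullParser;
        import android.util.Xml;

        List<String> extractResUrls(String didlXml) throws Exception {
            XmlPullParser parser = Xml.newPullParser();
            parser.setInput(new StringReader(didlXml));
            List<String> urls = new ArrayList<String>();
            for (int event = parser.getEventType();
                     event != XmlPullParser.END_DOCUMENT;
                     event = parser.next()) {
                if (event == XmlPullParser.START_TAG && "res".equals(parser.getName())) {
                    urls.add(parser.nextText()); // e.g. rtsp://10.42.0.103:554/live.sdp
                }
            }
            return urls;
        }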


  • SQLAlchemy: select over multiple tables

    - by ahojnnes
    Hi, I wanted to optimize my database query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                not_(link_table.c.id.in_(
                    select(
                        columns=[request_table.c.recipient],
                        whereclause=request_table.c.donator == donator.id
                    ).as_scalar()
                )),
                link_table.c.id != donator.id,
            ),
            limit=20,
        ).execute().fetchall()

    and tried to merge those two selects into one query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                link_table.c.active == True,
                link_table.c.id != donator.id,
                request_table.c.donator == donator.id,
                link_table.c.id != request_table.c.recipient,
            ),
            limit=20,
            order_by=[link_table.c.rating.desc()]
        ).execute().fetchall()

    The database schema looks like:

        link_table = Table('links', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('url', Unicode(250), index=True, unique=True),
            Column('registration_date', DateTime),
            Column('donations_in', Integer),
            Column('active', Boolean),
        )

        request_table = Table('requests', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('recipient', Integer, ForeignKey('links.id')),
            Column('donator', Integer, ForeignKey('links.id')),
            Column('date', DateTime),
        )

    There are several links (donators) in request_table pointing to one link in link_table. I want to get links from link_table which have not yet been "requested". But this does not work. Is what I'm trying to do actually possible? If so, how would you do it? Thank you very much in advance!
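
    A hedged note, not from the original post: a row-level filter like link.id != request.recipient cannot express "no matching request exists"; that needs an anti-join. One way to fold everything into a single statement is a LEFT OUTER JOIN with an IS NULL test, sketched here with the same tables:

        j = link_table.outerjoin(
            request_table,
            and_(request_table.c.recipient == link_table.c.id,
                 request_table.c.donator == donator.id))

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                link_table.c.active == True,
                link_table.c.id != donator.id,
                request_table.c.id == None),  # no request row matched: the anti-join
            from_obj=[j],
            limit=20,
            order_by=[link_table.c.rating.desc()],
        ).execute().fetchall()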

