Search Results

Search found 16001 results on 641 pages for 'convention over configuration'.

  • hibernate3-maven-plugin: entities in different maven projects, hbm2ddl fails

    - by Mike
    I'm trying to put an entity in a different maven project. In the current project I have:

        @Entity
        public class User {
            ...
            private FacebookUser facebookUser;
            ...
            public FacebookUser getFacebookUser() { return facebookUser; }
            ...
            public void setFacebookUser(FacebookUser facebookUser) { this.facebookUser = facebookUser; }
        }

    FacebookUser (in a different maven project, which is a dependency of the current project) is defined as:

        @Entity
        public class FacebookUser {
            ...
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            public Long getId() { return id; }
        }

    Here is my hibernate3-maven-plugin configuration:

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>hibernate3-maven-plugin</artifactId>
            <version>2.2</version>
            <executions>
                <execution>
                    <phase>process-classes</phase>
                    <goals>
                        <goal>hbm2ddl</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <components>
                    <component>
                        <name>hbm2ddl</name>
                        <implementation>jpaconfiguration</implementation>
                    </component>
                </components>
                <componentProperties>
                    <ejb3>false</ejb3>
                    <persistenceunit>Default</persistenceunit>
                    <outputfilename>schema.ddl</outputfilename>
                    <drop>false</drop>
                    <create>true</create>
                    <export>false</export>
                    <format>true</format>
                </componentProperties>
            </configuration>
        </plugin>

    Here is the error I'm getting:

        org.hibernate.MappingException: Could not determine type for: com.xxx.facebook.model.FacebookUser,
        at table: user, for columns: [org.hibernate.mapping.Column(facebook_user)]

    I know that FacebookUser is on the classpath, because if I make the facebookUser property transient the project compiles fine:

        @Transient
        public FacebookUser getFacebookUser() { return facebookUser; }
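
    The usual cause of that MappingException (an assumption here, since the question doesn't show it) is that the User-to-FacebookUser association carries no relationship annotation, so Hibernate tries to map facebookUser as a basic column and cannot determine its type. A hedged sketch of what the owning entity might look like; the fetch setting and join column name are placeholders:

        // Hypothetical minimal sketch -- not the poster's actual class.
        import javax.persistence.*;

        @Entity
        public class User {
            private Long id;
            private FacebookUser facebookUser;

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            public Long getId() { return id; }
            public void setId(Long id) { this.id = id; }

            // The relationship annotation is what tells Hibernate (and hbm2ddl) how to map this column.
            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "facebook_user")
            public FacebookUser getFacebookUser() { return facebookUser; }
            public void setFacebookUser(FacebookUser facebookUser) { this.facebookUser = facebookUser; }
        }

    Whether @ManyToOne or @OneToOne is appropriate depends on the actual model; the point is only that the association needs an explicit mapping annotation.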

    Read the article

  • Doesn't get into Debug Mode

    - by Grace Jones
    When I press F5 in VS2005 to debug the application, it launches the web app window but then drops out of debug mode. When I traced the error in the Event Viewer, this was the message:

        Failed in Token.vb(GetToken). The token was not in memory and the identity of the authenticated
        IIS caller was not permitted. The session may have unexpectedly terminated. The specific error
        message included: Session state can only be used when enableSessionState is set to true, either
        in a configuration file or in the Page directive. Please also make sure that
        System.Web.SessionStateModule or a custom session state module is included in the
        <configuration>\<system.web>\<httpModules> section in the application configuration.

    I don't have any httpModules section in my config file, and the sessionState mode="InProc"...
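
    For reference, a minimal web.config sketch (an illustration, not the poster's actual file) that enables session state and registers the standard session-state module for a .NET 2.0 site:

        <configuration>
          <system.web>
            <!-- InProc session state, as the poster already has -->
            <sessionState mode="InProc" timeout="20" />
            <!-- enableSessionState is true by default but can be stated explicitly -->
            <pages enableSessionState="true" />
            <httpModules>
              <!-- the module the error message is asking for -->
              <add name="Session" type="System.Web.SessionState.SessionStateModule" />
            </httpModules>
          </system.web>
        </configuration>

    If the module had been removed somewhere up the configuration hierarchy (for example by a <remove name="Session" /> in a parent config), re-adding it like this is the usual fix; whether that is the cause here isn't clear from the question.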

    Read the article

  • Web.config encryption/decryption

    - by Akshay
    In my application's web.config file I have a connection string stored. I encrypted it using:

        '--- open the web.config file
        Dim config As Configuration = _
            ConfigurationManager.OpenWebConfiguration(Request.ApplicationPath)
        '--- indicate the section to protect
        Dim section As ConfigurationSection = _
            config.Sections("connectionStrings")
        '--- specify the protection provider
        section.SectionInformation.ProtectSection(protectionProvider)
        '--- apply the protection and update
        config.Save()

    Now I can decrypt it using this code:

        Dim config As Configuration = _
            ConfigurationManager.OpenWebConfiguration(Request.ApplicationPath)
        Dim section As ConfigurationSection = _
            config.Sections("connectionStrings")
        section.SectionInformation.UnprotectSection()
        config.Save()

    I want to know where the key is stored. Also, if my web.config file is somehow stolen, will it be possible for someone to decrypt it using the code above?

    Read the article

  • Run EJB3Unit against Oracle Database

    - by justastefan
    I want to run an EJB3Unit test against my Oracle 10g database, so I use this configuration (ejb3unit.properties):

        ### The ejb3unit configuration file ###
        ejb3unit.inMemoryTest=false
        ejb3unit.connection.url=jdbc:oracle:thin:....:1432:SID
        ejb3unit.connection.driver_class=oracle.jdbc.OracleDriver
        ejb3unit.connection.username=user
        ejb3unit.connection.password=name
        ejb3unit.dialect=org.hibernate.dialect.Oracle10gDialect
        ejb3unit.show_sql=true
        ## values are create-drop, create, update ##
        ejb3unit.schema.update=create

    It results in the following error:

        Caused by: HibernateException: cannot instantiate dialect class ...
        org.hibernate.dialect.Oracle10gDialect cannot be cast to org.ejb3unit.hibernate.dialect.Dialect

    How can ejb3unit testing be done against an Oracle DB?
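
    The ClassCastException suggests that EJB3Unit links against its own repackaged copy of Hibernate (the org.ejb3unit.hibernate namespace in the error), so the dialect probably has to come from that namespace too. A hedged guess at the relevant property, assuming the repackaged dialect class exists in the ejb3unit jar:

        # ejb3unit.properties -- assumption: ejb3unit ships a repackaged Oracle dialect under its own package
        ejb3unit.dialect=org.ejb3unit.hibernate.dialect.Oracle10gDialect

    If that class is not present, an Oracle9 dialect under the same org.ejb3unit.hibernate.dialect package would be the next thing to check inside the jar.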

    Read the article

  • Is the output of Eclipse's incremental java compiler used in production? Or is it simply to support Eclipse's features?

    - by Doug T.
    I'm new to Java and Eclipse. One of my most recent discoveries was that Eclipse ships with its own Java compiler (ecj) for doing incremental builds. Eclipse seems by default to output incrementally built class files to the projRoot/bin folder. I've also noticed that many projects come with Ant files that build the project using the Java compiler installed on the system for production builds. Coming from a Windows/Visual Studio world, where Visual Studio invokes the compiler for both production and debugging, I'm used to the IDE having a more intimate relationship with the command-line compiler; I'm used to the project being the make file. So my mental model is a little off. Is what's produced by Eclipse ever used in production? Or is it typically only used to support Eclipse's features (i.e. its IntelliSense-style completion, incremental building, etc.)? Is it typical that for the final "release" build of a project, Ant, Maven, or another tool does the full build from the command line? Mostly I'm looking for the general convention in the Eclipse/Java community. I realize that there may be some outliers out there who DO use ecj in production, but is this generally frowned upon, or is it normal, accepted practice?

    Read the article

  • Best practice with pyGTK and Builder XML files

    - by Phoenix87
    I usually design GUIs with Glade, producing a series of GtkBuilder XML files (one file for each application window). My idea is to define a class, e.g. MainWindow, that inherits from gtk.Window and implements all the signal handlers for the application's main window. The problem is that when I retrieve the main window from the containing XML file, it is returned as a gtk.Window instance. The solution I have adopted so far is to define a Window class like this:

        class Window():
            def __init__(self, win_name):
                builder = gtk.Builder()
                self.builder = builder
                builder.add_from_file("%s.glade" % win_name)
                self.window = builder.get_object(win_name)
                builder.connect_signals(self)

            def run(self):
                return self.window.run()

            def show_all(self):
                return self.window.show_all()

            def destroy(self):
                return self.window.destroy()

            def child(self, name):
                return self.builder.get_object(name)

    In the actual application code I then define a new class, say Main, that inherits from Window and looks like:

        class Main(Window):
            def __init__(self):
                Window.__init__(self, "main")

            ### Signal handlers #####################################################
            def on_mnu_file_quit_activated(self, widget, data = None):
                ...

    The string "main" refers to the main window, which is called "main" and resides in the Builder file "main.glade" (a convention I decided to adopt). So the question is: how can I inherit from gtk.Window directly, by defining, say, class Foo(gtk.Window), and recast the return value of builder.get_object(win_name) to Foo?

    Read the article

  • How to make a btrfs snapshot?

    - by MountainX
    My /home partition consists of an entire physical disk. It is formatted as btrfs, and I want to snapshot it. I'm confused about subvolume naming in particular. I am aware that there are similar questions, but each one seems to be asking something different from what I'm asking (and they are older, which probably means outdated, given the rapid development of btrfs). For example, the answer to this question is apparently not the answer to mine, because my /home partition is a separate volume and the man page for btrfs now shows a different command for creating snapshots; another similar problem has no solid solution; someone else is as confused as me on the naming issues. My question, starting simple: is this the correct command to take a simple snapshot of my home partition?

        btrfs subvolume snapshot /home/@home /home/@home_snapshot_20120421

    I got really brave and tested it, and it does not work. The error is "error accessing /home/@home". As shown below, @home is listed, so I'm obviously confused about subvolume names. Do I need to use them when creating snapshots? Some examples take snapshots of home using /home as the source parameter, but based on examples for root volumes it seems to me that I need to use /home/@home. Would this command work, and if not, why?

        btrfs subvolume snapshot /home /home/@home_snapshot_20120421

    Is the @ just a naming convention? Is it meaningful at all? Here's some output that may be relevant:

        # btrfs subvolume list /home
        ID 256 top level 5 path @home

    I'm not sure what that means, exactly. When I try btrfs device scan it gives an error (e.g. "unable to scan the device /dev/sda1"), but my file system doesn't have any errors; everything is fine.
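
    For what it's worth, the subvolume listing above says that the subvolume named @home (ID 256) is what is mounted at /home, which would make /home itself, not /home/@home, the path to pass as the snapshot source. A hedged sketch (the destination name is arbitrary; the -r flag for a read-only snapshot assumes a reasonably recent btrfs-progs):

        # /home is the mounted @home subvolume, so it is the snapshot source
        btrfs subvolume snapshot /home /home/@home_snapshot_20120421

        # or, as a read-only snapshot
        btrfs subvolume snapshot -r /home /home/@home_snapshot_20120421

    The @ prefix itself is only a naming convention used by some distributions' installers; btrfs attaches no meaning to it.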

    Read the article

  • Python unicode issues (2.6)

    - by ephemeralis
    I'm currently working on an IRC bot for a multi-lingual channel, and I'm encountering some issues with unicode that are proving nearly impossible to solve. No matter what combination of unicode encodings I try, the list function the code below sits in either does nothing at all (c.notice is a class method which sends a NOTICE command to the IRC server) or, when it does do something, spits out something which obviously isn't encoded correctly. The command should be sending 天子, but instead it seems hellbent on sending å¤©å­ with a previous configuration of the same commands. The one I have specified below is of the 'send nothing' variety. I haven't worked with unicode before this, and thus I am quite stuck. I'm also positive that I'm doing this completely wrong as a consequence. (compileCMD just takes a list and returns a single string of all the elements within the list.)

        uk = self.compileCMD(self.faq.keys(), 0)
        ukeys = unicode(uk, "utf-8").encode("utf-8")
        c.notice(nick, u"Current list of faq entries: %s" % (uk))

    Read the article

  • Maximum string length quota error consuming a WCF web service from BizTalk

    - by TygerKrash
    I'm getting this error message in one of my orchestrations that consumes a WCF web service (the stack trace indicates the receive shape is where the issue is):

        The maximum string content length quota (8192) has been exceeded while reading XML data.
        This quota may be increased by changing the MaxStringContentLength property on the
        XmlDictionaryReaderQuotas object used when creating the XML reader.

    It is likely that the response is very large. Looking at some of the other questions with this error message, the solution is to change a WCF binding setting in the configuration file. However, I can't find these configuration settings when I'm using BizTalk; they don't seem to be generated anywhere. Should I be trying to add them to BTSNTSvc.exe.config? Any suggestions welcome.
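
    For reference, the setting the error message refers to looks like this in a plain WCF client config (a generic illustration, not BizTalk-specific; the binding name and quota value are placeholders):

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="largeResponses">
                <!-- raise the reader quota well above the 8192 default -->
                <readerQuotas maxStringContentLength="16384000" />
              </binding>
            </basicHttpBinding>
          </bindings>
        </system.serviceModel>

    With BizTalk WCF send ports the equivalent knob is usually exposed in the adapter's binding configuration (for example the WCF-Custom adapter's binding properties) rather than in a .config file, so editing BTSNTSvc.exe.config may not be necessary; treat this as a pointer to investigate, not a verified answer.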

    Read the article

  • Using Rails 3 and Haml 3, how do I configure Haml?

    - by dpogg1
    I'm using Rails 3.0.0.beta3 and Haml 3.0.0.rc.2, and I can't find where I need to place the configuration lines for Haml (nor what they are in the new version, for that matter). Using Rails 2.3.5 and Haml 2, I would put

        Haml::Template.options[:format] = :html5

    in environment.rb. Or, in Sinatra,

        set :haml, {:format => :html5}

    in my main file. But in Rails 3 everything's been changed around, and no matter where I put that configuration line, I get an undefined method or undefined object error.
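
    One arrangement commonly suggested for Rails 3 (an assumption here, not verified against these exact beta/RC versions) is to move the same option into an initializer so it runs after Haml has been loaded:

        # config/initializers/haml.rb -- hypothetical file name
        Haml::Template.options[:format] = :html5

    If the constant is still undefined at that point, wrapping the line in a config.after_initialize block inside config/application.rb would be the next thing to try.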

    Read the article

  • Fluent NHibernate MappingException : could not instantiate id generator

    - by Mark Simpson
    I'm pottering around with Fluent NHibernate to try and get a simple app up and running. I'm running through this Fluent NHibernate tutorial. Everything seems to be going fine and I've created the required classes etc., and it all builds, but when I run the test, I get an exception. Someone in the comments section of the tutorial has the same problem, but I can't find any good information on what's causing it. Any help appreciated; it's probably something trivial. Exception details:

        FluentNHTest.Tests.Mappings.CustomerMappingTests.ValidateMappings:
        FluentNHibernate.Cfg.FluentConfigurationException : An invalid or incomplete configuration was used
        while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail.
          ---- FluentNHibernate.Cfg.FluentConfigurationException : An invalid or incomplete configuration was used
          while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail.
          ---- NHibernate.MappingException : could not instantiate id generator
          ---- System.FormatException : Input string was not in a correct format.

    Read the article

  • Restful WebAPI VS Regular Controllers

    - by Rohan Büchner
    I'm doing some R&D on what seems like a very confusing topic. I've read quite a few of the other SO questions, but I feel my question might be unique enough to warrant asking. We've never developed an app using pure Web API. We're trying to write a SPA-style app where the back end is fully decoupled from the front-end code. Assuming our service does not know anything about who is accessing/consuming it, Web API seems like the logical route to serve data, as opposed to using standard MVC controllers and serving our data via an action result converted to JSON. This to me at least seems like an MC design... which seems odd, and not what MVC was meant for (look mom... no view). What would be considered normal convention in terms of performing action(y) calls? My sense is that my understanding of Web API is incorrect. The way I perceive Web API is that it's meant to be used in a CRUD sense, but what if I want to do something like "InitialiseMonthEndPayment"? Would I need to create a Web API controller called InitialiseMonthEndPaymentController and then perform a POST? That seems a bit weird, as opposed to an MVC controller where I can just add a new action on the MonthEnd controller called InitialisePayment. Or does this require a mindset shift in terms of design? Any further links on this topic would be really useful, as my fear is that we implement something weird that could turn into a coding/maintenance concern later on.

    Read the article

  • Using Application Settings and reading defaults from app.config

    - by Peter Goras
    Hi, I need to deploy a Windows Forms application using ClickOnce deployment (VS2008, .NET 3.5), and I need to provide a configuration file for this app that any user can modify. For this reason, I am using Application Settings instead of standard appSettings in app.config, so I can separate the user config from the app config (see http://msdn.microsoft.com/en-us/library/ms228995(VS.80).aspx). Creating a Settings.settings file using VS generated a class with hard-coded default values like this:

        [global::System.Configuration.DefaultSettingValueAttribute("blahblah")]
        public string MyProperty ...

    WTF? I want to read the default values from the app.config! So I created my own class deriving from ApplicationSettingsBase, but I cannot get this to read values from the app.config. Any ideas?
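
    For context, the designer-generated settings class does read app.config at run time; the DefaultSettingValueAttribute is only the fallback used when no value is found in the file. The values live in an applicationSettings (or userSettings) section rather than appSettings. A hedged sketch of what that section looks like, with the namespace and property names as placeholders:

        <configuration>
          <configSections>
            <sectionGroup name="applicationSettings"
                          type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <section name="MyApp.Properties.Settings"
                       type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
            </sectionGroup>
          </configSections>
          <applicationSettings>
            <MyApp.Properties.Settings>
              <setting name="MyProperty" serializeAs="String">
                <value>value read from app.config</value>
              </setting>
            </MyApp.Properties.Settings>
          </applicationSettings>
        </configuration>

    If values placed there are not being picked up, the usual suspects are a mismatch between the section name and the settings class's full namespace, or the setting being scoped as User rather than Application.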

    Read the article

  • How to deploy a WCF service on IIS 6.0

    - by Walter
    I have IIS 6.0 on Windows Server 2003. I installed .NET 3.5 and 4 beta 2. "Normal" ASP.NET pages are working (perfectly), but when I try to navigate to my service (/myServer/MyService.svc) I get a 404, page not found. To be exact, I get a 404.2, "Web service extension lockdown policy prevents this request." I ran ServiceModelReg.exe /ia to make sure the extension is known, and I checked the mapping under Administrative Tools > IIS > Home Directory tab > Configuration > executable box, which shows: extension .svc, path c:\windows\microsoft.net\framework\v4.0.210..., verbs: all verbs. So everything seems OK, but I still get the 404.2.
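
    A 404.2 lockdown error on IIS 6 usually means the relevant web service extension is still set to Prohibited, independent of the .svc script mapping. A hedged sketch of the usual remedies; the framework folder is a placeholder, since the exact v4 beta folder name on this server is not shown in full above:

        REM register ASP.NET with IIS and mark it Allowed in Web Service Extensions (-enable does the allowing)
        %windir%\Microsoft.NET\Framework\<v4 framework folder>\aspnet_regiis.exe -i -enable

        REM alternatively, flip the existing entry by hand:
        REM IIS Manager > Web Service Extensions > select the ASP.NET v4 entry > Allow

    This assumes the site is meant to run on the 4.0 runtime that the .svc mapping points at; the same idea applies to 2.0/3.5 with the v2.0.50727 folder.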

    Read the article

  • Windsor Container: How to specify a public property should not be filled by the container?

    - by George Mauer
    When instantiating a class, Windsor by default treats all public properties of the class as optional dependencies and tries to satisfy them. In my case, this creates a rather complicated circular dependency which causes my application to hang. How can I explicitly tell Castle Windsor that it should not try to satisfy a public property? I assume there must be an attribute to that effect, but I can't find it, so please let me know the appropriate namespace/assembly. If there is any way to do this without attributes (such as XML configuration or configuration via code), that would be preferable, since the specific library where this is happening has to date not needed a dependency on Castle.

    Read the article

  • default maven compiler setting

    - by Jeeyoung Kim
    Hello Maven gurus, right now I'm writing a small Java application on my own, with a few Maven pom.xml files. I want all my Maven modules to compile with JDK 1.6, and I can't find a good way to do it without manually setting it in every single POM. I'm sick of copy-and-pasting this into every pom.xml file I generate:

        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
            <source>1.6</source>
            <target>1.6</target>
        </configuration>

    Is there a simpler way to resolve this issue?
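
    The usual way to avoid the repetition (a sketch, with groupIds and versions as placeholders) is to declare the compiler settings once in a shared parent POM and let every module inherit them through its <parent> element:

        <!-- parent pom.xml (packaging "pom"), referenced by each module's <parent> element -->
        <project>
            <modelVersion>4.0.0</modelVersion>
            <groupId>com.example</groupId>
            <artifactId>my-parent</artifactId>
            <version>1.0-SNAPSHOT</version>
            <packaging>pom</packaging>

            <build>
                <pluginManagement>
                    <plugins>
                        <plugin>
                            <groupId>org.apache.maven.plugins</groupId>
                            <artifactId>maven-compiler-plugin</artifactId>
                            <configuration>
                                <source>1.6</source>
                                <target>1.6</target>
                            </configuration>
                        </plugin>
                    </plugins>
                </pluginManagement>
            </build>
        </project>

    Another option is to set the maven.compiler.source and maven.compiler.target properties in the parent's <properties> block, which the compiler plugin picks up automatically.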

    Read the article

  • Trouble using genericra to integrate activemq and glassfish when using failover protocol

    - by Kyle
    Hi, I'm attempting to use ActiveMQ in GlassFish via the generic JMS resource adapter (genericra) provided with GlassFish 2.1. I have found a few pages with helpful information, including http://activemq.apache.org/sjsas-with-genericjmsra.html. I have actually had success and been able to get MDBs to use ActiveMQ as their JMS provider, but I'm running into an issue as I try some more complicated configuration. I want to set up a master-slave configuration, which requires my clients to use a broker URL of failover:(tcp://broker1:61616,tcp://broker2:61616). In order to do this, I set the following property when calling asadmin create-resource-adapter-config (I have to escape '=' and ':'):

        ConnectionFactoryProperties=brokerURL\=failover\:(tcp\://127.0.0.1\:61616,tcp://127.0.0.1\:61617)

    However, I am now getting a StringIndexOutOfBoundsException when my application starts up. I suspect the comma between the two URLs is the culprit, since this works fine:

        brokerURL\=failover\:(tcp\://127.0.0.1\:61616)

    Just wondering if anyone has dealt with this issue before, and whether there is a better way to integrate with GlassFish than using the generic resource adapter.

    Read the article

  • Installing the linecache-0.46 gem (I am using rbenv)

    - by user2899281
    While running bundle install I get this error:

        Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.
            /home/launchpad/.rbenv/versions/1.9.3-p448/bin/ruby extconf.rb
        Can't handle 1.9.x yet
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers.
        Check the mkmf.log file for more details. You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/home/launchpad/.rbenv/versions/1.9.3-p448/bin/ruby

        Gem files will remain installed in /home/launchpad/.rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/linecache-0.46 for inspection.
        Results logged to /home/launchpad/.rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/linecache-0.46/ext/gem_make.out

        An error occurred while installing linecache (0.46), and Bundler cannot continue.
        Make sure that gem install linecache -v '0.46' succeeds before bundling.
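
    The extconf output ("Can't handle 1.9.x yet") points at the old linecache/ruby-debug gems, which only support Ruby 1.8. A commonly used workaround on Ruby 1.9 (an assumption here, since the Gemfile isn't shown) is to switch to the 1.9-compatible forks:

        # Gemfile -- hypothetical replacement for the 1.8-only debugging gems
        group :development, :test do
          gem 'linecache19'
          gem 'ruby-debug19', :require => 'ruby-debug'
        end

    Whatever gem currently pulls in linecache 0.46 (often ruby-debug or rcov-era tooling) would need to be removed or updated at the same time, otherwise Bundler will keep trying to build the old native extension.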

    Read the article

  • Is it possible to protect a single element in the appSettings section instead of the entire section?

    - by hambonious
    I would like to protect one key/value pair in my appSettings, but not the others, using something like what I've previously done with the ProtectSection method, as seen below:

        var configurationSection = config.GetSection("appSettings");
        configurationSection.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");

    Ideally I would like to do something like the following:

        var configurationElement = config.GetSection("appSettings").GetElement("Protected");
        configurationElement.ElementInformation.ProtectElement("DataProtectionConfigurationProvider");

    Here is the example appSettings I would be operating on:

        <configuration>
          <appSettings>
            <add key="Unprotected" value="ChangeMeFreely" />
            <add key="Protected" value="########" />
          </appSettings>
        </configuration>

    I've been searching but haven't found a way to do this. Is this possible?

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or giga-bytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It would feel like using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive-makers have to pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or to simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)

    Read the article

  • Choosing between YUI Charts or Google Visualization API

    - by r2b2
    Hello, I'm a bit stuck on which charting library to use in my project. I'm torn between these two (but also open to other suggestions).

    For YUI Charts:
      - Pro: very robust and configurable
      - Con: uses Flash 9, which might be inaccessible for users without an up-to-date Flash version
      - Con: does not support export to image (for Flash versions < 10)

    For the Google Visualization API:
      - Pro: small file size for the libraries
      - Pro: can be exported to static image charts (via a separate API call)
      - Con: limited configuration options

    So there, please help me decide. YUI Charts has the edge in configuration options, but the Google Visualization API has the edge in terms of accessibility, as it uses SVG to render the graphs instead of Flash. Users that are hand-cuffed by corporate IT prohibitions can't just upgrade their Flash version, and the page will not work. Thanks!

    Read the article

  • log4j/log4cxx : exclusive 1 to 1 relation between logger and appender

    - by Omry
    Using the XML configuration of log4cxx (which is identical in configuration to log4j), I want to have a certain logger output exclusively to a specific appender (have it be the only logger which outputs to that appender). I found that it's possible to bind a logger to a specific appender like this:

        <logger name="LoggerName">
            <level value="info"/>
            <appender-ref ref="AppenderName"/>
        </logger>

    But that logger still outputs to the root appender, because I have this standard piece in the conf file:

        <root>
            <priority value="DEBUG"/>
            <appender-ref ref="OtherAppender"/>
        </root>

    How can I exclude that logger from the root logger? In other words, how do I configure the log such that all loggers inherit the appenders of the root logger except one specific logger?
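
    Both log4j and log4cxx support turning off appender additivity on an individual logger, which covers the inheritance half of this: the logger stops writing to the root logger's appenders while every other logger is unaffected. A sketch, reusing the logger and appender names from the question:

        <logger name="LoggerName" additivity="false">
            <level value="info"/>
            <appender-ref ref="AppenderName"/>
        </logger>

    The "exclusive" direction (making sure no other logger writes to AppenderName) isn't something additivity enforces; that part is simply a matter of not referencing AppenderName anywhere else in the configuration.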

    Read the article

  • How to properly structure a project in winform?

    - by user850010
    A while ago I started to create a WinForms application. At the time it was small and I did not give any thought to how to structure the project. Since then I have added features as I needed them, and the project folder is getting bigger and bigger, so I think it is time to structure the project in some way, but I am not sure what the proper way is. I have a few questions:

    1. How should I restructure the project folder? At the moment I am thinking of something like this:
       - create a folder for forms
       - create a folder for utility classes
       - create a folder for classes that contain only data

    2. What is the naming convention when adding classes? Should I also rename classes so that their functionality can be identified just by looking at their name? For example, renaming all form classes so that their names end with "Form". Or is this not necessary if separate folders are created for them?

    3. What can I do so that not all the code for the main form ends up in Form1.cs? As the main form gains features, the code file (Form1.cs) is getting really big. I have, for example, a TabControl, and each tab has a bunch of controls, and all the code ended up in Form1.cs. How do I avoid this?

    Also, do you know any articles or books that deal with these problems?

    Read the article

  • ASP.NET 2.0 and 4.0 seem to treat the root url differently in Forms Authentication

    - by Kev
    I have the following web.config:

        <configuration>
            <system.web>
                <authentication mode="Forms">
                    <forms name="MembershipCookie"
                           loginUrl="Login.aspx"
                           protection="All"
                           timeout="525600"
                           slidingExpiration="true"
                           enableCrossAppRedirects="true"
                           path="/" />
                </authentication>
                <authorization>
                    <deny users="?" />
                </authorization>
            </system.web>
            <location path="Default.aspx">
                <system.web>
                    <authorization>
                        <allow users="*"/>
                    </authorization>
                </system.web>
            </location>
        </configuration>

    The application is an ASP.NET 2.0 application running on Windows 2008 R2 / IIS 7.5. If the site's application pool is configured to run ASP.NET 2.0 and I browse to http://example.com, then Default.aspx is rendered as you'd expect from the rules above. However, if the application pool is set to run ASP.NET 4.0, I am redirected to the login page. If I explicitly specify http://example.com/default.aspx then all is good and Default.aspx renders. I've tried rewriting / -> /default.aspx (using IIS URL Rewrite 2.0), but the result is still the same: I get kicked to the login page. I've also tried this with an ASP.NET 4.0 application, with the same result (which is where the problem initially arose); the reason I tried it with a 2.0 application was to see if there was a change in behaviour, and it seems that / is handled differently in 4.0. So to summarise, using the configuration above the following is observed:

        ASP.NET Version    Url                                Behaviour
        -------------------------------------------------------------------------
        2.0                http://example.com                 Renders Default.aspx
        2.0                http://example.com/Default.aspx    Renders Default.aspx
        4.0                http://example.com                 Redirects to Login.aspx
        4.0                http://example.com/Default.aspx    Renders Default.aspx

    Is this a bug/breaking change, or have I missed something glaringly obvious?

    Read the article
