Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.

Page 564 of 1167

  • Dependency Injection: How to maintain multiple configurations?

    - by Malax
    Hi StackOverflow, let's assume we've built a system with a DI framework and it is working quite well. The system currently uses JMS to "talk" to other systems not maintained by us. The majority of our customers like the JMS approach and use it according to our specification. The component that does all the messaging is injected with Spring into the rest of the application. Now we have the case that one customer cannot implement the JMS solution and wants to use another messaging technology. That's not a problem, because we can simply implement a messaging service using this technology and inject it into the rest of the application. But how are we supposed to handle the deployment and maintenance of the configuration? Since the application uses Spring, I could imagine checking in all the configurations I have for this application; the system administrator would then start the application and pass the name of the DI XML file to specify which configuration should be loaded. But... it just doesn't feel right. Are there any solutions available for such cases? What best practices do you use? I can even imagine more complex scenarios that involve more than one service substitution... Thanks a lot!
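
    One common way to keep all variants in version control and still pick one at start-up is to make only the messaging wiring swappable. The sketch below assumes the application splits its Spring configuration into a fixed core file plus one messaging file per technology; the file names and the system property are illustrative, not taken from the question.

        // Minimal sketch: choose the messaging wiring via a system property, e.g.
        //   java -Dmessaging.config=spring-messaging-other.xml ...
        import org.springframework.context.ApplicationContext;
        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public class Bootstrap {
            public static void main(String[] args) {
                // Falls back to the standard JMS wiring when nothing is specified.
                String messagingConfig = System.getProperty("messaging.config", "spring-messaging-jms.xml");

                // spring-core.xml holds everything that never changes between customers;
                // only the file behind messagingConfig differs per deployment, and it
                // defines the one messaging bean the rest of the application depends on.
                ApplicationContext ctx = new ClassPathXmlApplicationContext(
                        new String[] { "spring-core.xml", messagingConfig });
            }
        }

    The same idea can be driven from a property file read by a PropertyPlaceholderConfigurer or, depending on the Spring version, from a placeholder inside an import element in the main context, so the administrator only ever sets one property instead of passing XML file names around.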

    Read the article

  • Can't install do_mysql gem?

    - by maccy1
    I'm trying to install the do_mysql gem on my Snow Leopard MacBook Pro 13", but I keep getting this error: n216-160:~ myself$ sudo gem1.9 install do_mysql Password: Building native extensions. This could take a while... ERROR: Error installing do_mysql: ERROR: Failed to build gem native extension. /opt/local/bin/ruby1.9 extconf.rb checking for mysql_query() in -lmysqlclient... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/opt/local/bin/ruby1.9 --with-mysql-config --without-mysql-config --with-mysql-dir --without-mysql-dir --with-mysql-include --without-mysql-include=${mysql-dir}/include --with-mysql-lib --without-mysql-lib=${mysql-dir}/lib --with-mysqlclientlib --without-mysqlclientlib Gem files will remain installed in /opt/local/lib/ruby1.9/gems/1.9.1/gems/do_mysql-0.10.0 for inspection. Results logged to /opt/local/lib/ruby1.9/gems/1.9.1/gems/do_mysql-0.10.0/ext/do_mysql_ext/gem_make.out n216-160:~ myself$ I have no idea why. I also reinstalled my version of MySQL with the MySQL 5.4.3 beta, 64-bit, as others suggested, but no dice. Does anyone have any idea what is wrong?

    Read the article

  • Hibernate bi-directional many-to-many mapping advice!

    - by Rob
    Hi all, I wondered if anyone might be able to help me out. I am trying to work out what to google for (or any other ideas!). Basically, I have a bidirectional many-to-many mapping between a user entity and a club entity (via a join table called userClubs). I now want to include a column in userClubs that represents the role, so that when I call user.getClubs() I can also work out what level of access they have. Is there a clever way to do this using Hibernate, or do I need to rethink the database structure? Thank you for any help (or just for reading this far!). The user.hbm.xml looks a bit like <set name="clubs" table="userClubs" cascade="save-update"> <key column="user_ID"/> <many-to-many column="activity_ID" class="com.ActivityGB.client.domain.Activity"/> </set> and the activity.hbm.xml part is <set name="members" inverse="true" table="userClubs" cascade="save-update"> <key column="activity_ID"/> <many-to-many column="user_ID" class="com.ActivityGB.client.domain.User"/> </set> The current userClubs table contains the fields id | user_ID | activity_ID I would like it to contain id | user_ID | activity_ID | role and be able to access the role on both sides...
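
    A join table that carries extra data (the role) is usually promoted from a bare many-to-many to an entity of its own, so the role becomes an ordinary mapped property. A rough annotation-based sketch is below; the class and property names are illustrative, and the same structure can be expressed in hbm.xml with two <many-to-one> elements plus a <property> if annotations are not an option.

        // Rough sketch: map the userClubs join table as its own entity so the
        // extra "role" column is just another property. Names are illustrative;
        // User and Activity are the existing domain classes.
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;
        import javax.persistence.Table;

        @Entity
        @Table(name = "userClubs")
        public class ClubMembership {

            @Id
            @GeneratedValue
            private Long id;

            @ManyToOne
            @JoinColumn(name = "user_ID")
            private User user;

            @ManyToOne
            @JoinColumn(name = "activity_ID")
            private Activity activity;

            @Column(name = "role")
            private String role;

            // getters and setters omitted for brevity
        }

    User and Activity then each hold a one-to-many collection of ClubMembership (mappedBy the corresponding field), and user.getClubs() becomes an iteration over memberships that exposes both the club and the role.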

    Read the article

  • .NET: How would I build a DAL to meet my requirements?

    - by Jonno
    Assuming that I must deploy an asp.net app over the following 3 servers: 1) DB - not public 2) 'middle' - not public 3) Web server - public I am not allowed to connect from the web server to the DB directly. I must pass through 'middle' - this is purely to slow down an attacker if they breached the web server. All db access is via stored procedures. No table access. I simply want to provide the web server with an ADO dataset (I know many will dislike this, but this is the requirement). Using asmx web services - it works, but XML serialisation is slow and it's an extra set of code to maintain and deploy. Using an ssh/vpn tunnel so that one connects to the db 'via' the middle server seems to remove any possible benefit of maintaining 'middle'. Using WCF binary/tcp removes the XML problem, but there is still extra code. Is there an approach that provides the ease of ssh/vpn, but the potential benefit of having the DAL on the middle server? Many thanks.

    Read the article

  • using pom for test scope dependencies

    - by IttayD
    Hi, is it possible to create a pom file so it can be used inside another pom to add test scope dependencies? So in module E's pom.xml I have: <dependencies> <dependency> <groupId>com.example</groupId> <artifactId>D</artifactId> <type>pom</type> <scope>test</scope> </dependency> </dependencies> So that if D's pom.xml contains dependencies on artifacts A, B, C, then these artifacts are in the compilation and execution classpath of E's tests. NOTE: the reason I want such a pom, rather than relying on regular dependency resolution, is that I have created a tests jar using maven-jar-plugin:test-jar, and using that jar as a dependency causes maven to not use its transitive dependencies. (see http://jira.codehaus.org/browse/MNG-1378) UPDATE: this does not work for me (maybe because I'm trying to use it for the test scope): http://www.sonatype.com/books/mvnref-book/reference/pom-relationships-sect-pom-best-practice.html

    Read the article

  • Query column and everything subordinate (hard to describe, non native speaker, PLS let me explain)

    - by MAD9
    A few weeks ago, I asked a question about how to generate hierarchical XML from a table that has a parentID column. It all works fine. The point is, I also want to query a table according to that hierarchy. I'll give you an example. This is the table with the codes: ID CODE NAME PARENTID 1 ROOT IndustryCode NULL 2 IND Industry 1 3 CON Consulting 1 4 FIN Finance 1 5 PHARM Pharmaceuticals 2 6 AUTO Automotive 2 7 STRAT Strategy 3 8 IMPL Implementation 3 9 CFIN Corporate Finance 4 10 CMRKT Capital Markets 9 From which I generate (for displaying in a TreeViewControl) this XML: <record key="1" parentkey="" Code="ROOT" Name="IndustryCode"> <record key="2" parentkey="1" Code="IND" Name="Industry"> <record key="5" parentkey="2" Code="PHARM" Name="Pharmaceuticals" /> <record key="6" parentkey="2" Code="AUTO" Name="Automotive" /> </record> <record key="3" parentkey="1" Code="CON" Name="Consulting"> <record key="7" parentkey="3" Code="STRAT" Name="Strategy" /> <record key="8" parentkey="3" Code="IMPL" Name="Implementation" /> </record> <record key="4" parentkey="1" Code="FIN" Name="Finance"> <record key="9" parentkey="4" Code="CFIN" Name="Corporate Finance"> <record key="10" parentkey="9" Code="CMRKT" Name="Capital Markets" /> </record> </record> </record> As you can see, some codes are subordinate to others, for example AUTO << IND << ROOT. What I want (and have absolutely no idea how to realise, or even where to start) is to be able to query another table (where one column is this code, of course) for a code and get all records with that specific code and all subordinate codes. For example: I query the other table for "IndustryCode = IND[ustry]" and get (of course) the records containing "IND", but also AUTO[motive] and PHARM[aceutical] (= all subordinates). It's SQL Server 2008 Express with Advanced Services.

    Read the article

  • How to perform undirected graph processing from SQL data

    - by recipriversexclusion
    I ran into the following problem in dynamically creating topics for our ActiveMQ system: I have a number of processes (M_1, ..., M_n), where n is not large, typically 5-10. Some of the processes will listen to the output of others, through a message queue; these edges are specified in an XML file, e.g. <link from="M1" to="M3"></link> <link from="M2" to="M4"></link> <link from="M3" to="M4"></link> etc. The edges are sparse, so there won't be many of them. I will parse this XML and store this information in an SQL DB, one table for nodes and another for edges. Now, I need to dynamically create strings of the form M1.exe --output_topic=T1 M2.exe --output_topic=T2 M3.exe --input_topic=T1 --output_topic=T3 M4.exe --input_topic=T2 --input_topic=T3 where the topic tags (T1, T2, ...) are generated sequentially. What is the best way to go about querying SQL to obtain these relationships? Are there any tools or tutorials you can point me to? I've never done graphs with SQL. Using SQL is imperative, because we use it for other stuff, too. Thanks!
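
    With only a handful of nodes and sparse edges, one workable shape is to let SQL return nothing more than the edge list and do the tiny bit of graph work in the application: assign one output topic to every node that has an outgoing edge, and give each edge's target that topic as an input. The sketch below uses plain JDBC; the table and column names (edges, from_node, to_node) and the connection string are assumptions, not taken from the question.

        // Sketch: read the edge table and build the per-process command lines.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.LinkedHashMap;
        import java.util.LinkedHashSet;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;

        public class TopicWiring {
            public static void main(String[] args) throws SQLException {
                Map<String, String> outputTopic = new LinkedHashMap<String, String>();            // node -> its output topic
                Map<String, List<String>> inputTopics = new LinkedHashMap<String, List<String>>(); // node -> topics it listens to
                int topicCounter = 1;

                try (Connection con = DriverManager.getConnection("jdbc:sqlserver://host;databaseName=db", "user", "pass");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT from_node, to_node FROM edges")) {
                    while (rs.next()) {
                        String from = rs.getString("from_node");
                        String to = rs.getString("to_node");
                        String topic = outputTopic.get(from);
                        if (topic == null) {                       // each producer gets exactly one output topic
                            topic = "T" + topicCounter++;
                            outputTopic.put(from, topic);
                        }
                        List<String> inputs = inputTopics.get(to);
                        if (inputs == null) {
                            inputs = new ArrayList<String>();
                            inputTopics.put(to, inputs);
                        }
                        inputs.add(topic);
                    }
                }

                Set<String> nodes = new LinkedHashSet<String>();
                nodes.addAll(outputTopic.keySet());
                nodes.addAll(inputTopics.keySet());
                for (String node : nodes) {
                    StringBuilder cmd = new StringBuilder(node).append(".exe");
                    for (String t : inputTopics.getOrDefault(node, Collections.<String>emptyList())) {
                        cmd.append(" --input_topic=").append(t);
                    }
                    if (outputTopic.containsKey(node)) {
                        cmd.append(" --output_topic=").append(outputTopic.get(node));
                    }
                    System.out.println(cmd);
                }
            }
        }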

    Read the article

  • wsimport generate a client with cookies

    - by dierre
    I'm generating a client for a SOAP 1.2 service using wsimport from the jaxws-maven-plugin in maven with the following execution: <groupId>org.jvnet.jax-ws-commons</groupId> <artifactId>jaxws-maven-plugin</artifactId> <version>2.2</version> <executions> <execution> <goals> <goal>wsimport</goal> </goals> <configuration> <sourceDestDir>${project.basedir}/src/main/java</sourceDestDir> <wsdlUrls> <wsdlUrl>${webservice.url}</wsdlUrl> </wsdlUrls> <extension>true</extension> </configuration> </execution> The first time the client calls the proxy, the load balancer generates a cookie and sends it back. The client should send it back so the load balancer knows which server is dedicated to a specific client (the idea is that on the first call the client gets a server, the cookie identifies that server, and the load balancer then sends the client to the same server for every call). Now, is there a way to tell the plugin to enable cookie handling automatically?
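
    wsimport itself only generates the port classes from the WSDL; session/cookie handling is normally switched on at runtime on the generated port rather than through the plugin. A hedged sketch is below; MyService, MyPortType and getMyPortTypePort() stand in for whatever classes and methods wsimport generates from the actual WSDL.

        // Sketch: ask the JAX-WS runtime to keep and resend cookies (such as the
        // load balancer's affinity cookie) on every call made through this port.
        import java.util.Map;
        import javax.xml.ws.BindingProvider;

        public class ClientFactory {
            public static MyPortType createPort() {
                MyService service = new MyService();            // generated by wsimport
                MyPortType port = service.getMyPortTypePort();  // generated by wsimport

                Map<String, Object> ctx = ((BindingProvider) port).getRequestContext();
                ctx.put(BindingProvider.SESSION_MAINTAIN_PROPERTY, Boolean.TRUE);
                return port;
            }
        }

    Alternatively, a JVM-wide java.net.CookieHandler/CookieManager can be installed, but the request-context property keeps the behaviour local to this one client.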

    Read the article

  • JCo | How to iterate column wise

    - by cedar715
    The data from SAP is returned as a JCo.Table. However, we don't want to display ALL the columns in the VIEW. So what we have done is create a file called display.xml which lists the JCO.Table columns to be displayed. The display.xml is converted to a List, and each field is checked against the display list (see the code below), which is redundant from the second row onwards. final Table outputTable = jcoFunction.getTableParameterList(). getTable("OUTPUT_TABLE"); final int numRows = outputTable.getNumRows(); for (int i = 0; i < numRows; i++) { final FieldIterator fields = outputTable.fields(); while (fields.hasNextFields()) { final JCO.Field recordField = fields.nextField(); final String sapFieldName = recordField.getName(); final DisplayFieldDto key = new DisplayFieldDto(sapFieldName); if (displayFields.contains(key)) { System.out.println("recordField.getName() = " + recordField.getName()); final String sapFieldValue = (String)recordField.getValue(); } else { // ignore the field. } } } What is a better way to filter the fields in JCo? Can I iterate column-wise? Thank you :)
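
    One way to take the per-row overhead out is to resolve the displayable column names once, before the row loop, and then only compare plain field names inside it. The sketch below stays close to the code above; it assumes DisplayFieldDto exposes its SAP field name through a getter (here called getSapFieldName(), which is a guess), and it uses setRow(i) from the classic JCo 2.x API to position the table cursor on each row.

        // Sketch: build the set of display column names once, then walk the rows.
        // java.util.Set / java.util.HashSet are used for the lookup.
        final Set<String> displayNames = new HashSet<String>();
        for (final DisplayFieldDto dto : displayFields) {
            displayNames.add(dto.getSapFieldName());   // assumed getter on DisplayFieldDto
        }

        final Table outputTable = jcoFunction.getTableParameterList().getTable("OUTPUT_TABLE");
        for (int i = 0; i < outputTable.getNumRows(); i++) {
            outputTable.setRow(i);                      // position the cursor on row i
            final FieldIterator fields = outputTable.fields();
            while (fields.hasNextFields()) {
                final JCO.Field recordField = fields.nextField();
                if (displayNames.contains(recordField.getName())) {
                    final Object value = recordField.getValue();
                    // collect recordField.getName() / value for the view here
                }
            }
        }

    JCo itself hands the data back row by row, so a true column-wise iteration still means touching every row; precomputing which columns matter is usually the cheaper half of the work.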

    Read the article

  • drupal form textfield #default_value not working

    - by alvin.ng
    I am working on a custom module with a multi-page form on Drupal 6. I found that #default_value is not working when my '#type' = 'textfield'. However, when '#type' = 'textarea', it displays correctly with the '#default_value' specified. Basically, I wrote a FormFactory to return different form definitions ($form) based on the post parameter received. Initially it returns a list of directories; the user then selects from radio buttons until a specific directory contains an xml file, at which point it becomes an edit form. The edit form has text fields that display the data (#default_value) from the xml file; however, only the type 'textarea' works here, not 'textfield'. How can I make my '#default_value' work in this case? Below is the non-working field definition: $form['pageset']['newsTitle'] = array( '#type' => 'textfield', '#title' => 'News Title', '#default_value' => "{$element->newsTitle}", '#rows' => 1, '#required' => TRUE, ); Then I changed it to textarea as shown below to make it work: $form['pageset']['newsTitle'] = array( '#type' => 'textarea', '#title' => 'News Title', '#default_value' => "{$element->newsTitle}", '#rows' => 1, '#required' => TRUE, );

    Read the article

  • Forms Authentication & Virtual Directory

    - by benclaytonfranklin
    Hi, we're having trouble getting Forms Authentication to work with a virtual directory in IIS. We have a main site, and then a microsite set up within a virtual directory. This microsite has its own admin system within an "Admin" folder, which has authentication on it, but currently it is not kicking in and the admin section is browsable by anyone. The web.config within the admin folder has the following: <?xml version="1.0"?> <configuration> <appSettings/> <connectionStrings/> <system.web> <authorization> <deny users="?"/> </authorization> <customErrors mode="RemoteOnly" defaultRedirect="~/Admin/Error.aspx"/> </system.web> </configuration> Could anyone give me any clues as to why this might not be working? Cheers!

    Read the article

  • Image Grabbing with False Referer

    - by Mr Carl
    Hey guys, I'm struggling with grabbing an image at the moment... sounds silly, but check out this link :P http://manga.justcarl.co.uk/A/Oishii_Kankei/31/1 If you get the image URL, the image loads. Go back, and it looks like it's working fine, but that's just the browser loading up the cached image. The application was working fine before; I'm thinking they implemented some kind of Referer check on their images. So I found some code and came up with the following... $ref = 'http://www.thesite.com/'; $file = 'theimage.jpg'; $hdrs = array( 'http' => array( 'method' => "GET", 'header'=> "accept-language: en\r\n" . "Accept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*\/*;q=0.5\r\n" . "Referer: $ref\r\n" . // Setting the http-referer "Content-Type: image/jpeg\r\n" ) ); // get the requested page from the server // with our header as a request-header $context = stream_context_create($hdrs); $fp = fopen($imgChapterPath.$file, 'rb', false, $context); fpassthru($fp); fclose($fp); Essentially it's making up a false referrer. All I'm getting back though is a bunch of gibberish (thanks to fpassthru), so I think it's getting the image, but I'm afraid to say I have no idea how to output/display the collected image. Cheers, Carl

    Read the article

  • java logging nightmare and log4j not behaving as expected with spring + tomcat6

    - by maverick
    I have a Spring application that configures log4j via XML and runs on Tomcat 6. It was working fine until we added a bunch of dependencies via Maven. At some point the whole application started logging only part of what was declared in the log4j.xml. (A small rant here: why does logging have to be this hard in the Java world? Why does an application that was just fine suddenly start behaving so weirdly, and why is it so hard to debug?) I've been reading and trying to solve this issue for days, but so far no luck; hopefully some expert here can give me some insights. I've added the log4j debug option to check whether log4j is reading the config file and its values, and this is part of what it shows: log4j: Level value for org.springframework.web is [debug]. log4j: org.springframework.web level set to DEBUG log4j: Retreiving an instance of org.apache.log4j.Logger. log4j: Setting [org.compass] additivity to [true]. log4j: Level value for org.compass is [debug]. log4j: org.compass level set to DEBUG As you can see, debug is enabled for compass and spring.web, but it only shows "INFO" level for both packages. My log4j config file has nothing out of the ordinary, just a plain ConsoleAppender: <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/"> <!-- Appenders --> <appender name="console" class="org.apache.log4j.ConsoleAppender"> <param name="Target" value="System.out" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%-5p: %c - %m%n" /> </layout> </appender> What's the trick to make this work? What is my misunderstanding here? Can someone point me in the right direction and explain how I can make this logging mess more bulletproof?
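
    A symptom like this is often another log4j.xml or log4j.properties hiding inside one of the newly added Maven dependencies and winning the classpath race, so one way to narrow it down is to ask log4j at runtime what it actually ended up with. A small diagnostic sketch against the log4j 1.x API:

        // Diagnostic sketch (log4j 1.x): print the effective level and the attached
        // appenders for the root logger and the loggers that look wrong, to see
        // whether some other configuration on the classpath overrode log4j.xml.
        import java.util.Enumeration;
        import org.apache.log4j.Appender;
        import org.apache.log4j.Logger;

        public class Log4jDump {
            public static void main(String[] args) {
                dump(Logger.getRootLogger());
                dump(Logger.getLogger("org.springframework.web"));
                dump(Logger.getLogger("org.compass"));
            }

            private static void dump(Logger logger) {
                System.out.println(logger.getName() + " effective level: " + logger.getEffectiveLevel());
                for (Enumeration<?> e = logger.getAllAppenders(); e.hasMoreElements();) {
                    System.out.println("  appender: " + ((Appender) e.nextElement()).getName());
                }
            }
        }

    Running with -Dlog4j.debug=true (which the output above suggests is already happening) also prints which configuration file log4j picked, so comparing that path against the expected log4j.xml is a quick first check.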

    Read the article

  • How to create refresh statements for TableAdapter objects in Visual Studio?

    - by Mark Wilkins
    I am working on developing an ADO.NET data provider and an associated DDEX provider. I am unable to convince the Visual Studio TableAdapter Configuration Wizard to generate SQL statements to refresh the data table after inserts and updates. It generates the insert and delete statements but will not produce the select statements to do the refresh. The functionality referred to can be accessed by dropping a table from the Server Explorer (inside Visual Studio) onto a DataSet (e.g., DataSet1.xsd). It creates a TableAdapter object and configures SELECT, UPDATE, DELETE, and INSERT statements. If you right-click on the TableAdapter object, the context menu has a “Configure” option that starts the “TableAdapter Configuration Wizard”. The first dialog of that wizard has an Advanced Options button, which leads to an option titled “Refresh the data table”. When used with SQL Server tables, that option causes a statement of the form “select field1, field2, …” to be added on to the end of the commands for the TableAdapter’s InsertCommand and UpdateCommand. Do you have any idea what type of property or interface might need to be exposed from the DDEX provider (or maybe the ADO.NET data provider) in order to make Visual Studio add those refresh statements to the update/insert commands? The MSDN documentation for the Advanced SQL Generation Options Dialog Box has a note stating, “Refreshing the data table is only supported on databases that support batching of SQL statements.” This seems to imply that a .NET data provider might need to expose some property indicating such behavior is supported. But I cannot find it. Any ideas?

    Read the article

  • Using SiteMesh with RequestDispatcher's forward()

    - by Rob Hruska
    I'm attempting to integrate SiteMesh into a legacy application using Tomcat 5 as my container. I have a main.jsp that I'm decorating with a simple decorator. In decorators.xml, I've just got one decorator defined: <decorators defaultdir="/decorators"> <decorator name="layout-main" page="layout-main.jsp"> <pattern>/jsp/main.jsp</pattern> </decorator> </decorators> This decorator works if I manually go to http://example.com/my-webapp/jsp/main.jsp. However, there are a few places where a servlet, instead of doing a redirect to a jsp, does a forward: getServletContext().getRequestDispatcher("/jsp/main.jsp").forward(request, response); This means that the URL remains at something like http://example.com/my-webapp/servlet/MyServlet instead of the jsp file and is therefore not being decorated, I presume since it doesn't match the pattern in decorators.xml. I can't do a <pattern>/*</pattern> because there are other jsps that do not need to be decorated by layout-main.jsp. I can't do a <pattern>/servlet/MyServlet*</pattern> because MyServlet may forward to main.jsp sometimes and perhaps error.jsp at other times. Is there a way to work around this without extensive changes to how the servlets work? Since it's a legacy app I don't have as much freedom to change things, so I'm hoping for something configuration-wise that will fix this. SiteMesh's documentation really isn't that great. I've been working mostly off the example application that comes with the distribution. I really like SiteMesh, and am hoping I can get it to work in this case.

    Read the article

  • innerHTML doesn't work correctly with xhtml in Chrome

    - by Desperadeus
    Hi! I've got trouble with Chrome 5.0.375.70, but FF 3.6.3 and Opera 10.53 are OK. Below is the line of code: document.getElementById('content').innerHTML = data.documentElement.innerHTML; The data object from the code is a document (typeof(data) == 'object') and I got it via an ajax request to chapter01.xhtml: <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE html [ <!ENTITY D "&#x2014;"> <!ENTITY o "&#x2018;"> <!ENTITY c "&#x2019;"> <!ENTITY O "&#x201C;"> <!ENTITY C "&#x201D;"> ]> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Alice's Adventures in Wonderland by Lewis Carroll. Chapter I: Down the Rabbit-Hole</title> <link rel="stylesheet" type="text/css" href="style.css"/> <link rel="stylesheet" type="application/vnd.adobe-page-template+xml" href="page-template.xpgt"/> </head> <body> <div class="title_box"> <h2 class="chapnum">Chapter I</h2> <h2 class="chaptitle">Down the Rabbit-Hole</h2> <hr/> </div> Chrome cuts off everything before the body, and as a result the link to the CSS in the head is lost; the user can't see the formatted text and images. How can I fix or bypass this?

    Read the article

  • .NET assembly not loading from NTVDM

    - by John Reid
    I have a VDD dll that's loaded by a DOS program running inside the NTVDM. This dll uses C++/CLI and references a .NET assembly. All in all, the loading process is something like this: NTVDM runs: prntsr.com which uses VDD RegisterModule to load: prnvdd.dll which references .NET assembly: prnlib.dll The prntsr.com, prnvdd.dll and prnlib.dll files are all in the same folder. However, when loading it, I get the following exception: Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'PRNLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ecf23cee305e91b7' or one of its dependencies. The system cannot find the file specified. File name: 'PRNLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ecf23cee305e91b7' at VDD_Initialise() === Pre-bind state information === LOG: User = DOMAIN\user LOG: DisplayName = PRNLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ecf2 3cee305e91b7 (Fully-specified) LOG: Appbase = file:///C:/WINDOWS/system32/ LOG: Initial PrivatePath = NULL Calling assembly : (Unknown). === LOG: This bind starts in default load context. LOG: No application configuration file found. LOG: Using machine configuration file from C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\config\machine.config. LOG: Post-policy reference: PRNLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ecf23cee305e91b7 LOG: Attempting download of new URL file:///C:/WINDOWS/system32/PRNLib.DLL. LOG: Attempting download of new URL file:///C:/WINDOWS/system32/PRNLib/PRNLib.DLL. LOG: Attempting download of new URL file:///C:/WINDOWS/system32/PRNLib.EXE. LOG: Attempting download of new URL file:///C:/WINDOWS/system32/PRNLib/PRNLib.EXE. It only searches C:\WINDOWS\system32\ for the assembly, which I guess this is due to NTVDM.EXE - as this is the actual process that the assembly is being loaded into, it takes its location as the AppBase. Any ideas how to change the AppBase or otherwise work around this problem?

    Read the article

  • starting oracle 10g on ubuntu, Listener failed to start.

    - by tsegay
    I have installed Oracle 10g on Ubuntu 10.x; this is my first installation. After installing, I tried to start it with the command below. tsegay@server-name:/u01/app/oracle/product/10.2.0/db_1/bin$ lsnrctl LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 29-DEC-2010 22:46:51 Copyright (c) 1991, 2005, Oracle. All rights reserved. Welcome to LSNRCTL, type "help" for information. LSNRCTL> start Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait... TNSLSNR for Linux: Version 10.2.0.1.0 - Production System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1))) TNS-12555: TNS:permission denied TNS-12560: TNS:protocol adapter error TNS-00525: Insufficient privilege for operation Linux Error: 1: Operation not permitted Listener failed to start. See the error message(s) above... My listener.ora file looks like this: # listener.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora # Generated by Oracle configuration tools. SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (SID_NAME = PLSExtProc) (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1) (PROGRAM = extproc) ) ) LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1)) (ADDRESS = (PROTOCOL = TCP)(HOST = acct-vmserver)(PORT = 1521)) ) ) I guess the problem is a permission issue, but I don't know where I need to change the permissions. Any help is appreciated ...

    Read the article

  • Running an executable on network share with CustomAction with wix?

    - by martin
    Hello, I have created an MSI package which compresses some XML files into a zip file during installation. I have created a CustomAction for this purpose: <CustomAction Id="CompressMy" BinaryKey="zipEXE" ExeCommand="a -tzip &quot;[TEMPLATE_DIR]my.zip&quot; &quot;[TempSourceFolder]data.xml&quot;" Return="check" HideTarget="no" Impersonate="no" Execute="deferred" /> The installation works fine if I try to install to a local drive, but recently a customer wanted to install [TEMPLATE_DIR] to a network drive on Windows Vista. The CustomAction fails because the elevated install user hasn't mapped the network drive, even though the user who launched the installer has mapped it. This also happens if I try to install to a UNC path. I use 7zip for compressing and have added it to my MSI package. I have tried to set Impersonate="yes", but then the installation fails if my TEMPLATE_DIR is, for example, the ProgramData directory. Do you have any idea what I can do? I thought about checking whether TEMPLATE_DIR is a network path, but I don't know how to check this. Or do you have any other ideas how I can support both a local and a network installation while using this custom action? Any advice would be great. Greetings, Martin

    Read the article

  • Standard Android Button with a different color

    - by Mike
    I'd like to change the color of a standard Android button slightly in order to better match a client's branding. For example, see the "Find a Table" button for the OpenTable application: The best way I've found to do this so far is to change the Button's drawable to the following drawable located in res/drawable/red_button.xml: <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_pressed="true" android:drawable="@drawable/red_button_pressed" /> <item android:state_focused="true" android:drawable="@drawable/red_button_focus" /> <item android:drawable="@drawable/red_button_rest" /> </selector> But doing that requires that I actually create three different drawables for each button I want to customize (one for the button at rest, one when focused, and one when pressed). That seems more complicated and non-DRY than I need. All I really want to do is apply some sort of color transform to the button. Is there an easier way to go about changing a button's color than I'm doing?
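
    If the goal is just a tint rather than completely new artwork, one lighter-weight option is to keep the stock button background and apply a color filter to it from code. A sketch is below; the layout id, button id, color value and blend mode are illustrative, not part of the platform defaults.

        // Sketch: tint the standard button background instead of shipping three
        // custom state drawables. Color and PorterDuff mode are illustrative.
        import android.app.Activity;
        import android.graphics.Color;
        import android.graphics.PorterDuff;
        import android.os.Bundle;
        import android.widget.Button;

        public class FindTableActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);                             // hypothetical layout
                Button findTable = (Button) findViewById(R.id.find_table); // hypothetical id
                findTable.getBackground().setColorFilter(
                        Color.parseColor("#AA0000"), PorterDuff.Mode.MULTIPLY);
            }
        }

    Because the filter is applied to the existing StateListDrawable, the pressed and focused states keep their stock shading and just pick up the tint, so the three separate drawables are not needed.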

    Read the article

  • Using ControllerClassNameHandlerMapping with @Controller and extending AbstractController

    - by whiskerz
    Hey there, actually I thought I was trying something really simple. ControllerClassNameHandlerMapping sounded great for producing a small Spring webapp with a very lean configuration. Just annotate the controller with @Controller, have it extend AbstractController, and the configuration shouldn't need more than this: <context:component-scan base-package="test.mypackage.controller" /> <bean id="urlMapping" class="org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping" /> to resolve my requests and map them to my controllers. I've mapped the servlet to "*.spring", and when calling <approot>/hello.spring all I ever get is an error stating that no mapping was found. If, however, I extend MultiActionController and call something like <approot>/hello/hello.spring, it works. Which irritates me, because if that works, why didn't my first attempt? Does anyone have any idea? The two controllers I used looked like this: @Controller public class HelloController extends AbstractController { @Override protected ModelAndView handleRequestInternal(HttpServletRequest request, HttpServletResponse response) throws Exception { ModelAndView modelAndView = new ModelAndView("hello"); modelAndView.addObject("message", "Hello World!"); return modelAndView; } } and @Controller public class HelloController extends MultiActionController { public ModelAndView hello(HttpServletRequest request, HttpServletResponse response) throws Exception { ModelAndView modelAndView = new ModelAndView("hello"); modelAndView.addObject("message", "Hello World!"); return modelAndView; } }

    Read the article

  • Best practices for Magento Deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage, I can't see any requirements for changing the DB, so the 3 databases will be manually maintained. Specifically: How do you apply Magento upgrades? (Individually in each env, or on dev then roll out, or just give up on upgrades?) What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)? Do you restrict developers to editing specific files/folders? Do you restrict theme designers to editing specific files/folders? How do you manage database changes? Theme Designer Files/Folders Designers can be restricted to editing the following folders- app/design/frontend/your_interface/your_theme/layout/ app/design/frontend/your_interface/your_theme/template/ app/design/frontend/your_interface/your_theme/locale/ skin/frontend/your_interface/your_theme/ Extension Developer Files/Folders Extension developers can edit the following folders/files- /app/code/local /app/etc/modules/<Namespace>_<Module>.xml Database environment management As the store's base URL is stored in the database, you cannot just copy databases between environments. Options include- Overriding the base url in php. Blog article on setting up dev and staging databases Changing the base url in the database after copying. (Where is this stored?) Doing a MySQLDump or backup, then doing a replace on the URL in the SQL file.

    Read the article

  • How to create a DataAccessLayer?

    - by NIGHIL DAS
    hi, i am creating a database applicatin in .Net. I am using a DataAccessLayer for communicating .net objects with database but i am not sure that this class is correct or not Can anyone cross check it and rectify any mistakes namespace IDataaccess { #region Collection Class public class SPParamCollection : List<SPParams> { } public class SPParamReturnCollection : List<SPParams> { } #endregion #region struct public struct SPParams { public string Name { get; set; } public object Value { get; set; } public ParameterDirection ParamDirection { get; set; } public SqlDbType Type { get; set; } public int Size { get; set; } public string TypeName { get; set; } // public string datatype; } #endregion /// <summary> /// Interface DataAccess Layer implimentation New version /// </summary> public interface IDataAccess { DataTable getDataUsingSP(string spName); DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection); DataSet getDataSetUsingSP(string spName); DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection); SqlDataReader getDataReaderUsingSP(string spName); SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection); int executeSP(string spName); int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas); int executeSP(string spName, SPParamCollection spParamCollection); DataTable getDataUsingSqlQuery(string strSqlQuery); int executeSqlQuery(string strSqlQuery); SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas); int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection); object getScalarUsingSP(string spName); object getScalarUsingSP(string spName, SPParamCollection spParamCollection); } } using IDataaccess; namespace Dataaccess { /// <summary> /// Class DataAccess Layer implimentation New version /// </summary> public class DataAccess : IDataaccess.IDataAccess { #region Public variables static string Strcon; DataSet dts = new DataSet(); public DataAccess() { Strcon = sReadConnectionString(); } private string sReadConnectionString() { try { //dts.ReadXml("C:\\cnn.config"); //Strcon = dts.Tables[0].Rows[0][0].ToString(); //System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); //Strcon = config.ConnectionStrings.ConnectionStrings["connectionString"].ConnectionString; // Add an Application Setting. 
//Strcon = "Data Source=192.168.50.103;Initial Catalog=erpDB;User ID=ipixerp1;Password=NogoXVc3"; Strcon = System.Configuration.ConfigurationManager.AppSettings["connection"]; //Strcon = System.Configuration.ConfigurationSettings.AppSettings[0].ToString(); } catch (Exception) { } return Strcon; } public SqlConnection connection; public SqlCommand cmd; public SqlDataAdapter adpt; public DataTable dt; public int intresult; public SqlDataReader sqdr; #endregion #region Public Methods public DataTable getDataUsingSP(string spName) { return getDataUsingSP(spName, null); } public DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public DataSet getDataSetUsingSP(string spName) { return getDataSetUsingSP(spName, null); } public DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); DataSet ds = new DataSet(); adpt.Fill(ds); return ds; } } } finally { connection.Close(); } } public SqlDataReader getDataReaderUsingSP(string spName) { return getDataReaderUsingSP(spName, null); } public SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; sqdr = cmd.ExecuteReader(); return (sqdr); } } } finally { connection.Close(); } } public int executeSP(string spName) { return executeSP(spName, null); } public int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; return (cmd.ExecuteNonQuery()); } } } finally { connection.Close(); } } public int 
executeSP(string spName, SPParamCollection spParamCollection) { return executeSP(spName, spParamCollection, false); } public DataTable getDataUsingSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) connection.Open(); { using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public int executeSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); return (intresult); } } } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, null, spParamReturnCollection); } public int executeSPReturnParam() { return 0; } public int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Size); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); connection.Close(); //for (int i = 0; i < spParamReturnCollection.Count; i++) //{ // spParamReturned.Add(new SPParams // { // Name = spParamReturnCollection[i].Name, // Value = cmd.Parameters[spParamReturnCollection[i].Name].Value // }); //} } } return intresult; } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, spParamCollection, spParamReturnCollection, false); } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { //cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = 
CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Value); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; cmd.ExecuteNonQuery(); connection.Close(); for (int i = 0; i < spParamReturnCollection.Count; i++) { spParamReturned.Add(new SPParams { Name = spParamReturnCollection[i].Name, Value = cmd.Parameters[spParamReturnCollection[i].Name].Value }); } } } return spParamReturned; } catch (Exception ex) { return null; } finally { connection.Close(); } } public object getScalarUsingSP(string spName) { return getScalarUsingSP(spName, null); } public object getScalarUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); cmd.CommandTimeout = 60; } cmd.CommandType = CommandType.StoredProcedure; return cmd.ExecuteScalar(); } } } finally { connection.Close(); cmd.Dispose(); } } #endregion } }

    Read the article

  • What are the most time-consuming checks performed by .NET when executing a managed application?

    - by ltorje
    I've developed a .NET-based Windows service that uses partly managed (C#) and partly unmanaged code (C/C++ libraries). In some domain environments (e.g. a Win 2k3 32-bit server inside domain abc.com), the service sometimes takes more than 30 seconds to start (especially on OS restart), which causes the service start to fail. I suspect it has something to do with enterprise-level security, but I do not know for sure. http://msdn.microsoft.com/en-us/library/aa720255%28VS.71%29.aspx I've tried the following without success: - delay loading references by moving the using directives as far as possible from the ServiceBase implementation (especially the xml namespace - known to cause delays in loading) - delay loading and configuring log4net - precompiling the code using ngen - delaying the start of the worker thread - adding/removing the manifest and the dependencies set inside it - signing/unsigning the binaries - using the configuration settings (there are a lot of settings, and the scope level for all of them is set to application) as late as possible - adding all dependencies to the GAC I haven't yet tried adding security demands for the class that implements the Main method. I didn't try to implement my own configuration loader because, after inspecting the autogenerated code, I noticed that the settings class is a singleton and gets its instance on call. Completely removing the log4net dependency worked, but that is not an option. When the network card is disabled, the service starts immediately. Any suggestions/comments/solutions would be most welcome.

    Read the article

  • Keep Hibernate Initializer from Crashing Program

    - by manyxcxi
    I have a Java program using a basic Hibernate session factory. I had an issue with a hibernate hbm.xml mapping file and it crashed my program even though I had the getSessionFactory() call in a try catch try { session = SessionFactoryUtil.getSessionFactory().openStatelessSession(); session.beginTransaction(); rh = getRunHistoryEntry(session); if(rh == null) { throw new Exception("No run history information found in the database for run id " + runId_ + "!"); } } catch(Exception ex) { logger.error("Error initializing hibernate"); } It still manages to break out of this try/catch and crash the main thread. How do I keep it from doing this? The main issue is I have a bunch of cleanup commands that NEED to be run before the main thread shuts down and need to be able to guarantee that even after a failure it still cleans up and goes down somewhat gracefully. The session factory looks like this: public class SessionFactoryUtil { private static final SessionFactory sessionFactory; static { try { // Create the SessionFactory from hibernate.cfg.xml sessionFactory = new Configuration().configure().buildSessionFactory(); } catch (Throwable ex) { // Make sure you log the exception, as it might be swallowed System.err.println("Initial SessionFactory creation failed." + ex); throw new ExceptionInInitializerError(ex); } } public static SessionFactory getSessionFactory() { try { return sessionFactory; } catch(Exception ex) { return null; } } }
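
    The catch (Exception ex) block never sees the failure because the SessionFactory is built in a static initializer: when that throws, the JVM wraps it in ExceptionInInitializerError, which is an Error rather than an Exception, so it sails past the handler and kills the thread. One way around this is to build the factory lazily and surface the failure as an ordinary exception the caller can handle; a sketch is below (the HibernateInitException type is hypothetical).

        // Sketch: lazy, exception-safe factory instead of a static initializer,
        // so a broken hbm.xml shows up as a catchable exception.
        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;

        public class SessionFactoryUtil {

            /** Hypothetical checked exception so callers must handle startup failure. */
            public static class HibernateInitException extends Exception {
                public HibernateInitException(String message, Throwable cause) {
                    super(message, cause);
                }
            }

            private static SessionFactory sessionFactory;

            public static synchronized SessionFactory getSessionFactory() throws HibernateInitException {
                if (sessionFactory == null) {
                    try {
                        sessionFactory = new Configuration().configure().buildSessionFactory();
                    } catch (Throwable t) {
                        throw new HibernateInitException("SessionFactory creation failed", t);
                    }
                }
                return sessionFactory;
            }
        }

    With this shape the existing try/catch in the service code actually receives the failure and can run the cleanup before shutting down; alternatively, catching Throwable (or ExceptionInInitializerError specifically) around the first call to the current SessionFactoryUtil achieves the same without changing the factory.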

    Read the article
