Search Results

Search found 13214 results on 529 pages for 'git config'.

  • Jersey / ServletContext and resource loading on startup.

    - by Raphael Jolivet
    Hello, I'm kind of new in web development with Java. I am developing a web service and I've chosen REST / Jersey for it. I want to init some stuff on startup of the service and to keep them all along the life of the service. First question : Is the constructor of the Jersey Servlet a good place to do that ? Basically, what I want to do is to load a config.ini file located in my WEB-INF directory. Following this help, I understand I need a ServletContext to load my file as a resource. However, it is not clear to me how to get this ServletContext in a Jersey Servlet, as it is not really an instance of a servlet, but rather a POJO with some annotations. I wanted to try this tip, but the attribute "context" is null in the constructor. I think that Jersey might populate it after the constructor. Right ? So how is the right way to do this ? Here is my code so far : /** Main REST servlet */ @Path("/") public class Servlet { // ---------------------------------------------------- // Constants // ---------------------------------------------------- static private final String CONFIG_PATH = "/WEB-INF/config.ini"; // ---------------------------------------------------- // Attributes // ---------------------------------------------------- /** Context */ @Context ServletContext context; // ---------------------------------------------------- // Constructor // ---------------------------------------------------- /** Init the servlet */ public Servlet() { // Load config.ini from WEB-INF Config.config = new Config( this.context.getResourceAsStream(CONFIG_PATH)); // FAIL! this.context is null ... } // ---------------------------------------------------- // URI Handlers // ---------------------------------------------------- /** Welcome page */ @GET @Path("/") @Produces(MediaType.TEXT_HTML) public String welcome() { return "<h1>Hi there.</h1>"; } } Any help would be much appreciated. Thanks in advance, Raphael
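
    A sketch of one way around this, assuming a standard Jersey/JAX-RS setup: @Context fields are injected only after the constructor has run, but the JAX-RS spec also allows @Context parameters on the resource constructor, so the ServletContext is usable there. Note that with the default per-request lifecycle this constructor runs on every request; a ServletContextListener is the more usual home for true one-off startup work.

      import javax.servlet.ServletContext;
      import javax.ws.rs.Path;
      import javax.ws.rs.core.Context;

      @Path("/")
      public class Servlet {

          private static final String CONFIG_PATH = "/WEB-INF/config.ini";

          // Constructor parameters annotated with @Context are injected before the
          // body executes, unlike @Context fields, which are still null at this point.
          public Servlet(@Context ServletContext context) {
              Config.config = new Config(context.getResourceAsStream(CONFIG_PATH));
          }
      }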

  • How to make dynamically generated .net service client read configuration from another location than

    - by Bryan
    Hi, I've currently written code to use the ServiceContractGenerator to generate web service client code based on a wsdl, and then compile it into an assembly in memory using the code dom. I'm then using reflection to set up the binding, endpoint, service values/types, and then ultimately invoke the web service method based on xml configuration that can be altered at run time. This all currently works fine. However, the problem I'm currently running into, is that I'm hitting several exotic web services that require lots of custom binding/security settings. This is forcing me to add more and more configuration into my custom xml configurations, as well as the corresponding updates to my code to interpret and set those binding/security settings in code. Ultimately, this makes adding these 'exotic' services slower, and I can see myself eventually reimplementing the 'system.serviceModel' section of the web or app.config file, which is never a good thing. My question is, and this is where my lack of experience .net and C# shows, is there a way to define the configuration normally found in the web.config or app.config 'system.serviceModel' section somewhere else, and at run time supply this to configuration to the web service client? Is there a way to attach an app.config directly to an assembly as a resource or any other way to supply this configuration to the client? Basically, I'd like attach an app.config only containing a 'system.serviceModel' to the assembly containing a web service client so that it can use its configuration. This way I wouldn't need to handle every configuration under the sun, I could let .net do it for me. Fyi, it's not an option for me to put the configuration for every service in the app.config for the running application. Any help would be greatly appreciated. Thanks in advance! Bryan
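
    One approach worth investigating (a sketch, not a drop-in solution): from .NET 4 onwards, System.ServiceModel.Configuration.ConfigurationChannelFactory&lt;TChannel&gt; accepts a Configuration object opened from an arbitrary file, so the 'system.serviceModel' section can live in a separate config file shipped with (or written out by) the client assembly. The file path, endpoint name and contract type below are placeholders.

      using System.Configuration;
      using System.ServiceModel.Configuration;

      // Open a config file that is not the host application's app.config.
      var fileMap = new ExeConfigurationFileMap { ExeConfigFilename = @"C:\clients\exotic-service.config" };
      Configuration externalConfig =
          ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);

      // Build the channel from the <system.serviceModel> section of that file.
      var factory = new ConfigurationChannelFactory<IExoticService>(
          "ExoticServiceEndpoint",  // endpoint name inside the external config
          externalConfig,
          null);                    // null = use the address defined in the config file

      IExoticService client = factory.CreateChannel();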

  • C# Static constructors design problem - need to specify parameter

    - by Neil Dobson
    I have a re-occurring design problem with certain classes which require one-off initialization with a parameter such as the name of an external resource such as a config file. For example, I have a corelib project which provides application-wide logging, configuration and general helper methods. This object could use a static constructor to initialize itself but it need access to a config file which it can't find itself. I can see a couple of solutions, but both of these don't seem quite right: 1) Use a constructor with a parameter. But then each object which requires corelib functionality should also know the name of the config file, so this has to be passed around the application. Also if I implemented corelib as a singleton I would also have to pass the config file as a parameter to the GetInstance method, which I believe is also not right. 2) Create a static property or method to pass through the config file or other external parameter. I have sort of used the latter method and created a Load method which initializes an inner class which it passes through the config file in the constructor. Then this inner class is exposed through a public property MyCoreLib. public static class CoreLib { private static MyCoreLib myCoreLib; public static void Load(string configFile) { myCoreLib = new MyCoreLib(configFile); } public static MyCoreLib MyCoreLib { get { return myCoreLib; } } public class MyCoreLib { private string configFile; public MyCoreLib(string configFile) { this.configFile = configFile; } public void DoSomething() { } } } I'm still not happy though. The inner class is not initialized until you call the load method, so that needs to be considered anywhere the MyCoreLib is accessed. Also there is nothing to stop someone calling the load method again. Any other patterns or ideas how to accomplish this?
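
    One common compromise, sketched below with illustrative names: keep an explicit initialization step, but guard it so the class fails loudly when it is used before being initialized or initialized twice, which addresses both objections to the Load approach above.

      using System;

      public static class CoreLib
      {
          private static MyCoreLib instance;
          private static readonly object initLock = new object();

          public static void Initialize(string configFile)
          {
              lock (initLock)
              {
                  if (instance != null)
                      throw new InvalidOperationException("CoreLib has already been initialized.");
                  instance = new MyCoreLib(configFile);
              }
          }

          public static MyCoreLib Instance
          {
              get
              {
                  if (instance == null)
                      throw new InvalidOperationException("Call CoreLib.Initialize(configFile) at startup first.");
                  return instance;
              }
          }
      }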

  • SSIS package not picking up configuration files correctly

    - by blntechie
    We have recently migrated some 30 DTS packages in SQL Server 2000 to SSIS packages in SQL Server 2008. We have created packages in such a way that all environment related variables and other required information are picked from configuration files for maintenance of packages across different environments like Dev, QA and Prod. After setting up all the packages with the config files, when we tested the packages from Business Intelligence Development Studio, it worked fine and picked up the values from the configuration file. And when changing the values in config files to Dev or other env it correctly picked up the values and executed. Similarly, tried for 2 different environments and the packages worked fine. So we deployed to Prod and it was working fine. Yesterday, I had to make a functional change for one package and so I made the change in the package (it is just changing a parameter in a SQL procedure execution task and not related to any variables) and tested in BIDS with 2 environments and it worked fine. As the change was not related to any environment change, we deployed only the updated package (not the associated config file) manually in Prod (i.e without the use of manifest). The config file which was used by the package previously and working fine in Prod remained unaltered. But when the package was executed, the package was pointing to QA and the package didn't read from the config file I believe. One reason may be, it is still using the last executed values which remains in the .dtsx file(can be checked by opening the file in a text editor) usually. But normally, when a package is executed, the values will be overwritten from config file. Guess it is not happening. What are the possible reasons for this? We have tested extensively switching between test environments and it does not show this behavior. We have encountered this in Prod environment twice now. Anyone else have experienced this and how have you resolved this?

  • Running multiple instances of tomcat in eclipse WTP

    - by lisak
    Hey, SCENARIO: 10 CATALINA_BASEs with own configuration (always the same port numbers 8080, but 10 different IP/hostnames on one host via virtual IPs). created a server in WTP and pick "Use the custom location" option in the server configuration in eclipse. New configuration files are created in workspace/Server/server-name-config/ Set up the server path and deploy path for my catalina base (not the internal .metadata one) After I started it, the new configuration files overwrote the original catalina-base/conf files I had there - I was glad, it should be like this but after I made changes in the eclipse config files workspace/Server/server-name-config/ and restarted the server, the changes didn't appeared in the original files in CATALINA_BASE/conf What the hell is that ? So I set the CATALINA_BASE/conf/server.xml to fault configuration and restarted tomcat from eclipse and it worked ! it took the configuration from /Server/server-name-config/server.xml Then I deleted CATALINA_BASE/conf/server.xml and it said that there is no server.xml in catalina base ! How is it possible ? I don't understand why eclipse WTP developers made so tight integration. There should by just symbolic links in /Server/server-name-config/ pointing to CATALINA_BASE/conf/ ... now there is a weird system which is totally unpredictable. The changes in /Server/server-name-config/ are not reflected in CATALINA_BASE/conf ... from where the standard bootstrap.jar or other catalina classloaders and classes build server, engine and other objects with particular setting. Moreover the CATALINA_BASEs could be used outside eclipse then. The second problem, I'm setting up various things in CATALINA_BASE/bin/startup.sh and setenv.sh which is easy cause I can use bash for it. Is then modifying VM parameters in the "Open launch configuration" settings the only way how to do it in eclipse ? Sorry for such a huge pile of questions, but I'm annoyed by the fact that it is much better to not use eclipse WTP for this because it is very poorly designed and it's a shame because this would spare me a lot of time. And using the internal .metadata/ instances it's even more terrifying way that the one I described.

  • Change userSettings during MSI installation

    - by pagetailor
    Hi, I am trying to modify the userSettings section (Properties.MyApp.Default) in the MyApp.exe.config file during the installation using an MSI installer. I basically implemented it like in this excellent article: http://raquila.com/software/configure-app-config-application-settings-during-msi-install/ The difference is that I am not editing the appSettings but the userSettings section. The problem is that although the code runs fine, the settings are not saved. After the installation the config file contains the old settings I use in my development environment. I also tried to override OnAfterInstall(System.Collections.IDictionary stateSaver) instead of Install(System.Collections.IDictionary stateSaver) but it doesn't make a difference. Here is the code that should change the config values: protected override void OnAfterInstall(IDictionary savedState) { base.OnAfterInstall(savedState); string targetDirectory = Context.Parameters["targetdir"]; string tvdbAccountID = Context.Parameters["TVDBACCID"]; // read other config elements... Properties.Settings.Default.Tvdb_AccountIdentifier = tvdbAccountID; // set other config elements Properties.Settings.Default.Save(); } Any idea how to persist these changes? I already read about Wix but that seems like an overkill to me. Thanks in advance!
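
    A likely explanation, worth verifying: Properties.Settings.Default.Save() inside an installer class writes a user.config for the account and executable running the MSI, not the MyApp.exe.config that was just copied to the target directory, so the deployed file keeps its development values. A hedged sketch of editing the deployed file directly (element names guessed from the XML that Visual Studio generates for userSettings):

      using System.IO;
      using System.Xml;

      // Inside OnAfterInstall, after reading targetDirectory and tvdbAccountID as above.
      string configPath = Path.Combine(targetDirectory, "MyApp.exe.config");

      var doc = new XmlDocument();
      doc.Load(configPath);

      // userSettings entries look like <setting name="..."><value>...</value></setting>
      XmlNode value = doc.SelectSingleNode(
          "//userSettings//setting[@name='Tvdb_AccountIdentifier']/value");
      if (value != null)
      {
          value.InnerText = tvdbAccountID;
      }

      doc.Save(configPath);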

  • Firebird Data Access Designer (DDEX) installation

    - by persian Dev
    hi i want to use firebird library , and i followed its instruction as below , but i get "The referenced component 'FirebirdSql.Data.Firebird' could not be found." error. instruction : Prerequisites Make sure that you have Visual Studio .NET 2005 Standard or higher edition. Express editions are not supported. Registry update Remember to update the path in FirebirdDDEXProviderPackageLess32.reg or FirebirdDDEXProviderPackageLess64.reg, places where to update it are marked %Path%. Install the .reg file into the registry. Machine.config update Add the following two sections to machine.config (located usually at C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config and C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\CONFIG\machine.config on 64-bit system). <configuration> ... <configSections> ... <section name="firebirdsql.data.firebirdclient" type="System.Data.Common.DbProviderConfigurationHandler, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> ... </configSections> ... <system.data> <DbProviderFactories> ... <add name="FirebirdClient Data Provider" invariant="FirebirdSql.Data.FirebirdClient" description=".Net Framework Data Provider for Firebird" type="FirebirdSql.Data.FirebirdClient.FirebirdClientFactory, FirebirdSql.Data.FirebirdClient, Version=%Version%, Culture=%Culture%, PublicKeyToken=%PublicKeyToken%" /> ... </DbProviderFactories> </system.data> ... </configuration> And subst: %Version% With the version of the provider assembly that you have in the GAC. %Culture% With the culture of the provider assembly that you have in the GAC. %PublicKeyToken% With the PublicKeyToken of the provider assembly that you have in the GAC.

  • Learning Hibernate: too many connections

    - by stivlo
    I'm trying to learn Hibernate and I wrote the simplest Person Entity and I was trying to insert 2000 of them. I know I'm using deprecated methods, I will try to figure out what are the new ones later. First, here is the class Person: @Entity public class Person { private int id; private String name; @Id @GeneratedValue(strategy = GenerationType.TABLE, generator = "person") @TableGenerator(name = "person", table = "sequences", allocationSize = 1) public int getId() { return id; } public void setId(int id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } } Then I wrote a small App class that insert 2000 entities with a loop: public class App { private static AnnotationConfiguration config; public static void insertPerson() { SessionFactory factory = config.buildSessionFactory(); Session session = factory.getCurrentSession(); session.beginTransaction(); Person aPerson = new Person(); aPerson.setName("John"); session.save(aPerson); session.getTransaction().commit(); } public static void main(String[] args) { config = new AnnotationConfiguration(); config.addAnnotatedClass(Person.class); config.configure("hibernate.cfg.xml"); //is the default already new SchemaExport(config).create(true, true); //print and execute for (int i = 0; i < 2000; i++) { insertPerson(); } } } What I get after a while is: Exception in thread "main" org.hibernate.exception.JDBCConnectionException: Cannot open connection Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections Now I know that probably if I put the transaction outside the loop it would work, but mine was a test to see what happens when executing multiple transactions. And since there is only one open at each time, it should work. I tried to add session.close() after the commit, but I got Exception in thread "main" org.hibernate.SessionException: Session was already closed So how to solve the problem?
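
    The "Too many connections" error most likely comes from calling config.buildSessionFactory() inside insertPerson(): every SessionFactory opens its own connection pool and none of them are ever closed, so 2000 factories exhaust MySQL's connection limit. A sketch of the usual shape, assuming hibernate.current_session_context_class is set to "thread" in hibernate.cfg.xml:

      SessionFactory factory = config.buildSessionFactory(); // build exactly once

      for (int i = 0; i < 2000; i++) {
          Session session = factory.getCurrentSession(); // bound to the current transaction
          session.beginTransaction();

          Person person = new Person();
          person.setName("John");
          session.save(person);

          // commit also closes the current session, so no explicit session.close() is needed
          session.getTransaction().commit();
      }

      factory.close(); // release the pool when the application shuts down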

  • javascript: what does this syntax mean?

    - by user1067138
    it is like this: (function () { //codes here })(); here is an example: (function () { var D = TED.EditorCore, E = TED.extend, A = TED.EditorInstanceManager, B = TED.augmentObject; window.TED["SimpleEditor"] = C; function C(F) { C.superclass.call(this, F) } C.defaultConfig = { height: "100px", width: "400px", //blablabla... flashNumLimit: 10, didaDelay: 300, imageWidthLimit: 570 }; E(C, D, { getContentLength: function () { return Math.ceil(this.filteHTML(this.editArea.innerHTML, ["img", "br"]).replace(/<img[^>]*>/gi, "mm").replace(/<br[^>]*>/gi, "m").replace(/&nbsp;/gi, "m").replace(/[^\x00-\xff]/g, "mm").length / 2) }, filteEditHTML: function () { return html = this.editArea.innerHTML.replace(/_moz_dirty=""/gi, "").replace(/\[/g, "[[-").replace(/\]/g, "-]]").replace(new RegExp("<\\/?(?:br[^>]*)>", "gi"), "[$1]").replace(new RegExp('<span([^>]*class="?at"?[^>]*)>', "gi"), "[span$1]").replace(new RegExp('<img([^>]*class="?(?:' + this.config.emptyClassName + "|" + this.config.smileyClassName + ')"?[^>]*)>', "gi"), "[img$1]").replace(/<[^>]*>/g, "").replace(/\[\[\-/g, "[").replace(/\-\]\]/g, "]").replace(new RegExp("\\[(/?(?:br|img|span)[^\\]]*)\\]", "gi"), "<$1>") }, filteSubmitHTML: function () { this.reLayout(); var G = this.editArea.innerHTML.replace(/_moz_dirty=""/gi, "").replace(/\[/g, "[[-").replace(/\]/g, "-]]").replace(new RegExp("<(/?(?:" + this.submitValidHTML.join("|") + ")[^>]*)>", "gi"), "[$1]").replace(new RegExp('<img([^>]*class="?(?:' + this.config.imageClassName + "|" + this.config.smileyClassName + "|" + this.config.flashClassName + "|" + this.config.musicClassName + ')"?[^>]*)>', "gi"), "[img$1]").replace(/<[^>]*>/g, "").replace(/\[\[\-/g, "[").replace(/\-\]\]/g, "]").replace(new RegExp("\\[(/?(?:" + this.submitValidHTML.join("|") + "|img)[^\\]]*)\\]", "gi"), "<$1>"); var F = document.createElement("div"); F.innerHTML = G; this.parseURL(F); return F.innerHTML } }); B(C, A) })(); what exactly does (funtion (){})(); do?
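
    In short: (function () { ... })() defines an anonymous function and calls it immediately, an "immediately-invoked function expression" (IIFE). Its only purpose is to give the enclosed code its own scope, so helpers such as D, E, A and B above stay private instead of becoming globals. A minimal illustration:

      var counter = (function () {
          var count = 0;                 // private: visible only inside the IIFE
          return {
              increment: function () { return ++count; }
          };
      })();                              // the trailing () runs the function right away

      counter.increment();               // 1
      console.log(typeof count);         // "undefined" - count never leaked into global scope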

  • How do I correctly detect presence of FLTK using SCONS?

    - by James Morris
    I'm trying to build an application I've downloaded which uses the SCONS "make replacement" and the Fast Light Tool Kit Gui. The SConstruct code to detect the presence of fltk is: guienv = Environment(CPPFLAGS = '') guiconf = Configure(guienv) if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h','c'): print 'Did not find liblo for OSC, exiting!' Exit(1) if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H','c++'): print 'Did not find FLTK for the gui, exiting!' Exit(1) Unfortunately, on my (Gentoo Linux) system, and many others (Linux distributions) this can be quite troublesome if the package manager allows the simultaneous install of FLTK-1 and FLTK-2. Being Idealistic, I attempted to modify the SConstruct file to use fltk-config --cflags and fltk-config --ldflags (or fltk-config --libs might be better than ldflags) by adding them like so: guienv.Append(CPPPATH = os.popen('fltk-config --cflags').read()) guienv.Append(LIBPATH = os.popen('fltk-config --ldflags').read()) But this causes the test for liblo to fail! Looking in config.log shows how it failed: scons: Configure: Checking for C library lo... gcc -o .sconf_temp/conftest_4.o -c "-I/usr/include/fltk-1.1 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT" gcc: no input files scons: Configure: no How should this really be done?
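
    A guess at the cause: os.popen(...).read() returns the whole flag string, trailing newline included, and CPPPATH/LIBPATH expect lists of directories rather than raw compiler flags, so the quoted blob is handed to gcc as one bogus argument (visible in the config.log excerpt above). SCons can parse *-config output itself; a sketch:

      guienv = Environment(CPPFLAGS = '')

      # Let SCons split the fltk-config output into CPPPATH, CCFLAGS, LIBS, LINKFLAGS, ...
      # instead of appending one big quoted string to CPPPATH/LIBPATH.
      guienv.ParseConfig('fltk-config --cxxflags --ldflags')

      guiconf = Configure(guienv)
      if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h', 'c'):
          print 'Did not find liblo for OSC, exiting!'
          Exit(1)
      if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H', 'c++'):
          print 'Did not find FLTK for the gui, exiting!'
          Exit(1)
      guienv = guiconf.Finish()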

  • Rails with passenger only runs in development

    - by Ignace
    Hey, I have a problem on one of our webservers. I'll try to explain it as clearly as possible, but I'm not 100% aware of all the configuration of the server. There are 2 sites running next to each other (blcc_preprod and blcc_prod), so in the 'sites-enabled' directory of Apache I have a file 'blcc' like this: <VirtualHost *:80> DocumentRoot /opt/dn/blcc/www RailsBaseURI /blcc_preprod RailsBaseURI /blcc_prod RailsEnv production </VirtualHost> My config/environments/production.rb (from both) looks like this (I removed all comments, because it messes with the layout): config.cache_classes = true config.action_controller.consider_all_requests_local = false config.action_controller.perform_caching = true config.action_view.cache_template_loading = true config.log_level = :debug The weird thing is that my production log dates from months ago, so something is really wrong. Could someone help? If you need more info, just ask. Thanks! Edit: Error.log from Apache shows the normal output for an event to the server (the situation here is that the webserver plugs in to another business (java) server via a framework). Access.log is empty. Content from other_vhosts_access.log after we surf to ip/blcc_preprod is the following: blcc.localdomain:80 192.168.21.194 - - [25/May/2010:08:33:04 +0200] "GET/ blcc_preprod HTTP/1.1" 500 594 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"

  • photo upload with codeigniter

    Hi friends, I know there are many tutorials online, but I could not make them work :( maybe something particularly wrong with my system :/ My Controller localpath is: /localhost/rl/applications/backend/controller/ Controller: function do_upload() { $config['upload_path'] = './uploads/'; $config['allowed_types'] = 'gif|jpg|png'; $config['max_size'] = '100'; $config['max_width'] = '1024'; $config['max_height'] = '768'; $this->load->library('upload', $config); if ( ! $this->upload->do_upload()) { $error = array('error' => $this->upload->display_errors()); $this->load->view('add_image', $error); } else { $data = array('upload_data' => $this->upload->data()); $data['id'] = $this->input->post['id_work']; $this->load->view('add_image', $data); } } My View localpath is: /localhost/rl/applications/backend/view/ View: echo form_open_multipart('do_upload'); <ul class="frm"> <li><label>File: *</label><input type="file" name="userfile" class="frmlmnt" size="50" /></li> <li><label></label><input type="submit" class="btn" value="Upload" /></li> </ul> </form> Maybe I do something wrong with path
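
    Two things in the snippet above are worth double-checking (guesses based on stock CodeIgniter, since the full project layout isn't shown): the form action needs the controller name as well as the method, and $this->input->post is a method, not an array. A sketch:

      // View: point the form at controller/method, not just the method name
      echo form_open_multipart('gallery/do_upload');   // 'gallery' is a placeholder controller name

      // Controller: './uploads/' is resolved relative to index.php, not to the controller file,
      // so the folder must exist next to index.php and be writable
      $config['upload_path'] = './uploads/';

      // post() is a method call; $this->input->post['id_work'] is not valid usage
      $data['id'] = $this->input->post('id_work');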

  • How to store session values with Node.js and mongodb?

    - by Tirithen
    How do I get sessions working with Node.js, [email protected] and mongodb? I'm now trying to use connect-mongo like this: var config = require('../config'), express = require('express'), MongoStore = require('connect-mongo'), server = express.createServer(); server.configure(function() { server.use(express.logger()); server.use(express.methodOverride()); server.use(express.static(config.staticPath)); server.use(express.bodyParser()); server.use(express.cookieParser()); server.use(express.session({ store: new MongoStore({ db: config.db }), secret: config.salt })); }); server.configure('development', function() { server.use(express.errorHandler({ dumpExceptions: true, showStack: true })); }); server.configure('production', function() { server.use(express.errorHandler()); }); server.set('views', __dirname + '/../views'); server.set('view engine', 'jade'); server.listen(config.port); I'm then, in a server.get callback trying to use req.session.test = 'hello'; to store that value in the session, but it's not stored between the requests. It probobly takes something more that this to store session values, how? Is there a better documented module than connect-mongo?

  • How do I write an RSpec test to unit-test this interesting metaprogramming code?

    - by Kyle Kaitan
    Here's some simple code that, for each argument specified, will add specific get/set methods named after that argument. If you write attr_option :foo, :bar, then you will see #foo/foo= and #bar/bar= instance methods on Config: module Configurator class Config def initialize() @options = {} end def self.attr_option(*args) args.each do |a| if not self.method_defined?(a) define_method "#{a}" do @options[:"#{a}"] ||= {} end define_method "#{a}=" do |v| @options[:"#{a}"] = v end else throw Exception.new("already have attr_option for #{a}") end end end end end So far, so good. I want to write some RSpec tests to verify this code is actually doing what it's supposed to. But there's a problem! If I invoke attr_option :foo in one of the test methods, that method is now forever defined in Config. So a subsequent test will fail when it shouldn't, because foo is already defined: it "should support a specified option" do c = Configurator::Config c.attr_option :foo # ... end it "should support multiple options" do c = Configurator::Config c.attr_option :foo, :bar, :baz # Error! :foo already defined # by a previous test. # ... end Is there a way I can give each test an anonymous "clone" of the Config class which is independent of the others?
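
    One way to get that isolation, sketched below on the assumption that nothing else needs the generated methods to land on Config itself: call attr_option on an anonymous subclass created inside each example, so the methods die with the subclass and never pollute Configurator::Config.

      it "should support multiple options" do
        config_class = Class.new(Configurator::Config)   # throwaway subclass per example
        config_class.attr_option :foo, :bar, :baz        # defines the methods on the subclass only

        config = config_class.new
        config.foo = 42
        config.foo.should == 42                          # old-style 'should' syntax, to match the post
      end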

  • template files evaluation in python

    - by saminny
    I am trying to use python for translating a set of templates to a set of configuration files based on values taken from a main configuration file. However, I am having certain issues. Consider the following example of a template file. file1.cfg.template %(CLIENT1)s %(HOST1)s %(PORT1)d C %(COMPID1)s %(CLIENT2)s %(HOST2)s %(PORT2)d C %(COMPID2)s This file contains an entry for each client. There are hundreds of config files like this and I don't want to have logic for each type of config file. Python should do the replacements and generate config files automatically given a set of global values read from a main xml config file. However, in the above example, if CLIENT2 does not exist, how do I delete that line? I expect Python would generate the config file using something like this: os.open("file1.cfg.template").read() % myhash where myhash is hash of configuration parameters from the main config file which may not contain CLIENT2 at all. In the case it does not contain CLIENT2, I want that line to disappear from the file. Is it possible to insert some 'IF' block in the file and have python evaluate it? Thanks for your help. Any suggestions most welcome.
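
    If switching to a template engine with real conditionals (Jinja2, Mako, ...) is not an option, one low-tech sketch is to expand the template line by line and drop any line whose placeholders are missing from the hash:

      def render(template_path, values):
          output = []
          with open(template_path) as template:
              for line in template:
                  try:
                      output.append(line % values)
                  except KeyError:
                      pass  # a placeholder such as CLIENT2 is absent, so skip the whole line
          return "".join(output)

      config_text = render("file1.cfg.template", myhash)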

  • Could this be considered a well-written PHP5 class?

    - by Ben Dauphinee
    I have been learning OOP principals on my own for a while, and taken a few cracks at writing classes. What I really need to know now is if I am actually using what I have learned correctly, or if I could improve as far as OOP is concerned. I have chopped a massive portion of code out of a class that I have been working on for a while now, and pasted it here. To all you skilled and knowledgeable programmers here I ask: Am I doing it wrong? class acl extends genericAPI{ // -- Copied from genericAPI class protected final function sanityCheck($what, $check, $vars){ switch($check){ case 'set': if(isset($vars[$what])){return(1);}else{return(0);} break; } } // --------------------------------- protected $db = null; protected $dataQuery = null; public function __construct(Zend_Db_Adapter_Abstract $db, $config = array()){ $this->db = $db; if(!empty($config)){$this->config = $config;} } protected function _buildQuery($selectType = null, $vars = array()){ // Removed switches for simplicity sake $this->dataQuery = $this->db->select( )->from( $this->config['table_users'], array('tf' => '(CASE WHEN count(*) > 0 THEN 1 ELSE 0 END)') )->where( $this->config['uidcol'] . ' = ?', $vars['uid'] ); } protected function _sanityRun_acl($sanitycheck, &$vars){ switch($sanitycheck){ case 'uid_set': if(!$this->sanityCheck('uid', 'set', $vars)){ throw new Exception(ERR_ACL_NOUID); } $vars['uid'] = settype($vars['uid'], 'integer'); break; } } private function user($action = null, $vars = array()){ switch($action){ case 'exists': $this->_sanityRun_acl('uid_set', $vars); $this->_buildQuery('user_exists_idcheck', $vars); return($this->db->fetchOne($this->dataQuery->__toString())); break; } } public function user_exists($uid){ return($this->user('exists', array('uid' => $uid))); } } $return = $acl_test->user_exists(1);

  • C#: why does this unit test have this strange behaviour?

    - by 5YrsLaterDBA
    I have a class to encrypt the connectionString. public class SKM { private string connStrName = "AndeDBEntities"; internal void encryptConnStr() { if(isConnStrEncrypted()) return; ... } private bool isConnStrEncrypted() { bool status = false; // Open app.config of executable. System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); // Get the connection string from the app.config file. string connStr = config.ConnectionStrings.ConnectionStrings[connStrName].ConnectionString; status = !(connStr.Contains("provider")); Log.logItem(LogType.DebugDevelopment, "isConnStrEncrypted", "SKM::isConnStrEncrypted()", "isConnStrEncrypted=" + status); return status; } } Above code works fine in my application. But not in my unit test project. In my unit test project, I test the encryptConnStr() method. it will call isConnStrEncrypted() method. Then exception (null pointer) will be thrown at this line: string connStr = config.ConnectionStrings.ConnectionStrings[connStrName].ConnectionString; I have to use index like this to pass the unit test: string connStr = config.ConnectionStrings.ConnectionStrings[0].ConnectionString; I remember it worked several days ago at the time I added above unit test. But now it give me an error. The unit test is not integrated with our daily auto build yet. We only have ONE connectionStr. It works with product but not in unit test. Don't know why. Anybody can explain to me?
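
    A plausible explanation, worth verifying: ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None) opens the configuration of the process that is currently running. Under a test runner that is the test host's config, which has no "AndeDBEntities" entry, so the indexer returns null and dereferencing .ConnectionString throws; indexing by 0 only "works" because it picks up whatever connection string the host inherits from machine.config. The usual fix is to give the test project its own App.config carrying the same entry, roughly:

      <!-- App.config of the unit test project; the connection string value is a placeholder -->
      <configuration>
        <connectionStrings>
          <add name="AndeDBEntities"
               connectionString="copy the EF connection string from the application's app.config here"
               providerName="System.Data.EntityClient" />
        </connectionStrings>
      </configuration>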

  • Multiple file upload does not work in CI

    - by sabuj
    Hi, I have used a jQuery plugin to upload multiple files with CI. It works fine for one file, but if I try to upload more than one file it does not work; it gives a result like this: "menu_back9.pngmenu_back9.png.jpg". function do_upload() { $config['upload_path'] = './uploads/'; // server directory $config['allowed_types'] = 'gif|jpg|png'; // by extension, will check for whether it is an image $config['max_size'] = '1000'; // in kb $config['max_width'] = '1024'; $config['max_height'] = '768'; $this->load->library('upload', $config); $this->load->library('Multi_upload'); $files = $this->multi_upload->go_upload(); if ( ! $files ) { $error = array('error' => $this->upload->display_errors()); $this->load->view('admin/add_digital_product', $error); } else { foreach($files as $file) { $pic_name = $file['name']; echo $pic_name; } exit; $data = array( 'dg_image_1'=>$picture, 'dgproduct_name'=>$this->input->post('dgproduct_name',TRUE), 'dgproduct_description'=>$this->input->post('dgproduct_description',TRUE), 'url_additional'=>$this->input->post('url_additional',TRUE), 'url_stored'=>$this->input->post('url_stored',TRUE), 'delivery_format'=>$this->input->post('delivery_format',TRUE), 'item_size'=>$this->input->post('item_size',TRUE), 'price'=>$this->input->post('price',TRUE), 'item_code'=>$this->input->post('item_code',TRUE), 'payment_pro_code'=>$this->input->post('payment_pro_code',TRUE), 'delivery_time'=>$this->input->post('delivery_time',TRUE), 'thankyou_message'=>$this->input->post('thankyou_message',TRUE), ); $this->db->insert('sm_digital_product',$data); redirect(site_url().'admin/admins'); } } I used the above code. If anybody knows what is wrong, please tell me; I am stuck there.

  • PHP SDK for Facebook: Uploading an Image for an Event created using the Graph API

    - by wenbert
    Can anyone shed some light on my problem? <?php $config = array(); $config['appId'] = "foo"; $config['secret'] = "bar"; $config['cookie'] = true; $config['fileUpload'] = true; $facebook = new Facebook($config); $eventParams = array( "privacy_type" => $this->request->data['Event']['privacy'], "name" => $this->request->data['Event']['event'], "description" => $this->request->data['Event']['details'], "start_time" => $this->request->data['Event']['when'], "country" => "NZ" ); //around 300x300 pixels //I have set the permissions to Everyone $imgpath = "C:\\Yes\\Windows\\Path\\Photo_for_the_event_app.jpg"; $eventParams["@file.jpg"] = "@".$imgpath; $fbEvent = $facebook->api("me/events", "POST", $eventParams); var_dump($fbEvent); //I get the event id I also have this in my "scope" when the user is asked to Allow the app to post on his behalf: user_about_me,email,publish_stream,create_event,photo_upload This works. It creates the event with all the details I have specified. EXCEPT for the event image. I have been to most of Stackoverflow posts related to my problem but all of them are not working for me. (EG: http://stackoverflow.com/a/4245260/66767) I also do not get any error. Any ideas? THanks!

  • Fixing up Visual Studio's gitignore, using IFix

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/13/fixing-up-visual-studiorsquos-gitignore--using-ifix.aspxDownload tool Is there anything wrong with the built-in Visual Studio gitignore ???? Yes, there is !  First, some background: When you set up a git repo, it should be small and not contain anything not really needed.  One thing you should not have in your git repo is binary files. These binary files may come from two sources, one is the output files, in the bin and obj folders.  If you have a  gitignore file present, which you should always have (!!), these folders are excluded by the standard included file (the one included when you choose Team Explorer/Settings/GitIgnore – Add.) The other source are the packages folder coming from your NuGet setup.  You do use NuGet, right ?  Of course you do !  But, that gitignore file doesn’t have any exclude clause for those folders.  You have to add that manually.  (It will very probably be included in some upcoming update or release).  This is one thing that is missing from the built-in gitignore. To add those few lines is a no-brainer, you just include this: # NuGet Packages packages/* *.nupkg # Enable "build/" folder in the NuGet Packages folder since # NuGet packages use it for MSBuild targets. # This line needs to be after the ignore of the build folder # (and the packages folder if the line above has been uncommented) !packages/build/ Now, if you are like me, and you probably are, you add git repo’s faster than you can code, and you end up with a bunch of repo’s, and then start to wonder: Did I fix up those gitignore files, or did I forget it? The next thing you learn, for example by reading this blog post, is that the “standard” latest Visual Studio gitignore file exist at https://github.com/github/gitignore, and you locate it under the file name VisualStudio.gitignore.  Here you will find all the new stuff, for example, the exclusion of the roslyn ide folders was commited on May 24th.  So, you think, all is well, Visual Studio will use this file …..     I am very sorry, it won’t. Visual Studio comes with a gitignore file that is baked into the release, and that is by this time “very old”.  The one at github is the latest.  The included gitignore miss the exclusion of the nuget packages folder, it also miss a lot of new stuff, like the Roslyn stuff. So, how do you fix this ?  … note .. while we wait for the next version… You can manually update it for every single repo you create, which works, but it does get boring after a few times, doesn’t it ? IFix Enter IFix ,  install it from here. IFix is a command line utility (and the installer adds it to the system path, you might need to reboot), and one of the commands is gitignore If you run it from a directory, it will check and optionally fix all gitignores in all git repo’s in that folder or below.  So, start up by running it from your C:/<user>/source/repos folder. To run it in check mode – which will not change anything, just do a check: IFix  gitignore --check What it will do is to check if the gitignore file is present, and if it is, check if the packages folder has been excluded.  If you want to see those that are ok, add the --verbose command too.  The result may look like this: Fixing missing packages Let us fix a single repo by adding the missing packages structure,  using IFix --fix We first check, then fix, then check again to verify that the gitignore is correct, and that the “packages/” part has been added. 
If we open up the .gitignore, we see that the block shown below has been added to the end of the .gitignore file.   Comparing and fixing with latest standard Visual Studio gitignore (from github) Now, this tells you if you miss the nuget packages folder, but what about the latest gitignore from github ? You can check for this too, just add the option –merge (why this is named so will be clear later down) So, IFix gitignore --check –merge The result may come out like this  (sorry no colors, not got that far yet here): As you can see, one repo has the latest gitignore (test1), the others are missing either 57 or 150 lines.  IFix has three ways to fix this: --add --merge --replace The options work as follows: Add:  Used to add standard gitignore in the cases where a .gitignore file is missing, and only that, that means it won’t touch other existing gitignores. Merge: Used to merge in the missing lines from the standard into the gitignore file.  If gitignore file is missing, the whole standard will be added. Replace: Used to force a complete replacement of the existing gitignore with the standard one. The Add and Replace options can be used without Fix, which means they will actually do the action. If you combine with --check it will otherwise not touch any files, just do a verification.  So a Merge Check will  tell you if there is any difference between the local gitignore and the standard gitignore, a Compare in effect. When you do a Fix Merge it will combine the local gitignore with the standard, and add what is missing to the end of the local gitignore. It may mean some things may be doubled up if they are spelled a bit differently.  You might also see some extra comments added, but they do no harm. Init new repo with standard gitignore One cool thing is that with a new repo, or a repo that is missing its gitignore, you can grab the latest standard just by using either the Add or the Replace command, both will in effect do the same in this case. So, IFix gitignore --add will add it in, as in the complete example below, where we set up a new git repo and add in the latest standard gitignore: Notes The project is open sourced at github, and you can also report issues there.

  • SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff)

    - by jamiet
    Recently I noticed a tweet from notable SQL Server author and community dude-at-large Steve Jones in which he asked how many SQL Server developers were putting their SQL Server source code (i.e. DDL) under source control (I’m paraphrasing because I can’t remember the exact tweet and Twitter’s search functionality is useless). The question surprised me slightly as I thought a more pertinent question would be “how many SQL Server developers are not using source control?” because I have been doing just that for many years now and I simply assumed that use of source control is a given in this day and age. Then I started thinking about it. “Perhaps I’m wrong” I pondered, “perhaps the SQL Server folks that do use source control in their day-to-day jobs are in the minority”. So, dear reader, I’m interested to know a little bit more about your use of source control. Are you putting your SQL Server code into a source control system? If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? Why did you make those particular software choices? Any interesting anecdotes to share in regard to your use of source control and SQL Server? To encourage you to contribute I have five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect to give away to what I consider to be the five best replies (“best” is totally subjective of course but this is my blog so my decision is final ), if you want to be considered don’t forget to leave contact details; email address, Twitter handle or similar will do. To start you off and to perhaps get the brain cells whirring, here are my answers to the questions above: Are you putting your SQL Server code into a source control system? As I think I’ve already said…yes. Always. If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? I move around a lot between many clients so it changes on a fairly regular basis; my current client uses Team Foundation Server (aka TFS) and as part of a separate project is trialing the use of Team Foundation Service. I have used SVN extensively in the past which I am a fan of (I generally prefer it to TFS) and am trying to get my head around Git by using it for ObjectStorageHelper. What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? On my current project, Team Explorer. In the past I have used Tortoise to connect to SVN. Why did you make those particular software choices? I generally use whatever the client uses and given that I work with SQL Server I find that the majority of my clients use TFS, I guess simply because they are Microsoft development shops. Any interesting anecdotes to share in regard to your use of source control and SQL Server? Not an anecdote as such but I am going to share some frustrations about TFS. In many ways TFS is a great product because it integrates many separate functions (source control, work item tracking, build agents) into one whole and I’m firmly of the opinion that that is a good thing if for no reason other than being able to associate your check-ins with a work-item. However, like many people there are aspects to TFS source control that annoy me day-in, day-out. 
Chief among them has to be the fact that it uses a file’s read-only property to determine if a file should be checked-out or not and, if it determines that it should, it will happily do that check-out on your behalf without you even asking it to. I didn’t realise how ridiculous this was until I first used SVN about three years ago – with SVN you make any changes you wish and then use your source control client to determine which files have changed and thus be checked-in; the notion of “check-out” doesn’t even exist. That sounds like a small thing but you don’t realise how liberating it is until you actually start working that way. Hoping to hear some more anecdotes and opinions in the comments. Remember….free software is up for grabs! @jamiet 

  • A Generic, IDisposable WCF Service Client

    - by Steve Wilkes
    WCF clients need to be cleaned up properly, but as they're usually auto-generated they don't implement IDisposable. I've been doing a fair bit of WCF work recently, so I wrote a generic WCF client wrapper which effectively gives me a disposable service client. The ServiceClientWrapper is constructed using a WebServiceConfig instance, which contains a Binding, an EndPointAddress, and whether the client should ignore SSL certificate errors - pretty useful during testing! The Binding can be created based on configuration data or entirely programmatically - that's not the client's concern. Here's the service client code: using System; using System.Net; using System.Net.Security; using System.ServiceModel; public class ServiceClientWrapper<TService, TChannel> : IDisposable     where TService : ClientBase<TChannel>     where TChannel : class {     private readonly WebServiceConfig _config;     private TService _serviceClient;     public ServiceClientWrapper(WebServiceConfig config)     {         this._config = config;     }     public TService CreateServiceClient()     {         this.DisposeExistingServiceClientIfRequired();         if (this._config.IgnoreSslErrors)         {             ServicePointManager.ServerCertificateValidationCallback =                 (obj, certificate, chain, errors) => true;         }         else         {             ServicePointManager.ServerCertificateValidationCallback =                 (obj, certificate, chain, errors) => errors == SslPolicyErrors.None;         }         this._serviceClient = (TService)Activator.CreateInstance(             typeof(TService),             this._config.Binding,             this._config.Endpoint);         if (this._config.ClientCertificate != null)         {             this._serviceClient.ClientCredentials.ClientCertificate.Certificate =                 this._config.ClientCertificate;         }         return this._serviceClient;     }     public void Dispose()     {         this.DisposeExistingServiceClientIfRequired();     }     private void DisposeExistingServiceClientIfRequired()     {         if (this._serviceClient != null)         {             try             {                 if (this._serviceClient.State == CommunicationState.Faulted)                 {                     this._serviceClient.Abort();                 }                 else                 {                     this._serviceClient.Close();                 }             }             catch             {                 this._serviceClient.Abort();             }             this._serviceClient = null;         }     } } A client for a particular service can then be created something like this: public class ManagementServiceClientWrapper :     ServiceClientWrapper<ManagementServiceClient, IManagementService> {     public ManagementServiceClientWrapper(WebServiceConfig config)         : base(config)     {     } } ...where ManagementServiceClient is the auto-generated client class, and the IManagementService is the auto-generated WCF channel class - and used like this: using(var serviceClientWrapper = new ManagementServiceClientWrapper(config)) {     serviceClientWrapper.CreateServiceClient().CallService(); } The underlying WCF client created by the CreateServiceClient() will be disposed after the using, and hey presto - a disposable WCF service client.

  • Problems installing clamcour

    - by user10778
    Hello People, when i try to install clamcour from terminal, it gives me this error, somebody can help me? calmcourdir# ./configure checking for libraries containing socket functions... -lc checking for socket... yes checking for bind... yes checking for listen... yes checking for accept... yes checking for shutdown... yes checking for socklen_t... yes checking for struct sockaddr_un.sun_len... no System log functions checking syslog.h usability... yes checking syslog.h presence... yes checking for syslog.h... yes checking for openlog... yes checking for syslog... yes checking for closelog... yes Time functions checking whether time.h and sys/time.h may both be included... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for localtime_r... yes checking for strftime... yes checking for unistd.h... (cached) yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking for sysconf... yes BZip2 support checking bzlib.h usability... no checking bzlib.h presence... no checking for bzlib.h... no checking for BZ2_bzWriteOpen in -lbz2... no GZip support checking zlib.h usability... no checking zlib.h presence... no checking for zlib.h... no checking for gzopen in -lz... no LibClamAV support checking for /usr/bin/clamav-config... no checking for /usr/local/bin/courier-config... no checking for /usr/clamav/bin/clamav-config... no checking for /usr/local/clamav/bin/clamav-config... no ./configure: line 25234: : command not found configure: error: Cannot find clamav-config
