Search Results

Search found 16899 results on 676 pages for 'local'.

Page 249/676

  • Value gets changed even though I'm not using a reference

    - by atch
    In code: struct Rep { const char* my_data_; Rep* my_left_; Rep* my_right_; Rep(const char*); }; typedef Rep& list; ostream& operator<<(ostream& out, const list& a_list) { int count = 0; list tmp = a_list;//----->HERE I'M CREATING A LOCAL COPY for (;tmp.my_right_;tmp = *tmp.my_right_) { out << "Object no: " << ++count << " has name: " << tmp.my_data_; //tmp = *tmp.my_right_; } return out;//------>HERE a_list is changed } I thought that if I created a local copy of the a_list object I would be operating on a completely separate object. Why isn't that the case? Thanks.
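
    A hedged sketch, reusing the names from the question: typedef Rep& list; makes list an alias for a reference type, so list tmp = a_list; binds tmp to the caller's node instead of copying it, and every assignment to tmp writes through to a_list. Walking the list through a local pointer (or declaring tmp as a plain Rep) leaves the argument untouched:

      // assumes struct Rep and <ostream> as declared in the question
      ostream& operator<<(ostream& out, const Rep& a_list) {
          int count = 0;
          const Rep* tmp = &a_list;                   // local pointer; a_list itself is never reassigned
          for (; tmp->my_right_; tmp = tmp->my_right_)
              out << "Object no: " << ++count << " has name: " << tmp->my_data_;
          return out;
      }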

    Read the article

  • Can't get syntax on to work in my gvim.

    - by user198553
    (I'm new to Linux and Vim, and I'm trying to learn Vim, but I'm having some issues with it that I can't seem to fix.) I'm on a Linux installation (Ubuntu 8.04) that I can't update, using Vim 7.1.138. My Vim installation is in /usr/share/vim/vim71/. My .vimrc file is in /home/user/.vimrc, as follows: fun! MySys() return "linux" endfun set runtimepath=~/.vim,$VIMRUTNTIME source ~/.vim/.vimrc And then, in my /home/user/.vim/.vimrc: " =============== GENERAL CONFIG ============== set nocompatible syntax on " =============== ENCODING AND FILE TYPES ===== set encoding=utf8 set ffs=unix,dos,mac " =============== INDENTING =================== set ai " Automatically set the indent of a new line (local to buffer) set si " smartindent (local to buffer) " =============== FONT ======================== " Set font according to system if MySys() == "mac" set gfn=Bitstream\ Vera\ Sans\ Mono:h13 set shell=/bin/bash elseif MySys() == "windows" set gfn=Bitstream\ Vera\ Sans\ Mono:h10 elseif MySys() == "linux" set gfn=Inconsolata\ 14 set shell=/bin/bash endif " =============== COLORS ====================== colorscheme molokai " ============== PLUGINS ====================== " -------------- NERDTree --------------------- :noremap ,n :NERDTreeToggle<CR> " =============== DIRECTORIES ================= set backupdir=~/.backup/vim set directory=~/.swap/vim The fact is, the command syntax on is not working, neither in vim nor in gvim. And the strange thing is: if I try to set the syntax using the gvim toolbar, it works. Then, in normal mode in gvim, after activating it via the toolbar, :syntax off works, but right after that :syntax on doesn't work again! What's going on? Am I missing something?
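
    A hedged guess, not a confirmed diagnosis: the runtimepath line quoted above spells $VIMRUNTIME as $VIMRUTNTIME, which would drop Vim's bundled runtime (including its syntax files) from the search path and make :syntax on silently do nothing. A minimal corrected sketch for ~/.vimrc:

      " keep Vim's own runtime files reachable, then your personal files
      set runtimepath=~/.vim,$VIMRUNTIME
      filetype plugin indent on
      syntax on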

    Read the article

  • Getting exception when trying to monkey patch pymongo.connection._Pool

    - by Creotiv
    I use pymongo 1.9 on Ubuntu 10.10 with Python 2.6.6. When I try to monkey patch pymongo.connection._Pool I get an error on connection: AutoReconnect: could not find master/primary. But when I change the _Pool class in the pymongo.connection module itself, it works fine. Even if I copy the _Pool implementation from the pymongo.connection module and try to monkey patch with that same code, it still gives the same exception. I need to remove threading.local from the _Pool class, because I use gevent and I need to implement the pool for all mongo connections (for all threads). I use this code: import pymongo class GPool: """A simple connection pool. Uses thread-local socket per thread. By calling return_socket() a thread can return a socket to the pool. Right now the pool size is capped at 10 sockets - we can expose this as a parameter later, if needed. """ # Non thread-locals __slots__ = ["sockets", "socket_factory", "pool_size","sock"] #sock = None def __init__(self, socket_factory): self.pool_size = 10 if not hasattr(self,"sock"): self.sock = None self.socket_factory = socket_factory if not hasattr(self, "sockets"): self.sockets = [] def socket(self): # we store the pid here to avoid issues with fork / # multiprocessing - see # test.test_connection:TestConnection.test_fork for an example # of what could go wrong otherwise pid = os.getpid() if self.sock is not None and self.sock[0] == pid: return self.sock[1] try: self.sock = (pid, self.sockets.pop()) except IndexError: self.sock = (pid, self.socket_factory()) return self.sock[1] def return_socket(self): if self.sock is not None and self.sock[0] == os.getpid(): # There's a race condition here, but we deliberately # ignore it. It means that if the pool_size is 10 we # might actually keep slightly more than that. if len(self.sockets) < self.pool_size: self.sockets.append(self.sock[1]) else: self.sock[1].close() self.sock = None pymongo.connection._Pool = GPool
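
    Two hedged observations, not a confirmed fix: in Python 2 a bare "class GPool:" is an old-style class, so the __slots__ declaration is silently ignored, and the patch only affects Connection objects created after the assignment runs. A minimal sketch of both points (assuming pymongo 1.9 reads the module-global _Pool when the connection is built):

      import os
      import pymongo
      import pymongo.connection

      class GPool(object):                            # new-style class, so __slots__ is honoured
          __slots__ = ["sockets", "socket_factory", "pool_size", "sock"]
          def __init__(self, socket_factory):
              self.sock = None
              self.sockets = []
              self.socket_factory = socket_factory
              self.pool_size = 10
          # socket() / return_socket() exactly as in the question

      pymongo.connection._Pool = GPool                # patch first ...
      conn = pymongo.Connection("localhost", 27017)   # ... then create the connection that uses it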

    Read the article

  • Check for modifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (which uses Windows Authentication). Both servers are hosted on the same system (OS: Microsoft Windows Server 2003 SP2). When I force-build the project using CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message. When I check the build report, it says the following: BUILD EXCEPTION Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: svn: OPTIONS of 'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source': **Server certificate verification failed: issuer is not trusted** (https://sp-ci.sbsnetwork.local:8443). Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log **sameUrlAbove** -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}" --verbose --xml --username ccnetadmin --password cruise --non-interactive --no-auth-cache at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications (IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request) My SourceControl node in the ccnet.config is as shown below: <sourcecontrol type="svn"> <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable> <trunkUrl> check out url </trunkUrl> <workingDirectory> C:\ProjectWorkingDirectories\IntranetPortal\Source </workingDirectory> <username> ccnetadmin </username> <password> cruise </password> </sourcecontrol> Can anyone suggest how to avoid this error?
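
    A hedged sketch, assuming Subversion 1.6+ and that CruiseControl.NET runs under a dedicated Windows service account: since CC.NET builds the svn command line itself, the practical route is to cache an accepted certificate once for that account; the --trust-server-cert flag is shown only for cases where you control the command line yourself.

      REM Run once in a console started as the CC.NET service account and answer "p" (accept permanently):
      "C:\Program Files\VisualSVN Server\bin\svn.exe" list "https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source" --username ccnetadmin

      REM Subversion 1.6+ can also skip the prompt when the command line is under your control:
      svn log "https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source" --non-interactive --trust-server-cert --username ccnetadmin --password cruise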

    Read the article

  • Running migration on server when deploying with capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, Passenger, MySQL and a git repository. What is the easiest way to do this? Update: here's my deploy script: set :application, "example.com" set :domain, "example.com" set :scm, :git set :repository, "[email protected]:project.git" set :use_sudo, false set :deploy_to, "/var/www/example.com" role :web, domain role :app, domain role :db, "localhost", :primary => true after "deploy", "deploy:migrate" When I run cap deploy, everything is working fine until it tries to run the migration. Here's the error I'm getting: ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2)) connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))) This is why I need to run the migration from the server and not from my local machine. Any ideas?
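
    A hedged guess at the cause, with a sketch: role :db, "localhost" makes Capistrano open an SSH connection to the machine that runs the cap command, which is what raises the ECONNREFUSED for localhost; pointing the db role at the server instead lets deploy:migrate run over SSH on the box where MySQL actually lives.

      role :web, domain
      role :app, domain
      role :db,  domain, :primary => true   # migrations now run on the server itself

      after "deploy", "deploy:migrate"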

    Read the article

  • In-browser trusted application in Silverlight 5

    - by Philippe
    With the new Silverlight 5, we can now have an in-browser elevated-trust application. However, I'm experiencing some problems deploying the application. When I test the application from Visual Studio, everything works fine because it automatically grants every right if the website is hosted on the local machine (localhost, 127.0.0.1). I saw on MSDN that I have to follow 3 steps to make it work on any website: Sign the XAP - I did it following the Microsoft tutorial. Install the Trusted Publishers certificate store - I did that too, following the Microsoft tutorial. Add a registry key with the value AllowElevatedTrustAppsInBrowser. The third step is the one I am the most unsure about. Do we need to add this registry key on the local machine or on the server? Is there any automatic function in Silverlight to add this key, or is it better to make a batch file? Even with those 3 steps, the application is still not working when called from a URL other than localhost. Has anybody successfully implemented an in-browser elevated-trust application? Do you see what I'm doing wrong? Thank you very much! Philippe, Sources: - http://msdn.microsoft.com/en-us/library/gg192793(v=VS.96).aspx - http://pitorque.de/MisterGoodcat/post/Silverlight-5-Tidbits-Trusted-applications.aspx
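
    A hedged sketch of step 3 (values recalled from the MSDN article cited in the question, so verify against it): the key belongs on each client machine whose browser runs the application, not on the web server, which is consistent with only localhost working so far.

      Windows Registry Editor Version 5.00

      [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Silverlight]
      "AllowElevatedTrustAppsInBrowser"=dword:00000001

      ; on 64-bit Windows, 32-bit browsers read the Wow6432Node hive instead
      [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Silverlight]
      "AllowElevatedTrustAppsInBrowser"=dword:00000001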

    Read the article

  • ASP.Net Problem with Event Handlers and Control Creation Timing

    - by Oliver Weichhold
    What I am trying to achieve here is to display a number of LinkButtons in a RadGrid Column. The buttons are generated from a collection property member of the bound grid row item. The CollectionLinkButton control is nothing more than an asp:Panel-derived control that populates its Child Controls from "DataItem.SomeCollection" and this is working fine. The problem I am facing is with this part: Collection='<%# DataBinder.Eval(Container, "DataItem.SomeCollection") %>' This is because the databound Collection property is populated so late in the life cycle of the page that the LinkButton controls that the CollectionLinkButton class creates from the collection are not available yet during postback when the Click event handler is supposed to fire, and I currently have no idea how to solve this problem. <radG:RadGrid ID="grid" runat="server" DataSourceID="ds_AB"> <MasterTableView> <Columns> <radG:GridTemplateColumn> <ItemTemplate> <local:CollectionLinkButton ID="LinkButton1" runat="server" CssClass="EntityLinkButton" Collection='<%# DataBinder.Eval(Container, "DataItem.SomeCollection") %>' CollectionProperty="Id" CollectionDisplayProperty="Name" Text='<%# DataBinder.Eval(Container, "DataItem.Name") %>'></local:CollectionLinkButton> </ItemTemplate> </radG:GridTemplateColumn>

    Read the article

  • Problem configuring JBoss to work with JNDI

    - by Spiderman
    I am trying to bind a connection to the DB using JNDI in my application that runs on JBoss. I did the following: I created the datasource file oracle-ds.xml and filled it with the relevant XML elements: <datasources> <local-tx-datasource> <jndi-name>bilby</jndi-name> ... </local-tx-datasource> </datasources> and put it in the folder \server\default\deploy, and added the relevant Oracle jar file. Then in my application I performed: JndiObjectFactoryBean factory = new JndiObjectFactoryBean(); factory.setJndiName("bilby"); try{ factory.afterPropertiesSet(); dataSource = factory.getObject(); } catch(NamingException ne) { ne.printStackTrace(); } and this causes the error: javax.naming.NameNotFoundException: bilby not bound Then in the output, after this error occurred, I saw the line: 18:37:56,560 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=bilby' to JNDI name 'java:bilby' So what is my configuration problem? I think that it may be that JBoss first loads and runs the .war file of my application and only then loads the oracle-ds.xml that contains my data-source definition. The problem is that they are both located in the same folder. Is there a way to define the priority of loading them, or maybe this is not the problem at all? Any ideas?
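
    A hedged sketch: the log line shows the pool being bound as 'java:bilby', i.e. inside the java: namespace that a local-tx-datasource uses by default, so the lookup name has to include that prefix; the "bilby not bound" message during startup would then just be the lookup racing ahead of the *-ds.xml deployment.

      JndiObjectFactoryBean factory = new JndiObjectFactoryBean();
      factory.setJndiName("java:bilby");   // matches the "Bound ... to JNDI name 'java:bilby'" log line
      factory.afterPropertiesSet();        // wrap in try/catch for NamingException as in the question
      DataSource dataSource = (DataSource) factory.getObject();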

    Read the article

  • Update a tableView with a plist taken from another table

    - by Pheel
    Background: I have a tab bar application, which has a tableView as the "heart" of the app. It loads data from a plist and, through a button that checks if there are any updates on the remote plist file, updates the local plist with the remote contents. Then, i have another tableView, that should display only those plist items that have a bool value set to YES. Now i want to add a button to the second table that reloads the plist took from the first table. Expected: When i update the local plist from the first table and when i press the button on the second table, the 2nd table is supposed to update and show the cells with that bool value set to YES. (Note: I set YES as default to some items on plist). What happens: The first table updates its content from remote. The second table shows the old items with the value set to YES. When i press the button to refresh data, it reads the plist fine (by logging it, it has the same contents of the first table -only those set to YES-),but it doesn't update data even if i have [self.tableView reloadData];. When i close the app and open it again, the second table is filled with the right items. :\ Code i'm using: //Reading Plist { NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES); NSString *plistPath = [[documentPaths lastObject] stringByAppendingPathComponent:@"myPlist.plist"]; NSFileManager *fMgr = [NSFileManager defaultManager]; if (![fMgr fileExistsAtPath:plistPath]) { plistPath = [[NSBundle mainBundle] pathForResource:@"myPlist" ofType:@"plist"]; } NSMutableArray *returnArr = [NSMutableArray arrayWithContentsOfFile:plistPath]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"isFav == YES"]; for (NSDictionary *sect in returnArr) { NSArray *arr = [sect objectForKey:@"Rows"]; [sect setValue:[arr filteredArrayUsingPredicate:predicate] forKey:@"Rows"]; } [self.tableView reloadData]; } //Refresh data button - (void) refreshTable:(id)sender { NSLog(@"plist read"); [self readPlist]; NSLog(@"refreshed plist:%@",[self readPlist]); [self.tableView reloadData]; } Does anyone know why the table is not updating?
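
    A hedged sketch of the usual cause (the property name is an assumption): reloadData only re-asks the data source, so the freshly filtered array has to be stored in whatever ivar the UITableViewDataSource methods actually read before reloading; reading the plist without assigning the result anywhere leaves the table showing the old data until the next launch.

      - (void)refreshTable:(id)sender {
          // assumption: readPlist returns the filtered array, and self.favourites
          // is the property the data source methods (numberOfRows..., cellForRow...) read
          self.favourites = [self readPlist];
          [self.tableView reloadData];
      }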

    Read the article

  • SSIS - How do I use a resultset as input in a SQL task and get data types right?

    - by thursdaysgeek
    I am trying to merge records from an Oracle database table to my local SQL table. I have a variable for the package that is an Object, called OWell. I have a data flow task that gets the Oracle data as a SQL statment (select well_id, well_name from OWell order by Well_ID), and then a conversion task to convert well_id from a DT_STR of length 15 to a DT_WSTR; and convert well_name from a DT_STR of length 15 to DT_WSTR of length 50. That is then stored in the recordset OWell. The reason for the conversions is the table that I want to add records to has an identity field: SSIS shows well_id as a DT_WSTR of length 15, well_name a DT_WSTR of length 50. I then have a SQL task that connects to the local database and attempts to add records that are not there yet. I've tried various things: using the OWell as a result set and referring to it in my SQL statement. Currently, I have the ResultSet set to None, and the following SQL statment: Insert into WELL (WELL_ID, WELL_NAME) Select OWELL_ID, OWELL_NAME from OWell where OWELL_ID not in (select WELL.WELL_ID from WELL) For Parameter Mapping, I have Paramater 0, called OWell_ID, from my variable User::OWell. Parameter 1, called OWell_Name is from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a Result set. I am getting the following error: Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query "Insert into WELL (WELL_ID, WELL_NAME) Select OWELL..." failed with the following error: "An error occurred while extracting the result into a variable of type (DBTYPE_STR)". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. I don't think it's a data type issue, but rather that I somehow am not using the resultset properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add records that are missing?

    Read the article

  • Rack middleware deadlock

    - by Joel
    I include this simple Rack Middleware in a Rails application: class Hello def initialize(app) @app = app end def call(env) [200, {"Content-Type" => "text/html"}, "Hello"] end end Plug it in inside environment.rb: ... Dir.glob("#{RAILS_ROOT}/lib/rack_middleware/*.rb").each do |file| require file end Rails::Initializer.run do |config| config.middleware.use Hello ... I'm using Rails 2.3.5, Webrick 1.3.1, ruby 1.8.7 When the application is started in production mode, everything works as expected - every request is intercepted by the Hello middleware, and "Hello" is returned. However, when run in development mode, the very first request works returning "Hello", but the next request hangs. Interrupting webrick while it is in the hung state yields this: ^C[2010-03-24 14:31:39] INFO going to shutdown ... deadlock 0xb6efbbc0: sleep:- - /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:31 deadlock 0xb7d1b1b0: sleep:J(0xb6efbbc0) (main) - /usr/lib/ruby/1.8/webrick/server.rb:113 Exiting /usr/lib/ruby/1.8/webrick/server.rb:113:in `join': Thread(0xb7d1b1b0): deadlock (fatal) from /usr/lib/ruby/1.8/webrick/server.rb:113:in `start' from /usr/lib/ruby/1.8/webrick/server.rb:113:in `each' from /usr/lib/ruby/1.8/webrick/server.rb:113:in `start' from /usr/lib/ruby/1.8/webrick/server.rb:23:in `start' from /usr/lib/ruby/1.8/webrick/server.rb:82:in `start' from /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/commands/server.rb:111 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from script/server:3 Something to do with the class reloader in development mode. There is also mention of deadlock in the exception. Any ideas what might be causing this? Any recommendations as to the best approach to debug this?

    Read the article

  • How can I create a new Person object correctly in Javascript?

    - by TimDog
    I'm still struggling with this concept. I have two different Person objects, very simply: ;Person1 = (function() { function P (fname, lname) { P.FirstName = fname; P.LastName = lname; return P; } P.FirstName = ''; P.LastName = ''; var prName = 'private'; P.showPrivate = function() { alert(prName); }; return P; })(); ;Person2 = (function() { var prName = 'private'; this.FirstName = ''; this.LastName = ''; this.showPrivate = function() { alert(prName); }; return function(fname, lname) { this.FirstName = fname; this.LastName = lname; } })(); And let's say I invoke them like this: var s = new Array(); //Person1 s.push(new Person1("sal", "smith")); s.push(new Person1("bill", "wonk")); alert(s[0].FirstName); alert(s[1].FirstName); s[1].showPrivate(); //Person2 s.push(new Person2("sal", "smith")); s.push(new Person2("bill", "wonk")); alert(s[2].FirstName); alert(s[3].FirstName); s[3].showPrivate(); The Person1 set alerts "bill" twice, then alerts "private" once -- so it recognizes the showPrivate function, but the local FirstName variable gets overwritten. The second Person2 set alerts "sal", then "bill", but it fails when the showPrivate function is called. The new keyword here works as I'd expect, but showPrivate (which I thought was a publicly exposed function within the closure) is apparently not public. I want to get my object to have distinct copies of all local variables and also expose public methods -- I've been studying closures quite a bit, but I'm still confused on this one. Thanks for your help.
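
    A hedged sketch of the more conventional pattern the question is circling, rather than the IIFE variants above: per-instance state lives on this, the private value lives in the constructor's closure, and the privileged method can reach both.

      function Person(fname, lname) {
          var prName = 'private';            // closed over, one copy per instance
          this.FirstName = fname;
          this.LastName = lname;
          this.showPrivate = function () {   // privileged method: public, but sees prName
              alert(prName);
          };
      }

      var s = [new Person("sal", "smith"), new Person("bill", "wonk")];
      alert(s[0].FirstName);  // "sal"
      alert(s[1].FirstName);  // "bill"
      s[1].showPrivate();     // "private"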

    Read the article

  • How to maintain base files for development environment central while allowing people to change their

    - by Ittai
    Hi, what I'd like to do is have files in a central location so that when I add people to my development team they can see the base version of these files, but meanwhile have the ability for the rest of the team to work with their own local version. I know I can just put the files in source control (we use TortoiseSVN) and have my team change the local versions, but I'd rather not, as the exclamation mark signaling the file has been changed and needs to be committed, quite frankly, irritates me greatly. I'll give two examples of what I mean: We use quite a few build.xml files which relate to a single properties file which contains many definitions. Some of them can be different between team members (mainly temporary working directories) and I'd like a new team member to have the ability to get the properties file with the base config but change it if they wish. Have the Eclipse settings file in the SVN so that when a new team member joins they can just retrieve the files from the server and have a base system running. If they wish they will be able to change some of these settings. Thanks, Ittai
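
    A hedged sketch of the usual convention (file names are illustrative): commit a *.template copy of each such file, svn:ignore the real one, and have each developer copy it once; the local copy then never shows up as modified.

      svn add build.properties.template
      svn propset svn:ignore "build.properties" .
      svn commit -m "Track template, ignore per-developer copy"
      REM each new team member does this once:
      copy build.properties.template build.properties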

    Read the article

  • When I run rake db:create, I get "rake aborted! uninitialized constant Cucumber"

    - by Big Bang Theory
    Hi I am trying to experiment on an open source application application . when i run $ rake db:create Following is the stacktrace rake aborted! uninitialized constant Cucumber /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:443:in `load_missing_constant' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:80:in `const_missing' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:92:in `const_missing' /home/BigBangTheory/Desktop/spot-us/lib/tasks/cucumber.rake:13 /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1882:in `in_namespace' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:910:in `namespace' /home/BigBangTheory/Desktop/spot-us/lib/tasks/cucumber.rake:12 /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load_without_new_constant_marking' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:521:in `new_constants_in' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load' /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8 /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8:in `each' /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8 /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' /home/BigBangTheory/Desktop/spot-us/Rakefile:9 /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in `load' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in `raw_load_rakefile' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2017:in `load_rakefile' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2016:in `load_rakefile' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2000:in `run' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31 /usr/bin/rake:19:in `load' /usr/bin/rake:19 Any help ?
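
    A hedged sketch: lib/tasks/cucumber.rake references the Cucumber constant, which only exists once the cucumber gem is installed; either install it (gem install cucumber) or guard the task file so plain rake db:create keeps working without it.

      # lib/tasks/cucumber.rake -- a guarded sketch
      begin
        require 'cucumber/rake/task'
        Cucumber::Rake::Task.new(:features)
      rescue LoadError
        desc 'Cucumber rake task (gem not installed)'
        task :features do
          abort 'Cucumber is not installed; run `gem install cucumber` first.'
        end
      end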

    Read the article

  • IO operation taking long time for files in remote server

    - by user841311
    I have files of size 150 MB each on a remote server in a different domain in the network. I am accessing them through a UNC path. I want to read the file content and perform a basic string search. When I try reading the files line by line, the operation just doesn't finish and takes a long time, more than 30 minutes. However, when I copy those files to my local machine, the same code reads and performs the string search in less than 5 seconds. I don't have the .NET framework installed on the server, so I have to do this from my machine. I want to perform all this through C# code in .NET framework 3.5, so I don't want to explicitly ftp all the files to my machine before performing this operation. Sample code: DirectoryInfo dir = new DirectoryInfo(@strFilePath); FileInfo[] fiArray = dir.GetFiles("*.txt"); foreach (FileInfo fi in fiArray) { //reading file content from server takes long time but fast in local machine //perform string search } Let me know if my requirement is not clear. Thanks in advance!
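
    A hedged sketch ("needle" and the temp-file approach are illustrative, not from the question): since the same search is fast on a local copy, one pragmatic option is a single sequential copy of each file to %TEMP% followed by a local scan, instead of many small reads across the slow link; a large StreamReader buffer helps in either case.

      // using System.IO; using System.Text;
      DirectoryInfo dir = new DirectoryInfo(strFilePath);
      foreach (FileInfo fi in dir.GetFiles("*.txt"))
      {
          string temp = Path.Combine(Path.GetTempPath(), fi.Name);
          fi.CopyTo(temp, true);                       // one big sequential transfer over the UNC path
          using (StreamReader reader = new StreamReader(temp, Encoding.Default, true, 1 << 20))
          {
              string line;
              while ((line = reader.ReadLine()) != null)
              {
                  if (line.Contains("needle")) { /* record the hit */ }
              }
          }
          File.Delete(temp);
      }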

    Read the article

  • Best practises for Magento Deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage, I can't see any requirements for changing the DB, so the 3 databases will be manually maintained. Specifically, How do you apply Magento upgrades? (Individually in each env, or on dev then roll out, or just give up on upgrades?) What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml) Do you restrict developers to editing specific files/folders? Do you restrict theme designers to editing specific files/folders? How do you manage database changes? Theme Designer Files/Folders Designers can be restricted to editing the following folders- app/design/frontend/your_interface/your_theme/layout/ app/design/frontend/your_interface/your_theme/template/ app/design/frontend/your_interface/your_theme/locale/ skin/frontend/your_interface/your_theme/ Extension Developer Files/Folders Extension developers can edit the following folders/files- /app/code/local /app/etc/modules/<Namespace>_<Module>.xml Database environment management As the store's base URL is stored in the database, you cannot just copy databases between environments. Options include- Overriding the base url in php. Blog article on setting up dev and staging databases Changing the base url in the database after copying. (Where is this stored?) Doing a MySQLDump or backup, then doing a replace on the URL in the SQL file.
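
    On the "(Where is this stored?)" point: Magento keeps the base URLs in the core_config_data table, so a sketch of the post-copy fix-up (the dev URL is an example value) is:

      -- run against the copied database
      UPDATE core_config_data
         SET value = 'http://dev.example.local/'
       WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');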

    Read the article

  • Running an executable on a network share with a CustomAction in WiX?

    - by martin
    Hello, I have created an MSI package which compresses some XML files into a zip file during installation. I have created a CustomAction for this purpose: <CustomAction Id="CompressMy" BinaryKey="zipEXE" ExeCommand="a -tzip &quot;[TEMPLATE_DIR]my.zip&quot; &quot;[TempSourceFolder]data.xml&quot;" Return="check" HideTarget="no" Impersonate="no" Execute="deferred" /> The installation works fine if I install to a local drive, but recently a customer wanted to install [TEMPLATE_DIR] to a network drive on Windows Vista. The CustomAction fails because the elevated install user hasn't mapped the network drive, even though the user who launched the installer has mapped it. This also happens if I try to install to a UNC path. I use 7-Zip for compressing and have added it to my MSI package. I have tried setting Impersonate="yes", but then the installation fails if my TEMPLATE_DIR is, for example, the ProgramData directory. Do you have any idea what I can do? I thought about checking whether TEMPLATE_DIR is a network path, but I don't know how to check this. Or do you have any other ideas for how I can support both a local and a network installation while using this custom action? Any advice would be great. Greetings, Martin

    Read the article

  • Write to a textfile using Javascript

    - by karikari
    Under Firefox, I want to do something like this : I have a .htm file, that has a button on it. This button, when I click it, the action will write a text inside a local .txt file. By the way, my .htm file is run locally too. I have tried multiple times using this code, but still cant make my .htm file write to my textfile: function save() { try { netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect"); } catch (e) { alert("Permission to save file was denied."); } var file = Components.classes["@mozilla.org/file/local;1"] .createInstance(Components.interfaces.nsILocalFile); file.initWithPath( savefile ); if ( file.exists() == false ) { alert( "Creating file... " ); file.create( Components.interfaces.nsIFile.NORMAL_FILE_TYPE, 420 ); } var outputStream = Components.classes["@mozilla.org/network/file-output-stream;1"] .createInstance( Components.interfaces.nsIFileOutputStream ); outputStream.init( file, 0x04 | 0x08 | 0x20, 420, 0 ); var output = 'test test test test'; var result = outputStream.write( output, output.length ); outputStream.close(); } This part is for the button: <input type="button" value="write to file2" onClick="save();">

    Read the article

  • JavaScript Exception/Error Handling Not Working

    - by Seán Hayes
    This might be a little hard to follow. I've got a function inside an object: f_openFRHandler: function(input) { console.debug('f_openFRHandler'); try{ //throw 'foo'; DragDrop.FileChanged(input); //foxyface.window.close(); } catch(e){ console.error(e); jQuery('#foxyface_open_errors').append('<div>Max local storage limit reached, unable to store new images in your browser. Please remove some images and try again.</div>'); } }, inside the try block it calls: this.FileChanged = function(input) { // FileUploadManager.addFileInput(input); console.debug(input); var files = input.files; for (var i = 0; i < files.length; i++) { var file = files[i]; if (!file.type.match(/image.*/)) continue; var reader = new FileReader(); reader.onload = (function(f, isLast) { return function(e) { if (files.length == 1) { LocalStorageManager.addImage(f.name, e.target.result, false, true); LocalStorageManager.loadCurrentImage(); //foxyface.window.close(); } else { FileUploadManager.addFileData(f, e.target.result); // add multiple files to list if (isLast) setTimeout(function() { LocalStorageManager.loadCurrentImage() },100); } }; })(file, i == files.length - 1); reader.readAsDataURL(file); } return true; LocalStorageManager.addImage calls: this.setItem = function(data){ localStorage.setItem('ImageStore', $.json_encode(data)); } localStorage.setItem throws an error if too much local storage has been used. I want to catch that error in f_openFRHandler (first code sample), but it's being sent to the error console instead of the catch block. I tried the following code in my Firebug console to make sure I'm not crazy and it works as expected despite many levels of function nesting: try{ (function(){ (function(){ throw 'foo' })() })() } catch(e){ console.debug(e) } Any ideas?
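
    A hedged sketch of the likely explanation: reader.onload runs asynchronously, long after f_openFRHandler's try/catch has returned, so an exception from localStorage.setItem surfaces in that later call stack and never reaches the outer catch; catching it where setItem actually executes does work.

      this.setItem = function (data) {
          try {
              localStorage.setItem('ImageStore', $.json_encode(data));
          } catch (e) {
              // the quota error lands here, inside the async onload call stack
              jQuery('#foxyface_open_errors').append('<div>Max local storage limit reached, unable to store new images in your browser. Please remove some images and try again.</div>');
          }
      };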

    Read the article

  • PHP: Profiling code and strict environment ~ Improving my coding

    - by DavidYell
    I would like to update my local working environment to be stricter in an effort to improve my code. I know that my code is okay, but as with most things there is always room for improvement. I use XAMPP on my local machine, for simplicity's sake Apache Friends XAMPP (Basic Package) version 1.7.2. So I've updated my php.ini: error_reporting is now E_ALL | E_STRICT to help with the code standard. I've also enabled the XDebug extension zend_extension = "C:\xampp\php\ext\php_xdebug.dll" which seems to be working, having tested some broken code and got the nice standard orange error notice. However, having read this question, http://stackoverflow.com/questions/133686/what-is-the-best-way-to-profile-php-code and enabled the profiler, I cannot seem to create a cachegrind file. Many of the guides that I've looked at seem to think you need to install XDebug in XAMPP, which leads me to think they are out of date, as XDebug is bundled with XAMPP these days. So I would appreciate it if anyone can help point me in the right direction with both configuring XDebug to output grind files, and/or just a great set of default settings for the XDebug config in XAMPP. Seems there is very little documentation to go on. If people have tips on integrating these tools with NetBeans, that would be awesomesauce. I'm happy to get suggestions on other things that I can do to help tighten up my PHP code, both syntactically and performance-wise. Thanks, and apologies for the rambling question(s)! Ninja edit I should mention that I'm using named vhosts as my Apache configuration, which I think is why running XDebug on port 9000 isn't working for me. I guess I'd need to edit my vhost to include port 9000
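
    A hedged sketch of a php.ini block that usually makes cachegrind files appear under XAMPP (paths and the trigger-based setup are examples for XDebug 2.x; restart Apache and confirm via phpinfo() that the extension really loaded). Note that XDebug connects out to the IDE on port 9000, so it does not conflict with the named vhosts.

      [XDebug]
      zend_extension = "C:\xampp\php\ext\php_xdebug.dll"
      xdebug.profiler_enable         = 0
      xdebug.profiler_enable_trigger = 1                  ; profile only when XDEBUG_PROFILE is sent
      xdebug.profiler_output_dir     = "C:\xampp\tmp"
      xdebug.profiler_output_name    = "cachegrind.out.%t.%p"
      xdebug.remote_enable           = 1                  ; step debugging from NetBeans
      xdebug.remote_host             = "localhost"
      xdebug.remote_port             = 9000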

    Read the article

  • Global resources can't be resolved after publishing Website in VS2008

    - by Scoregraphic
    Hi there I have a web-project running in VS 2008. We have some global resource files (*.resx) in the App_GlobalResources folder for internationalisation. All this works like a charm on my local IIS installation out of VS. But when I publish my web-project to the local filesystem and/or another server, all the resources can no longer be found. So I guess the pre-compilation is somehow corrupting stuff. When I call the pre-compiled web, I get an error that the resource object with key xyz cannot be found, although it could be found before. I checked with .NET reflector if the resource stuff made it into the *.dlls. All those identifiers are there (bin/Web.dll, bin/<culture>/Web.resources.dll). The identifiers are loaded like this: <asp:MenuItem NavigateUrl="~/OrderNew.aspx" Text="<%$ Resources:MyProject, MenuNewOrder %>" Value="NewOrder"> The resource files are called MyProject.resx and MyProject.<culture>.resx where <culture> corresponds the the specific culture (i.e. MyProject.de-DE.resx). Any ideas how to solve this? I really appreciate any help. Thanks Edit: If I copy the App_GlobalResources folder manually to the output, the resources may be loaded normally. So I really really wonder what this pre-compilation is all about. I'm still interested in solving the issue "the right way".

    Read the article

  • Issue binding Image Source dependency property

    - by Archana R
    Hello, I have created a custom control for ImageButton as <Style TargetType="{x:Type Button}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type Local:ImageButton}"> <StackPanel Height="Auto" Orientation="Horizontal"> <Image Margin="0,0,3,0" Source="{Binding ImageSource}" /> <TextBlock Text="{TemplateBinding Content}" /> </StackPanel> </ControlTemplate> </Setter.Value> </Setter> </Style> ImageButton class looks like public class ImageButton : Button { public ImageButton() : base() { } public ImageSource ImageSource { get { return base.GetValue(ImageSourceProperty) as ImageSource; } set { base.SetValue(ImageSourceProperty, value); } } public static readonly DependencyProperty ImageSourceProperty = DependencyProperty.Register("Source", typeof(ImageSource), typeof(ImageButton)); } However I'm not able to bind the ImageSource to the image as: (This code is in UI Folder and image is in Resource folder) <Local:ImageButton x:Name="buttonBrowse1" Width="100" Margin="10,0,10,0" Content="Browse ..." ImageSource="../Resources/BrowseFolder.bmp"/> But if i take a simple image it gets displayed if same source is specified. Can anyone tell me what shall be done?
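
    Two hedged observations, with a sketch: the dependency property is registered under the name "Source" while its CLR wrapper is called ImageSource (the two names must match for bindings on ImageSource to resolve), and inside the ControlTemplate the image should take its source from the templated parent via a TemplateBinding rather than a plain Binding. The Style's TargetType of Button while the template targets ImageButton may also need aligning, since an implicit Button style is not applied to the derived type.

      public static readonly DependencyProperty ImageSourceProperty =
          DependencyProperty.Register("ImageSource", typeof(ImageSource), typeof(ImageButton));
      // and in the template:  <Image Margin="0,0,3,0" Source="{TemplateBinding ImageSource}" />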

    Read the article

  • Sharing storage between servers

    - by El Yobo
    I have a PHP based web application which is currently only using one webserver but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of: Simple network storage NFS SMB/CIFS Clustered filesystems Lustre GFS/GFS2 GlusterFS Hadoop DFS MogileFS What I want is for a file uploaded via one webserver to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem). It seems like the clustered filesystems will also provide faster data access than local storage (for large files) but that isn't of vital importance at the moment. What would you recommend? Do you have any suggestions to add or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?

    Read the article

  • Using unset member variables within a class or struct

    - by Doug Kavendek
    It's pretty nice to catch some really obvious errors when using unset local variables or when accessing a class or struct's members directly prior to initializing them. In visual studio 2008 you get an "uninitialized local variable used" warning at compile-time and get a run-time check failure at the point of access when debugging. However, if you access an uninitialized struct's member variable through one of its functions, you don't get any warnings or assertions. Obviously the easiest solution is don't do that, but nobody's perfect. For example: struct Test { float GetMember() const { return member; } float member; }; Test test; float f1 = test.member; // Raises warning, asserts in VS debugger at runtime float f2 = test.GetMember(); // No problem, just keeps on going This surprised me, but it makes some sense -- the compiler can't assume calling a function on an unused struct is an error, or how else would you initialize or construct it? And anything fancier just quickly brings up so many other complications that it makes sense that it wouldn't bother classifying which functions are ok to call and when, especially just as a debugging help. I know I can set up my own assertions or error checking within the class itself, but that can complicate some simpler structs. Still, it would seem like within the context of the function call, wouldn't it know insides GetMember() that member wasn't initialized yet? I'm assuming it's not only relying on static compile-time deduction, given the Run-Time Check Failure #3 it raises during execution, so based on my current understanding of it it would seem reasonable for the same checks to apply. Is this just a limitation of this specific compiler/debugger (Visual Studio 2008), or more tied to how C++ works?
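
    A hedged sketch that sidesteps rather than answers the detection question: giving the struct a constructor (or, in C++11 and later, a default member initializer) removes the uninitialized state the compile-time and run-time checks would otherwise have to catch.

      struct Test {
          float member;
          Test() : member(0.0f) {}                 // works in VS2008-era C++
          // float member = 0.0f;                  // C++11 alternative
          float GetMember() const { return member; }
      };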

    Read the article

  • What can cause Bonjour to not call me back during browsing?

    - by millenomi
    I have a rather popular Bonjour-based application in App Store. It works perfectly, but around 0.2% of my users report a bizarre bug: "no arrows appear on the edges of the screen, so I can't share stuff with other people!". Needless to say, displaying these arrows is tied to the browsing of a particular Bonjour service on the local domain. The problem is, the Apple review team seems to intermittently happen to be in this 0.2%. This isn't good for review results, as you might imagine. No matter how much I try, I cannot reproduce this bug. From the few logs I have, it looks like my app is running correctly, just not receiving NSNetServiceBrowser delegate calls. What can cause this? Things I've tried: Having a shorter service name < 14 chars in length to be in spec. Publishing on @"local." rather than @"" (aka Go Look For The Default Registration Domain). My app is rather useless on a wide-area network anyway. Things I haven't tried: restarting the browsing machinery periodically. (I have two browsers, though, one looking for the legacy longer name, one for the new shorter one.) What to do?

    Read the article
