Search Results

Search found 21640 results on 866 pages for 'local storage'.


  • What's the best way to mitigate NFS and sudo?

    - by user225874
    Quick background: We have 40 workstations running Linux. NFS is used extensively for bulk data storage and home directories. This allows users to roam freely with relatively transparent file systems. This is an educational environment where postdocs and students have successfully pulled off a coup of sorts. All have gained root on their individual workstations by grooming a technophobic PI who thinks IT people are evil. If I so much as suggest chroot or sudo restrictions, I'll find myself working out of a broom closet. With that in mind, what's the best way to mitigate something like the session below?

        $ hostname
        workstation1
        $ whoami
        john
        $ sudo su jane
        $ whoami
        jane
        $ cp -R /home/nfs/jane /mnt/thumbdrive/

    Read the article

  • Getting exception when trying to monkey patch pymongo.connection._Pool

    - by Creotiv
    I use pymongo 1.9 on Ubuntu 10.10 with Python 2.6.6. When I try to monkey patch pymongo.connection._Pool, I get an error on connection: AutoReconnect: could not find master/primary. But when I change the _Pool class directly in the pymongo.connection module, it works fine. Even if I copy the _Pool implementation from the pymongo.connection module and monkey patch with that same code, it still gives the same exception. I need to remove threading.local from the _Pool class, because I use gevent and I need to implement one pool for all mongo connections (for all threads). I use this code:

        import os  # needed for os.getpid() below
        import pymongo

        class GPool:
            """A simple connection pool.

            Uses thread-local socket per thread. By calling return_socket() a
            thread can return a socket to the pool. Right now the pool size is
            capped at 10 sockets - we can expose this as a parameter later, if
            needed.
            """

            # Non thread-locals
            __slots__ = ["sockets", "socket_factory", "pool_size", "sock"]
            #sock = None

            def __init__(self, socket_factory):
                self.pool_size = 10
                if not hasattr(self, "sock"):
                    self.sock = None
                self.socket_factory = socket_factory
                if not hasattr(self, "sockets"):
                    self.sockets = []

            def socket(self):
                # we store the pid here to avoid issues with fork /
                # multiprocessing - see
                # test.test_connection:TestConnection.test_fork for an example
                # of what could go wrong otherwise
                pid = os.getpid()

                if self.sock is not None and self.sock[0] == pid:
                    return self.sock[1]

                try:
                    self.sock = (pid, self.sockets.pop())
                except IndexError:
                    self.sock = (pid, self.socket_factory())

                return self.sock[1]

            def return_socket(self):
                if self.sock is not None and self.sock[0] == os.getpid():
                    # There's a race condition here, but we deliberately
                    # ignore it. It means that if the pool_size is 10 we
                    # might actually keep slightly more than that.
                    if len(self.sockets) < self.pool_size:
                        self.sockets.append(self.sock[1])
                    else:
                        self.sock[1].close()
                self.sock = None

        pymongo.connection._Pool = GPool

    Read the article

  • IO operation taking a long time for files on a remote server

    - by user841311
    I have files of 150 MB each on a remote server in a different domain on the network. I am accessing them through a UNC path. I want to read the file content and perform a basic string search. When I try reading the files line by line, the operation just doesn't finish and takes a long time, more than 30 minutes. However, when I copy those files to my local machine, the same code reads them and performs the string search in less than 5 seconds. I don't have the .NET Framework installed on the server, so I have to do this from my machine. I want to perform all this through C# code in .NET Framework 3.5, so I don't want to explicitly FTP all the files to my machine before performing this operation. Sample code:

        DirectoryInfo dir = new DirectoryInfo(@strFilePath);
        FileInfo[] fiArray = dir.GetFiles("*.txt");
        foreach (FileInfo fi in fiArray)
        {
            // reading file content from the server takes a long time but is fast on the local machine
            // perform string search
        }

    Let me know if my requirement is not clear. Thanks in advance!

    Read the article

  • Pros/cons of reading connection string from physical file vs Application object (ASP.NET)?

    - by HaterTot
    My ASP.NET application reads an xml file to determine which environment it's currently in (e.g. local, development, production). It checks this file every single time it opens a connection to the database, in order to know which connection string to grab from the application settings. I'm entering a phase of development where efficiency is becoming a concern. I don't think it's a good idea to have to read a file on a physical disk every single time I wish to access the database (very often). I was considering storing the connection string in Application["ConnectionString"]. So the code would be:

        public static string GetConnectionString()
        {
            if (Application["ConnectionString"] == null)
            {
                string connString;
                XmlDocument doc = new XmlDocument();
                doc.Load(HttpContext.Current.Request.PhysicalApplicationPath + "bin/ServerEnvironment.xml");
                // xnl is the node list selected from doc (the selection line was omitted from the original snippet)
                XmlElement xe = (XmlElement) xnl[0];
                switch (xe.InnerText.ToString().ToLower())
                {
                    case "local":
                        connString = Settings.Default.ConnectionStringLocal;
                        break;
                    case "development":
                        connString = Settings.Default.ConnectionStringDevelopment;
                        break;
                    case "production":
                        connString = Settings.Default.ConnectionStringProduction;
                        break;
                    default:
                        throw new Exception("no connection string defined");
                }
                Application["ConnectionString"] = connString;
            }
            return Application["ConnectionString"].ToString();
        }

    I didn't design the application, so I figure there must have been a reason for reading the xml file every time (to change settings while the application runs?). I have very little concept of the inner workings here. What are the pros and cons? Do you think I'd see a small performance gain by implementing the function above? THANKS

    Read the article

  • Is it possible to create a 4TB bootable partition in the x86 edition of Windows Server 2003 Enterprise?

    - by Giffyguy
    I'd like to find out if there is any way to accomplish this, since it would benefit my storage server greatly. I am using a Promise FastTrak 8660 and five Seagate ST31000340NS 1TB drives in a RAID 5 array. I figure that if the x86 ENTERPRISE edition of Server 2003 can handle 64GB of RAM, it should have no problem supporting larger HDD volumes as well. I've read (somewhere...) that the Windows Server operating systems are not limited to the standard 2TB the way Windows XP and 2000 are. I'm hoping it's something that just needs to be turned on, similar to the way PAE works for the 4GB RAM limit in x86 servers.

    Read the article

  • HTC Android Fails to mount- Mount from computer?

    - by Ben Franchuk
    I have an HTC Incredible S (S-Off, rooted, ViperVIVO 1.3.0 ICS) that has seemingly ceased to possess the ability to mount its SD storage to my computer. Whenever I plug in my device to transfer files from computer to phone and vice versa, the computer cannot actually access the phone. I get prompted with a window on my phone when I first plug it in, asking me which mode I want to put the device into (charge mode, tether mode, etc.), and even if I select the "Disk Drive" function, the phone still cannot successfully mount to my computer. The phone itself unmounts the SD and says that the computer is connected, but again, it doesn't work. Is there any way to force mount the device from my computer - either via command or otherwise? If I unmount the SD from the phone, I should be able to mount it to my computer, from my computer, correct?

    Read the article

  • ASP.Net Problem with Event Handlers and Control Creation Timing

    - by Oliver Weichhold
    What I am trying to achieve here is to display a number of LinkButtons in a RadGrid column. The buttons are generated from a collection property member of the bound grid row item. The CollectionLinkButton control is nothing more than an asp:Panel-derived control that populates its child controls from "DataItem.SomeCollection", and this is working fine. The problem I am facing is with this part: Collection='<%# DataBinder.Eval(Container, "DataItem.SomeCollection") %>'. This is because the databound Collection property is populated so late in the lifecycle of the page that the LinkButton controls that the CollectionLinkButton class creates from the collection are not available yet during postback, when the Click event handler is supposed to fire, and I currently have no idea how to solve this problem.

        <radG:RadGrid ID="grid" runat="server" DataSourceID="ds_AB">
            <MasterTableView>
                <Columns>
                    <radG:GridTemplateColumn>
                        <ItemTemplate>
                            <local:CollectionLinkButton ID="LinkButton1" runat="server"
                                CssClass="EntityLinkButton"
                                Collection='<%# DataBinder.Eval(Container, "DataItem.SomeCollection") %>'
                                CollectionProperty="Id"
                                CollectionDisplayProperty="Name"
                                Text='<%# DataBinder.Eval(Container, "DataItem.Name") %>'></local:CollectionLinkButton>
                        </ItemTemplate>
                    </radG:GridTemplateColumn>

    Read the article

  • Running migration on server when deploying with capistrano

    - by Pandafox
    Hi, I'm trying to deploy my rails application with capistrano, but I'm having some trouble running my migrations. In my development environment I just use sqlite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, passenger, mysql and a git repository. What is the easiest way to do this? Update: here's my deploy script:

        set :application, "example.com"
        set :domain, "example.com"
        set :scm, :git
        set :repository, "[email protected]:project.git"
        set :use_sudo, false
        set :deploy_to, "/var/www/example.com"

        role :web, domain
        role :app, domain
        role :db, "localhost", :primary => true

        after "deploy", "deploy:migrate"

    When I run cap deploy, everything is working fine until it tries to run the migration. Here's the error I'm getting:

        ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))
        connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))

    This is why I need to run the migration from the server and not from my local machine. Any ideas?
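
    A minimal sketch of one way to do this, assuming Capistrano 2 conventions (running under RAILS_ENV=production and shelling out with rake are assumptions about the server setup, not taken from the question): override deploy:migrate so it executes on the remote :app host instead of on the machine running cap deploy.

        # Sketch only: replaces the stock deploy:migrate, which the existing
        # after "deploy", "deploy:migrate" hook will then pick up.
        namespace :deploy do
          desc "Run migrations on the deployed release, not from the local machine"
          task :migrate, :roles => :app do
            run "cd #{current_path} && rake db:migrate RAILS_ENV=production"
          end
        end

    Alternatively, pointing the :db role at the real server name instead of "localhost" may be enough on its own, since the stock task already runs on the primary :db host.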

    Read the article

  • Value gets changed even though I'm not using a reference

    - by atch
    In code:

        struct Rep
        {
            const char* my_data_;
            Rep* my_left_;
            Rep* my_right_;
            Rep(const char*);
        };

        typedef Rep& list;

        ostream& operator<<(ostream& out, const list& a_list)
        {
            int count = 0;
            list tmp = a_list; //-----> HERE I'M CREATING A LOCAL COPY
            for (; tmp.my_right_; tmp = *tmp.my_right_)
            {
                out << "Object no: " << ++count << " has name: " << tmp.my_data_;
                //tmp = *tmp.my_right_;
            }
            return out; //------> HERE a_list is changed
        }

    I thought that if I created a local copy of the a_list object I would be operating on a completely separate object. Why isn't that so? Thanks.

    Read the article

  • In browser trusted application Silverlight 5

    - by Philippe
    With the new Silverlight 5, we can now have an in-browser elevated-trust application. However, I'm experiencing some problems deploying the application. When I test the application from Visual Studio, everything works fine because it automatically grants every right if the website is hosted on the local machine (localhost, 127.0.0.1). I saw on MSDN that I have to follow 3 steps to make it work on any website:
    1. Sign the XAP - I did it following the Microsoft tutorial.
    2. Install the Trusted Publishers certificate store - I did it too, following the Microsoft tutorial.
    3. Add a registry key with the value: AllowElevatedTrustAppsInBrowser.
    The third step is the one I am the most unsure about. Do we need to add this registry key on the local machine or on the server? Is there any automatic function in Silverlight to add this key, or is it better to make a batch file? Even with those 3 steps, the application is still not working when called from any URL other than localhost. Has anybody successfully implemented an in-browser elevated-trust application? Do you see what I'm doing wrong? Thank you very much! Philippe
    Sources:
    - http://msdn.microsoft.com/en-us/library/gg192793(v=VS.96).aspx
    - http://pitorque.de/MisterGoodcat/post/Silverlight-5-Tidbits-Trusted-applications.aspx

    Read the article

  • Check-for-modifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (uses Windows authentication). Both servers are hosted on the same system (OS: Microsoft Windows Server 2003 SP2). When I force build the project using CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message. When I checked the build report, it says the following:

        BUILD EXCEPTION
        Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source':
        **Server certificate verification failed: issuer is not trusted** (https://sp-ci.sbsnetwork.local:8443).
        Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log **sameUrlAbove**
        -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}" --verbose --xml
        --username ccnetadmin --password cruise --non-interactive --no-auth-cache
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    My SourceControl node in the ccnet.config is as shown below:

        <sourcecontrol type="svn">
            <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable>
            <trunkUrl> check out url </trunkUrl>
            <workingDirectory> C:\ProjectWorkingDirectories\IntranetPortal\Source </workingDirectory>
            <username> ccnetadmin </username>
            <password> cruise </password>
        </sourcecontrol>

    Can anyone suggest how to avoid this error?

    Read the article

  • SSIS - How do I use a resultset as input in a SQL task and get data types right?

    - by thursdaysgeek
    I am trying to merge records from an Oracle database table into my local SQL table. I have a variable for the package that is an Object, called OWell. I have a data flow task that gets the Oracle data as a SQL statement (select well_id, well_name from OWell order by Well_ID), and then a conversion task to convert well_id from a DT_STR of length 15 to a DT_WSTR, and well_name from a DT_STR of length 15 to a DT_WSTR of length 50. That is then stored in the recordset OWell. The reason for the conversions is that the table I want to add records to has an identity field: SSIS shows well_id as a DT_WSTR of length 15, well_name as a DT_WSTR of length 50. I then have a SQL task that connects to the local database and attempts to add records that are not there yet. I've tried various things, such as using OWell as a result set and referring to it in my SQL statement. Currently, I have the ResultSet set to None, and the following SQL statement:

        Insert into WELL (WELL_ID, WELL_NAME)
        Select OWELL_ID, OWELL_NAME from OWell
        where OWELL_ID not in (select WELL.WELL_ID from WELL)

    For Parameter Mapping, I have Parameter 0, called OWell_ID, from my variable User::OWell. Parameter 1, called OWell_Name, is from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a Result set. I am getting the following error:

        Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query "Insert into WELL (WELL_ID, WELL_NAME) Select OWELL..." failed with the following error: "An error occurred while extracting the result into a variable of type (DBTYPE_STR)". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

    I don't think it's a data type issue, but rather that I somehow am not using the resultset properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add records that are missing?

    Read the article

  • MySQL keeps crashing OS server.. Please help adjust my.ini!

    - by TruMan1
    I have MySQL 5.0 installed on a Windows 2008 machine (3GB RAM). My server crashes on a regular basis (almost once a day) with this error:

        Changed limits: max_open_files: 2048 max_connections: 800 table_cache: 619

    I did not use the heavy InnoDB .ini file, although I am rethinking whether I should have. I am worried that big configuration changes will make my current sites stop working. What should I do? Here are my current ini settings:

        default-character-set=latin1
        default-storage-engine=INNODB
        max_connections=800
        query_cache_size=84M
        table_cache=1520
        tmp_table_size=30M
        thread_cache_size=38
        myisam_max_sort_file_size=100G
        myisam_sort_buffer_size=30M
        key_buffer_size=129M
        read_buffer_size=64K
        read_rnd_buffer_size=256K
        sort_buffer_size=256K
        innodb_additional_mem_pool_size=6M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=3M
        innodb_buffer_pool_size=250M
        innodb_log_file_size=50M
        innodb_thread_concurrency=10

    Read the article

  • Problem configuring JBoss to work with JNDI

    - by Spiderman
    I am trying to bind a connection to the DB using JNDI in my application that runs on JBoss. I did the following: I created the datasource file oracle-ds.xml and filled it with the relevant xml elements:

        <datasources>
            <local-tx-datasource>
                <jndi-name>bilby</jndi-name>
                ...
            </local-tx-datasource>
        </datasources>

    and put it in the folder \server\default\deploy. I added the relevant Oracle jar file. Then in my application I performed:

        JndiObjectFactoryBean factory = new JndiObjectFactoryBean();
        factory.setJndiName("bilby");
        try {
            factory.afterPropertiesSet();
            dataSource = factory.getObject();
        } catch (NamingException ne) {
            ne.printStackTrace();
        }

    and this causes the error: javax.naming.NameNotFoundException: bilby not bound. Then, in the output after this error occurred, I saw the line:

        18:37:56,560 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=bilby' to JNDI name 'java:bilby'

    So what is my configuration problem? I think it may be that JBoss first loads and runs the .war file of my application and only then loads the oracle-ds.xml that contains my data-source definition. The problem is that they are both located in the same folder. Is there a way to define the priority of loading them, or maybe this is not the problem at all? Any idea?

    Read the article

  • Linear Performance Scalability with HP SAN Solutions

    - by Berzemus
    Hi all, I need a SAN solution with linear scalability in size as well as in performance. From what I know, with a modular smart array solution such as the P2000/MSA-class solutions from HP, even with a dual-controller initial node, I can only increase its size, as added nodes come controller-less, so overall performance tends to decrease. On the other hand, in the P4000 (LeftHand) family of solutions each node has its own controller, so when a node is added, storage capacity as well as performance increases. Am I right in all that I say, and is the P4000 the only such solution, or have I forgotten something?

    Read the article

  • Personal wiki on USB / the cloud?

    - by drby
    I'm looking for a personal wiki that can be installed on a USB stick and (more importantly) somewhere in the cloud (Dropbox). I've looked at the Wiki Matrix, but I really don't care that much about any of the options, so I end up with a choice between ~50 wikis at the end. I tried out TiddlyWiki, but there are some things that really annoy me, like the fact that all pages get opened on the same page. It really looks like it'd turn into a giant mess pretty quickly. I'd like to have something that's pretty close to Wikipedia in terms of appearance and usability. Hierarchical categories for organization would be really nice. And accessible storage (in case I ever want to convert it to something else).

    Read the article

  • Best filesystem choices for NFS storing VMware disk images

    - by mlambie
    Currently we use an iSCSI SAN as storage for several VMware ESXi servers. I am investigating the use of an NFS target on a Linux server for additional virtual machines. I am also open to the idea of using an alternative operating system (like OpenSolaris) if it will provide significant advantages. What Linux-based filesystem favours very large contiguous files (like VMware's disk images)? Alternatively, how have people found ZFS on OpenSolaris for this kind of workload? (This question was originally asked on SuperUser; feel free to migrate answers here if you know how).

    Read the article

  • Update a tableView with a plist taken from another table

    - by Pheel
    Background: I have a tab bar application, which has a tableView as the "heart" of the app. It loads data from a plist and, through a button that checks if there are any updates on the remote plist file, updates the local plist with the remote contents. Then I have another tableView, which should display only those plist items that have a bool value set to YES. Now I want to add a button to the second table that reloads the plist taken from the first table.
    Expected: When I update the local plist from the first table and then press the button on the second table, the second table is supposed to update and show the cells with that bool value set to YES. (Note: I set YES as the default for some items in the plist.)
    What happens: The first table updates its content from the remote plist. The second table shows the old items with the value set to YES. When I press the button to refresh the data, it reads the plist fine (by logging it, it has the same contents as the first table - only those set to YES), but it doesn't update the data even though I have [self.tableView reloadData];. When I close the app and open it again, the second table is filled with the right items. :\
    Code I'm using:

        // Reading plist
        {
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
            NSString *plistPath = [[documentPaths lastObject] stringByAppendingPathComponent:@"myPlist.plist"];
            NSFileManager *fMgr = [NSFileManager defaultManager];
            if (![fMgr fileExistsAtPath:plistPath]) {
                plistPath = [[NSBundle mainBundle] pathForResource:@"myPlist" ofType:@"plist"];
            }
            NSMutableArray *returnArr = [NSMutableArray arrayWithContentsOfFile:plistPath];

            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"isFav == YES"];
            for (NSDictionary *sect in returnArr) {
                NSArray *arr = [sect objectForKey:@"Rows"];
                [sect setValue:[arr filteredArrayUsingPredicate:predicate] forKey:@"Rows"];
            }
            [self.tableView reloadData];
        }

        // Refresh data button
        - (void) refreshTable:(id)sender {
            NSLog(@"plist read");
            [self readPlist];
            NSLog(@"refreshed plist: %@", [self readPlist]);
            [self.tableView reloadData];
        }

    Does anyone know why the table is not updating?

    Read the article

  • Rack middleware deadlock

    - by Joel
    I include this simple Rack middleware in a Rails application:

        class Hello
          def initialize(app)
            @app = app
          end

          def call(env)
            [200, {"Content-Type" => "text/html"}, "Hello"]
          end
        end

    Plug it in inside environment.rb:

        ...
        Dir.glob("#{RAILS_ROOT}/lib/rack_middleware/*.rb").each do |file|
          require file
        end

        Rails::Initializer.run do |config|
          config.middleware.use Hello
          ...

    I'm using Rails 2.3.5, Webrick 1.3.1, ruby 1.8.7. When the application is started in production mode, everything works as expected - every request is intercepted by the Hello middleware, and "Hello" is returned. However, when run in development mode, the very first request works, returning "Hello", but the next request hangs. Interrupting Webrick while it is in the hung state yields this:

        ^C[2010-03-24 14:31:39] INFO going to shutdown ...
        deadlock 0xb6efbbc0: sleep:- - /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:31
        deadlock 0xb7d1b1b0: sleep:J(0xb6efbbc0) (main) - /usr/lib/ruby/1.8/webrick/server.rb:113
        Exiting
        /usr/lib/ruby/1.8/webrick/server.rb:113:in `join': Thread(0xb7d1b1b0): deadlock (fatal)
            from /usr/lib/ruby/1.8/webrick/server.rb:113:in `start'
            from /usr/lib/ruby/1.8/webrick/server.rb:113:in `each'
            from /usr/lib/ruby/1.8/webrick/server.rb:113:in `start'
            from /usr/lib/ruby/1.8/webrick/server.rb:23:in `start'
            from /usr/lib/ruby/1.8/webrick/server.rb:82:in `start'
            from /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/commands/server.rb:111
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from script/server:3

    Something to do with the class reloader in development mode. There is also mention of deadlock in the exception. Any ideas what might be causing this? Any recommendations as to the best approach to debug this?
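
    Not a diagnosis of the deadlock, but a minimal sketch of the same middleware in the more conventional Rack shape - an Array body (Rack only requires the body to respond to each) and an explicit pass-through to @app for requests the middleware does not handle; the /hello path check is purely illustrative and not taken from the question:

        class Hello
          def initialize(app)
            @app = app
          end

          def call(env)
            if env["PATH_INFO"] == "/hello"
              # Array body is the conventional Rack response shape
              [200, {"Content-Type" => "text/html"}, ["Hello"]]
            else
              @app.call(env)  # let the rest of the middleware stack and Rails handle everything else
            end
          end
        end

    Delegating to @app keeps every other request flowing through the normal Rails dispatcher, which is usually the structure to start debugging from.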

    Read the article

  • VPS for Glassfish

    - by Harry Pham
    Our small startup company plans to deploy a web application on GlassFish, and I wonder if some of the experienced users out there can answer a couple of questions. When shopping for a server, I usually look at the RAM amount, as GlassFish does require a good amount of RAM to run. Below are two sites with a significant price difference for the same amount of RAM - I wonder why?
    Godaddy: http://www.godaddy.com/hosting/virtual-dedicated-servers.aspx?ci=9013
    versus http://entic.net/Servers
    Does the plan below from Godaddy count as good enough to run a GlassFish application?
    OS: Linux CentOS • RAM: 4 GB • Storage: 60 GB • Bandwidth: 2,000 GB/mo
    Our web application is a social network, expected to have 2000-4000 users to start with.

    Read the article

  • Laptop stopped recognizing USB hard drive

    - by vahokif
    Hi, My Packard Bell EasyNote TX86 laptop stopped recognizing my 1 TB Toshiba Store Art hard drive. It worked fine until now, and it still works on other computers. Other USB devices (including storage) work, and I've tried plugging it in every port, to no avail. When I plug it in it spins up, but Windows doesn't react at all (it's not in disk management), Linux doesn't write anything in dmesg and I can't see it in BIOS setup. I didn't use it at all today, apart from plugging it into a freshly-installed Windows 7 machine once (where it worked). What can I do? Which device is to blame here? EDIT: One more thing. I unplugged the drive while the laptop was hibernated. Google says this might be the problem and it might have something to do with resetting the USB Host Controller.

    Read the article

  • How to maintain base files for development environment central while allowing people to change their

    - by Ittai
    Hi, what I'd like to do is keep files in a central location so that when I add people to my development team they can see the base version of these files, while the rest of the team has the ability to work with their own local versions. I know I can just put the files in source control (we use TortoiseSVN) and have my team change the local versions, but I'd rather not, as the exclamation mark signaling that the file has been changed and needs to be committed, quite frankly, irritates me greatly. I'll give two examples of what I mean:
    1. We use quite a few build.xml files which relate to a single properties file which contains many definitions. Some of them can differ between team members (mainly temporary working directories), and I'd like a new team member to have the ability to get the properties file with the base config but change it if they wish.
    2. Have the Eclipse settings file in the SVN so that when a new team member joins they can just retrieve the files from the server and have a base system running. If they wish, they will be able to change some of these settings.
    Thanks, Ittai

    Read the article

  • When I run rake db:create, error: rake aborted! uninitialized constant Cucumber

    - by Big Bang Theory
    Hi, I am trying to experiment with an open source application. When I run $ rake db:create, the following is the stacktrace:

        rake aborted!
        uninitialized constant Cucumber
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:443:in `load_missing_constant'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:80:in `const_missing'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:92:in `const_missing'
        /home/BigBangTheory/Desktop/spot-us/lib/tasks/cucumber.rake:13
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1882:in `in_namespace'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:910:in `namespace'
        /home/BigBangTheory/Desktop/spot-us/lib/tasks/cucumber.rake:12
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load_without_new_constant_marking'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:521:in `new_constants_in'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8:in `each'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        /home/BigBangTheory/Desktop/spot-us/Rakefile:9
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in `load'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in `raw_load_rakefile'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2017:in `load_rakefile'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2016:in `load_rakefile'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2000:in `run'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        /usr/bin/rake:19:in `load'
        /usr/bin/rake:19

    Any help?
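
    The trace points at lib/tasks/cucumber.rake:12-13, which references the Cucumber constant before the cucumber gem has been required. A minimal sketch of the usual guard for such a rake file, assuming the standard Cucumber rake task API (the task name and options below are illustrative, not copied from the spot-us project):

        # lib/tasks/cucumber.rake -- sketch only
        begin
          require 'cucumber/rake/task'

          namespace :cucumber do
            Cucumber::Rake::Task.new(:features) do |t|
              t.cucumber_opts = "--format pretty"  # options passed through to the cucumber runner
            end
          end
        rescue LoadError
          desc 'cucumber rake task not available (cucumber gem not installed)'
          task :cucumber do
            abort 'Cucumber rake task is not available. Install the cucumber gem to run features.'
          end
        end

    With the require wrapped in begin/rescue, tasks such as rake db:create can load the Rakefile even on machines where cucumber is not installed.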

    Read the article

  • Virtualization deployment for datacenter

    - by bogha
    Hi, my company is going to deploy an IT infrastructure on a virtual platform. Can you please help me with the following:
    1- Which one do you recommend: Cisco Unified Computing System (Cisco + EMC + VMware) or HP blades (virtualization solution + HP storage)?
    2- I need to install a DNS server, a web server, cPanel for managing hosting packages, and the Microsoft layer of products for use in the corporate infrastructure (Active Directory, local DNS, Exchange Server, DHCP, global catalog). What are the minimum requirements for these servers (in terms of CPU and memory)?
    3- What is the best way to implement a redundant solution in a virtual environment?
    Thank you.

    Read the article

  • How can I create a new Person object correctly in Javascript?

    - by TimDog
    I'm still struggling with this concept. I have two different Person objects, very simply:

        ;Person1 = (function() {
            function P(fname, lname) {
                P.FirstName = fname;
                P.LastName = lname;
                return P;
            }
            P.FirstName = '';
            P.LastName = '';
            var prName = 'private';
            P.showPrivate = function() { alert(prName); };
            return P;
        })();

        ;Person2 = (function() {
            var prName = 'private';
            this.FirstName = '';
            this.LastName = '';
            this.showPrivate = function() { alert(prName); };
            return function(fname, lname) {
                this.FirstName = fname;
                this.LastName = lname;
            }
        })();

    And let's say I invoke them like this:

        var s = new Array();

        //Person1
        s.push(new Person1("sal", "smith"));
        s.push(new Person1("bill", "wonk"));
        alert(s[0].FirstName);
        alert(s[1].FirstName);
        s[1].showPrivate();

        //Person2
        s.push(new Person2("sal", "smith"));
        s.push(new Person2("bill", "wonk"));
        alert(s[2].FirstName);
        alert(s[3].FirstName);
        s[3].showPrivate();

    The Person1 set alerts "bill" twice, then alerts "private" once -- so it recognizes the showPrivate function, but the local FirstName variable gets overwritten. The second Person2 set alerts "sal", then "bill", but it fails when the showPrivate function is called. The new keyword here works as I'd expect, but showPrivate (which I thought was a publicly exposed function within the closure) is apparently not public. I want to get my object to have distinct copies of all local variables and also expose public methods -- I've been studying closures quite a bit, but I'm still confused on this one. Thanks for your help.

    Read the article
