Search Results

Search found 1395 results on 56 pages for 'repo'.

Page 54/56 | < Previous Page | 50 51 52 53 54 55 56  | Next Page >

  • Fedora 11 System - Failed Hard Drive Removed, and Boot gets GRUB Hard Disk Error

    - by Mindful
Greetings, I have a machine with a 120GB ATA drive that has what I thought to be non-essential data on it. I also have a 320GB SATA hard drive with the OS/applications/files (good data I want to keep). My 120GB ATA drive is failing, I believe, as my computer kept slowing to a halt. However, when I remove the drive in the BIOS my computer will not start and says "GRUB Hard Disk Error". I know that my Fedora system has an LVM setup. I am looking to just remove the 120GB drive from "the mix" and just have one hard drive. How do I recover? Thank you. I have access to a Linux Live CD right now and can make any changes. However, it won't boot into my OS - it fails. UPDATE: here's my grub.conf:

    # grub.conf generated by anaconda
    #
    # Note that you do not have to rerun grub after making changes to this file
    # NOTICE: You have a /boot partition. This means that
    # all kernel and initrd paths are relative to /boot/, eg.
    # root (hd1,0)
    # kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
    # initrd /initrd-version.img
    #boot=/dev/sda1
    default=0
    timeout=5
    splashimage=(hd1,0)/grub/splash.xpm.gz
    hiddenmenu
    title Fedora (2.6.30.10-105.2.23.fc11.i686.PAE)
        root (hd1,0)
        kernel /vmlinuz-2.6.30.10-105.2.23.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.30.10-105.2.23.fc11.i686.PAE.img
    title Fedora (2.6.30.9-102.fc11.i686.PAE)
        root (hd1,0)
        kernel /vmlinuz-2.6.30.9-102.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.30.9-102.fc11.i686.PAE.img
    title Fedora (2.6.27.24-170.2.68.fc10.i686.PAE)
        root (hd1,0)
        kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.27.24-170.2.68.fc10.i686.PAE.img
    title Fedora (2.6.27.24-170.2.68.fc10.i686)
        root (hd1,0)
        kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.27.24-170.2.68.fc10.i686.img
    title Fedora (2.6.27.21-170.2.56.fc10.i686)
        root (hd1,0)
        kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.27.21-170.2.56.fc10.i686.img
    title Fedora (2.6.27.19-170.2.35.fc10.i686)
        root (hd1,0)
        kernel /vmlinuz-2.6.27.19-170.2.35.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img
    title Upgrade to Fedora 10 (Cambridge)
        kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://chi-10g-1-mirror.fastsoft.net/pub/linux/fedora/linux/releases/10/Fedora/i386/os/images/install.img ks=hd:UUID=f11769ba-29bc-46de-8c40-a949720a438e:/upgrade/ks.cfg
        initrd /upgrade/initrd.img
    title Win
        rootnoverify (hd0,0)
        chainloader +1
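    If the plan is simply to drop the failing 120GB disk, one approach - a minimal sketch only; device names such as /dev/sda1 and the volume group VolGroup00 are taken from the grub.conf above and must be verified (e.g. with fdisk -l and lvscan) before running anything - is to boot the Live CD, mount the installed system, point grub.conf at the first remaining disk, and reinstall GRUB legacy to its MBR:

        # from the Live CD shell, with the failed 120GB drive physically removed
        sudo su -
        vgchange -ay                                  # activate the LVM volume group
        mount /dev/VolGroup00/LogVol00 /mnt           # root filesystem on LVM
        mount /dev/sda1 /mnt/boot                     # /boot partition, assumed to be sda1
        mount --bind /dev /mnt/dev
        mount --bind /proc /mnt/proc
        chroot /mnt
        sed -i 's/(hd1,0)/(hd0,0)/g' /boot/grub/grub.conf   # the SATA disk is now the first BIOS disk
        grub-install /dev/sda                         # reinstall GRUB to the MBR of the remaining disk
        exit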

    Read the article

  • force unattended install php apt debian squeeze

    - by user1258619
I am trying to do an unattended install via PHP for several packages, but every time the dependency prompt comes up it aborts instead of forcing the answer to be yes. (I have broken apt a few times...) Each time, though, I start off by re-imaging my VPS (testing server), so there isn't an issue of something still being hung or crashed. Can someone tell me what I am doing wrong? Keep in mind this is the 12th version of this script and it has gotten nowhere.

    fwrite(STDOUT, "Root Password:\n");
    $root_pass = chop(fgets(STDIN));
    $file_apt = '/etc/apt/apt.conf.d/70debconf';
    // Open the file to get existing content
    $current_apt = file_get_contents($file_apt);
    // Append the dpkg option to the file
    $current_apt .= "Dpkg::Options {\"--force-confold\";};\n";
    // Write the contents back to the file
    file_put_contents($file_apt, $current_apt);
    $update = shell_exec('echo '.$root_pass.' | DEBIAN_FRONTEND=noninteractive sudo -S apt-get update');
    echo $update;
    $update_upgrade = shell_exec('echo '.$root_pass.' | DEBIAN_FRONTEND=noninteractive sudo -s apt-get upgrade');
    echo $update_upgrade;
    $install_unattended_mysql = shell_exec('echo '.$root_pass.' | DEBIAN_FRONTEND=noninteractive apt-get install --yes --force-yes mysql-server');
    echo $install_unattended_mysql;
    $install_mysql_set_password = shell_exec('mysql -u root -e "UPDATE mysql.user SET password=PASSWORD("'.$root_pass.'") WHERE user="root"; FLUSH PRIVILEGES;');
    echo $install_mysql_set_password;

I have read in a few places that I needed to edit the apt.conf file, so I am doing that here before running an update and an upgrade. The upgrade does abort when it actually has to install something:

    The following packages will be upgraded: apache2 apache2-doc apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common base-files bind9 bind9-host bind9utils debian-archive-keyring dpkg dselect libbind9-60 libc-bin libc6 libdns69 libisc62 libisccc60 libisccfg62 liblwres60 locales
    22 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Need to get 18.4 MB of archives.
    After this operation, 8192 B of additional disk space will be used.
    Do you want to continue [Y/n]? Abort.

I should also note that only a few pieces of software are going to be installed from the apt repos, as I will include some binaries to go along with them.
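    For what it's worth, a common gotcha with this pattern is that sudo resets the environment, so a DEBIAN_FRONTEND=noninteractive set before sudo never reaches apt-get, and the upgrade is run without -y at all. A minimal sketch of the shell commands the script would need to run (the $ROOT_PASS variable is illustrative):

        # -y answers the "Do you want to continue [Y/n]?" prompt;
        # "env" puts DEBIAN_FRONTEND into apt-get's environment despite sudo's env_reset
        echo "$ROOT_PASS" | sudo -S env DEBIAN_FRONTEND=noninteractive \
            apt-get -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" upgrade
        echo "$ROOT_PASS" | sudo -S env DEBIAN_FRONTEND=noninteractive \
            apt-get -y install mysql-server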

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Here's the situation. I have a few git projects with a directory structure layed out more or less like this: simpleproj app www admin demo lib model orm view model user blah ... storeproj app www about mobile fbapp lib model orm view model user message cart product merchant Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward eclipse + egit, because some of our windows guys not used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules. So, my solution has been a rather crude one involving symlinks... lets say my directory structure on my dev box is generally layed out like this: ~/projects/ bigproj .git app lib model (- ~/lib/model/src/) orm (- ~/lib/orm/src/) neatproj .git app lib view (- ~/lib/view/src/) oldproj .git app lib orm (- ~/lib/orm/src/) ~/lib/ model .git src README.md orm .git src COPYING view .git src ...the symlinks link inside of directory with the git repo, so eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within eclipse. After committing from eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable also, probably more so than keeping a separate history of the libs for each project... but eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing. So this brings me to my question. I'd like to know how to either: get eclipse+egit to see the symlinks as symlinks if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D Note: tried to tag this as git-submodules, but was not allowed :( * should I make them relative or absolute? Either way it's a mess. Also will symlinks will work on windows? i know there's something similar but you need a 3rd party tool to manage them AFAIK, i doubt these would translate well.
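    For comparison, the plain command-line submodule setup that the symlinks are standing in for would look roughly like the sketch below; the repository URLs and paths are illustrative, not taken from the question:

        # inside the superproject, record each shared lib as a submodule
        cd ~/projects/bigproj
        git submodule add git@yourserver:lib/model.git lib/model
        git submodule add git@yourserver:lib/orm.git lib/orm
        git commit -m "track shared libs as submodules"

        # on another machine / fresh clone
        git clone git@yourserver:bigproj.git
        cd bigproj
        git submodule update --init --recursive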

    Read the article

  • Alternative Windows Offline Files + Windows Backup + Previous Version Setup

    - by Herson
Currently our documents are all hosted on a Windows 7 box. Users can access the files via a Windows share, and the documents are available offline (a Windows 7 feature). The documents are backed up daily by the Windows 7 Backup and Restore utility. Users can access previous versions of a file (from the backups) using Windows Explorer's "previous versions" feature. This setup is currently working well, except for the following: We would prefer to have access to hourly versions of each file, not daily ones. The previous-versions mechanism is tied to the backup mechanism. Windows 7 performs a full backup every week and an incremental backup every day. The previous versions of a file are simply whatever is available in the backups. If you have 20GB of documents and want to maintain at least three (3) years of history, you will use at minimum 3 years * 52 weeks * 20GB, or about 3TB, even if there are few changes to the documents. It's a pretty inefficient use of space. Looking up previous versions of a file is very slow (tens of minutes). This is probably related to the previous issue - Windows has to traverse all of its backups. I am considering using SVN + autocommit/autoupdate with TortoiseSVN. It would have the following advantages: Backups are easy and will also back up the whole history of each document (just back up the repository). Previous versions can be created frequently; I think an svn commit/update could run every two minutes or so. Users can sync over the net. However, I can see the following issues: More conflicts than in the original setup, because multiple users can now edit the same file even when both are online, i.e. can connect to the SVN repo. The users can of course lock the file before editing, but that would mean they have to adjust. Delay in propagation of file changes. With Windows 7 file sharing, changes made by one online user are instantaneously available to other online users. With the SVN setup, changes will only be propagated when the users execute the svn add/commit/update sequence, so the delay will probably be a few minutes. This workflow will no longer work: "Hi, I just edited document X, can you have a quick look?" I would like to ask the community's opinion on alternative setups, or improvements to the above setup to work out the kinks.
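    To make the autocommit/autoupdate idea concrete, the scheduled job could be as small as the sketch below; the paths, schedule, and working-copy location are assumptions, not part of the existing setup:

        # cron entry: sync the shared working copy every two minutes
        */2 * * * * /usr/local/bin/svn-autosync.sh >> /var/log/svn-autosync.log 2>&1

        #!/bin/sh
        # svn-autosync.sh - pick up new files, commit local edits, pull remote changes
        cd /srv/documents || exit 1
        svn add --force . -q                          # schedule any unversioned files
        svn commit -m "autocommit $(date '+%F %T')" -q
        svn update -q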

    Read the article

  • Linux service --status-all shows "Firewall is stopped." what service does firewall refer to?

    - by codewaggle
    I have a development server with the lamp stack running CentOS: [Prompt]# cat /etc/redhat-release CentOS release 5.8 (Final) [Prompt]# cat /proc/version Linux version 2.6.18-308.16.1.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-52)) #1 SMP Tue Oct 2 22:50:05 EDT 2012 [Prompt]# yum info iptables Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.anl.gov * extras: centos.mirrors.tds.net * rpmfusion-free-updates: mirror.us.leaseweb.net * rpmfusion-nonfree-updates: mirror.us.leaseweb.net * updates: mirror.steadfast.net Installed Packages Name : iptables Arch : x86_64 Version : 1.3.5 Release : 9.1.el5 Size : 661 k Repo : installed .... Snip.... When I run: service --status-all Part of the output looks like this: .... Snip.... httpd (pid xxxxx) is running... Firewall is stopped. Table: filter Chain INPUT (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) num target prot opt source destination Chain RH-Firewall-1-INPUT (2 references) ....Snip.... iptables has been loaded to the kernel and is active as represented by the rules being displayed. Checking just the iptables returns the rules just like status all does: [Prompt]# service iptables status Table: filter Chain INPUT (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) num target prot opt source destination Chain RH-Firewall-1-INPUT (2 references) .... Snip.... Starting or restarting iptables indicates that the iptables have been loaded to the kernel successfully: [Prompt]# service iptables restart Flushing firewall rules: [ OK ] Setting chains to policy ACCEPT: filter [ OK ] Unloading iptables modules: [ OK ] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_n[ OK ] [Prompt]# service iptables start Flushing firewall rules: [ OK ] Setting chains to policy ACCEPT: filter [ OK ] Unloading iptables modules: [ OK ] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_n[ OK ] I've googled "Firewall is stopped." and read a number of iptables guides as well as the RHEL documentation, but no luck. As far as I can tell, there isn't a "Firewall" service, so what is the line "Firewall is stopped." referring to?
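    Since service --status-all simply runs every script in /etc/init.d with the status argument, the message can be traced back to whichever script emits it. A quick check (a sketch, not a definitive diagnosis):

        # find the initscript(s) that print the message
        grep -rl "Firewall is stopped" /etc/init.d/
        # on CentOS 5 the iptables/ip6tables initscripts print it when their
        # /var/lock/subsys/<name> file is missing, so check those too
        ls -l /var/lock/subsys/iptables /var/lock/subsys/ip6tables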

    Read the article

  • Python script is exiting with no output and I have no idea why

    - by Adam Tuttle
    I'm attempting to debug a Subversion post-commit hook that calls some python scripts. What I've been able to determine so far is that when I run post-commit.bat manually (I've created a wrapper for it to make it easier) everything succeeds, but when SVN runs it one particular step doesn't work. We're using CollabNet SVNServe, which I know from the documentation removes all environment variables. This had caused some problems earlier, but shouldn't be an issue now. Before Subversion calls a hook script, it removes all variables - including $PATH on Unix, and %PATH% on Windows - from the environment. Therefore, your script can only run another program if you spell out that program's absolute name. The relevant portion of post-commit.bat is: echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log set SITENAME=staging set SVNPATH=branches/staging/wwwroot/ "C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^ --svnUser="svnusername" ^ --svnPass="svnpassword" ^ --ftp-user=ftpuser ^ --ftp-password=ftppassword ^ --ftp-remote-dir=/ ^ --access-url=svn://10.0.100.6/company ^ --status-file="C:\svn-repos\company\hooks\svn2ftp-%SITENAME%.dat" ^ --project-directory=%SVNPATH% "staging.company.com" %1 %2 >> c:\svn-repos\company\hooks\svn2ftp.out.log echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log When I run post-commit.bat manually, for example: post-commit c:\svn-repos\company 12345, I see output like the following in svn2ftp.out.log: -------------------------- args1: c:\svn-repos\company args0: staging.company.com abspath: c:\svn-repos\company project_dir: branches/staging/wwwroot/ local_repos_path: c:\svn-repos\company getting youngest revision... done, up-to-date -------------------------- However, when I commit something to the repo and it runs automatically, the output is: -------------------------- -------------------------- svn2ftp.py is a bit long, so I apologize but here goes. I'll have some notes/disclaimers about its contents below it. #!/usr/bin/env python """Usage: svn2ftp.py [OPTION...] FTP-HOST REPOS-PATH Upload to FTP-HOST changes committed to the Subversion repository at REPOS-PATH. Uses svn diff --summarize to only propagate the changed files Options: -?, --help Show this help message. -u, --ftp-user=USER The username for the FTP server. Default: 'anonymous' -p, --ftp-password=P The password for the FTP server. Default: '@' -P, --ftp-port=X Port number for the FTP server. Default: 21 -r, --ftp-remote-dir=DIR The remote directory that is expected to resemble the repository project directory -a, --access-url=URL This is the URL that should be used when trying to SVN export files so that they can be uploaded to the FTP server -s, --status-file=PATH Required. This script needs to store the last successful revision that was transferred to the server. PATH is the location of this file. -d, --project-directory=DIR If the project you are interested in sending to the FTP server is not under the root of the repository (/), set this parameter. Example: -d 'project1/trunk/' This should NOT start with a '/'. 2008.5.2 CKS Fixed possible Windows-related bug with tempfile, where the script didn't have permission to write to the tempfile. Replaced this with a open()-created file created in the CWD. 2008.5.13 CKS Added error logging. Added exception for file-not-found errors when deleting files. 
2008.5.14 CKS Change file open to 'rb' mode, to prevent Python's universal newline support from stripping CR characters, causing later comparisons between FTP and SVN to report changes. """ try: import sys, os import logging logging.basicConfig( level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', filename='svn2ftp.debug.log', filemode='a' ) console = logging.StreamHandler() console.setLevel(logging.ERROR) logging.getLogger('').addHandler(console) import getopt, tempfile, smtplib, traceback, subprocess from io import StringIO import pysvn import ftplib import inspect except Exception as e: logging.error(e) #capture the location of the error frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace) print(stack_trace) #end capture sys.exit(1) #defaults host = "" user = "anonymous" password = "@" port = 21 repo_path = "" local_repos_path = "" status_file = "" project_directory = "" remote_base_directory = "" toAddrs = "[email protected]" youngest_revision = "" def email(toAddrs, message, subject, fromAddr='[email protected]'): headers = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n" % (fromAddr, toAddrs, subject) message = headers + message logging.info('sending email to %s...' % toAddrs) server = smtplib.SMTP('smtp.company.com') server.set_debuglevel(1) server.sendmail(fromAddr, toAddrs, message) server.quit() logging.info('email sent') def captureErrorMessage(e): sout = StringIO() traceback.print_exc(file=sout) errorMessage = '\n'+('*'*80)+('\n%s'%e)+('\n%s\n'%sout.getvalue())+('*'*80) return errorMessage def usage_and_exit(errmsg): """Print a usage message, plus an ERRMSG (if provided), then exit. If ERRMSG is provided, the usage message is printed to stderr and the script exits with a non-zero error code. Otherwise, the usage message goes to stdout, and the script exits with a zero errorcode.""" if errmsg is None: stream = sys.stdout else: stream = sys.stderr print(__doc__, file=stream) if errmsg: print("\nError: %s" % (errmsg), file=stream) sys.exit(2) sys.exit(0) def read_args(): global host global user global password global port global repo_path global local_repos_path global status_file global project_directory global remote_base_directory global youngest_revision try: opts, args = getopt.gnu_getopt(sys.argv[1:], "?u:p:P:r:a:s:d:SU:SP:", ["help", "ftp-user=", "ftp-password=", "ftp-port=", "ftp-remote-dir=", "access-url=", "status-file=", "project-directory=", "svnUser=", "svnPass=" ]) except getopt.GetoptError as msg: usage_and_exit(msg) for opt, arg in opts: if opt in ("-?", "--help"): usage_and_exit() elif opt in ("-u", "--ftp-user"): user = arg elif opt in ("-p", "--ftp-password"): password = arg elif opt in ("-SU", "--svnUser"): svnUser = arg elif opt in ("-SP", "--svnPass"): svnPass = arg elif opt in ("-P", "--ftp-port"): try: port = int(arg) except ValueError as msg: usage_and_exit("Invalid value '%s' for --ftp-port." 
% (arg)) if port < 1 or port > 65535: usage_and_exit("Value for --ftp-port must be a positive integer less than 65536.") elif opt in ("-r", "--ftp-remote-dir"): remote_base_directory = arg elif opt in ("-a", "--access-url"): repo_path = arg elif opt in ("-s", "--status-file"): status_file = os.path.abspath(arg) elif opt in ("-d", "--project-directory"): project_directory = arg if len(args) != 3: print(str(args)) usage_and_exit("host and/or local_repos_path not specified (" + len(args) + ")") host = args[0] print("args1: " + args[1]) print("args0: " + args[0]) print("abspath: " + os.path.abspath(args[1])) local_repos_path = os.path.abspath(args[1]) print('project_dir:',project_directory) youngest_revision = int(args[2]) if status_file == "" : usage_and_exit("No status file specified") def main(): global host global user global password global port global repo_path global local_repos_path global status_file global project_directory global remote_base_directory global youngest_revision read_args() #repository,fs_ptr #get youngest revision print("local_repos_path: " + local_repos_path) print('getting youngest revision...') #youngest_revision = fs.youngest_rev(fs_ptr) assert youngest_revision, "Unable to lookup youngest revision." last_sent_revision = get_last_revision() if youngest_revision == last_sent_revision: # no need to continue. we should be up to date. print('done, up-to-date') return if last_sent_revision or youngest_revision < 10: # Only compare revisions if the DAT file contains a valid # revision number. Otherwise we risk waiting forever while # we parse and uploading every revision in the repo in the case # where a repository is retroactively configured to sync with ftp. pysvn_client = pysvn.Client() pysvn_client.callback_get_login = get_login rev1 = pysvn.Revision(pysvn.opt_revision_kind.number, last_sent_revision) rev2 = pysvn.Revision(pysvn.opt_revision_kind.number, youngest_revision) summary = pysvn_client.diff_summarize(repo_path, rev1, repo_path, rev2, True, False) print('summary len:',len(summary)) if len(summary) > 0 : print('connecting to %s...' % host) ftp = FTPClient(host, user, password) print('connected to %s' % host) ftp.base_path = remote_base_directory print('set remote base directory to %s' % remote_base_directory) #iterate through all the differences between revisions for change in summary : #determine whether the path of the change is relevant to the path that is being sent, and modify the path as appropriate. print('change path:',change.path) ftp_relative_path = apply_basedir(change.path) print('ftp rel path:',ftp_relative_path) #only try to sync path if the path is in our project_directory if ftp_relative_path != "" : is_file = (change.node_kind == pysvn.node_kind.file) if str(change.summarize_kind) == "delete" : print("deleting: " + ftp_relative_path) try: ftp.delete_path("/" + ftp_relative_path, is_file) except ftplib.error_perm as e: if 'cannot find the' in str(e) or 'not found' in str(e): # Log, but otherwise ignore path-not-found errors # when deleting, since it's not a disaster if the file # we want to delete is already gone. 
logging.error(captureErrorMessage(e)) else: raise elif str(change.summarize_kind) == "added" or str(change.summarize_kind) == "modified" : local_file = "" if is_file : local_file = svn_export_temp(pysvn_client, repo_path, rev2, change.path) print("uploading file: " + ftp_relative_path) ftp.upload_path("/" + ftp_relative_path, is_file, local_file) if is_file : os.remove(local_file) elif str(change.summarize_kind) == "normal" : print("skipping 'normal' element: " + ftp_relative_path) else : raise str("Unknown change summarize kind: " + str(change.summarize_kind) + ", path: " + ftp_relative_path) ftp.close() #write back the last revision that was synced print("writing last revision: " + str(youngest_revision)) set_last_revision(youngest_revision) # todo: undo def get_login(a,b,c,d): #arguments don't matter, we're always going to return the same thing try: return True, "svnUsername", "svnPassword", True except Exception as e: logging.error(e) #capture the location of the error frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace) #end capture sys.exit(1) #functions for persisting the last successfully synced revision def get_last_revision(): if os.path.isfile(status_file) : f=open(status_file, 'r') line = f.readline() f.close() try: i = int(line) except ValueError: i = 0 else: i = 0 f = open(status_file, 'w') f.write(str(i)) f.close() return i def set_last_revision(rev) : f = open(status_file, 'w') f.write(str(rev)) f.close() #augmented ftp client class that can work off a base directory class FTPClient(ftplib.FTP) : def __init__(self, host, username, password) : self.base_path = "" self.current_path = "" ftplib.FTP.__init__(self, host, username, password) def cwd(self, path) : debug_path = path if self.current_path == "" : self.current_path = self.pwd() print("pwd: " + self.current_path) if not os.path.isabs(path) : debug_path = self.base_path + "<" + path path = os.path.join(self.current_path, path) elif self.base_path != "" : debug_path = self.base_path + ">" + path.lstrip("/") path = os.path.join(self.base_path, path.lstrip("/")) path = os.path.normpath(path) #by this point the path should be absolute. if path != self.current_path : print("change from " + self.current_path + " to " + debug_path) ftplib.FTP.cwd(self, path) self.current_path = path else : print("staying put : " + self.current_path) def cd_or_create(self, path) : assert os.path.isabs(path), "absolute path expected (" + path + ")" try: self.cwd(path) except ftplib.error_perm as e: for folder in path.split('/'): if folder == "" : self.cwd("/") continue try: self.cwd(folder) except: print("mkd: (" + path + "):" + folder) self.mkd(folder) self.cwd(folder) def upload_path(self, path, is_file, local_path) : if is_file: (path, filename) = os.path.split(path) self.cd_or_create(path) # Use read-binary to avoid universal newline support from stripping CR characters. f = open(local_path, 'rb') self.storbinary("STOR " + filename, f) f.close() else: self.cd_or_create(path) def delete_path(self, path, is_file) : (path, filename) = os.path.split(path) print("trying to delete: " + path + ", " + filename) self.cwd(path) try: if is_file : self.delete(filename) else: self.delete_path_recursive(filename) except ftplib.error_perm as e: if 'The system cannot find the' in str(e) or '550 File not found' in str(e): # Log, but otherwise ignore path-not-found errors # when deleting, since it's not a disaster if the file # we want to delete is already gone. 
logging.error(captureErrorMessage(e)) else: raise def delete_path_recursive(self, path): if path == "/" : raise "WARNING: trying to delete '/'!" for node in self.nlst(path) : if node == path : #it's a file. delete and return self.delete(path) return if node != "." and node != ".." : self.delete_path_recursive(os.path.join(path, node)) try: self.rmd(path) except ftplib.error_perm as msg : sys.stderr.write("Error deleting directory " + os.path.join(self.current_path, path) + " : " + str(msg)) # apply the project_directory setting def apply_basedir(path) : #remove any leading stuff (in this case, "trunk/") and decide whether file should be propagated if not path.startswith(project_directory) : return "" return path.replace(project_directory, "", 1) def svn_export_temp(pysvn_client, base_path, rev, path) : # Causes access denied error. Couldn't deduce Windows-perm issue. # It's possible Python isn't garbage-collecting the open file-handle in time for pysvn to re-open it. # Regardless, just generating a simple filename seems to work. #(fd, dest_path) = tempfile.mkstemp() dest_path = tmpName = '%s.tmp' % __file__ exportPath = os.path.join(base_path, path).replace('\\','/') print('exporting %s to %s' % (exportPath, dest_path)) pysvn_client.export( exportPath, dest_path, force=False, revision=rev, native_eol=None, ignore_externals=False, recurse=True, peg_revision=rev ) return dest_path if __name__ == "__main__": logging.info('svnftp.start') try: main() logging.info('svnftp.done') except Exception as e: # capture the location of the error for debug purposes frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace[:-1]) print(stack_trace) # end capture error_text = '\nFATAL EXCEPTION!!!\n'+captureErrorMessage(e) subject = "ALERT: SVN2FTP Error" message = """An Error occurred while trying to FTP an SVN commit. repo_path = %(repo_path)s\n local_repos_path = %(local_repos_path)s\n project_directory = %(project_directory)s\n remote_base_directory = %(remote_base_directory)s\n error_text = %(error_text)s """ % globals() email(toAddrs, message, subject) logging.error(e) Notes/Disclaimers: I have basically no python training so I'm learning as I go and spending lots of time reading docs to figure stuff out. The body of get_login is in a try block because I was getting strange errors saying there was an unhandled exception in callback_get_login. Never figured out why, but it seems fine now. Let sleeping dogs lie, right? The username and password for get_login are currently hard-coded (but correct) just to eliminate variables and try to change as little as possible at once. (I added the svnuser and svnpass arguments to the existing argument parsing.) So that's where I am. I can't figure out why on earth it's not printing anything into svn2ftp.out.log. If you're wondering, the output for one of these failed attempts in svn2ftp.debug.log is: 2012-09-06 15:18:12,496 INFO svnftp.start 2012-09-06 15:18:12,496 INFO svnftp.done And it's no different on a successful run. So there's nothing useful being logged. I'm lost. I've gone way down the rabbit hole on this one, and don't know where to go from here. Any ideas?
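    One thing worth checking (a hedged suggestion, not a confirmed fix): the batch file only redirects stdout, so if the Python process dies before its first print - for example while importing pysvn under the stripped-down environment SVNServe provides - the traceback goes to stderr and svn2ftp.out.log stays empty. Capturing stderr as well would at least make the failure visible:

        rem same call as in post-commit.bat above, with stderr folded into the log;
        rem the argument list is elided here for brevity
        "C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^
            ...same arguments as above... ^
            "staging.company.com" %1 %2 >> c:\svn-repos\company\hooks\svn2ftp.out.log 2>&1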

    Read the article

  • LIMBO fails on startup with Internal errors - invalid parameters received

    - by user61262
    I installed LIMBO from the Humble Bundle V and as far as I am aware, this has wine packaged with it (I also installed the latest from the repo's in case is was because of that). However the game doesn't even start and fails with the message: Wine Program Error Internal errors - invalid parameters received. Is there a way to log the error or does anyone know why this happens? This question was asked previously but it seems to have disappeared. My Graphics cards is a Geforece GT 250 Cheers ice. [edit: Wine outputs the following error: wine /opt/limbo/support/limbo/drive_c/Program\ Files/limbo/limbo.exe fixme:system:SystemParametersInfoW Unimplemented action: 59 (SPI_SETSTICKYKEYS) fixme:system:SystemParametersInfoW Unimplemented action: 53 (SPI_SETTOGGLEKEYS) fixme:system:SystemParametersInfoW Unimplemented action: 51 (SPI_SETFILTERKEYS) fixme:win:EnumDisplayDevicesW ((null),0,0x32f580,0x00000000), stub! err:x11settings:X11DRV_ChangeDisplaySettingsEx No matching mode found 1920x1080x32 @60! (XRandR) err:xrandr:X11DRV_XRandR_SetCurrentMode Resolution change not successful -- perhaps display has changed? wine: Unhandled page fault on read access to 0x00000000 at address 0x48213e (thread 0009), starting debugger... The debugger has the following output: Unhandled exception: page fault on read access to 0x00000000 in 32-bit code (0x0048213e). Register dump: CS:0073 SS:007b DS:007b ES:007b FS:0033 GS:003b EIP:0048213e ESP:0032f9f4 EBP:0037cdd0 EFLAGS:00010202( R- -- I - - - ) EAX:00000000 EBX:00000000 ECX:00000000 EDX:0037cf4c ESI:0037cda8 EDI:0037cdcc Stack dump: 0x0032f9f4: 0037cda8 0034c708 7bc35120 00000000 0x0032fa04: 0037cda8 0032fa38 0079fc58 00000000 0x0032fa14: 0048b7d4 00000001 0037cdcc 00000001 0x0032fa24: 00000780 00000438 0034c620 00000000 0x0032fa34: 0034c708 0032fa78 007a04e2 00000002 0x0032fa44: 0048c4bc 00000780 00000438 0037cda8 Backtrace: =>0 0x0048213e in limbo (+0x8213e) (0x0037cdd0) 0x0048213e: movl 0x0(%eax),%edx Modules: Module Address Debug info Name (103 modules) PE 400000- 926000 Export limbo PE 10000000-101ff000 Deferred d3dx9_43 ELF 79bb3000-7b800000 Deferred libnvidia-glcore.so.295.53 ELF 7b800000-7ba15000 Deferred kernel32<elf> \-PE 7b810000-7ba15000 \ kernel32 ELF 7bc00000-7bcc3000 Deferred ntdll<elf> \-PE 7bc10000-7bcc3000 \ ntdll ELF 7bf00000-7bf04000 Deferred <wine-loader> ELF 7d7e0000-7d7e4000 Deferred libnvidia-tls.so.295.53 ELF 7d7e4000-7d8bc000 Deferred libgl.so.1 ELF 7d9d0000-7d9d9000 Deferred librt.so.1 ELF 7d9d9000-7d9de000 Deferred libgpg-error.so.0 ELF 7d9de000-7d9f6000 Deferred libresolv.so.2 ELF 7d9f6000-7d9fa000 Deferred libkeyutils.so.1 ELF 7d9fa000-7da43000 Deferred libdbus-1.so.3 ELF 7da43000-7da55000 Deferred libp11-kit.so.0 ELF 7da55000-7dada000 Deferred libgcrypt.so.11 ELF 7dada000-7daec000 Deferred libtasn1.so.3 ELF 7daec000-7daf5000 Deferred libkrb5support.so.0 ELF 7daf5000-7dafa000 Deferred libcom_err.so.2 ELF 7dafa000-7db22000 Deferred libk5crypto.so.3 ELF 7db22000-7dbf1000 Deferred libkrb5.so.3 ELF 7dbf1000-7dc03000 Deferred libavahi-client.so.3 ELF 7dc03000-7dc11000 Deferred libavahi-common.so.3 ELF 7dc11000-7dcd5000 Deferred libgnutls.so.26 ELF 7dcd5000-7dd13000 Deferred libgssapi_krb5.so.2 ELF 7dd13000-7dd66000 Deferred libcups.so.2 ELF 7dd94000-7ddc8000 Deferred uxtheme<elf> \-PE 7dda0000-7ddc8000 \ uxtheme ELF 7ddc8000-7ddd3000 Deferred libxcursor.so.1 ELF 7ddd4000-7dde7000 Deferred gnome-keyring-pkcs11.so ELF 7de47000-7de4d000 Deferred libxfixes.so.3 ELF 7deac000-7ded6000 Deferred libexpat.so.1 ELF 7ded6000-7df0a000 Deferred 
libfontconfig.so.1 ELF 7df0a000-7df1a000 Deferred libxi.so.6 ELF 7df1a000-7df1e000 Deferred libxcomposite.so.1 ELF 7df1e000-7df27000 Deferred libxrandr.so.2 ELF 7df27000-7df31000 Deferred libxrender.so.1 ELF 7df31000-7df37000 Deferred libxxf86vm.so.1 ELF 7df37000-7df3b000 Deferred libxinerama.so.1 ELF 7df3b000-7df5d000 Deferred imm32<elf> \-PE 7df40000-7df5d000 \ imm32 ELF 7df5d000-7df64000 Deferred libxdmcp.so.6 ELF 7df64000-7df85000 Deferred libxcb.so.1 ELF 7df85000-7df9f000 Deferred libice.so.6 ELF 7df9f000-7e0d3000 Deferred libx11.so.6 ELF 7e0d3000-7e0e5000 Deferred libxext.so.6 ELF 7e0e5000-7e178000 Deferred winex11<elf> \-PE 7e0f0000-7e178000 \ winex11 ELF 7e178000-7e18e000 Deferred libz.so.1 ELF 7e18e000-7e228000 Deferred libfreetype.so.6 ELF 7e228000-7e247000 Deferred libtinfo.so.5 ELF 7e247000-7e269000 Deferred libncurses.so.5 ELF 7e27d000-7e292000 Deferred xinput1_3<elf> \-PE 7e280000-7e292000 \ xinput1_3 ELF 7e292000-7e2a6000 Deferred psapi<elf> \-PE 7e2a0000-7e2a6000 \ psapi ELF 7e2a6000-7e304000 Deferred dbghelp<elf> \-PE 7e2b0000-7e304000 \ dbghelp ELF 7e304000-7e391000 Deferred msvcrt<elf> \-PE 7e320000-7e391000 \ msvcrt ELF 7e391000-7e4c5000 Deferred wined3d<elf> \-PE 7e3a0000-7e4c5000 \ wined3d ELF 7e4c5000-7e4fe000 Deferred d3d9<elf> \-PE 7e4d0000-7e4fe000 \ d3d9 ELF 7e4fe000-7e573000 Deferred rpcrt4<elf> \-PE 7e510000-7e573000 \ rpcrt4 ELF 7e573000-7e67b000 Deferred ole32<elf> \-PE 7e590000-7e67b000 \ ole32 ELF 7e67b000-7e697000 Deferred dinput8<elf> \-PE 7e680000-7e697000 \ dinput8 ELF 7e697000-7e6d1000 Deferred winspool<elf> \-PE 7e6a0000-7e6d1000 \ winspool ELF 7e6d1000-7e7c9000 Deferred comctl32<elf> \-PE 7e6e0000-7e7c9000 \ comctl32 ELF 7e7c9000-7e833000 Deferred shlwapi<elf> \-PE 7e7e0000-7e833000 \ shlwapi ELF 7e833000-7ea44000 Deferred shell32<elf> \-PE 7e840000-7ea44000 \ shell32 ELF 7ea44000-7eb23000 Deferred comdlg32<elf> \-PE 7ea50000-7eb23000 \ comdlg32 ELF 7eb23000-7eb3c000 Deferred version<elf> \-PE 7eb30000-7eb3c000 \ version ELF 7eb3c000-7eb9c000 Deferred advapi32<elf> \-PE 7eb50000-7eb9c000 \ advapi32 ELF 7eb9c000-7ec59000 Deferred gdi32<elf> \-PE 7ebb0000-7ec59000 \ gdi32 ELF 7ec59000-7ed99000 Deferred user32<elf> \-PE 7ec70000-7ed99000 \ user32 ELF 7ef99000-7efa6000 Deferred libnss_files.so.2 ELF 7efa6000-7efc0000 Deferred libnsl.so.1 ELF 7efc0000-7efec000 Deferred libm.so.6 ELF 7efee000-7eff4000 Deferred libuuid.so.1 ELF 7eff4000-7f000000 Deferred libnss_nis.so.2 ELF b7411000-b7415000 Deferred libxau.so.6 ELF b7415000-b741e000 Deferred libnss_compat.so.2 ELF b741f000-b7424000 Deferred libdl.so.2 ELF b7424000-b75ca000 Deferred libc.so.6 ELF b75cb000-b75e6000 Deferred libpthread.so.0 ELF b75e9000-b75f2000 Deferred libsm.so.6 ELF b75fa000-b773c000 Dwarf libwine.so.1 ELF b773e000-b7760000 Deferred ld-linux.so.2 ELF b7760000-b7761000 Deferred [vdso].so Threads: process tid prio (all id:s are in hex) 00000008 (D) Z:\opt\limbo\support\limbo\drive_c\Program Files\limbo\limbo.exe 00000009 0 <== 0000000e services.exe 00000020 0 0000001f 0 00000019 0 00000018 0 00000017 0 00000015 0 00000010 0 0000000f 0 00000012 winedevice.exe 0000001d 0 0000001a 0 00000014 0 00000013 0 0000001b plugplay.exe 00000021 0 0000001e 0 0000001c 0 00000022 explorer.exe 00000023 0 System information: Wine build: wine-1.4 Platform: i386 Host system: Linux Host version: 3.2.0-24-generic-pae
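    To capture more context than the crash dialog gives, Wine's output can be redirected to a file and its debug verbosity raised; the debug channels below are only examples, and given the failed XRandR mode switch in the log, enabling "Emulate a virtual desktop" in winecfg may also be worth a try:

        cd /opt/limbo/support/limbo/drive_c/Program\ Files/limbo
        WINEDEBUG=+seh,+d3d wine limbo.exe > ~/limbo-wine.log 2>&1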

    Read the article

  • Solaris 11 Live CD-based installation

    - by AndrasF
    Instead of the two installation procedures promised in the previous part, in this post I have to restrict myself to the Live CD variant. I had not expected that even demonstrating this would take more than 50 screenshots, so I had to change my earlier plan. The Solaris 11 Live CD installation is aimed primarily at desktop users and is supported exclusively on x86 machines (even though SPARC systems also have graphics cards - e.g. the T4-1). The process breaks into two parts: first the guest machine is set up in a VirtualBox environment, then Solaris 11 is installed onto the virtual machine.
    HCL and helper tools (DDT, DDU): Before installing the Solaris operating system, it is worth checking whether our physical system is supported. The already-mentioned hardware compatibility list (HCL) is well suited for this, as are the following two utilities: Device Detection Tool and Device Driver Utility. Both applications can survey the hardware components of our system and check their driver coverage. The difference between them is that DDT needs Java to run, while DDU requires Solaris. The latter will come up briefly during the installation.
    Where to download the installation media: Apart from a network installation (*), we need installation media, which can be downloaded from the page below. It is worth downloading all three files, as well as the so-called repository medium containing the packages (the last item in the following list): sol-11-1111-live-x86.iso, sol-11-1111-text-x86.iso, sol-11-1111-ai-x86.iso, sol-11-1111-repo-full.iso. The first three variants are also available in bootable USB form - in that case the file names end in usb instead of iso. A short note on the purpose of each kit can be found in the previous blog entry (link). If we want to install onto a SPARC system, the files containing 'sparc' instead of 'x86' will be needed. (*) - it is also possible to perform the installation over the network by booting from the AI media; this matters when the target machine has no PXE (Preboot Execution Environment) boot support.
    Configuring VirtualBox: Solaris 11 can also be used as a guest in a virtual environment, without dedicating a separate physical machine to it. VirtualBox offers a convenient way to do this. The installer or package matching our host (Windows, Unix, Linux) - currently version 4.1.16 is the latest - and the user manual, which also covers installation, can be downloaded from the product page. After a successful installation, the following steps lead to the new virtual machine: 1. After starting VBox, the main window shows our existing virtual machines (Sol11demo, Sol11u1b07, Sol11.1B16, Sun_ZFS_Storage_7000) and the main attributes of the currently selected one (Sol11demo): name, memory size, list of virtual storage devices, etc. 2. Clicking the New button starts the virtual machine creation wizard. 3. Next we give the guest a name and select the operating system type (with a descriptive name VirtualBox can pick the operating system family itself, and we only have to set the version): enter Solaris11 as the name and choose the 64-bit variant (provided our host supports it). 4. For the installation and the first steps, 1536MB of memory is perfectly adequate (this can be changed later as needed). 5. Like its physical counterparts, no virtual machine can exist without a hard disk (in this case a virtual disk). We can use an existing one (a file containing a virtual disk) or create a brand new one; let's stick with the latter (Create new hard disk). 6. Of the possible formats - for simplicity's sake - let's use the offered default type (VDI - VirtualBox Disk Image). 7. During creation the virtual disk can either be allocated in one go (Fixed size) or grown dynamically in several steps (Dynamically allocated). The first variant puts much less load on the system, while the advantage of the second is that it saves space. Choose the fixed-size variant. 8. Only one piece of data is still unknown to VirtualBox: the size of the virtual disk to be created. An 8GB area is suitable here for getting started. 9. If every setting was entered correctly, pressing the Create button starts the creation of the virtual disk. 10. Depending on the values given, this operation finishes within a few minutes. 11. After a similar confirmation (pressing the Create button) the creation of the requested virtual machine begins as well. 12. After it completes successfully, the new virtual machine immediately appears among the available virtual machines in the list on the left of the main window. The blog post is being updated continuously... the remaining content of this part will be added to the page soon.
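    For the record, the same guest can be created headlessly with VBoxManage, mirroring the GUI steps above; the VM name, disk size, and ISO path are illustrative:

        VBoxManage createvm --name Solaris11 --ostype Solaris11_64 --register
        VBoxManage modifyvm Solaris11 --memory 1536
        VBoxManage createhd --filename Solaris11.vdi --size 8192 --variant Fixed
        VBoxManage storagectl Solaris11 --name SATA --add sata
        VBoxManage storageattach Solaris11 --storagectl SATA --port 0 --device 0 --type hdd --medium Solaris11.vdi
        VBoxManage storageattach Solaris11 --storagectl SATA --port 1 --device 0 --type dvddrive --medium sol-11-1111-live-x86.iso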

    Read the article

  • Dealing with "I-am-cool-and-you-are-dumb" manager [closed]

    - by Software Guy
    I have been working with a software company for about 6 months now. I like the projects I work on there and I really like all the people there except for 1 guy. That guy is technically smart, and he is a co-founder of the company. He is an okay guy in person (the kind you wouldn't want to care about much) but things get tricky when he is your manager. In general I am all okay but there are times when I feel I am not being treated fairly: He doesn't give much thought to when he makes mistakes and when I do something similar, he is super critical. Recently he went as far as to say "I am not sure if I can trust you with this feature". The detais of this specific case are this: I was working on this feature, and I was already a couple of hours over my normal working hours, and then I decided to stop and continue tomorrow. We use git, and I like to commit changes locally and only push when I feel they are ready. This manager insists that I push all the changes to the central repo (in case my hard drive crashes). So I push the change, and the ticket is marked as "to be tested". Next day I come in, he sits next to me and starts complaining and says that I posted above. I really didn't know what to say, I tried to explain to him that the ticket is still being worked upon but he didn't seem to listen. He interrupts me in-between when I am coding, which I do not mind, but when I do that same, his face turns like this :| and reacts as if his work was super important and I am just wasting his time. He asks me to accumulate all questions, and then ask him altogether which is not always possible, as you need a clarification before you can continue on a feature implementation. And when I am coding, he talks on the phone with his customers next to me (when he can go to the meeting room with his laptop) and doesn't care. He made me switch to a whole new IDE (from Netbeans to a commercial IDE costing a lot of money) for a really tiny feature (which I later found out was in Netbeans as well!). I didn't make a big deal out of it as I am equally comfortable working with this new IDE, but I couldn't get the science behind his obsession. He said this feature makes sure that if any method is updated by a programmer, the IDE will turn the method name to red in places where it is used. I told him that I do not have a problem since I always search for method usage in the project and make sure its updated. IDEs even have refactoring features for exactly that, but... I recently implemented a feature for a project, and I was happy about it and considering him a senior, I asked him his comments about the implementation quality.. he thought long and hard, made a few funny faces, and when he couldn't find anything, he said "ummm, your program will crash if JS is disabled" - he was wrong, since I had made sure it would work fine with default values even if JS was disabled. I told him that and then he said "oh okay". BUT, the funny thing is, a few days back, he implemented something and I objected with "But that would not run if JS is disabled" and his response was "We don't have to care about people who disable JS" :-/ Once he asked me to investigate if there was a way to modify a CMS generated menu programmatically by extending the CMS, I did my research and told him that the only was is to inject a menu item using JavaScript / jQuery and his reaction was "ah that's ugly, and hacky, not acceptable" and two days later, I see that feature implemented in the same way as I had suggested. 
The point is, his reaction was not respectful at all, even if what I proposed was hacky, he should be respectful, that I know what's hacky and if I am suggesting something hacky, there must be a reason for it. There are plenty of other reasons / examples where I feel I am not being treated fairly. I want your advice as to what is it that I am doing wrong and how to deal with such a situation. The other guys in the team are actually very good people, and I do not want to leave the job either (although I could, if I want to). All I want is respect and equal treatment. I have thought about talking to this guy in a face to face meeting, but that worries me that his attitude might get worse and make things more difficult for me (since he doesn't seem to be the guy who thinks he can be wrong too). I am also considering talking to the other co-founder but I am not sure how he will take it (as both founders have been friends forever). Thanks for reading the long message, I really appreciate your help.

    Read the article

  • Solaris 11.2: Functional Deprecation

    - by alanc
In Solaris 11.1, I updated the system headers to enable use of several attributes on functions, including noreturn and printf format, to give compilers and static analyzers more information about how they are used to give better warnings when building code. In Solaris 11.2, I've gone back in and added one more attribute to a number of functions in the system headers: __attribute__((__deprecated__)). This is used to warn people building software that they’re using function calls we recommend no longer be used. While in many cases the Solaris Binary Compatibility Guarantee means we won't ever remove these functions from the system libraries, we still want to discourage their use. I made passes through both the POSIX and C standards, and some of the Solaris architecture review cases to come up with an initial list which the Solaris architecture review committee accepted to start with. This set is by no means a complete list of Obsolete function interfaces, but should be a reasonable start at functions that are well documented as deprecated and seem useful to warn developers away from. More functions may be flagged in the future as they get deprecated, or if further passes are made through our existing deprecated functions to flag more of them.

    Header             | Interface                                                     | Deprecated by                                                                  | Alternative                                              | Documented in
    <door.h>           | door_cred(3C)                                                 | PSARC/2002/188                                                                 | door_ucred(3C)                                           | door_cred(3C)
    <kvm.h>            | kvm_read(3KVM), kvm_write(3KVM)                               | PSARC/1995/186                                                                 | Functions on kvm_kread(3KVM) man page                    | kvm_read(3KVM)
    <stdio.h>          | gets(3C)                                                      | ISO C99 TC3 (Removed in ISO C11), POSIX:2008/XPG7/Unix08                       | fgets(3C)                                                | gets(3C) man page, and just about every gets(3C) reference online from the past 25 years, since the Morris worm proved bad things happen when it’s used.
    <unistd.h>         | vfork(2)                                                      | PSARC/2004/760, POSIX:2001/XPG6/Unix03 (Removed in POSIX:2008/XPG7/Unix08)     | posix_spawn(3C)                                          | vfork(2) man page.
    <utmp.h>           | All functions from getutent(3C) man page                      | PSARC/1999/103                                                                 | utmpx functions from getutentx(3C) man page              | getutent(3C) man page
    <varargs.h>        | varargs.h version of va_list typedef                          | ANSI/ISO C89 standard                                                          | <stdarg.h>                                               | varargs(3EXT)
    <volmgt.h>         | All functions                                                 | PSARC/2005/672                                                                 | hal(5) API                                               | volmgt_check(3VOLMGT), etc.
    <sys/nvpair.h>     | nvlist_add_boolean(3NVPAIR), nvlist_lookup_boolean(3NVPAIR)   | PSARC/2003/587                                                                 | nvlist_add_boolean_value, nvlist_lookup_boolean_value    | nvlist_add_boolean(3NVPAIR) & (9F), nvlist_lookup_boolean(3NVPAIR) & (9F).
    <sys/processor.h>  | gethomelgroup(3C)                                             | PSARC/2003/034                                                                 | lgrp_home(3LGRP)                                         | gethomelgroup(3C)
    <sys/stat_impl.h>  | _fxstat, _xstat, _lxstat, _xmknod                             | PSARC/2009/657                                                                 | stat(2)                                                  | old functions are undocumented remains of SVR3/COFF compatibility support

If the above table is cut off when viewing in the blog, try viewing this standalone copy of the table.

To See or Not To See

To see these warnings, you will need to be building with either gcc (versions 3.4, 4.5, 4.7, & 4.8 are available in the 11.2 package repo), or with Oracle Solaris Studio 12.4 or later (which like Solaris 11.2, is currently in beta testing).
For instance, take this oversimplified (and obviously buggy) implementation of the cat command: #include <stdio.h> int main(int argc, char **argv) { char buf[80]; while (gets(buf) != NULL) puts(buf); return 0; } Compiling it with the Studio 12.4 beta compiler will produce warnings such as: % cc -V cc: Sun C 5.13 SunOS_i386 Beta 2014/03/11 % cc gets_test.c "gets_test.c", line 6: warning: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221 The exact warning given varies by compilers, and the compilers also have a variety of flags to either raise the warnings to errors, or silence them. Of couse, the exact form of the output is Not An Interface that can be relied on for automated parsing, just shown for example. gets(3C) is actually a special case — as noted above, it is no longer part of the C Standard Library in the C11 standard, so when compiling in C11 mode (i.e. when __STDC_VERSION__ >= 201112L), the <stdio.h> header will not provide a prototype for it, causing the compiler to complain it is unknown: % gcc -std=c11 gets_test.c gets_test.c: In function ‘main’: gets_test.c:6:5: warning: implicit declaration of function ‘gets’ [-Wimplicit-function-declaration] while (gets(buf) != NULL) ^ The gets(3C) function of course is still in libc, so if you ignore the error or provide your own prototype, you can still build code that calls it, you just have to acknowledge you’re taking on the risk of doing so yourself. Solaris Studio 12.4 Beta % cc gets_test.c "gets_test.c", line 6: warning: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221 % cc -errwarn=E_DEPRECATED_ATT gets_test.c "gets_test.c", line 6: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221 cc: acomp failed for gets_test.c This warning is silenced in the 12.4 beta by cc -erroff=E_DEPRECATED_ATT No warning is currently issued by Studio 12.3 & earler releases. gcc 3.4.3 % /usr/sfw/bin/gcc gets_test.c gets_test.c: In function `main': gets_test.c:6: warning: `gets' is deprecated (declared at /usr/include/iso/stdio_iso.h:221) Warning is completely silenced with gcc -Wno-deprecated-declarations gcc 4.7.3 % /usr/gcc/4.7/bin/gcc gets_test.c gets_test.c: In function ‘main’: gets_test.c:6:5: warning: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Wdeprecated-declarations] % /usr/gcc/4.7/bin/gcc -Werror=deprecated-declarations gets_test.c gets_test.c: In function ‘main’: gets_test.c:6:5: error: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Werror=deprecated-declarations] cc1: some warnings being treated as errors Warning is completely silenced with gcc -Wno-deprecated-declarations gcc 4.8.2 % /usr/bin/gcc gets_test.c gets_test.c: In function ‘main’: gets_test.c:6:5: warning: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Wdeprecated-declarations] while (gets(buf) != NULL) ^ % /usr/bin/gcc -Werror=deprecated-declarations gets_test.c gets_test.c: In function ‘main’: gets_test.c:6:5: error: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Werror=deprecated-declarations] while (gets(buf) != NULL) ^ cc1: some warnings being treated as errors Warning is completely silenced with gcc -Wno-deprecated-declarations

    Read the article

  • Is it feasible and useful to auto-generate some code of unit tests?

    - by skiwi
    Earlier today I have come up with an idea, based upon a particular real use case, which I would want to have checked for feasability and usefulness. This question will feature a fair chunk of Java code, but can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code. The idea Make unit testing less cumbersome by adding in some ways to autogenerate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone ever proved that doing TDD is better over first creating code and then immediatly therafter the tests. This may even be adapted to be fit into TDD, but that is not my current goal. To show how it is intended to be used, I'll copy one of my classes here, for which I need to make unit tests. public class PutMonsterOnFieldAction implements PlayerAction { private final int handCardIndex; private final int fieldMonsterIndex; public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) { this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex"); this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldCardIndex"); } @Override public boolean isActionAllowed(final Player player) { Objects.requireNonNull(player, "player"); Hand hand = player.getHand(); Field field = player.getField(); if (handCardIndex >= hand.getCapacity()) { return false; } if (fieldMonsterIndex >= field.getMonsterCapacity()) { return false; } if (field.hasMonster(fieldMonsterIndex)) { return false; } if (!(hand.get(handCardIndex) instanceof MonsterCard)) { return false; } return true; } @Override public void performAction(final Player player) { Objects.requireNonNull(player); if (!isActionAllowed(player)) { throw new PlayerActionNotAllowedException(); } Hand hand = player.getHand(); Field field = player.getField(); field.setMonster(fieldMonsterIndex, (MonsterCard)hand.play(handCardIndex)); } } We can observe the need for the following tests: Constructor test with valid input Constructor test with invalid inputs isActionAllowed test with valid input isActionAllowed test with invalid inputs performAction test with valid input performAction test with invalid inputs My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun, you need to ensure a number of conditions and you check whether it really returns false, this can be extended to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (through GUI of IDE hopefully) that you want to generate tests based on a specific branch. The implementation by example User clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())". Now the tool needs to find a case where that holds. (I haven't added the relevant code as that may clutter the post ultimately) To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds. It needs to construct a Player with a Hand that has a capacity of at least 1. It notices that the capacity private int of Hand needs to be at least 1. It searches for ways to set it to 1. Fortunately it finds a constructor that takes the capacity as an argument. It uses 1 for this. 
Some more work needs to be done to succesfully construct a Player instance, involving the creation of objects that have constraints that can be seen by inspecting the source code. It has found the hand with the least capacity possible and is able to construct it. Now to invalidate the test it will need to set handCardIndex = 1. It constructs the test and asserts it to be false (the returned value of the branch) What does the tool need to work? In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but it pretty much stops if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees". The tool knows that it needs to change a certain variable to a different value in order to get the correct testcase. Hence it will need to list all possible ways to change it, without using reflection obviously. What this tool will not replace is the need to create tailored unit tests that tests all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they invalidate constraints. My questions: Is creating such a tool feasible? Would it ever work, or are there some obvious problems? Would such a tool be useful? Is it even useful to automatically generate these testcases at all? Could it be extended to do even more useful things? Does, by chance, such a project already exist and would I be reinventing the wheel? If not proven useful, but still possible to make such thing, I will still consider it for fun. If it's considered useful, then I might make an open source project for it depending on the time. For people searching more background information about the used Player and Hand classes in my example, please refer to this repository. At the time of writing the PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.

    Read the article

  • Unable to resolve class in build.gradle using Android Studio 0.60/Gradle 0.11

    - by saywhatnow
    Established app working fine using Android Studio 0.5.9/ Gradle 0.9 but upgrading to Android Studio 0.6.0/ Gradle 0.11 causes the error below. Somehow Studio seems to have lost the ability to resolve the android tools import at the top of the build.gradle file. Anyone got any ideas on how to solve this? build file 'Users/[me]/Repositories/[project]/[module]/build.gradle': 1: unable to resolve class com.android.builder.DefaultManifestParser @ line 1, column 1. import com.android.builder.DefaultManifestParser 1 error at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:302) at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:858) at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:548) at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:497) at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:306) at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:287) at org.gradle.groovy.scripts.internal.DefaultScriptCompilationHandler.compileScript(DefaultScriptCompilationHandler.java:115) ... 77 more 2014-06-09 10:15:28,537 [ 92905] INFO - .BaseProjectImportErrorHandler - Failed to import Gradle project at '/Users/[me]/Repositories/[project]' org.gradle.tooling.BuildException: Could not run build action using Gradle distribution 'http://services.gradle.org/distributions/gradle-1.12-all.zip'. at org.gradle.tooling.internal.consumer.ResultHandlerAdapter.onFailure(ResultHandlerAdapter.java:53) at org.gradle.tooling.internal.consumer.async.DefaultAsyncConsumerActionExecutor$1$1.run(DefaultAsyncConsumerActionExecutor.java:57) at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64) [project]/[module]/build.gradle import com.android.builder.DefaultManifestParser apply plugin: 'android-sdk-manager' apply plugin: 'android' android { sourceSets { main { manifest.srcFile 'src/main/AndroidManifest.xml' res.srcDirs = ['src/main/res'] } debug { res.srcDirs = ['src/debug/res'] } release { res.srcDirs = ['src/release/res'] } } compileSdkVersion 19 buildToolsVersion '19.0.0' defaultConfig { minSdkVersion 14 targetSdkVersion 19 } signingConfigs { release } buildTypes { release { runProguard false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt' signingConfig signingConfigs.release applicationVariants.all { variant -> def file = variant.outputFile def manifestParser = new DefaultManifestParser() def wmgVersionCode = manifestParser.getVersionCode(android.sourceSets.main.manifest.srcFile) println wmgVersionCode variant.outputFile = new File(file.parent, file.name.replace("-release.apk", "_" + wmgVersionCode + ".apk")) } } } packagingOptions { exclude 'META-INF/LICENSE.txt' exclude 'META-INF/NOTICE.txt' } } def Properties props = new Properties() def propFile = file('signing.properties') if (propFile.canRead()){ props.load(new FileInputStream(propFile)) if (props!=null && props.containsKey('STORE_FILE') && props.containsKey('STORE_PASSWORD') && props.containsKey('KEY_ALIAS') && props.containsKey('KEY_PASSWORD')) { println 'RELEASE BUILD SIGNING' android.signingConfigs.release.storeFile = file(props['STORE_FILE']) android.signingConfigs.release.storePassword = props['STORE_PASSWORD'] android.signingConfigs.release.keyAlias = props['KEY_ALIAS'] android.signingConfigs.release.keyPassword = props['KEY_PASSWORD'] } else { println 'RELEASE BUILD NOT FOUND SIGNING PROPERTIES' 
android.buildTypes.release.signingConfig = null } }else { println 'RELEASE BUILD NOT FOUND SIGNING FILE' android.buildTypes.release.signingConfig = null } repositories { maven { url 'https://repo.commonsware.com.s3.amazonaws.com' } maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' } } dependencies { compile 'com.github.gabrielemariotti.changeloglib:library:1.4.+' compile 'com.google.code.gson:gson:2.2.4' compile 'com.google.android.gms:play-services:+' compile 'com.android.support:appcompat-v7:+' compile 'com.squareup.okhttp:okhttp:1.5.+' compile 'com.octo.android.robospice:robospice:1.4.11' compile 'com.octo.android.robospice:robospice-cache:1.4.11' compile 'com.octo.android.robospice:robospice-retrofit:1.4.11' compile 'com.commonsware.cwac:security:0.1.+' compile 'com.readystatesoftware.sqliteasset:sqliteassethelper:+' compile 'com.android.support:support-v4:19.+' compile 'uk.co.androidalliance:edgeeffectoverride:1.0.1+' compile 'de.greenrobot:eventbus:2.2.1+' compile project(':captureActivity') compile ('de.keyboardsurfer.android.widget:crouton:1.8.+') { exclude group: 'com.google.android', module: 'support-v4' } compile files('libs/CWAC-LoaderEx.jar') }
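    One thing I have not verified yet: it looks like the builder classes were repackaged in newer versions of the Android Gradle plugin, so the fix may simply be pointing the import at the new package (this is a guess, not confirmed against plugin 0.11):

        // build.gradle - hedged guess: the parser may now live under com.android.builder.core
        import com.android.builder.core.DefaultManifestParser

        // same usage as before, only the import changes
        def manifestParser = new DefaultManifestParser()
        def wmgVersionCode = manifestParser.getVersionCode(android.sourceSets.main.manifest.srcFile)
        println wmgVersionCode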

    Read the article

  • How to configure Hudson and git plugin with an SSH key

    - by jlpp
    I've got Hudson (continuous integration system) with the git plugin running on a Tomcat Windows Service. msysgit is installed and the msysgit bin dir is in the path. PuTTY/Pageant/plink are installed and msysgit is configured to use them. When I run a job that attempts to clone the git repository I get the following error: $ git clone -o origin git@hostname:project.git "e:\HUDSON_HOME\jobs\Project Trunk\workspace" ERROR: Error cloning remote repo 'origin' : Could not clone git@hostname:project.git ERROR: Cause: Error performing git clone -o origin git@hostname:project.git e:\HUDSON_HOME\jobs\Project Trunk\workspace Trying next repository ERROR: Could not clone from a repository FATAL: Could not clone hudson.plugins.git.GitException: Could not clone Running git clone -o origin git@hostname:project.git "e:\HUDSON_HOME\jobs\Project Trunk\workspace" from the command line works without error. I've confirmed that my issue is not the same as http://stackoverflow.com/questions/1177292/hudson-git-clone-error because git is in the path and I don't get any error about the git executable on Hudson's Configure System page. This leads me to believe that the problem is that the user who owns the Tomcat/Hudson Windows service (Local System) has no SSH key set up to be able to clone the git repository. My question is, how can I set things up so that the git plugin/msysgit know to use a particular SSH key when trying to clone? I don't think Pageant will work because the Tomcat service is running as the "Local System" user, but I may be wrong. I have tried setting Pageant up as a service (using runassvc.exe), passing the appropriate key, and having it run as "Local System". The Tomcat/Hudson service doesn't seem to be able to see the key from the pageant service. Are there any other techniques for setting up a key? Thanks. EDIT: The discussion on http://n4.nabble.com/Hudson-with-git-and-ssh-td375633.html shows that someone else had a similar question. ssh-agent was suggested and this tool does come with msysgit but I'm not sure how to use it in conjunction with the Hudson service. Still, good clue if anyone can fill in the gaps. Thanks to Peter for the comment with the link. Also, the discussion on http://n4.nabble.com/questions-about-git-and-github-plug-ins-td383420.html starts off with the same question. I'm trying to resurrect that thread.
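    One idea I have not tried yet (the paths and variable names below are assumptions about the Local System profile, not a verified recipe): switch the service from plink to msysgit's bundled OpenSSH and give the Local System account its own key and known_hosts, roughly:

        rem Key material placed in the Local System user profile (assumed location on Windows 2003)
        C:\Windows\System32\config\systemprofile\.ssh\id_rsa
        C:\Windows\System32\config\systemprofile\.ssh\known_hosts

        rem Environment for the Tomcat/Hudson service so git uses OpenSSH and finds that .ssh dir
        set HOME=C:\Windows\System32\config\systemprofile
        set GIT_SSH=C:\Program Files\Git\bin\ssh.exe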

    Read the article

  • Mercurial to Mercurial to Subversion Workflow Problem

    - by Dalroth
    We're migrating from Subversion to Mercurial. To facilitate the migration, we're creating an intermediate Mercurial repository that is a clone of our Subversion repository. All developers will begin switching over to the Mercurial repository, and we'll periodically push changes from the intermediate Mercurial repository to the existing Subversion repository. After a period of time, we'll simply retire the Subversion repository and the intermediate Mercurial repository will become the new system of record. Dev 1 Local --+--> Mercurial --+--> Subversion Dev 2 Local --+ + Dev 3 Local --+ + Dev 4 -------------------------+ I've been testing this out, but I keep running into a problem when I push changes from my local repository to the intermediate Mercurial repository, and then up into our Subversion repository. On my local machine, I have a changeset that is committed and ready to be pushed to our intermediate Mercurial repository. Here you can see it is revision #2263 with hash 625... I push only this changeset to the remote repository. So far, everything looks good. The changeset has been pushed. hg update 1 files updated, 0 files merged, 0 files removed, 0 files unresolved I now switch over to the remote repository and update the working directory. hg push pushing to svn://... searching for changes [r3834] bmurphy: database namespace pulled 1 revisions saving bundle to /srv/hg/repository/.hg/strip-backup/62539f8df3b2-temp adding branch adding changesets adding manifests adding file changes added 1 changesets with 1 changes to 1 files rebase completed Next, I push the change up to Subversion; works great. At this point, the change is in the Subversion repository and I turn my attention back to my local client. I pull changes to my local machine. Huh? I've now got two changesets. My original changeset appears as a local branch now. The other changeset has a new revision number, 2264, and a new hash, 10c1... Anyway, I update my local repo to the new revision. I'm now switched over. So, I finally click "determine and mark outgoing changesets" and, as you can see, Mercurial still wants to push out my previous changesets even though they've already been pushed. Clearly, I'm doing something wrong. I also can't merge the two revisions. If I merge the two revisions on my local machine, I end up with a "merge" commit. When I push that merge commit out to the intermediate Mercurial repository, I can no longer push changes out to our Subversion repository. I end up with the following problem: hg update 0 files updated, 0 files merged, 0 files removed, 0 files unresolved hg push pushing to svn://... searching for changes abort: Sorry, can't find svn parent of a merge revision. and I have to roll back the merge to get back to a working state. What am I missing?
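    The only local cleanup I have found so far is something like the following (assuming the bundled rebase and mq/strip extensions are enabled; the revision numbers are the ones from this example), although I suspect the real answer is a workflow change:

        hg pull                    # brings in 2264 (10c1...), the rebased twin of 2263
        hg strip 2263              # throw away the superseded local changeset
        # or, if more local work was committed on top of 2263:
        hg rebase -s 2263 -d 2264  # move that work onto the rebased changeset instead
        hg update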

    Read the article

  • Can't get gitosis and ssh to play nice on cygwin

    - by Noel Kennedy
    I have followed this guide to setting up gitosis on a Windows 2003 server via cygwin. I have now got to a point where it largely works: I can clone, pull and push. The problem I am having is that I think I have not got the ssh bit right at all. When I connect via msysgit from machines and accounts where I have not created or uploaded ssh keys, it works. Every time I clone, pull or push I get a password challenge for the 'git' user running on the server, but basically I can execute git commands. When I connect with users with an ssh key in the ~/.ssh folder, I don't get the password challenge and instead I get a permissions failure: DEBUG:gitosis.serve.main:Got command "git-upload-pack '/cris.git'" DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'writable' on 'cris.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris' DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'writeable' on 'cris.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris' DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'readonly' on 'cris.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris' ERROR:gitosis.serve.main:Repository read access denied fatal: The remote end hung up unexpectedly I have uploaded the public rsa key into the key_dir folder. Here is my conf file: [gitosis] loglevel = DEBUG [group gitosis-admin] writable = gitosis-admin members = myemail@mydomain [group cris-developers] members = myemail@mydomain TeamCity@HHIT24808 writable = cris If it matters, I have generated a key without a passphrase, as I believe this is necessary to enable ssh for automated scripts. When I use keys with a passphrase, I get challenged for the phrase but then get the same permissions problem. I have tried 'writable' and 'writeable' for permissions. Help!! Update 1: When I try to clone a non-existent repo, I get the same error message; coincidence? Update 2: Weird, I've got one machine and one login working. It seems to be something to do with the syntax for addressing git over ssh. This now works on one machine for one login: git clone git@servername:cris.git The same command fails for a user on another machine without an uploaded ssh key. But this command works (after being challenged for git@servername's password): git clone git@servername:/home/git/repositories/cris.git Neither command works on a 2nd login whose ssh key has been uploaded.
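    One more thing I still need to check (untested, and only my reading of how gitosis resolves users): the log above runs the access check as the all-lowercase 'teamcity@hhit24808', while my conf lists 'TeamCity@HHIT24808', and gitosis appears to match the member name against the keydir file name exactly. So the layout would have to line up character for character, something like:

        gitosis-admin/
            gitosis.conf
            keydir/
                myemail@mydomain.pub
                teamcity@hhit24808.pub    # file name, members entry and key comment kept identical

        [group cris-developers]
        members = myemail@mydomain teamcity@hhit24808
        writable = cris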

    Read the article

  • Mouse wheel not scrolling in JDialog but working in JFrame

    - by Iulian Serbanoiu
    Hello, I'm facing a frustrating issue. I have an application where the scroll wheel doesn't work in a JDialog window (but works in a JFrame). Here's the code: import javax.swing.*; import java.awt.event.*; public class Failtest extends JFrame { public static void main(String[] args) { new Failtest(); } public Failtest() { super(); setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); setTitle("FRAME"); JScrollPane sp1 = new JScrollPane(getNewList()); add(sp1); setSize(150, 150); setVisible(true); JDialog d = new JDialog(this, false);// NOT WORKING //JDialog d = new JDialog((JFrame)null, false); // NOT WORKING //JDialog d = new JDialog((JDialog)null, false);// WORKING - WHY? d.setTitle("DIALOG"); d.setDefaultCloseOperation(JDialog.DISPOSE_ON_CLOSE); JScrollPane sp = new JScrollPane(getNewList()); d.add(sp); d.setSize(150, 150); d.setVisible(true); } public JList getNewList() { String objs[] = new String[30]; for(int i=0; i<objs.length; i++) { objs[i] = "Item "+i; } JList l = new JList(objs); return l; } } I found a solution which is present as a comment in the java code - the constructor receiving a (JDialog)null parameter. Can someone enlighten me? My opinion is that this is a java bug. Tested on Windows XP-SP3 with 1 JDK and 2 JREs: D:\Program Files\Java\jdk1.6.0_17\bin>javac -version javac 1.6.0_17 D:\Program Files\Java\jdk1.6.0_17\bin>java -version java version "1.6.0_17" Java(TM) SE Runtime Environment (build 1.6.0_17-b04) Java HotSpot(TM) Client VM (build 14.3-b01, mixed mode, sharing) D:\Program Files\Java\jdk1.6.0_17\bin>cd .. D:\Program Files\Java\jdk1.6.0_17>java -version java version "1.6.0_18" Java(TM) SE Runtime Environment (build 1.6.0_18-b07) Java HotSpot(TM) Client VM (build 16.0-b13, mixed mode, sharing) Thank you in advance, Iulian Serbanoiu PS: The problem is not new - the code is taken from a forum (here) where this problem was also mentioned - but no solutions to it (yet) LATER EDIT: The problem persists with jre/jdk_1.6.0_10, 1.6.0_16 also LATER EDIT 2: Back home, tested on linux (Ubuntu - lucid/lynx) - both with openjdk and sun-java from distribution repo and it works (I used the .class file compiled on Windows) !!! - so I believe I'm facing a JRE bug that happens on some Windows configurations.

    Read the article

  • Doubt about adopting CI (Hudson) into an existing automated Build Process (phing, svn)

    - by maraspin
    OUR CURRENT BUILD PROCESS We're a small team of developers (2 to 4 people depending on the project) who currently use Phing to deploy code to a staging environment before going live. We keep our code in an SVN repo, where the trunk holds current active development; at certain times we make branches that we test and then (if successful) tag and export to the staging env. If everything goes well there too, we finally deploy them to the production servers. Actions are highly automated, but always triggered by human intervention. THE DOUBT We'd now like to introduce Continuous Integration (with Hudson) into the process; unfortunately we have a few doubts about activity syncing, since we're afraid that CI could somehow interfere with our build process and cause certain problems. Considering that an automated CI cycle has a certain frequency of automatically executed actions, we in fact only see 2 possible cases for "integration", each with its own problems: Case A: each CI cycle produces a new branch with its own name; we use that name to manually (through Phing, as happens now) export the code from SVN to the staging env. The problem I see here is that (unless specific countermeasures are taken) the number of branches can grow out of control (let's suppose we commit often, so that we have a fresh new build/branch every N minutes). Case B: each CI cycle creates a new branch named 'current', for instance, which is tagged with a unique name only when we manually decide to export it to staging; the current branch, in any case, is then deleted as soon as the next CI cycle starts up. The problem we see here is that a new cycle could kick in while someone is tagging/exporting the 'current' branch to staging, thus creating an inconsistent build (but maybe I'm just too pessimistic here, since I confess I don't know whether SVN offers some built-in protection against this). With all this being said, I was wondering if anyone with similar experiences would be so kind as to give us some hints on the subject, since none of the approaches depicted above looks completely satisfying to us. Is there something important we have completely left out of the overall picture? Thanks for your attention and, in advance, for your help!
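    One countermeasure we are considering for Case B (untested; the repository URL and variables below are placeholders): make the tagging step a single server-side copy pinned to a revision, which, as far as we understand SVN, is committed atomically and so cannot see a half-replaced 'current' branch:

        # pin the source to the revision the CI cycle built, then copy URL-to-URL in one commit
        svn copy "$REPO/branches/current@$BUILD_REV" \
                 "$REPO/tags/build-$BUILD_NUMBER" \
                 -m "Tag build $BUILD_NUMBER (r$BUILD_REV) for staging export"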

    Read the article

  • NetBeans not liking libraries in lib-src

    - by DJTripleThreat
    I'm working on a project with a group that is using Eclipse, but I'm using Netbeans. Up until today this wasn't an issue. When updating from the repo they have added some source code as a library under a directory called /lib-src. When I try to compile the code I get an error that it can't find certain packages... these are the packages under /lib-src. Using NetBeans I can add the library as a folder so now the references to those packages are happy. However, I'm getting this new error when compiling: UNEXPECTED TOP-LEVEL ERROR: java.lang.OutOfMemoryError: Java heap space at java.util.HashMap.addEntry(HashMap.java:753) at java.util.HashMap.put(HashMap.java:385) at com.android.dx.dex.file.ClassDataItem.addStaticField(ClassDataItem.java:134) at com.android.dx.dex.file.ClassDefItem.addStaticField(ClassDefItem.java:280) at com.android.dx.dex.cf.CfTranslator.processFields(CfTranslator.java:159) at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:130) at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:85) at com.android.dx.command.dexer.Main.processClass(Main.java:297) at com.android.dx.command.dexer.Main.processFileBytes(Main.java:276) at com.android.dx.command.dexer.Main.access$100(Main.java:56) at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:228) at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:134) at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190) at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122) at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190) at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122) at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190) at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122) at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190) at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122) at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190) at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122) at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190) at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122) at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190) at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122) at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:108) at com.android.dx.command.dexer.Main.processOne(Main.java:245) at com.android.dx.command.dexer.Main.processAllFiles(Main.java:183) at com.android.dx.command.dexer.Main.run(Main.java:139) at com.android.dx.command.dexer.Main.main(Main.java:120) at com.android.dx.command.Main.main(Main.java:87) /home/aaron/NetBeansProjects/xbmc-remote/nbproject/build-impl.xml:411: exec returned: 3 BUILD FAILED (total time: 1 minute 25 seconds) I can include the build-impl.xml file if you need it, but I don't think that is main issue. Any ideas?

    Read the article

  • Proxy settings with ivy...

    - by user315228
    Hi, I have an issue where I have defined dependencies in ivy.xml on our internal corporate svn. I am able to access this svn site without any proxy task in Ant, while my dependencies on ibiblio are outside our corporate network and need a proxy in order to download anything. I am facing a problem using Ivy here. I have the following in build.xml: <target name="proxy"> <property name="proxy.host" value="xyz.proxy.net"/> <property name="proxy.port" value="8443"/> <setproxy proxyhost="${proxy.host}" proxyport="${proxy.port}"/> </target> <!-- resolve the dependencies of stratus --> <target name="resolveTestDependency" depends="testResolve, proxy" description="retrieve test dependencies with ivy"> <ivy:settings file="stratus-ivysettings.xml" /> <ivy:retrieve conf="test" pattern="${jars}/[artifact]-[revision].[ext]"/> <!-- the pattern here specifies where you want to download the libs to --> </target> <target name="testResolve"> <ivy:settings file="stratus-ivysettings.xml" /> <ivy:resolve conf="test" file="stratus-ivy.xml"/> </target> Following is the excerpt from stratus-ivysettings.xml: <resolvers> <!-- here you define your file on a private machine, not on the repo (e.g. jPricer.jar or edgApi.jar) --> <url name="privateFS"> <ivy pattern="http://xyz.svn.com/ivyRepository/[organisation]/ivy/ivy.xml"/> </url> . . . <url name="public" m2compatible="true"> <artifact pattern="http://www.ibiblio.org/maven2/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/> </url> . . . So, as can be seen here, for getting ivy.xml I don't need any proxy, as it is within our own network, which can't be accessed when I set the proxy. But on the other hand I am also using ibiblio, which is external to our network and works only with the proxy. So the build.xml above won't work in that case. Can somebody help here? I don't need the proxy while getting ivy.xml (if I have the proxy set, Ivy won't be able to find the ivy file behind the proxy from within the network), and I just need it when my resolver goes to the public url.
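    One thing I am thinking of trying (going by the attributes of the standard Ant setproxy task; the host pattern below is just an example) is to keep the proxy for ibiblio but exclude the internal svn host from it:

        <target name="proxy">
            <property name="proxy.host" value="xyz.proxy.net"/>
            <property name="proxy.port" value="8443"/>
            <!-- internal hosts that must bypass the proxy (maps to http.nonProxyHosts) -->
            <setproxy proxyhost="${proxy.host}" proxyport="${proxy.port}"
                      nonproxyhosts="xyz.svn.com|*.svn.com"/>
        </target>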

    Read the article

  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following this resource http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site: while read ref do #echo "Ref updated:" #echo $ref -- would print something like example at top of file result=`echo $ref | gawk -F' ' '{ print $3 }'` if [ $result != "" ]; then echo "Branch found: " echo $result case $result in refs/heads/master ) git --work-tree=c:/temp/BLAH checkout -f master echo "Updated master" ;; refs/heads/testbranch ) git --work-tree=c:/temp/BLAH2 checkout -f testbranch echo "Updated testbranch" ;; * ) echo "No update known for $result" ;; esac fi done echo "Post-receive updates complete" However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the current checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions: IS this safe? Would a better approach be to have my base repository be the test site repository (with corresponding working directory), and then have that repository push changes to a new live site repository, which has a corresponding working directory to the live site base? This would also allow me to move the production to a different server and keep the deployment chain intact. Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites? As an additional note in light of Vi's answer, is there a good way to do this that would handle deletions without mucking with the file system much? Thank you, -Walt PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows: sitename=`basename \`pwd\`` while read ref do #echo "Ref updated:" #echo $ref -- would print something like example at top of file result=`echo $ref | gawk -F' ' '{ print $3 }'` if [ $result != "" ]; then echo "Branch found: " echo $result case $result in refs/heads/master ) git checkout -q -f master if [ $? -eq 0 ]; then echo "Test Site checked out properly" else echo "Failed to checkout test site!" fi ;; refs/heads/live-site ) git push -q ../Live/$sitename live-site:master if [ $? -eq 0 ]; then echo "Live Site received updates properly" else echo "Failed to push updates to Live Site" fi ;; * ) echo "No update known for $result" ;; esac fi done echo "Post-receive updates complete" And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive: git checkout -f if [ $? -eq 0 ]; then echo "Live site `basename \`pwd\`` checked out successfully" else echo "Live site failed to checkout" fi

    Read the article

  • is appassembler plugin broken for java service wrapper on windows 64bit?

    - by Paul McKenzie
    Hi I'm developing on 32bit windows and am using appassembler to create a java service wrapper assembly, and it works ok. But I need to also create a 64bit assembly for deployment to a dev server. In the following config I have substituted the 32bit platform with the 64bit, see the <includes> section. But it no longer places the wrapper jar and dll in the lib folder. If I omit the includes completely, I get linux, solaris, Mac OSX and Win32 libraries, but no win64. Anyone got this working? <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>appassembler-maven-plugin</artifactId> <version>1.1-SNAPSHOT</version> <configuration> <target>${project.build.directory}/appassembler</target> <repositoryLayout>flat</repositoryLayout> <defaultJvmSettings> <initialMemorySize>256M</initialMemorySize> <maxMemorySize>1024M</maxMemorySize> </defaultJvmSettings> <daemons> <daemon> <id>MyApp</id> <mainClass>com.foo.AppMain</mainClass> <platforms> <platform>jsw</platform> </platforms> <generatorConfigurations> <generatorConfiguration> <generator>jsw</generator> <includes> <include>windows-x86-64</include> </includes> <configuration> <property> <name>set.default.REPO_DIR</name> <value>../../repo</value> </property> </configuration> </generatorConfiguration> </generatorConfigurations> </daemon> </daemons> </configuration> <executions> <execution> <goals> <goal>generate-daemons</goal> <goal>create-repository</goal> </goals> </execution> </executions> </plugin>

    Read the article

  • Git/Mercurial (hg) opinion

    - by Richard
    First, let me say I'm not a professional programmer, but an engineer who had a need for it and had to learn. I was always working alone, so it was just me and my seven split personalities ... and we worked okay as a team :) Most of my stuff is done in C/Fortran/Matlab and so far I've been learning git to manage it all. However, although I've had no unsolvable problems with it, I've never been "that" happy with it ... for everything I cannot do, I have to look it up in a book. And, for some time now I've been hearing a lot of good stuff about hg. Now, a colleague of mine will have to work with me on a project (I almost feel sorry for him) and he's started learning hg (says he likes it more), and I'm considering the switch myself. We work almost exclusively on the Windows platform (although I manage relatively ok using unix tools and things that come from that part of the world). So, I was wondering, in the described scenario, what problems could I expect with the switch? I heard that hg is rather more user friendly towards Windows users, regarding the user interfaces. How does it handle repositories? Does it create them the same way as git does (just one subdirectory in a working directory), and can I just copy the whole project directory (including the git repo) and just carry it somewhere with no extra thinking? (I really liked that when I was choosing between git/svn). Are there any good books on it that you can recommend (something like Pro Git, only for Hg)? What are good ways to integrate hg into Visual Studio/GVim for Windows, or into Windows Explorer, so I can work relatively easily (I would like to avoid using the command line for everything regarding it, like in git shell)? Is there something else I should be aware of (please, on this don't point me to other questions ... they just give me a ton of info, and I'm not sure what I should take as important and what to disregard)? I'm trying to save some time, since I cannot spend all that time relearning hg like I did for git. I've also heard git is a C project, while Mercurial is Python ... is there any noticeable difference in speed? git was pretty speedy ... will I encounter some waiting while working? Notice: All my projects are of, let's say, middle size ... mostly numerical simulations ... 10-15000 lines (medium size?)

    Read the article

  • How to display subversion URL for the Project with jenkins email-ext plugin?

    - by kamal
    Here is the jelly script i am using: <j:jelly xmlns:j="jelly:core" xmlns:st="jelly:stapler" xmlns:d="jelly:define"> <STYLE>BODY, TABLE, TD, TH, P { font-family:Verdana,Helvetica,sans serif; font-size:11px; color:black; } h1 { color:black; } h2 { color:black; } h3 { color:black; } TD.bg1 { color:white; background-color:#0000C0; font-size:120% } TD.bg2 { color:white; background-color:#4040FF; font-size:110% } TD.bg3 { color:white; background-color:#8080FF; } TD.test_passed { color:blue; } TD.test_failed { color:red ; } TD.console { font-family:Courier New; }</STYLE> <BODY> <j:set var="spc" value="&amp;nbsp;&amp;nbsp;" /> <!-- GENERAL INFO --> <TABLE> <TR> <TD align="right"> <j:choose> <j:when test="${build.result=='SUCCESS'}"> <IMG SRC="${rooturl}static/e59dfe28/images/32x32/blue.gif" /> </j:when> <j:when test="${build.result=='FAILURE'}"> <IMG SRC="${rooturl}static/e59dfe28/images/32x32/red.gif" /> </j:when> <j:otherwise> <IMG SRC="${rooturl}static/e59dfe28/images/32x32/yellow.gif" /> </j:otherwise> </j:choose> </TD> <TD valign="center"> <B style="font-size: 200%;">BUILD ${build.result}</B> </TD> </TR> <TR> <TD>Build URL</TD> <TD> <A href="${rooturl}${build.url}">${rooturl}${build.url}</A> </TD> </TR> <TR> <TD>Project:</TD> <TD>${project.name}</TD> </TR> <TR> <TD>Date of build:</TD> <TD>${it.timestampString}</TD> </TR> <TR> <TD>Build duration:</TD> <TD>${build.durationString}</TD> </TR> <TR> <!-- BRANCH --> <TD>Subversion Repo:</TD> <TD>${build.scm}</TD> </TR> <tr> <td>Build Cause:</td> <td> <j:forEach var="cause" items="${build.causes}">${cause.shortDescription} </j:forEach> </td> </tr> </TABLE> <BR /> <!-- CHANGE SET --> <j:set var="changeSet" value="${build.changeSet}" /> <j:if test="${changeSet!=null}"> <j:set var="hadChanges" value="false" /> <TABLE width="100%"> <TR> <TD class="bg1" colspan="2"> <B>CHANGES</B> </TD> </TR> <j:forEach var="cs" items="${changeSet}" varStatus="loop"> <j:set var="hadChanges" value="true" /> <j:set var="aUser" value="${cs.hudsonUser}" /> <TR> <TD colspan="2" class="bg2">${spc}Revision <B>${cs.commitId?:cs.revision?:cs.changeNumber}</B>by <B>${aUser!=null?aUser.displayName:cs.author.displayName}:</B> <B>(${cs.msgAnnotated})</B></TD> </TR> <j:forEach var="p" items="${cs.affectedFiles}"> <TR> <TD width="10%">${spc}${p.editType.name}</TD> <TD>${p.path}</TD> </TR> </j:forEach> </j:forEach> <j:if test="${!hadChanges}"> <TR> <TD colspan="2">No Changes</TD> </TR> </j:if> </TABLE> <BR /> </j:if> <!-- ARTIFACTS --> <j:set var="artifacts" value="${build.artifacts}" /> <j:if test="${artifacts!=null and artifacts.size()&gt;0}"> <TABLE width="100%"> <TR> <TD class="bg1"> <B>BUILD ATRIFACTS</B> </TD> </TR> <TR> <TD> <j:forEach var="f" items="${artifacts}"> <li> <a href="${rooturl}${build.url}artifact/${f}">${f}</a> </li> </j:forEach> </TD> </TR> </TABLE> <BR /> </j:if> <!-- MAVEN ARTIFACTS --> <j:set var="mbuilds" value="${build.moduleBuilds}" /> <j:if test="${mbuilds!=null}"> <TABLE width="100%"> <TR> <TD class="bg1"> <B>BUILD ATRIFACTS</B> </TD> </TR> <j:forEach var="m" items="${mbuilds}"> <TR> <TD class="bg2"> <B>${m.key.displayName}</B> </TD> </TR> <j:forEach var="mvnbld" items="${m.value}"> <j:set var="artifacts" value="${mvnbld.artifacts}" /> <j:if test="${artifacts!=null and artifacts.size()&gt;0}"> <TR> <TD> <j:forEach var="f" items="${artifacts}"> <li> <a href="${rooturl}${mvnbld.url}artifact/${f}">${f}</a> </li> </j:forEach> </TD> </TR> </j:if> </j:forEach> </j:forEach> </TABLE> <BR /> </j:if> <!-- JUnit TEMPLATE --> <j:set var="junitResultList" 
value="${it.JUnitTestResult}" /> <j:if test="${junitResultList.isEmpty()!=true}"> <TABLE width="100%"> <TR> <TD class="bg1" colspan="2"> <B>${project.name} Functional Tests</B> </TD> </TR> <j:forEach var="junitResult" items="${it.JUnitTestResult}"> <j:forEach var="packageResult" items="${junitResult.getChildren()}"> <TR> <TD class="bg2" colspan="2">Name: ${packageResult.getName()} Failed: ${packageResult.getFailCount()} test(s), Passed: ${packageResult.getP assCount()} test(s), Skipped: ${packageResult.getSkipCount()} test(s), Total: ${packageResult.getPassCount()+packageResult.getFailCount()+packageResult.getSkipCount()} test(s)</TD> </TR> <j:forEach var="failed_test" items="${packageResult.getFailedTests()}"> <TR bgcolor="white"> <TD class="test_failed" colspan="2"> <B> <li>Failed: ${failed_test.getFullName()} <br /> <pre> ${failed_test.errorDetails} </pre></li> </B> </TD> </TR> <TR bgcolor="white"> <TD class="test_failed" colspan="2"> <B> <li>StackTrace: ${failed_test.getFullName()} <br /> <pre> ${failed_test.errorStackTrace} </pre></li> </B> </TD> </TR> </j:forEach> </j:forEach> </j:forEach> </TABLE> <BR /> </j:if> <!-- COBERTURA TEMPLATE --> <j:set var="coberturaAction" value="${it.coberturaAction}" /> <j:if test="${coberturaAction!=null}"> <j:set var="coberturaResult" value="${coberturaAction.result}" /> <j:if test="${coberturaResult!=null}"> <table width="100%"> <TD class="bg1" colspan="2"> <B>Cobertura Report</B> </TD> </table> <table width="100%"> <TD class="bg2" colspan="2"> <B>Project Coverage Summary</B> </TD> </table> <table border="1px" class="pane"> <tr> <td>Name</td> <j:forEach var="metric" items="${coberturaResult.metrics}"> <td>${metric.name}</td> </j:forEach> </tr> <tr> <td>${coberturaResult.name}</td> <j:forEach var="metric" items="${coberturaResult.metrics}"> <td data="${coberturaResult.getCoverage(metric).percentageFloat}">${coberturaResult.getCoverage(metric).percentage}% (${coberturaResult.ge tCoverage(metric)})</td> </j:forEach> </tr> </table> <j:if test="${coberturaResult.sourceCodeLevel}"> <h2>Source</h2> <j:choose> <j:when test="${coberturaResult.sourceFileAvailable}"> <div style="overflow-x:scroll;"> <table class="source"> <thead> <tr> <th colspan="3">${coberturaResult.relativeSourcePath}</th> </tr> </thead>${coberturaResult.sourceFileContent}</table> </div> </j:when> <j:otherwise> <p> <i>Source code is unavailable</i> </p> </j:otherwise> </j:choose> </j:if> <j:forEach var="element" items="${coberturaResult.childElements}"> <j:set var="childMetrics" value="${coberturaResult.getChildMetrics(element)}" /> <table width="100%"> <TD class="bg2" colspan="2">Coverage Breakdown by ${element.displayName}</TD> </table> <table border="1px" class="pane sortable"> <tr> <td>Name</td> <j:forEach var="metric" items="${childMetrics}"> <td>${metric.name}</td> </j:forEach> </tr> <j:forEach var="c" items="${coberturaResult.children}"> <j:set var="child" value="${coberturaResult.getChild(c)}" /> <tr> <td>${child.xmlTransform(child.name)}</td> <j:forEach var="metric" items="${childMetrics}"> <j:set var="childResult" value="${child.getCoverage(metric)}" /> <j:choose> <j:when test="${childResult!=null}"> <td data="${childResult.percentageFloat}">${childResult.percentage}% (${childResult})</td> </j:when> <j:otherwise> <td data="101">N/A</td> </j:otherwise> </j:choose> </j:forEach> </tr> </j:forEach> </table> </j:forEach> </j:if> <BR /> </j:if> <!-- HEALTH TEMPLATE --> <div class="content"> <j:set var="healthIconSize" value="16x16" /> <j:set var="healthReports" 
value="${project.buildHealthReports}" /> <j:if test="${healthReports!=null}"> <h1>Health Report</h1> <table> <tr> <th>W</th> <th>Description</th> <th>Score</th> </tr> <j:forEach var="healthReport" items="${healthReports}"> <tr> <td> <img src="${rooturl}${healthReport.getIconUrl(healthIconSize)}" /> </td> <td>${healthReport.description}</td> <td>${healthReport.score}</td> </tr> </j:forEach> </table> <br /> </j:if> </div> <!-- CONSOLE OUTPUT --> <j:getStatic var="resultFailure" field="FAILURE" className="hudson.model.Result" /> <j:if test="${build.result==resultFailure}"> <TABLE width="100%" cellpadding="0" cellspacing="0"> <TR> <TD class="bg1"> <B>CONSOLE OUTPUT</B> </TD> </TR> <j:forEach var="line" items="${build.getLog(100)}"> <TR> <TD class="console">${line}</TD> </TR> </j:forEach> </TABLE> <BR /> </j:if> </BODY> </j:jelly> <!-- BRANCH --> <TD>Subversion Repo:</TD> <TD>${build.scm}</TD> </TR> does not work, and i am not sure which argument to use with build object to get subversion url. outside jelly script, i can get the Subversion URL, using: Subversion URL: ${ENV, var="SVN_URL"}

    Read the article

  • How to work with file and streams in php,case: if we open file in Class A and pass open stream to Cl

    - by Rachel
    I have two classes: a business logic class (BLO) and a data access class (DAO), and my BLO class depends on my DAO class. Basically, I am opening a CSV file for writing in my BLO class, inside its constructor, as I create the BLO object and pass in the file from the command prompt: Code: $this->fin = fopen($file,'w+') or die('Cannot open file'); Now, inside the BLO I have one function, notify, which depends on the DAO class and calls the getCurrentDBSnapshot function from the DAO, passing the open stream so that data gets populated into it. Code: Blo class constructor: public function __construct($file) { //Open Unica File for parsing. $this->fin = fopen($file,'w+') or die('Cannot open file'); // Initialize the Repository DAO. $this->dao = new Dao('REPO'); } Blo class method that interacts with the Dao method and calls getCurrentDBSnapshot: public function notify() { $data = $this->fin; var_dump($data); //resource(9) of type (stream) $outputFile=$this->dao->getCurrentDBSnapshot($data); // So basically am passing in $data which is resource(9) of type (stream) } Dao function getCurrentDBSnapshot, which gets the current state of a database table: public function getCurrentDBSnapshot($data) { $query = "SELECT * FROM myTable"; //Basically just preparing the query. $stmt = $this->connection->prepare($query); // Execute the statement $stmt->execute(); $header = array(); while ($row=$stmt->fetch(PDO::FETCH_ASSOC)) { if(empty($header)) { // Get the header values from table(columnnames) $header = array_keys($row); // Populate csv file with header information from the mytable fputcsv($data, $header); } // Export every row to a file fputcsv($data, $row); } var_dump($data);//resource(9) of type (stream) return $data; } So basically I am getting back resource(9) of type (stream) from getCurrentDBSnapshot and storing that stream into $outputFile in the Blo class method notify. Now I want to close the file which I opened for writing, so it should be an fclose of $outputFile or $data, but in both cases it gives me: var_dump(fclose($outputFile)) as bool(false) var_dump(fclose($data)) as bool(false) and var_dump($outputFile) as resource(9) of type (Unknown) var_dump($data) as resource(9) of type (Unknown) My question is: if I open a file using fopen in class A, call a class B method from class A's method and pass the open stream (in our case $data), and class B performs some work and returns the open stream, how can I close that open stream in class A's method, or is it ok to keep that stream open and not use fclose? Would appreciate inputs as I am not very sure how this should be implemented.
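    For reference, this is the ownership pattern I am leaning towards (a stripped-down sketch, not my real classes): the object that opens the stream is the only one that closes it, and it guards against closing twice, since fclose() returning false and the resource showing up as type (Unknown) usually means the handle was already closed somewhere else:

        <?php
        class Dao {
            public function getCurrentDBSnapshot($stream) {
                // write rows into the stream passed in; never fclose() here, the caller owns it
                fputcsv($stream, array('col1', 'col2'));
                return $stream;
            }
        }

        class Blo {
            private $fin;
            private $dao;

            public function __construct($file) {
                $this->fin = fopen($file, 'w+') or die('Cannot open file');
                $this->dao = new Dao();
            }

            public function notify() {
                // same handle before and after the call; no copy is made
                $this->dao->getCurrentDBSnapshot($this->fin);
            }

            public function close() {
                if (is_resource($this->fin)) {   // guard against double-closing
                    fclose($this->fin);
                    $this->fin = null;
                }
            }
        }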

    Read the article
