Search Results

Search found 60836 results on 2434 pages for 'system io directory'.

  • Can't get my head around background workers in .NET

    - by Connel
    I have written an application that syncs two folders together. The problem with the program is that it stops responding while copying files. A quick search of Stack Overflow told me I need to use something called a background worker. I have read a few pages on the net about this but find it really hard to understand, as I'm pretty new to programming. Below is the code for my application: how can I simply put all of the File.Copy(...) calls into their own background worker (if that's even how it works)? Below are the button click event that runs the sub-procedure, and the sub-procedure whose File.Copy lines I want to run on a background worker.

    Button event:

        protected virtual void OnBtnSyncClicked (object sender, System.EventArgs e)
        {
            //sets running boolean to true
            booRunning = true;
            //sets progress bar to 0
            prgProgressBar.Fraction = 0;
            //resets values used by progress bar
            dblCurrentStatus = 0;
            dblFolderSize = 0;
            //tests if user has entered the same folder for both target and destination
            if (fchDestination.CurrentFolder == fchTarget.CurrentFolder)
            {
                //creates message box
                MessageDialog msdSame = new MessageDialog(this, DialogFlags.Modal, MessageType.Error, ButtonsType.Close, "You cannot sync two folders that are the same");
                //sets message box title
                msdSame.Title = "Error";
                //sets response type
                ResponseType response = (ResponseType) msdSame.Run();
                //if user clicks on close button or closes window then close message box
                if (response == ResponseType.Close || response == ResponseType.DeleteEvent)
                {
                    msdSame.Destroy();
                }
                return;
            }
            //tests if user has entered a target folder that is an extension of the destination folder,
            //or a destination folder that is an extension of the target folder
            if (fchTarget.CurrentFolder.StartsWith(fchDestination.CurrentFolder) || fchDestination.CurrentFolder.StartsWith(fchTarget.CurrentFolder))
            {
                //creates message box
                MessageDialog msdContains = new MessageDialog(this, DialogFlags.Modal, MessageType.Error, ButtonsType.Close, "You cannot sync a folder with one of its parent folders");
                //sets message box title
                msdContains.Title = "Error";
                //sets response type and runs message box
                ResponseType response = (ResponseType) msdContains.Run();
                //if user clicks on close button or closes window then close message box
                if (response == ResponseType.Close || response == ResponseType.DeleteEvent)
                {
                    msdContains.Destroy();
                }
                return;
            }
            //gets folder size of target folder
            FileSizeOfTarget(fchTarget.CurrentFolder);
            //gets folder size of destination folder
            FileSizeOfDestination(fchDestination.CurrentFolder);
            //runs SyncTarget procedure
            SyncTarget(fchTarget.CurrentFolder);
            //runs SyncDestination procedure
            SyncDestination(fchDestination.CurrentFolder);
            //informs user process is complete
            prgProgressBar.Text = "Finished";
            //sets running bool to false
            booRunning = false;
        }

    Sync sub-procedure:

        protected void SyncTarget (string strCurrentDirectory)
        {
            //string array of all the directories in the directory
            string[] staAllDirectories = Directory.GetDirectories(strCurrentDirectory);
            //string array of all the files in the directory
            string[] staAllFiles = Directory.GetFiles(strCurrentDirectory);
            //loop over each file in directory
            foreach (string strFile in staAllFiles)
            {
                //string of just the file's name and not its path
                string strFileName = System.IO.Path.GetFileName(strFile);
                //string containing the directory inside the target folder
                string strDirectoryInsideTarget = System.IO.Path.GetDirectoryName(strFile).Substring(fchTarget.CurrentFolder.Length);
                //inform user which file is being copied
                prgProgressBar.Text = "Syncing " + strFile;
                //tests if file does not exist in destination folder
                if (!File.Exists(fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget + "/" + strFileName))
                {
                    //if file does not exist copy it to destination folder; the true below means overwrite if file already exists
                    File.Copy(strFile, fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget + "/" + strFileName, true);
                }
                //tests if file does exist in destination folder
                if (File.Exists(fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget + "/" + strFileName))
                {
                    //long (number) that contains date of last write time of target file
                    long lngTargetFileDate = File.GetLastWriteTime(strFile).ToFileTime();
                    //long (number) that contains date of last write time of destination file
                    long lngDestinationFileDate = File.GetLastWriteTime(fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget + "/" + strFileName).ToFileTime();
                    //tests if target file is newer than destination file
                    if (lngTargetFileDate > lngDestinationFileDate)
                    {
                        //if it is newer then copy file from target folder to destination folder
                        File.Copy(strFile, fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget + "/" + strFileName, true);
                    }
                }
                //gets current file size
                FileInfo FileSize = new FileInfo(strFile);
                //adds the file's size to the running total used by the progress bar
                dblCurrentStatus = dblCurrentStatus + FileSize.Length;
                double dblPercentage = dblCurrentStatus / dblFolderSize;
                prgProgressBar.Fraction = dblPercentage;
            }
            //loop over each folder in target folder
            foreach (string strDirectory in staAllDirectories)
            {
                //string containing directories inside target folder but not any higher directories
                string strDirectoryInsideTarget = strDirectory.Substring(fchTarget.CurrentFolder.Length);
                //tests if directory does not exist inside destination folder
                if (!Directory.Exists(fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget))
                {
                    //if directory does not exist, create it
                    Directory.CreateDirectory(fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget);
                }
                //run sync on all files in the subdirectory
                SyncTarget(strDirectory);
            }
        }

    Any help will be greatly appreciated, as after this the program will pretty much be finished :D
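    Not part of the original question, but a minimal sketch of the usual BackgroundWorker pattern, hedged: this is not the poster's code, and because the app uses GTK# widgets, the sketch marshals every UI update through Gtk.Application.Invoke instead of touching prgProgressBar from the worker thread:

        using System.ComponentModel;

        // inside OnBtnSyncClicked, replacing the direct SyncTarget/SyncDestination calls
        BackgroundWorker worker = new BackgroundWorker();

        // DoWork runs on a thread-pool thread: all the slow Directory/File work goes here
        worker.DoWork += delegate (object s, DoWorkEventArgs args) {
            SyncTarget(fchTarget.CurrentFolder);            // the existing routines, now off the UI thread
            SyncDestination(fchDestination.CurrentFolder);
        };

        // RunWorkerCompleted fires when DoWork returns; update widgets on the GTK thread
        worker.RunWorkerCompleted += delegate (object s, RunWorkerCompletedEventArgs args) {
            Gtk.Application.Invoke(delegate {
                prgProgressBar.Text = "Finished";
                booRunning = false;
            });
        };

        worker.RunWorkerAsync();    // returns immediately, so the UI stays responsive

    Inside SyncTarget, the lines that set prgProgressBar.Text and prgProgressBar.Fraction would likewise be wrapped in Gtk.Application.Invoke, since they now run on the worker thread.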

  • How do I stop Sophos antivirus from scanning directories that are under source control

    - by user26453
    From googling, it seems it's well known that Sophos AV, like other AV programs, can interact badly with and inhibit source control utilities like TortoiseHG or TortoiseSVN. One solution is to exclude directories under source control from on-access scanning, as detailed on Sophos's support site. There is a corollary article that mentions some issues related to this, namely the need to place multiple entries for exclusions based on the possibility of the location being accessed through the short vs. long name (e.g., Progra~1 vs. "Program Files").

    One other twist is that I am using a junction to relocate my user directory, C:\Users\Username, to a second hard drive, E:. Since I am not sure how this interacts, I have included the source control directory as it is nested in both locations. As a result, I have the two exclusions below for on-access scanning (and, to be on the safe side, for on-demand scanning as well, although that should only come into play when I select a parent directory of the exclusion to be scanned on-demand, but still). You'll notice I have no need to add extra exclusions for those locations based on short vs. long name distinctions. The two exclusions I have, for both on-access and on-demand scanning, are:

        C:\Users\Username\source-control-directory
        E:\source-control-directory

    However, this does not seem to work: TortoiseHG still lags terribly in response to any request, as the AV software starts scanning when the directory is accessed via TortoiseHG. I can verify without a doubt that Sophos is causing the problems: if I completely disable on-access scanning, TortoiseHG responds very fast to all operations. Obviously I cannot leave it disabled, but since the exclusions don't seem to be working, what next?

  • How to install a newer GCC version on CentOS 5.7?

    - by gkdsp
    Using CentOS 5.7, how do I install GCC version 4.6? I just installed version 4.4 using

        # yum install gcc44

    but that version still doesn't support variable-length arrays from the C99 standard. I don't see a newer version than 4.4 when I type:

        [root@host2 /etc]# yum list gcc\*
        Excluding Packages in global exclude list
        Finished
        Installed Packages
        gcc.x86_64                 4.1.2-51.el5    installed
        gcc-c++.x86_64             4.1.2-51.el5    installed
        gcc-gfortran.x86_64        4.1.2-51.el5    installed
        gcc44.x86_64               4.4.4-13.el5    installed
        Available Packages
        gcc-gnat.x86_64            4.1.2-51.el5    system-base
        gcc-java.x86_64            4.1.2-51.el5    system-base
        gcc-objc.x86_64            4.1.2-51.el5    system-base
        gcc-objc++.x86_64          4.1.2-51.el5    system-base
        gcc44-c++.x86_64           4.4.4-13.el5    system-base
        gcc44-gfortran.x86_64      4.4.4-13.el5    system-base

    I wonder if the newer versions of GCC are not available for CentOS because they're deemed not yet reliable/stable enough(?). Can I download gcc-4.5.3.tar.gz from here: http://fileboar.com/gcc/releases/gcc-4.5.3/ But then how do I install it?
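    Not from the question, but a hedged sketch of the generic build-from-source route (the version and install prefix are examples; GCC 4.5 and later also need the GMP, MPFR and MPC development libraries installed first):

        tar xzf gcc-4.5.3.tar.gz
        mkdir gcc-build && cd gcc-build          # GCC prefers building outside the source tree
        ../gcc-4.5.3/configure --prefix=/opt/gcc-4.5.3 --enable-languages=c,c++
        make                                     # this takes a long time
        make install                             # as root
        /opt/gcc-4.5.3/bin/gcc --version         # use it without disturbing the system gcc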

  • Last (I think and hope) problems configuring SSL certificate with Apache and VirtualHosts

    - by user65567
    Finally I set up apache2 to use a single certificate for all subdomains. [...]

        # Go ahead and accept connections for these vhosts
        # from non-SNI clients
        SSLStrictSNIVHostCheck off

        # Apache setup which will listen for and accept SSL connections on port 443.
        Listen 443

        # Listen for virtual host requests on all IP addresses
        NameVirtualHost *:443

        # Because this virtual host is defined first, it will
        # be used as the default if the hostname is not received
        # in the SSL handshake, e.g. if the browser doesn't support
        # SNI.
        <VirtualHost *:443>
            ServerName domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
            <Directory "/Users/<my_user_name>/Sites/domain/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

        <VirtualHost *:443>
            ServerName subdomain1.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
            <Directory "/Users/<my_user_name>/Sites/subdomain1/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

        <VirtualHost *:443>
            ServerName subdomain2.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain2/public"
            <Directory "/Users/<my_user_name>/Sites/subdomain2/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

    So, for example, I can correctly access https://subdomain1.domain.localhost, https://subdomain2.domain.localhost, and so on. Anyway, I now have problems accessing http://subdomain1.domain.localhost, http://subdomain2.domain.localhost, etc. Since I am on Mac OS X, on accessing the http version I get a default page reading "Your website." (instead of an error). Why does this happen?
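    Not part of the question, but one hedged explanation: the configuration above only defines *:443 virtual hosts, so plain-HTTP requests on port 80 fall through to the OS X default site (the "Your website." page). A sketch of matching port-80 hosts, reusing the names from above:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName subdomain1.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
            # or, instead of serving content over HTTP, push everything to SSL:
            # Redirect permanent / https://subdomain1.domain.localhost/
        </VirtualHost>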

  • Hudson Mercurial checkout throws exception on Debian

    - by Jack
    I'm trying to configure Hudson to check out my site's sources from Mercurial, but it throws an exception. The /var/lib/hudson/jobs/jobname directory does exist, and I can create a workspace directory in there (even after su hudson), but as soon as I run the Hudson job again this directory disappears and the job ends with the same error:

        java.io.IOException: Cannot run program "hg" (in directory "/var/lib/hudson/jobs/jobname/workspace"): java.io.IOException: error=2, No such file or directory
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
            at hudson.Proc$LocalProc.<init>(Proc.java:192)
            at hudson.Proc$LocalProc.<init>(Proc.java:164)
            at hudson.Launcher$LocalLauncher.launch(Launcher.java:639)
            at hudson.Launcher$ProcStarter.start(Launcher.java:274)
            at hudson.Launcher$ProcStarter.join(Launcher.java:281)
            at hudson.plugins.mercurial.MercurialSCM.joinWithPossibleTimeout(MercurialSCM.java:298)
            at hudson.plugins.mercurial.HgExe.popen(HgExe.java:191)
            at hudson.plugins.mercurial.HgExe.tip(HgExe.java:171)
            at hudson.plugins.mercurial.MercurialSCM.calcRevisionsFromBuild(MercurialSCM.java:254)
            at hudson.scm.SCM._calcRevisionsFromBuild(SCM.java:304)
            at hudson.model.AbstractProject.calcPollingBaseline(AbstractProject.java:1183)
            at hudson.model.AbstractProject.checkout(AbstractProject.java:1172)
            at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:499)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:415)
            at hudson.model.Run.run(Run.java:1362)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:145)
        Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
            at java.lang.ProcessImpl.start(ProcessImpl.java:65)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)

    Running on Debian 6.0.1. I wonder if anyone has run into this before, and hopefully solved it?
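    Not from the question, but a hedged reading of the trace: error=2 is reported for "hg" itself, which usually means the Mercurial binary is not on the PATH of the user the Hudson daemon runs as. A sketch of the obvious checks on Debian:

        sudo apt-get install mercurial      # make sure hg exists at all
        sudo -u hudson which hg             # confirm the daemon's user can find it
        # if hg lives somewhere unusual, set the full path to the executable in
        # Hudson's global configuration for the Mercurial plugin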

  • Fedora 16 can connect to Samba share using smbclient but not in Nautilus 3.2.1

    - by Nathan Jones
    I have a machine running Ubuntu 11.10 Server acting as a Samba server to share my home directory. Everything works fine from my Windows 7 machine, but on my Fedora 16 laptop, if I use Nautilus to try to access the share using smb://192.168.0.8/nathan in the location bar, it just shows a loading cursor and does nothing; it never shows any errors. Using smbclient works just fine, but I'd like to get it working in Nautilus. I know that there can be problems with SELinux and Samba, so I created a file called booleans.local that contains samba_enable_home_dirs=1. My smb.conf file looks like this:

        # For Unix password sync to work on a Debian GNU/Linux system, the following
        # parameters must be set (thanks to Ian Kahan <<[email protected]> for
        # sending the correct chat script for the passwd program in Debian Sarge).
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .

        # This boolean controls whether PAM will be used for password changes
        # when requested by an SMB client instead of the program listed in
        # 'passwd program'. The default is 'no'.
        pam password change = yes

        # This option controls how unsuccessful authentication attempts are mapped
        # to anonymous connections
        map to guest = bad user

        ########## Domains ###########

        # Is this machine able to authenticate users. Both PDC and BDC
        # must have this setting enabled. If you are the BDC you must
        # change the 'domain master' setting to no
        ;   domain logons = yes

        # The following setting only takes effect if 'domain logons' is set
        # It specifies the location of the user's profile directory
        # from the client point of view)
        # The following required a [profiles] share to be setup on the
        # samba server (see below)
        ;   logon path = \\%N\profiles\%U
        # Another common choice is storing the profile in the user's home directory
        # (this is Samba's default)
        #   logon path = \\%N\%U\profile

        # The following setting only takes effect if 'domain logons' is set
        # It specifies the location of a user's home directory (from the client
        # point of view)
        ;   logon drive = H:
        #   logon home = \\%N\%U

        # The following setting only takes effect if 'domain logons' is set
        # It specifies the script to run during logon. The script must be stored
        # in the [netlogon] share
        # NOTE: Must be store in 'DOS' file format convention
        ;   logon script = logon.cmd

        # This allows Unix users to be created on the domain controller via the SAMR
        # RPC pipe. The example command creates a user account with a disabled Unix
        # password; please adapt to your needs
        ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u

        # This allows machine accounts to be created on the domain controller via the
        # SAMR RPC pipe.
        # The following assumes a "machines" group exists on the system
        ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u

        # This allows Unix groups to be created on the domain controller via the SAMR
        # RPC pipe.
        ; add group script = /usr/sbin/addgroup --force-badname %g

        ########## Printing ##########

        # If you want to automatically load your printer list rather
        # than setting them up individually then you'll need this
        #   load printers = yes

        # lpr(ng) printing. You may wish to override the location of the
        # printcap file
        ;   printing = bsd
        ;   printcap name = /etc/printcap

        # CUPS printing. See also the cupsaddsmb(8) manpage in the
        # cupsys-client package.
        ;   printing = cups
        ;   printcap name = cups

        ############ Misc ############

        # Using the following line enables you to customise your configuration
        # on a per machine basis. The %m gets replaced with the netbios name
        # of the machine that is connecting
        ;   include = /home/samba/etc/smb.conf.%m

        # Most people will find that this option gives better performance.
        # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html
        # for details
        # You may want to add the following on a Linux system:
        #   SO_RCVBUF=8192 SO_SNDBUF=8192
        #   socket options = TCP_NODELAY

        # The following parameter is useful only if you have the linpopup package
        # installed. The samba maintainer and the linpopup maintainer are
        # working to ease installation and configuration of linpopup and samba.
        ;   message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' &

        # Domain Master specifies Samba to be the Domain Master Browser. If this
        # machine will be configured as a BDC (a secondary logon server), you
        # must set this to 'no'; otherwise, the default behavior is recommended.
        #   domain master = auto

        # Some defaults for winbind (make sure you're not using the ranges
        # for something else.)
        ;   idmap uid = 10000-20000
        ;   idmap gid = 10000-20000
        ;   template shell = /bin/bash

        # The following was the default behaviour in sarge,
        # but samba upstream reverted the default because it might induce
        # performance issues in large organizations.
        # See Debian bug #368251 for some of the consequences of *not*
        # having this setting and smb.conf(5) for details.
        ;   winbind enum groups = yes
        ;   winbind enum users = yes

        # Setup usershare options to enable non-root users to share folders
        # with the net usershare command.
        # Maximum number of usershare. 0 (default) means that usershare is disabled.
        ;   usershare max shares = 100

        # Allow users who've been granted usershare privileges to create
        # public shares, not just authenticated ones
        usershare allow guests = yes

        #======================= Share Definitions =======================

        # Un-comment the following (and tweak the other settings below to suit)
        # to enable the default home directory shares. This will share each
        # user's home director as \\server\username
        [homes]
           comment = Home Directories
           browseable = yes

        # By default, the home directories are exported read-only. Change the
        # next parameter to 'no' if you want to be able to write to them.
           read only = no

        # File creation mask is set to 0700 for security reasons. If you want to
        # create files with group=rw permissions, set next parameter to 0775.
        ;   create mask = 0775

        # Directory creation mask is set to 0700 for security reasons. If you want to
        # create dirs. with group=rw permissions, set next parameter to 0775.
        ;   directory mask = 0775

        # By default, \\server\username shares can be connected to by anyone
        # with access to the samba server. Un-comment the following parameter
        # to make sure that only "username" can connect to \\server\username
        # The following parameter makes sure that only "username" can connect
        #
        # This might need tweaking when using external authentication schemes
           valid users = %S

        # Un-comment the following and create the netlogon directory for Domain Logons
        # (you need to configure Samba to act as a domain controller too.)
        ;[netlogon]
        ;   comment = Network Logon Service
        ;   path = /home/samba/netlogon
        ;   guest ok = yes
        ;   read only = yes

        # Un-comment the following and create the profiles directory to store
        # users profiles (see the "logon path" option above)
        # (you need to configure Samba to act as a domain controller too.)
        # The path below should be writable by all users so that their
        # profile directory may be created the first time they log on
        ;[profiles]
        ;   comment = Users profiles
        ;   path = /home/samba/profiles
        ;   guest ok = no
        ;   browseable = no
        ;   create mask = 0600
        ;   directory mask = 0700

        [printers]
           comment = All Printers
           browseable = no
           path = /var/spool/samba
           printable = yes
           guest ok = no
           read only = no
           create mask = 0700

        # Windows clients look for this share name as a source of downloadable
        # printer drivers
        [print$]
           comment = Printer Drivers
           path = /var/lib/samba/printers
           browseable = yes
           read only = yes
           guest ok = no

        # Uncomment to allow remote administration of Windows print drivers.
        # You may need to replace 'lpadmin' with the name of the group your
        # admin users are members of.
        # Please note that you also need to set appropriate Unix permissions
        # to the drivers directory for these users to have write rights in it
        ;   write list = root, @lpadmin

        # A sample share for sharing your CD-ROM with others.
        ;[cdrom]
        ;   comment = Samba server's CD-ROM
        ;   read only = yes
        ;   locking = no
        ;   path = /cdrom
        ;   guest ok = yes

        # The next two parameters show how to auto-mount a CD-ROM when the
        # cdrom share is accesed. For this to work /etc/fstab must contain
        # an entry like this:
        #
        #   /dev/scd0  /cdrom  iso9660  defaults,noauto,ro,user  0 0
        #
        # The CD-ROM gets unmounted automatically after the connection to the
        #
        # If you don't want to use auto-mounting/unmounting make sure the CD
        # is mounted on /cdrom
        #
        ;   preexec = /bin/mount /cdrom
        ;   postexec = /bin/umount /cdrom

    smbusers:

        <nathan> = <"nathan">

    Any help would be very much appreciated! Thanks!
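    Not from the question, but a hedged note on the SELinux side: a hand-made booleans.local file only matters if something actually applies it; the standard way to flip a boolean on Fedora is setsebool, and on the client side the boolean covering home directories mounted over Samba is use_samba_home_dirs rather than samba_enable_home_dirs (which applies to the server exporting them):

        # on the Fedora 16 laptop
        sudo setsebool -P use_samba_home_dirs on
        getsebool use_samba_home_dirs        # verify: use_samba_home_dirs --> on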

  • httpd not starting with systemd on F17

    - by malfy
    Title says it all. This is a fresh Fedora 17 system running on a Xen hypervisor. No idea why it won't start.

        [root@box ~]# uname -a
        Linux box.localhost 3.5.4-2.fc17.i686.PAE #1 SMP Wed Sep 26 22:10:23 UTC 2012 i686 i686 i386 GNU/Linux
        [root@box ~]# cat /etc/redhat-release
        Fedora release 17 (Beefy Miracle)
        [root@box ~]# systemctl enable httpd.service
        ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
        [root@box ~]# systemctl start httpd.service
        Job failed. See system journal and 'systemctl status' for details.
        [root@box ~]# systemctl status httpd.service
        httpd.service - The Apache HTTP Server (prefork MPM)
                  Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
                  Active: failed (Result: exit-code) since Fri, 19 Oct 2012 22:43:37 -0500; 3s ago
                 Process: 18225 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=226/NAMESPACE)
                  CGroup: name=systemd:/system/httpd.service

        Oct 19 22:43:37 box.localhost systemd[18225]: Failed at step NAMESPACE spawning /usr/sbin/httpd: No such file or directory

        [root@box ~]# ls -al /usr/sbin/httpd
        -rwxr-xr-x 1 root root 343496 Apr 30 04:56 /usr/sbin/httpd
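    Not from the question, but a hedged interpretation: status=226/NAMESPACE means systemd failed while building the service's private filesystem namespace, not that httpd itself is missing. Fedora 17's httpd.service sets PrivateTmp=true, and that namespace setup fails when /tmp or /var/tmp is absent or unusual, which can happen on minimal Xen images. A sketch of how to test that theory:

        ls -ld /tmp /var/tmp                 # both should be real, writable directories

        # as a diagnostic only: copy the unit and turn PrivateTmp off
        cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service
        sed -i 's/^PrivateTmp=true/PrivateTmp=false/' /etc/systemd/system/httpd.service
        systemctl daemon-reload
        systemctl start httpd.service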

  • MemCache-repcached compile error

    - by Ramy Allam
    I'm trying to install memcached-1.2.8-repcached-2.2.1 (http://sourceforge.net/projects/repcached/files/latest/download?source=files) and I get the following error after running the make command:

        make all-recursive
        make[1]: Entering directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1'
        Making all in doc
        make[2]: Entering directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1/doc'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1/doc'
        make[2]: Entering directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1'
        gcc -DHAVE_CONFIG_H -I. -DNDEBUG -g -O2 -MT memcached-memcached.o -MD -MP -MF .deps/memcached-memcached.Tpo -c -o memcached-memcached.o `test -f 'memcached.c' || echo './'`memcached.c
        memcached.c: In function ‘add_iov’:
        memcached.c:697: error: ‘IOV_MAX’ undeclared (first use in this function)
        memcached.c:697: error: (Each undeclared identifier is reported only once
        memcached.c:697: error: for each function it appears in.)
        make[2]: *** [memcached-memcached.o] Error 1
        make[2]: Leaving directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1'
        make: *** [all] Error 2

    OS: CentOS 5.7 64-bit, with gcc-4.1.2-51.el5, gcc-c++-4.1.2-51.el5 and libgcc-4.1.2-51.el5.

    Note: memcached and the memcache extension for PHP are already installed:

        root@server[~]# memcached -h
        memcached 1.4.5

    PHP extension: http://pecl.php.net/get/memcache-2.2.6.tgz
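    Not from the question, but a hedged sketch of the workaround usually cited for this exact error (check your own memcached.c before patching): memcached 1.2.x only defines a fallback IOV_MAX for some platforms, so supplying one near the top of memcached.c lets the file build on Linux:

        /* after the existing #includes in memcached.c; <limits.h> defines
           IOV_MAX on many systems, and the #ifndef supplies a fallback */
        #include <limits.h>

        #ifndef IOV_MAX
        #define IOV_MAX 1024
        #endif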

  • WAMP vhost issue with all vhosts pointing to the first vhost defined

    - by Rick
    I am trying to set up vhosts in WampServer and I am running into an issue where all sites point to the first vhost and are not delegated properly. Has anyone had this issue? Here is my setup.

    In the Windows hosts file:

        127.0.0.1 siteabc.local
        127.0.0.1 sitexyz.local

    In httpd-vhosts.conf:

        <VirtualHost 127.0.0.1>
            DocumentRoot "C:\Users\Rick\Documents\Projects\siteabc"
            ServerName siteabc.local
            ErrorLog "logs/siteabc-error.log"
            CustomLog "logs/siteabc-access.log" common
            <Directory "C:\Users\Rick\Documents\Projects\siteabc">
                Options Indexes FollowSymLinks
                AllowOverride all
                Order Deny,Allow
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost 127.0.0.1>
            DocumentRoot "C:\Users\Rick\Documents\Projects\sitexyz"
            ServerName sitexyz.local
            ErrorLog "logs/sitexyz-error.log"
            CustomLog "logs/sitexyz-access.log" common
            <Directory "C:\Users\Rick\Documents\Projects\sitexyz">
                Options Indexes FollowSymLinks
                AllowOverride all
                Order Deny,Allow
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost 127.0.0.1>
            DocumentRoot "C:\Users\Rick\Documents\Projects"
            ServerName localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
            <Directory "C:\Users\Rick\Documents\Projects">
                Options Indexes FollowSymLinks
                AllowOverride all
                Order Deny,Allow
                Allow from all
            </Directory>
        </VirtualHost>

    From this setup, going to siteabc works, but going to sitexyz still shows siteabc. Not sure what I did wrong here. Thanks for looking.
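    Not from the question, but the hedged classic explanation for "everything hits the first vhost": Apache 2.2 only does name-based selection when a NameVirtualHost directive matches the <VirtualHost> address; without it, the first block wins for every request. A sketch:

        NameVirtualHost 127.0.0.1

        <VirtualHost 127.0.0.1>
            ServerName siteabc.local
            ...
        </VirtualHost>
        <VirtualHost 127.0.0.1>
            ServerName sitexyz.local
            ...
        </VirtualHost>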

  • Error with Apache, Nagios and Snorby integration

    - by user1428366
    I'm trying to use Apache to serve two different websites (Nagios and Snorby). The problem is that when I try to visit the "/snorby" site, Apache sends me the default "It works" page. If I try to access "/nagios" it works perfectly. Snorby runs under Ruby with Passenger. These are the config files:

        <VirtualHost *:80>
            ScriptAlias /nagios/cgi-bin "/srv/nagios/sbin"
            <Directory "/srv/nagios/sbin">
                # SSLRequireSSL
                Options ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from all
                # Order deny,allow
                # Deny from all
                # Allow from 127.0.0.1
                AuthName "Nagios Access"
                AuthType Basic
                AuthUserFile /srv/nagios/etc/htpasswd.users
                Require valid-user
            </Directory>
            Alias /nagios "/srv/nagios/share"
            <Directory "/srv/nagios/share">
                # SSLRequireSSL
                Options None
                AllowOverride None
                Order allow,deny
                Allow from all
                # Order deny,allow
                # Deny from all
                # Allow from 127.0.0.1
                AuthName "Nagios Access"
                AuthType Basic
                AuthUserFile /srv/nagios/etc/htpasswd.users
                Require valid-user
            </Directory>
        </VirtualHost>

    And the other one is this:

        <VirtualHost *:80>
            #Alias /snorby "/var/www/snorby-2.6.0/public"
            # !!! Be sure to point DocumentRoot to 'public'!
            DocumentRoot /var/www/snorby-2.6.0/public
            <Directory /var/www/snorby-2.6.0/public>
                # This relaxes Apache security settings.
                AllowOverride all
                # MultiViews must be turned off.
                Options -MultiViews
            </Directory>
        </VirtualHost>

    If I disable the Nagios page, the Snorby page works. I think the problem is Snorby, because when I try to access the IP address with the Nagios page disabled, the web application redirects me to http://myserverip/dashboard. Can anyone help me, please? Thank you so much! Regards
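    Not from the question, but a hedged sketch of one way to serve Snorby next to Nagios without two competing default vhosts: keep a single vhost and mount the Rack app under /snorby (paths taken from the post; RackBaseURI is the Passenger 3.x directive, newer Passenger versions use PassengerBaseURI instead):

        Alias /snorby /var/www/snorby-2.6.0/public
        <Directory /var/www/snorby-2.6.0/public>
            AllowOverride all
            Options -MultiViews
        </Directory>
        RackBaseURI /snorby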

  • How to set up a virtual host in Ubuntu?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to point to /var/www/myapp when I type apps.mydomain.com/myapp; how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because this is now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public

            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the files instead of being pointed to the document root? In case someone asks, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.
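    Not from the question, but one hedged observation: ServerName cannot carry a path component, so "apps.mydomain.com/myapp" never matches and the request falls through to whichever vhost loads first. A sketch of the usual shape:

        <VirtualHost *:80>
            ServerName apps.mydomain.com
            # serve the Laravel public/ directory at the root...
            DocumentRoot /var/www/myapp/public
            # ...or keep the /myapp prefix with an alias instead:
            # Alias /myapp /var/www/myapp/public
            <Directory /var/www/myapp/public>
                Options FollowSymLinks
                AllowOverride All
            </Directory>
        </VirtualHost>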

  • Should the same machine key be used in development and production environments?

    - by Henry Troup
    Our production servers all have the same machine key. However, our production and development systems do not have identical machine keys. We get heaps (about one per second) of exceptions of the form:

        System.Security.Cryptography.CryptographicException: Padding is invalid and cannot be removed.
           at System.Security.Cryptography.RijndaelManagedTransform.DecryptData()
           at System.Security.Cryptography.RijndaelManagedTransform.TransformFinalBlock()
           at System.Security.Cryptography.CryptoStream.FlushFinalBlock()
           at System.Web.Configuration.MachineKeySection.EncryptOrDecryptData()
           at System.Web.UI.Page.DecryptStringWithIV()
           ...

    We deploy the code after a build; .cs source is not present on production, but .aspx files are. (Should I have posted on Stack Overflow? It's not a coding question.) From experimentation, we've found that using the dev machine key value makes the exceptions go away. Does anyone have documentation that I can use with the security team on the need for identical keys at compile and deployment time?
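    Not from the question, but a hedged illustration of the usual fix: pin an explicit <machineKey> in web.config (or machine.config) so every server that must decrypt the others' ViewState and tickets shares the same keys. The hex values below are placeholders, not real keys:

        <system.web>
          <!-- placeholder values; generate real random keys for production -->
          <machineKey
              validationKey="0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF"
              decryptionKey="FEDCBA9876543210FEDCBA9876543210"
              validation="SHA1"
              decryption="AES" />
        </system.web>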

  • Problem configuring virtual host.

    - by Zeeshan Rang
    I am trying to configure an Apache virtual host for my computer, but I am facing problems doing so. I made the required changes in C:\WINDOWS\system32\drivers\etc\hosts and then in C:\xampp\apache\conf\extra\httpd-vhosts.conf. I added the following lines to httpd-vhosts.conf:

        ######################## Virtual Hosts Config below ##################
        NameVirtualHost 127.0.0.1

        <VirtualHost localhost>
            ServerName localhost
            DocumentRoot "C:\xampp\htdocs"
            DirectoryIndex index.php index.html
            <Directory "C:\xampp\htdocs">
                AllowOverride All
            </Directory>
        </VirtualHost>

        <VirtualHost virtual.cloudse7en.com>
            ServerName virtual.cloudse7en.com
            DocumentRoot "C:\development\virtual.cloudse7en.com\httpdocs"
            DirectoryIndex index.php index.html
            <Directory "C:\development\virtual.cloudse7en.com\httpdocs">
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost virtual.app.cloudse7en.com>
            ServerName virtual.app.cloudse7en.com
            DocumentRoot "C:\development\virtual.app.cloudse7en.com\httpdocs"
            DirectoryIndex index.php index.html
            <Directory "C:\development\virtual.app.cloudse7en.com\httpdocs">
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
        ########################################################################

    I started XAMPP and tried http://localhost in a browser. This works and opens http://localhost/xampp/, but when I try http://virtual.app.cloudse7en.com it again opens http://virtual.app.cloudse7en.com/xampp/. I do not understand the reason. Also, I am on a Windows Vista 64-bit operating system; do I need to make some other changes too? Regards, Zee
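    Not from the question, but a hedged guess at the mismatch: the NameVirtualHost argument and the <VirtualHost> opening tags should agree (hostnames in the tags get resolved at startup, which is fragile), so the common pattern is to key everything on *:80 and let ServerName do the selection:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName virtual.app.cloudse7en.com
            DocumentRoot "C:\development\virtual.app.cloudse7en.com\httpdocs"
        </VirtualHost>

    with a matching "127.0.0.1 virtual.app.cloudse7en.com" line in the Windows hosts file.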

  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg about half an hour after I turn on the computer:

        [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700offset=0(0), inode=1802725748, rec_len=179136, name_len=32
        [ 1355.677973] Aborting journal on device sda2-8.
        [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only
        [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699offset=0(0), inode=2194783952, rec_len=53280, name_len=152
        [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119

    /dev/sda is an SSD, and it's using the noop scheduler. /etc/fstab entry:

        UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1

    System information:

        $ cat /proc/mounts | grep /dev/sd
        /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0
        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"
        $ uname -a
        Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux

    I've run memtest for 7 hours; it didn't find any memory errors. Any obvious ideas about what could be going wrong here? The most reasonable thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an ext4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should ensure is set correctly? What tools should I use to diagnose the hardware failures? Would it be possible to diagnose the SSD failure without overwriting data?
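    Not from the question, but a hedged starting point for the "what tools" part, using standard smartmontools commands that read from the drive without overwriting data:

        sudo smartctl -a /dev/sda            # SMART health, attributes and error log
        sudo smartctl -t short /dev/sda      # non-destructive self-test
        sudo smartctl -l selftest /dev/sda   # results, a few minutes later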

  • Windows 7 PC crashes after being locked a few minutes

    - by White Island
    I have a problem with an Optiplex 755 PC running (32-bit) Windows 7 Professional. I installed the operating system last week, and ever since, it seems to crash when I lock the screen. It doesn't happen immediately; it's usually after I walk away, so a few minutes later (but I'm not sure exactly how many). I have this non-specific error in the Event log. I've run chkdsk, thinking it might be hard drive-related, but that didn't help. Any ideas on what to check, and how, are appreciated. :)

        Log Name:      System
        Source:        Microsoft-Windows-Kernel-Power
        Date:          11/17/2010 3:09:28 PM
        Event ID:      41
        Task Category: (63)
        Level:         Critical
        Keywords:      (2)
        User:          SYSTEM
        Computer:      WORKSTATION.domain.local
        Description:   The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.

  • Install mod_perl2 on Apache 2.2.14 (Ubuntu 10.04)

    - by MICADO
    I have installed the package libapache2-mod-perl2 via Synaptic. I tried this line in httpd.conf:

        LoadModule perl_module modules/mod_perl.so

    When I reload the server, Apache tells me: "[warn] module perl_module is already loaded, skipping". Well, OK, but when I point the browser at a directory I don't have access to, Apache sends me the error:

        Forbidden
        You don't have permission to access /cgi-bin/ on this server.
        Apache/2.2.14 (Ubuntu) Server at 192.168.0.10 Port 90

    But this should show mod_perl is installed, and that's not the case... I would like the following virtual host to run with mod_perl2:

        <VirtualHost v1:80>
            ServerAdmin webmaster@localhost
            ServerName v1
            DocumentRoot /var/www/v1
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/v1/html/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /var/www/v1/cgi-bin/
            <Directory "/var/www/v1/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I'd like to know how to configure mod_perl2. Do I have to change something in the Apache configuration file to make my cgi-bin directory work with mod_perl2? Thanks for any help!
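    Not from the question, but a hedged sketch of one common mod_perl2 arrangement: run the existing CGI scripts under the embedded interpreter with ModPerl::Registry (the directives are standard mod_perl2; the paths are the ones from the vhost above):

        Alias /cgi-bin/ /var/www/v1/cgi-bin/
        <Directory "/var/www/v1/cgi-bin">
            SetHandler perl-script
            PerlResponseHandler ModPerl::Registry
            PerlOptions +ParseHeaders
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>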

  • Cannot get mod_rewrite to work on Mac OSX Mountain Lion

    - by Joel Joel Binks
    I have tried everything I can think of and it still doesn't work. I am trying to get the example code from Larry Ullman's Advanced PHP book to work. His instructions were a bit lacking, so I had to do some research. Here is what I have configured:

    username.conf:

        <Directory "/Users/me/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    httpd.conf:

        LoadModule rewrite_module libexec/apache2/mod_rewrite.so

        DocumentRoot "/Users/me/Sites"

        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

        <Directory "Users/me/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    .htaccess:

        <IfModule mod_rewrite.so>
            RewriteEngine on
            RewriteBase /phplearning/ADVANCED/ch02/
            # Redirect certain paths to index.php:
            RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1
            RewriteLog "/var/log/apache/rewrite.log"
            RewriteLogLevel 2
        </IfModule>

    Nothing has worked, and it won't even log to the rewrite.log file. What have I done wrong? FYI, even when I set up an extremely simple rule or use the root as the rewrite base, it still fails. I have also verified that the mod_rewrite module is running. I am really angry.
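    Not from the question, but two hedged observations about the .htaccess file itself: <IfModule> tests the module's source-file name, so "mod_rewrite.so" never matches and the whole block is silently skipped; and RewriteLog/RewriteLogLevel are only allowed in the server or vhost config, so Apache would reject them inside .htaccess anyway. A corrected sketch:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /phplearning/ADVANCED/ch02/
            # Redirect certain paths to index.php:
            RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1
        </IfModule>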

  • Reinstall after a Root Compromise?

    - by Zoredache
    After reading this question on a server compromise, I started to wonder why people continue to believe that they can recover a compromised system using detection/cleanup tools, or by just fixing the hole that was used to compromise the system. Given all the various rootkit technologies and other things a hacker can do, most experts suggest you should reinstall the operating system. I am hoping to get a better idea why more people don't just take off and nuke the system from orbit. Here are a couple of points I would like to see addressed:

    - Are there conditions where a format/reinstall would not clean the system?
    - Under what types of conditions do you think a system can be cleaned, and when must you do a full reinstall?
    - What reasoning do you have against doing a full reinstall?
    - If you choose not to reinstall, what method do you use to be reasonably confident you have cleaned up and prevented any further damage from happening again?

  • 1 TB HDD and no space to create a partition for Linux!

    - by rangalo
    I have a brand new Acer Aspire 5811 with a Core i5 processor and all that. Windows 7 Home Premium is pre-installed on it. I want to install Arch and set up a dual-boot system. The problem is, Windows shows 4 partitions:

        14 MB    UNKNOWN   recovery partition
        100 MB   NTFS      System Reserved partition for Windows 7
        448 GB   NTFS      Windows 7 system partition
        468 GB   NTFS      Data partition for Windows 7

    But the GParted live CD and the Arch setup both show 5 partitions:

        938 KB   UNKNOWN   system reserved partition
        14 MB    UNKNOWN   recovery partition
        100 MB   NTFS      System Reserved partition for Windows 7
        448 GB   NTFS      Windows 7 system partition
        468 GB   unusable space

    Because of this, I cannot create another primary partition. Can anybody guide me on how to create partitions for installing Arch? Note: I need to keep Windows 7 working. Regards.

  • vhost configuration for owncloud

    - by Razer
    I'm using apache2 to host ownCloud. I configured a vhost file for it, but every time I visit the site my browser downloads a ruby file. Here is my vhost configuration:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName http://rsserver.fritz.box
            DocumentRoot /var/www/owncloud/

            <Directory /var/www/owncloud/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    The Apache error log tells me:

        [Sat Jun 16 20:46:04 2012] [error] [client xx.xx.xx.xx] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/owncloud/core/templates/403.php

    mod_rewrite is enabled. Where is the problem?
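    Not from the question, but a hedged guess about the downloaded file: a browser that receives raw script source instead of a rendered page usually means Apache is not handing .php files to the PHP module at all. On Ubuntu the checks would look like:

        sudo a2enmod php5
        sudo service apache2 restart
        apache2ctl -M | grep php          # should list php5_module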

  • Apache server configuration name resolution (virtual host naming + security)

    - by Homunculus Reticulli
    I have just set up a minimal (hopefully secure? comments welcome) Apache website using the following configuration file:

        <VirtualHost *:80>
            ServerName foobar.com
            ServerAlias www.foobar.com
            ServerAdmin [email protected]
            DocumentRoot /path/to/websites/foobar/web
            DirectoryIndex index.php

            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.errors.log"

            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>
            <Directory /path/to/websites/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I am able to access the website by using www.foobar.com; however, when I type foobar.com, I get the error 'Server not found'. Why is this?

    My second question concerns the security implications of this directive in the configuration above:

        <Directory /path/to/websites/>
            Options -Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    What exactly is it doing, and is it necessary? From my (admittedly limited) understanding of Apache configuration files, this means that anyone will be able to access (write to?) the /path/to/websites/ folder. Is my understanding correct? And if yes, how is this not a security risk?
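    Not from the question, but a hedged note on the first problem: "Server not found" is a browser-side DNS failure, so Apache never sees the request; the bare domain needs its own DNS record alongside www. An illustrative zone-file fragment (the IP is a documentation address, not the real one):

        foobar.com.       IN  A      203.0.113.10
        www.foobar.com.   IN  CNAME  foobar.com.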

  • Strange Apache WebDAV situation (OS X will connect, Ubuntu will not)

    - by mewrei
    So basically my situation is that I have an Apache 2.2 web server running on Linux on another box, configured to serve up WebDAV. Now here's the weird part: I can access the server just fine on my Mac using the "Connect to Server" dialog (I even moved about 5 GB of files over the connection). On my Ubuntu desktop, cadaver will connect as well and let me browse. However, when I try to use Xmarks (BYOS Edition) or the GNOME "Connect to Server" dialog, I get a 403 Forbidden error. My server does digest authentication, if that makes any difference. Here's part of my apache2.conf file:

        <VirtualHost *:80>
            DocumentRoot "/path"
            <Directory "/path">
                Dav on
                AuthType Digest
                AuthName iTools
                AuthDigestDomain "/"
                AuthUserFile /path/to/WebDavUsers
                Options None
                AllowOverride None
                <LimitExcept GET HEAD OPTIONS>
                    require valid-user
                </LimitExcept>
                Order allow,deny
                Allow from All
            </Directory>
            <Directory "/path/*/Public">
                Options +Indexes
            </Directory>
            <Directory "/path/user">
                <LimitExcept GET HEAD OPTIONS>
                    require user user
                </LimitExcept>
            </Directory>
        </VirtualHost>
