Search Results

Search found 17202 results on 689 pages for 'folder permissions'.

Page 176 of 689

  • Add an item to the Finder/Save dialog sidebar

    - by Clinton Blackmore
    I'm working on a script where a user logs into a guest account on OS X and is prompted for their network credentials in order to mount their network home folder (while they benefit from working on a local user folder). As the guest folder is deleted when users log out, I want to discourage them from saving anything there. I would like to replace the items in the Finder and Open/Save sidebar lists (such as "Desktop", username, "Documents", etc.) with ones that would save into their network home folder. Is it possible to do this using AppleScript or Cocoa APIs, or do I need to modify a plist and restart the Finder? [Ack. Looking into ~/Library/Preferences/com.apple.sidebars.plist, it isn't at all clear how I'd populate it.] Similar questions: "AppleScript: adding mounted folder to Finder Sidebar?" suggests using fstab; this code will most likely run as a user, and really, automounting at that point would be too late. "How do you programmatically put folder icons on the Finder sidebar, given that you have to use a custom icon for the folder?" says there is no Cocoa API, but that you can use a Carbon-style LSSharedFileList API that is only documented in a single header file. Does anyone know of some example code to add an item to the Finder sidebar?
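
    A minimal C sketch of the LSSharedFileList route mentioned in the second linked question; the path and display name below are placeholders, and real code should check each call's result:

        #include <CoreServices/CoreServices.h>

        /* Adds a folder to the Finder / Open/Save sidebar favorites.
           "/Network/Homes/jdoe" is a hypothetical network home mount point. */
        int main(void) {
            LSSharedFileListRef favorites =
                LSSharedFileListCreate(NULL, kLSSharedFileListFavoriteItems, NULL);
            CFURLRef url = CFURLCreateWithFileSystemPath(
                NULL, CFSTR("/Network/Homes/jdoe"), kCFURLPOSIXPathStyle, true);
            LSSharedFileListItemRef item = LSSharedFileListInsertItemURL(
                favorites, kLSSharedFileListItemLast,
                CFSTR("Network Home"), NULL /* default folder icon */,
                url, NULL, NULL);
            if (item) CFRelease(item);
            CFRelease(url);
            CFRelease(favorites);
            return 0;
        }

    Compile with cc sidebar.c -framework CoreServices.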

    Read the article

  • PowerShell scripts to back up SQL, SVN

    - by bszom
    I'm trying to use PowerShell to create some backups and then copy them to a web folder (in other words, upload them to a WebDAV share). At first I thought I'd do the WebDAV work from within PowerShell, but it seems this still requires a fair amount of manual labour, i.e. constructing HTTP requests. I then settled for creating a web folder from the script and letting Windows handle the WebDAV part. It seems that all it takes to create a web folder is to create a standard shortcut, as described here. What I can't figure out is how to actually copy files to the shortcut's target. Maybe I'm going about this the wrong way. It would be ideal if I could somehow encrypt the credentials for the WebDAV share in the script, then have it create the web folder, shunt over the files, and delete the web folder again. Or even better, not use a web folder at all. A third option would be to just create the web folder manually and leave it there, though I'd rather not. Any ideas/pointers/tips? :)
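
    One hedged sketch of the "no web folder at all" option: let .NET's WebClient issue the WebDAV PUT directly, with the credentials stored DPAPI-encrypted via Export-Clixml (the server URL and paths are placeholders):

        # one-time setup: store credentials, readable only by this user on this machine
        Get-Credential | Export-Clixml C:\backup\webdav-cred.xml

        # in the backup script
        $cred = Import-Clixml C:\backup\webdav-cred.xml
        $wc = New-Object System.Net.WebClient
        $wc.Credentials = $cred.GetNetworkCredential()
        $wc.UploadFile("https://example.com/dav/sql-backup.bak", "PUT", "C:\backup\sql-backup.bak")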

    Read the article

  • How to extract part of the path and the ending file name with Regex?

    - by brasofilo
    I need to build an associative array with the plugin name and the language file it uses, from paths like the following:

        /whatever/path/length/public_html/wp-content/plugins/adminimize/languages/adminimize-en_US.mo
        /whatever/path/length/public_html/wp-content/plugins/audio-tube/lang/atp-en_US.mo
        /whatever/path/length/public_html/wp-content/languages/en_US.mo
        /whatever/path/length/public_html/wp-content/themes/twentyeleven/languages/en_US.mo

    Those are the language files WordPress is loading. They are all inside /wp-content/, but with variable server paths. I'm looking only for those inside the plugins folder, to grab the plugin folder name and the file name. Hypothetical case in PHP, where the reg_extract_* functions are the parts I'm missing:

        $plugins = array();
        foreach ($big_array as $item) {
            $folder = reg_extract_folder($item);
            if ('plugin' == $folder) {
                // "folder-name-after-plugins-folder"
                $plugin_name = reg_extract_pname($item);
                // "ending-mo-file.mo"
                $file_name = reg_extract_fname($item);
                $plugins[] = array('name' => $plugin_name, 'file' => $file_name);
            }
        }

    [update] Ok, so I was missing quite a basic function, pathinfo... :/ No problem detecting whether /plugins/ is contained in the path. But what about the plugin folder name?
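
    A sketch of one possible regex that collapses all three missing reg_extract_* steps into a single preg_match, assuming every interesting path contains /wp-content/plugins/:

        $plugins = array();
        foreach ($big_array as $item) {
            // group 1: first folder after /plugins/, group 2: trailing .mo file name
            if (preg_match('#/wp-content/plugins/([^/]+)/.*?([^/]+\.mo)$#', $item, $m)) {
                $plugins[] = array('name' => $m[1], 'file' => $m[2]);
            }
        }

    Paths under /wp-content/languages/ or /themes/ simply fail the match, so no separate "is it a plugin?" test is needed.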

    Read the article

  • Problem with impersonating a specific user in WCF service

    - by aJ
    I have a WCF service hosted in IIS on Windows Server 2008. This service needs to write to a shared folder on another machine (Windows XP). The shared folder has write permissions for a particular user, say "X", which exists on both machines, i.e. on the server where the service is running as well as on the machine with the shared folder. The service runs under the NETWORK SERVICE account. For the service to access the shared folder, I have added code to impersonate the user "X" in the service, so that it gets permission to write to the shared folder. Since I want to impersonate user "X" only when a particular section of code runs, I have used the sample code. Even with the impersonation, the service sometimes fails to write to the shared folder; it works sporadically. Whereas if I add the following tag to the Web.config file, it works perfectly:

        <identity impersonate="true" userName="accountname" password="password" />

    But that is not desirable, since it impersonates a specific user for all requests. What I need is to impersonate a specific user only when a particular section of code runs. Also, the impersonation code works absolutely fine when the shared folder is on another Windows Server 2008 machine. Could anyone give me ideas on what's going wrong here?
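
    For the "impersonate only around this block" requirement, the usual pattern (and possibly what the elided sample code already does) is LogonUser plus WindowsIdentity.Impersonate; a hedged C# sketch, using LOGON32_LOGON_NEW_CREDENTIALS so X's credentials are applied only to outbound network access:

        using System;
        using System.ComponentModel;
        using System.IO;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        static class ShareWriter
        {
            const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
            const int LOGON32_PROVIDER_WINNT50 = 3;

            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

            [DllImport("kernel32.dll")]
            static extern bool CloseHandle(IntPtr handle);

            public static void WriteAsX(string uncPath, string contents)
            {
                IntPtr token;
                if (!LogonUser("X", "TARGET-MACHINE", "password",
                               LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50,
                               out token))
                    throw new Win32Exception();   // wraps GetLastError
                try
                {
                    using (WindowsIdentity.Impersonate(token))
                    {
                        // only this block runs with X's network credentials
                        File.WriteAllText(uncPath, contents);
                    }
                }
                finally { CloseHandle(token); }
            }
        }

    NEW_CREDENTIALS affects outbound connections only, which matches the shared-folder case and avoids some of the rights NETWORK SERVICE lacks for a full interactive logon.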

    Read the article

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. tests that run against specific XML files) fail with System.IO.DirectoryNotFoundException. The reason for this is clear: it's looking for the supporting XML files, loaded by various unit tests, in the wrong folder. The way my unit tests are structured looks like this:

        +-- project folder
            +-- unit tests folder
            |     +-- test.xml
            |     +-- test.cs
            +-- project file.xaml
            +-- project file.xaml.cs

    All of my projects have their own UnitTests folder, which contains the .cs file and any XML files, XML schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the code because they are in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly). As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit test XML files should not be copied willy-nilly to the output folder just to make the tests pass. Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to put the testing XML data in code rather than loading it from a file, which I would rather not do.
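
    A hedged sketch of one common fix: mark the XML files as content copied to the output directory, then resolve them relative to the test assembly rather than the working directory (CodeBase is used because NUnit may shadow-copy the assembly):

        using System;
        using System.IO;
        using System.Reflection;

        static class TestData
        {
            // maps "unit tests/test.xml" to wherever the test assembly really runs from
            public static string PathFor(string relativePath)
            {
                var codeBase = new Uri(Assembly.GetExecutingAssembly().CodeBase);
                var assemblyDir = Path.GetDirectoryName(codeBase.LocalPath);
                return Path.Combine(assemblyDir, relativePath);
            }
        }

    Combined with <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> on each test.xml, the same relative path then resolves identically under the IDE, the NUnit GUI, and TeamCity.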

    Read the article

  • What is a root directory in IIS 6, and how do I make a subfolder of my ASP.NET website the root directory?

    - by R_Coder
    I need to integrate a third-party plugin into my ASP.NET website. To install the plugin, the instructions say: "Create an application through your IIS control panel with root directory at (some path inside my website folder)". I am not very familiar with IIS and have rarely worked with it. I have tried every way I could think of in IIS, but I cannot work it out. After installation there is a test page, provided by the plugin, that I have to run to check it, but when I run it, it shows this error:

        It is an error to use a section registered as allowDefinition='MachineToApplication'
        beyond application level. This error can be caused by a virtual directory not
        being configured as an application in IIS.

    I searched this error too, and found that it is caused by having two Web.config files: one from the main project and another in the plugin folder. The only way to fix this is to make the plugin folder they specified the root directory of an application in IIS. Could someone kindly give me some easy steps to do this? What I did was this: in IIS 6, I added a new website pointing at the main folder of my ASP.NET website, then right-clicked, added an application, and chose the given path, thinking it would become the root directory, but it didn't. Help would be appreciated. Also note that I have to put the plugin folder inside my main website folder, so there are two Web.config files. I tried renaming one of them too; it fixed the above error but produced other errors, so I think the main problem is the root directory. P.S. The error points at the plugin folder's Web.config, at this line: Line 51: <authentication mode="Windows" />

    Read the article

  • Can I store and join based on external attributes in Lucene/Solr

    - by Kibbee
    Is there a way to store information about documents stored in Lucene such that I don't have to update an entire document to update certain attributes of it? For instance, let's say I had a bunch of documents, and I wanted to update a permissions list of who is allowed to see the documents on a daily, or more frequent, basis. Would it be possible to update all the permissions each day without updating all the documents? I could do it by keeping track of exactly which permissions were added and removed, but I would rather just be able to take the end list of permissions and use that, rather than have to keep track of all the permission changes and post those entire documents to Lucene.
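
    Lucene of that era has no external-attribute join built in, so one hedged workaround is to keep the ACL outside the index entirely and fold it into each query instead of the documents (a sketch against the classic BooleanQuery/TermQuery API; the ACL store is hypothetical):

        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;

        // wraps the user's query with a query-time ACL from an external store,
        // so permission changes never touch the index
        Query withAcl(Query userQuery, Iterable<String> visibleDocIds) {
            BooleanQuery visible = new BooleanQuery();
            for (String docId : visibleDocIds) {       // e.g. read from a SQL table
                visible.add(new TermQuery(new Term("docId", docId)),
                            BooleanClause.Occur.SHOULD);
            }
            BooleanQuery query = new BooleanQuery();
            query.add(userQuery, BooleanClause.Occur.MUST);  // the actual search
            query.add(visible, BooleanClause.Occur.MUST);    // today's permissions
            return query;
        }

    This only scales while the per-user document list stays below BooleanQuery's clause limit (1024 by default); past that, building a Filter from the same list is the usual escape hatch.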

    Read the article

  • Performing centralized authorization for multiple applications

    - by Vaibhav
    Here's a question that I have been wrestling with for a while. We have a number of applications that we have created, and they have grown organically over a period of time. All of these applications have permission code built into them that controls access to various parts of the application, depending on whether the currently logged-in user has the necessary permissions. Alongside these applications is a utility application which allows an administrator to map users to permissions for all applications. The way it works is that every application reads the utility application's external database to check whether the currently logged-in user has the necessary permission. Now, the question is this: should the user-permission mapping information reside in, and be owned by, the applications themselves, or is it okay for this information to reside in an external entity/DB (in this case, the utility application's database)? Part of me thinks that application permissions are very specific to the application context itself, so they shouldn't be separated from the application. But I am not sure. Any comments?
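
    For concreteness, a hedged sketch of the kind of central schema the utility application implies; scoping every permission and role to an application keeps the app-specific semantics while centralizing only the mapping:

        -- hypothetical central authorization store
        CREATE TABLE app        (app_id  INT PRIMARY KEY, name VARCHAR(100));
        CREATE TABLE permission (perm_id INT PRIMARY KEY,
                                 app_id  INT REFERENCES app,   -- meaning owned by the app
                                 name    VARCHAR(100));
        CREATE TABLE role       (role_id INT PRIMARY KEY,
                                 app_id  INT REFERENCES app,
                                 name    VARCHAR(100));
        CREATE TABLE role_permission (role_id INT REFERENCES role,
                                      perm_id INT REFERENCES permission);
        CREATE TABLE user_role       (user_id INT,
                                      role_id INT REFERENCES role);

    Under this shape the applications still own what their permissions mean; the external database owns only who holds them, which is arguably the part an administrator should manage in one place.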

    Read the article

  • Selenium WebDrivers: load a page without any resources

    - by Biffy
    I am trying to prevent JavaScript from changing the source of the site I'm testing with Selenium. The problem is, I can't just turn JavaScript off in the WebDriver, because I need it for one test. Here's what I'm doing for the Firefox WebDriver:

        firefoxProfile.setPreference("permissions.default.image", 2);
        firefoxProfile.setPreference("permissions.default.script", 2);
        firefoxProfile.setPreference("permissions.default.stylesheet", 2);
        firefoxProfile.setPreference("permissions.default.subdocument", 2);

    This stops Firefox from loading any images, scripts, and stylesheets. How can I do the same with the Internet Explorer WebDriver and the Chrome WebDriver? I have not found any similar preferences. Or is there an even more elegant way to stop the WebDrivers from loading the site's JS files altogether? Thank you!
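
    For Chrome, one hedged sketch using Chrome's content-settings preferences, which newer ChromeOptions versions can pass through (2 means block; whether every resource type is honored is worth verifying against your Chrome version):

        import java.util.HashMap;
        import java.util.Map;
        import org.openqa.selenium.chrome.ChromeDriver;
        import org.openqa.selenium.chrome.ChromeOptions;

        Map<String, Object> prefs = new HashMap<String, Object>();
        prefs.put("profile.managed_default_content_settings.images", 2);
        prefs.put("profile.managed_default_content_settings.javascript", 2);

        ChromeOptions options = new ChromeOptions();
        options.setExperimentalOption("prefs", prefs);
        ChromeDriver driver = new ChromeDriver(options);

    The IE driver exposes no comparable switch; routing all browsers through a filtering proxy that blocks *.js responses is the usual cross-browser fallback.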

    Read the article

  • How to create a workspace in TFS

    - by kumar
    Hi, I followed these steps to create a workspace to manage my source-controlled files:

    1. From the File menu, select Source Control, and then click Workspaces.
    2. In the Manage Workspaces dialog box, click Add.
    3. Type a descriptive name in the Name box, enter a comment describing the new workspace in the Comment box, and provide alternative Owner and Computer name values, as necessary.
    4. Under Working Folders, in the Source Control Folder box, click the text box and then the ellipsis (…).
    5. In the Browse for Folder dialog box, select a server folder, and then click OK.
    6. Under Working Folders, in the Local Folder box, click the text box, and then click the ellipsis (…).
    7. In the Browse for Folder dialog box, select a folder on your computer, and then click OK.
    8. In the Add Workspace dialog box, click OK to create the workspace.
    9. In the Manage Workspaces dialog box, click Close.

    When I click the OK button, shouldn't it get all the folders from TFS onto my local machine? It isn't doing that: after clicking OK and Close, nothing happens, and my local folder does not contain the files either. Thanks
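
    Note that creating a workspace only defines the server-to-local mapping; a separate Get is what actually downloads files (in Source Control Explorer, right-click the mapped folder and choose Get Latest Version). A hedged command-line sketch of the same flow with tf.exe, using 2008-era syntax and placeholder names:

        rem map a server folder to a local folder, then fetch it
        tf workspace /new MyWorkspace /server:http://tfsserver:8080
        tf workfold /map $/MyProject C:\src\MyProject /workspace:MyWorkspace
        cd C:\src\MyProject
        tf get /recursive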

    Read the article

  • Stop htaccess rewrite from creating random pages

    - by Vistol
    Recently I saw in my Webmaster Tools that some random sites are linking to my site. That in itself is not a big issue. The issue comes when the pages being linked are not real pages, because of my .htaccess file. This is the .htaccess code I'm running:

        #Options +FollowSymLinks
        RewriteEngine on
        RewriteRule ^([^/\.]+)/?$ index.php?id=$1 [L]
        RewriteRule ^([0-9]+)/(.*)$ index.php?id=$1 [L]

    So the real URLs look like:

        mysite.com/folder/999/TITLE-OR-NAME

    But because I only check the first folder ($1), which is an ID number, this .htaccess allows anyone to link to my site with random URLs like:

        mysite.com/folder/999/TITLE-OR-NAME1
        mysite.com/folder/999/TITLE-OR-NAME2
        mysite.com/folder/999/TITLE-OR-NAME3
        mysite.com/folder/999/TITLE-OR-NAME4
        mysite.com/folder/999/TITLE-OR-NAME5

    The worst part comes when Google tells me that I am duplicating content! I am not duplicating content; the .htaccess is duplicating it for me. And yes, I know, I'm a bad newbie programmer, but I'd really appreciate your help with this, because I'm struggling to find a solution. Thank you very much for all your support to this newbie :)
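
    A hedged sketch of one fix: capture the title part too and let index.php decide whether it is the real one, answering wrong titles with a 301 to the canonical URL (or a rel=canonical tag), which is what tells Google which variant is real:

        RewriteEngine on
        RewriteRule ^([^/\.]+)/?$ index.php?id=$1 [L]
        # pass the title through so PHP can compare it with the stored one
        RewriteRule ^([0-9]+)/(.*)$ index.php?id=$1&title=$2 [L]

    In index.php, if $_GET['title'] does not match the record's real title, send a 301 with a Location header pointing at the correct URL; .htaccess alone cannot know the right title.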

    Read the article

  • Permission-based access control

    - by jellysaini
    I am trying to implement permission-based access control in ASP.NET. To implement this, I have created some database tables that hold the information about which roles are assigned which permissions, and which roles are assigned to which user. I am checking the permissions in the business access layer. Right now I have a method which checks the permissions of the user: if the user has permission, fine; otherwise it redirects to another page. I want to know if the following is possible:

        class User
        {
            [PermissionCheck(UserID, ObjectName, OperationName)]
            public DataTable GetUser()
            {
                // code for user
            }
        }

    I have seen this in MVC 3. Can I create it in ASP.NET? If so, how can I implement it?
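
    Attributes do nothing by themselves outside a framework like MVC, so in plain ASP.NET something has to read them; a hedged sketch where the business layer checks the attribute on entry (PermissionStore is a hypothetical data-access helper, and note that a runtime value like UserID cannot be an attribute argument, since attribute arguments must be compile-time constants):

        using System;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Method)]
        public class PermissionCheckAttribute : Attribute
        {
            public string ObjectName { get; private set; }
            public string OperationName { get; private set; }

            public PermissionCheckAttribute(string objectName, string operationName)
            {
                ObjectName = objectName;
                OperationName = operationName;
            }
        }

        public static class PermissionGuard
        {
            // call at the top of each guarded method; userId comes from the session
            public static void Demand(MethodBase method, int userId)
            {
                var attr = (PermissionCheckAttribute)Attribute.GetCustomAttribute(
                    method, typeof(PermissionCheckAttribute));
                if (attr != null && !PermissionStore.HasPermission(   // hypothetical DAL
                        userId, attr.ObjectName, attr.OperationName))
                    throw new UnauthorizedAccessException(attr.OperationName);
            }
        }

    GetUser() would then start with PermissionGuard.Demand(MethodBase.GetCurrentMethod(), userId). To get the MVC-style behavior where the attribute alone is enough, you need interception: a dynamic proxy or an IL weaver such as PostSharp.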

    Read the article

  • Maven: how to include a specific folder or file when assembling a project, depending on whether it is a dev or production build?

    - by user563588
    Using maven-assembly-plugin:

        <plugin>
          <artifactId>maven-assembly-plugin</artifactId>
          <version>2.1</version>
          <configuration>
            <descriptors>
              <descriptor>descriptor.xml</descriptor>
            </descriptors>
            <finalName>xxx-impl-${pom.version}</finalName>
            <outputDirectory>target/assembly</outputDirectory>
            <workDirectory>target/assembly/work</workDirectory>
          </configuration>
        </plugin>

    In the descriptor.xml file we can specify:

        <fileSets>
          <fileSet>
            <directory>src/install</directory>
            <outputDirectory>/</outputDirectory>
          </fileSet>
        </fileSets>

    Is it possible to include a specific file from this folder or a sub-folder depending on the profile? Or some other way... Like this:

        <profiles>
          <profile>
            <id>dev</id>
            <activation>
              <activeByDefault>false</activeByDefault>
            </activation>
            <build>
              <resources>
                <resource>
                  <directory>src/install/dev</directory>
                  <includes>
                    <include>**/*</include>
                  </includes>
                </resource>
              </resources>
            </build>
          </profile>
          <profile>
            <id>prod</id>
            <build>
              <resources>
                <resource>
                  <directory>src/install/prod</directory>
                  <includes>
                    <include>**/*</include>
                  </includes>
                </resource>
              </resources>
            </build>
          </profile>
        </profiles>

    But that puts the resources in the jar when packaging, while we need them in the zip when assembling, as already mentioned above :( Thanks!
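
    A hedged sketch of one standard approach: let each profile set a property and interpolate it in the assembly descriptor (the assembly plugin interpolates ${...} POM properties in descriptors), so the choice lands in the zip rather than the jar:

        <!-- in the POM: each profile picks a folder -->
        <profiles>
          <profile>
            <id>dev</id>
            <properties><install.dir>src/install/dev</install.dir></properties>
          </profile>
          <profile>
            <id>prod</id>
            <properties><install.dir>src/install/prod</install.dir></properties>
          </profile>
        </profiles>

        <!-- in descriptor.xml -->
        <fileSets>
          <fileSet>
            <directory>${install.dir}</directory>
            <outputDirectory>/</outputDirectory>
          </fileSet>
        </fileSets>

    Then mvn -Pdev assembly:assembly and mvn -Pprod assembly:assembly produce the two variants without touching the jar's resources.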

    Read the article

  • DCOM Authentication Fails to use Kerberos, Falls back to NTLM

    - by Asa Yeamans
    I have a webservice written in Classic ASP. In this webservice it attempts to create a VirtualServer.Application object on another server via DCOM. This fails with Permission Denied. However, I have another component instantiated in this same webservice on the same remote server that is created without problems. This component is a custom in-house component. The webservice is called from a standalone EXE program that calls it via WinHTTP. It has been verified that WinHTTP is authenticating with Kerberos to the webservice successfully. The user authenticated to the webservice is the Administrator user.

    I have verified the DCOM permissions on the remote computer with DCOMCNFG. The default limits allow administrators both local and remote activation, both local and remote access, and both local and remote launch. The default component permissions allow the same. This has been verified. The individual component permissions for the working component are set to defaults; the individual component permissions for the VirtualServer.Application component are also set to defaults. Based upon these settings, the webservice should be able to instantiate and access the components on the remote computer.

    Setting up a Wireshark trace while running both tests, one with the working component and one with the VirtualServer.Application component, reveals an interesting behavior. When the webservice instantiates the working, custom component, I can see the request on the wire to the RPCSS endpoint mapper first perform the TCP connect sequence, then perform the bind request with the appropriate security package, in this case Kerberos. After it obtains the endpoint for the working DCOM component, it connects to the DCOM endpoint, authenticating again via Kerberos, and successfully instantiates and communicates. For the failing VirtualServer.Application component, I again see the bind request with Kerberos go to the RPCSS endpoint mapper successfully. However, when it then attempts to connect to the endpoint in the Virtual Server process, it fails to connect, because it only attempts to authenticate with NTLM, which ultimately fails because the webservice does not have access to the credentials to perform the NTLM hash. Why is it attempting to authenticate via NTLM?

    Additional information:

    - Both components run on the same server via DCOM
    - Both components are Win32 service components, and both Win32 services are set to run as Local System on that server
    - Both components have the exact same launch/access/activation DCOM permissions
    - The "permission denied" is not a permissions issue as far as I can tell; it is an authentication issue. Permission is denied because NTLM authentication is used with a NULL username instead of Kerberos delegation.
    - Constrained delegation is set up on the server hosting the webservice: it is allowed to delegate to rpcss/dcom-server-name and vssvc/dcom-server-name, and the DCOM server is allowed to delegate to rpcss/webservice-server
    - The SPNs registered on the DCOM server include rpcss/dcom-server-name and vssvc/dcom-server-name, as well as the HOST/dcom-server-name related SPNs
    - The SPNs registered on the webservice server include rpcss/webservice-server and the HOST/webservice-server related SPNs

    Does anybody have any ideas why the attempt to create a VirtualServer.Application object on a remote server falls back to NTLM authentication, causing it to fail with permission denied?

    Additional information: when the following code is run in the context of the webservice, directly via a testing-only, just-developed COM component, it fails on the indicated line with Access Denied:

        COSERVERINFO csi;
        csi.dwReserved1 = 0;
        csi.pwszName = L"terahnee.rivin.net";
        csi.pAuthInfo = NULL;
        csi.dwReserved2 = 0;

        hr = CoGetClassObject(CLSID_VirtualServer, CLSCTX_ALL, &csi,
                              IID_IClassFactory, (void **) &pClsFact);
        if (FAILED(hr)) goto error1;

        // Fails here with HRESULT_FROM_WIN32(ERROR_ACCESS_DENIED)
        hr = pClsFact->CreateInstance(NULL, IID_IUnknown, (void **) &pUnk);
        if (FAILED(hr)) goto error2;

    I've also noticed in the Wireshark traces that the attempt to connect to the service process component only requests NTLMSSP authentication; it doesn't even attempt to use Kerberos. This suggests that for some reason the webservice thinks it can't use Kerberos...
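
    A hedged experiment that can isolate this: force Kerberos explicitly by handing CoGetClassObject a COAUTHINFO with the target SPN, instead of leaving pAuthInfo NULL and letting the security-blanket negotiation fall back (the SPN string is a placeholder):

        // same surrounding code as above
        COAUTHINFO cai = {0};
        cai.dwAuthnSvc = RPC_C_AUTHN_GSS_KERBEROS;      // no NTLM fallback allowed
        cai.dwAuthzSvc = RPC_C_AUTHZ_NONE;
        cai.pwszServerPrincName = L"RPCSS/terahnee.rivin.net";  // placeholder SPN
        cai.dwAuthnLevel = RPC_C_AUTHN_LEVEL_PKT_PRIVACY;
        cai.dwImpersonationLevel = RPC_C_IMP_LEVEL_DELEGATE;    // required for delegation
        cai.pAuthIdentityData = NULL;                   // use the caller's identity
        cai.dwCapabilities = EOAC_NONE;

        COSERVERINFO csi = {0};
        csi.pwszName = L"terahnee.rivin.net";
        csi.pAuthInfo = &cai;   // NULL here means "negotiate", which is what fell back

        hr = CoGetClassObject(CLSID_VirtualServer, CLSCTX_ALL, &csi,
                              IID_IClassFactory, (void **) &pClsFact);

    If this call succeeds where the NULL-pAuthInfo one failed, the problem is in the blanket negotiation rather than in the delegation setup itself.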

    Read the article

  • How can I get Thunderbird to automatically move messages?

    - by David Heffernan
    I have Thunderbird 15. I'd like to automatically move messages from one folder to another. My mail account is an IMAP account. My BlackBerry is also connected to the account, and when it sends mail, it places a copy on the IMAP server in a folder named Sent Items. I'd like those messages to be moved to my Inbox automatically. By default, message filters are only applied automatically to the Inbox. There is an extension to do this, Filter Subfolders, but it's only for TB3. What I have tried so far is: use the FiltaQuilla add-on to be able to filter messages by folder name, and set the string property mail.server.default.applyIncomingFilters to true, as recommended here: http://blog.mozilla.org/bcrowder/. But I can't get these filters to run automatically. I have a suspicion that filters only run automatically for incoming mail, and these are sent items; perhaps that's it, I just don't know. On the other hand, if I run the filters manually on that folder, it does indeed move the mail. Or perhaps the issue is that these messages are saved into the Sent Items folder marked as read. Is it possible that filters are only automatically applied to unread items? If I could install an add-on that automatically ran the message filter on my folder, that would do it. Anyway, I'm at a loss now. Any suggestions are welcome. I'm not at all wedded to using filters; I just want to find a way to get these messages moved without human interaction!

    Read the article

  • With a username passed to a script, find the user's home directory

    - by Clinton Blackmore
    I am writing a script that is called when a user logs in, and it checks whether a certain folder exists or is a broken symlink. (This is on a Mac OS X system, but the question is purely bash.) It is not elegant, and it is not working, but right now it looks like this:

        #!/bin/bash
        # Often users have a messed up cache folder -- one that was redirected
        # but now is just a broken symlink. This script checks to see if
        # the cache folder is all right, and if not, deletes it
        # so that the system can recreate it.

        USERNAME=$3

        if [ "$USERNAME" == "" ] ; then
            echo "This script must be run at login!" >&2
            exit 1
        fi

        DIR="~$USERNAME/Library/Caches"
        cd $DIR || rm $DIR && echo "Removed misdirected Cache folder" && exit 0
        echo "Cache folder was fine."

    The crux of the problem is that the tilde expansion is not working as I'd like. Let us say that I have a user named george, and that his home folder is /a/path/to/georges_home. If, at a shell, I type:

        cd ~george

    it takes me to the appropriate directory. If I type:

        HOME_DIR=~george
        echo $HOME_DIR

    it gives me:

        /a/path/to/georges_home

    However, if I try to use a variable, it does not work:

        USERNAME="george"
        cd ~$USERNAME
        -bash: cd: ~george: No such file or directory

    I've tried using quotes and backticks, but can't figure out how to make it expand properly. How do I make this work?
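
    A hedged sketch of the usual fixes: tilde expansion happens before variable expansion, so look the home directory up instead; dscl is the Mac-native route, and eval is the portable-but-handle-with-care fallback:

        #!/bin/bash
        USERNAME=$3

        # Mac OS X: ask Directory Services for the home directory.
        # (awk prints the second field, so a path containing spaces would
        # need cut -d' ' -f2- instead.)
        HOME_DIR=$(dscl . -read "/Users/$USERNAME" NFSHomeDirectory | awk '{print $2}')

        # portable alternative: re-run tilde expansion after $USERNAME is substituted
        # HOME_DIR=$(eval echo "~$USERNAME")

        DIR="$HOME_DIR/Library/Caches"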

    Read the article

  • Set up site folders on Apache and PHP

    - by Cobus Kruger
    I'm trying to set up my first Apache server on my Windows PC at home, and I have real trouble finding out which configuration settings go where. I downloaded and installed XAMPP, which seemed to get everything nicely set up, and I can see a working website on http://localhost. So far so good. The point of this is to develop a website, of course, and to make my life easier (irony?), I wanted to let the website root point to my Eclipse project folder. So I opened httpd-vhosts.conf, uncommented a VirtualHost block, and changed its DocumentRoot to my local path. Now when I try to load http://localhost I get a 403 (Access denied) error. So where do I configure permissions for my folder? And is that all I need to let my site run from the folder specified, or am I going to have to clear another hurdle? Update: I tried to simplify things a little, so I reinstalled XAMPP and got back to a working http://localhost. Then I confirmed that httpd-vhosts.conf is included in httpd.conf and made the following changes to httpd-vhosts.conf: uncommented the line NameVirtualHost *:80, and added the virtual host shown below. I restarted Apache and saw the expected page on http://localhost:

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs/"
            ServerName localhost
            ErrorLog "logs/dummy-host2.localhost-error.log"
            CustomLog "logs/dummy-host2.localhost-access.log" combined
        </VirtualHost>

    I then created a new folder named C:\testweb, added an index.html file, and changed the DocumentRoot line shown above. For all intents and purposes I would expect the two configurations to be equivalent, but this setup gives me an error 403. Even though the C:\testweb folder already had the same permissions as the C:\xampp\htdocs folder, I went further and gave the Everyone group full control of C:\testweb, and got exactly the same problem. So what did I miss?
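
    A hedged guess at the missing piece: Apache's own access rules rather than Windows ACLs. XAMPP's httpd.conf typically grants access to C:/xampp/htdocs and denies everything else, so a DocumentRoot outside it needs its own Directory block (Apache 2.2-style syntax, matching XAMPP of that era):

        <VirtualHost *:80>
            DocumentRoot "C:/testweb"
            ServerName localhost
            <Directory "C:/testweb">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>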

    Read the article

  • htaccess not properly rewriting urls

    - by Cameron Ball
    This is a bit of a weird one. I'm doing some work on a server, and I need rewrite rules for directories that actually exist (in some cases, they are more than one level deep). At the moment my .htaccess looks like this:

        RewriteEngine on
        RewriteRule ^simfiles/([-\ a-zA-Z0-9:/]+)$ http://mydomain.com/?portal=simfiles&folder=$1 [L]

    And this is working OK; for example, a URL like

        mydomain.com/simfiles/my-files

    will get redirected to

        mydomain.com/?portal=simfiles&folder=my-files

    Or, in the case of a directory structure that is deeper than one level,

        mydomain.com/simfiles/my-files/more-of-my-files

    will get redirected to

        mydomain.com/?portal=simfiles&folder=my-files/more-of-my-files

    I wrote the regex so that it won't match things with a . in the path, because there are css and js files residing in simfiles/somedirectory, and if I redirect everything then these cannot be loaded. I tried a configuration like this:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^simfiles/([-\ a-zA-Z0-9:/\.]+)$ http://mydomain.com/?portal=simfiles&folder=$1 [L]

    But that doesn't work; things still don't load properly. So my first question is, how can I achieve this "properly"? I don't like my solution, because it means redirects won't occur if the folder has a . in its name. My second problem is that while the redirection happens properly, the URL becomes

        http://mydomain.com/?portal=simfiles&folder=my-files

    I want the URL to remain clean, like

        http://mydomain.com/simfiles/my-files

    How can I achieve this?
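
    A hedged sketch addressing both problems at once: substituting a relative path instead of a full http:// URL makes mod_rewrite rewrite internally, so the browser's URL stays clean, and the !-f/!-d conditions exempt real files such as the css/js, so the dot restriction can go entirely:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^simfiles/(.+)$ /index.php?portal=simfiles&folder=$1 [L,QSA]

    The earlier attempt kept the absolute http://mydomain.com/... target, and an absolute URL in the substitution is precisely what forces mod_rewrite to issue an external redirect.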

    Read the article

  • Cannot read status from the monit daemon, even with allowed group

    - by jefflunt
    I cannot seem to get monit status or other CLI commands to work. I've built monit v5.8 to run on a Raspberry Pi. I'm able to add services to be monitored, and the web interface can be accessed just fine, as I've set it up for public read-only access (it's a test server, not my final production setup, so not a big deal right now). The problem is, when I run monit status while logged in as root I get:

        # monit status
        monit: cannot read status from the monit daemon

    I also have monit started on boot via this /etc/inittab entry:

        mo:2345:respawn:/usr/local/bin/monit -Ic /etc/monitrc

    I've verified that monit is running, and I'm getting email alerts any time I either kill the monit process manually or reboot my Raspberry Pi. So, next I check my monitrc file permissions to see which group is allowed access:

        # ls -al /etc/monitrc
        -rw------- 1 root root 2359 Aug 24 14:48 /etc/monitrc

    Here's the relevant allow section of the control file:

        set httpd port 80
            allow [omitted] readonly
            allow @root
            allow localhost
            allow 0.0.0.0/0.0.0.0

    I also tried setting permissions on this file to 640 to allow group read permissions, but no matter what I try, I either get the same error as noted above, or, when the permissions are set to 640:

        # monit status
        monit: The control file '/etc/monitrc' must have permissions no more
        than -rwx------ (0700); right now permissions are -rw-r----- (0640).

    What am I missing here? I know that the httpd must be enabled, as that's the interface the CLI uses to get information (or so I've read), so I've done that. And in terms of monit doing its monitoring job and sending email alerts, that's all working. Here's my entire monitrc file; again, this is v5.8, built with both PAM and SSL support, and the process runs under the root user:

        # Global settings
        set daemon 300 with start delay 5
        set logfile /var/log/monit.log
        set pidfile /var/run/monit.pid
        set idfile /var/run/.monit.id
        set statefile /var/run/.monit.state

        # Mail alerts
        ## Set the list of mail servers for alert delivery. Multiple servers may be
        ## specified using a comma separator. If the first mail server fails, Monit
        ## will use the second mail server in the list and so on. By default Monit
        ## uses port 25 - it is possible to override this with the PORT option.
        set mailserver smtp.gmail.com port 587
            username [omitted] password [omitted]
            using tlsv1

        ## Send status and events to M/Monit (for more information about M/Monit
        ## see http://mmonit.com/). By default Monit registers credentials with
        ## M/Monit so M/Monit can smoothly communicate back to Monit and you don't
        ## have to register Monit credentials manually in M/Monit. It is possible
        ## to disable credential registration using the commented out option below.
        ## Though, if safety is a concern we recommend instead using https when
        ## communicating with M/Monit and send credentials encrypted.
        # set mmonit http://monit:[email protected]:8080/collector
        #     # and register without credentials
        #     # Don't register credentials

        ## Monit by default uses the following format for alerts if the mail-format
        ## statement is missing:
        set mail-format {
          from: [email protected]
          subject: $SERVICE $DESCRIPTION
          message: $EVENT
          Service: $SERVICE
          Date: $DATE
          Action: $ACTION
          Host: $HOST
          Description: $DESCRIPTION

          Monit instance provided by chicagomeshnet.com
        }

        # Web status page
        set httpd port 80
            allow [omitted] readonly
            allow @root
            allow localhost
            allow 0.0.0.0/0.0.0.0

        ## You can set alert recipients whom will receive alerts if/when a
        ## service defined in this file has errors. Alerts may be restricted on
        ## events by using a filter as in the second example below.
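
    Not a confirmed diagnosis, but one hedged thing to try: move the status interface off port 80 (where another web server may be interfering) onto monit's conventional local port, since the CLI reads monitrc and connects to that same httpd section from localhost:

        set httpd port 2812 and
            use address localhost   # keep the status port off the public interface
            allow localhost         # this is the entry the CLI actually uses

    Then run monit -c /etc/monitrc status; running monit with -v also shows how the control file is being parsed, which can narrow things down.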

    Read the article

  • Can't resolve offline file conflicts

    - by Bryan
    We use roaming profiles on our Server 2008 R2 domain, with folder redirection for 'Desktop', 'My Documents' and 'Application Data'. As our network is split across two sites, we have one file server at each site, configured to use domain-based DFS namespaces and DFS replication to keep things in sync. The DFS path for the replication folder is as follows:

        \\domain\folderredirection$\<username>\<redirected-folder-name>

    The real paths are:

        \\site-1-server\folderredirection$\<username>\<redirected-folder-name>
        \\site-2-server\folderredirection$\<username>\<redirected-folder-name>

    As our users all switch between sites (sometimes several times per day), our folder redirection policy has to redirect to the DFS roots rather than hardcode a specific server. Both DFS and DFS-R have been proven to be working perfectly. On our laptops we use offline files for the redirected folders, and this also works fine. The problem is as follows: when conflicts occur in offline files, it is impossible to resolve them. I'm given the usual conflict resolution options (i.e. 'Ignore', 'Keep Both', 'Keep network' and 'Keep local'); however, not one of these options will resolve any conflict, yet no error is produced. We only use offline files on laptops, which have either Windows XP Professional or Windows 7 Professional installed. The problem is not specific to any one laptop; it affects every laptop and every conflicting file in exactly the same way. I would have thought our setup is common for companies with multiple sites, so I'm hoping someone will have seen this before?

    Read the article

  • Exchange 2010 remove Arbitration mailbox and mailbox store db

    - by JNM
    I have a problem with Exchange 2010 which is a nightmare for me. The problem is that in the Exchange Management Console I have several store databases in the database management tab. Only one is mounted, because I am using it. The second one is still listed, but it was used on another server before (that server is now dead), and its mounted status shows UNKNOWN. The file for that database does not exist, but it still shows there. I can't remove it from the management console, because it has mailboxes. I removed all mailboxes and disabled two arbitration mailboxes. I can't delete it, because I still have one arbitration mailbox left. I can't move it, because that requires a connection to the dead server. I can't disable it, because I get an error that it is the last one in the organization. Can somebody help me? Solved it by using this command:

        Get-Mailbox -Arbitration -Database db1 | Remove-Mailbox -Arbitration -RemoveLastArbitrationMailboxAllowed

    Now I have another problem. The Exchange Management Console shows a public folder from a different server, which is dead now. That folder was copied here, but it is not needed anymore. The public folder file has been deleted, and the records in ADSI Edit have been removed too. But I can't remove that folder from the management console; I get the error: Exchange isn't able to check for public folder replicas for "My Public Folder Database". Can anybody help me with that?
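
    For the leftover public folder database, a hedged first step in the Exchange Management Shell is to try the dedicated cmdlet rather than the console, since it reports its replica objections directly:

        # see what Exchange still believes exists
        Get-PublicFolderDatabase | Format-List Name,Server

        # then try removing the dead one by name
        Remove-PublicFolderDatabase -Identity "My Public Folder Database"

    If it still refuses because of phantom replicas, the remaining references usually live on the folders themselves rather than on the database object that was cleaned out of ADSI Edit.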

    Read the article

  • Millions of files in PHP's tmp folder causing errors - how do I delete them?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; command, but that didn't work. ls 'sess_0*' | xargs rm didn't either. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size. REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown:
        open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb,
        O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown:
        Failed to write session data (files). Please verify that the current setting
        of session.save_path is correct (/var/www/clients/client1/web1/tmp) in
        Unknown on line 0

    I ran:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0              457G  126G  308G  29% /
        tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
        udev                   10M  664K  9.4M   7% /dev
        tmpfs                 1.8G     0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0):
        ext3_dx_add_entry: Directory index full!

    This led me to think of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        $ find . -exec rm {} \;
        rm: cannot remove directory '.'
        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, PHP5, ISPConfig, SuEXEC and Fast-CGI.
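
    A hedged sketch of the usual way out: let find delete the matches itself, which avoids both shell globbing (the "Argument list too long") and a fork per file (the "Cannot allocate memory"):

        cd /var/www/clients/client1/web1/tmp

        # GNU find: -delete needs no child process at all
        find . -maxdepth 1 -type f -name 'sess_*' -delete

        # older find without -delete: batch the removals instead of one rm each
        # find . -maxdepth 1 -type f -name 'sess_*' -print0 | xargs -0 rm -f

    Even after the entries are gone, the oversized directory inode stays big on ext3; recreating the tmp directory itself (and pointing session.save_path at the fresh one) is the usual final step.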

    Read the article

  • Is there any way to synchronize Outlook RSS Feeds with BlackBerry?

    - by nvuono
    Does anyone know how I can view the contents of my Outlook 2007 RSS Feeds from a corporate-issued BlackBerry? Our Inbox and Calendar are already integrated with the corporate Exchange servers, but it looks like nobody cares too much about the RSS Feeds. Is there some setting on my BlackBerry or in Outlook I could possibly tweak to include these updates? I know there are many standalone RSS readers available for BlackBerry (Google Reader, for example), but I mention Outlook RSS Feeds specifically in my question because I am subscribing to a number of RSS feeds I've set up on my intranet for various version control systems, which would be inaccessible to an external RSS reader. It seems like I might have to set up some sort of email commit notifications if I want anything from my BlackBerry, but I much prefer the 'pull' method of an RSS feed viewer over receiving streams of emails. Please feel free to suggest any alternatives! Edit: I've additionally tried moving my "SVN Repository" folder directly into my Mailbox instead of keeping it as a child of the RSS Feeds folder. This allows me to view the SVN Repository folder on my BlackBerry, where previously the RSS Feeds folder and all children were hidden, but unfortunately it never seems to get populated with the items that are displaying in Outlook. I've even made a fresh commit to make sure that the SVN Repository folder still works correctly in Outlook from outside the RSS Feeds folder, but no luck on the BlackBerry end of things. BlackBerry model details: BlackBerry 8310 smartphone (EDGE), v4.2.2.170, Platform 2.5.0.30.

    Read the article

  • How do I completely delete Ask.com from my computer?

    - by celyn
    I have used Final Uninstaller (unregistered version) to remove it. It removed the toolbar and the things in its folder from C:\Program Files\Ask.com, except for one thing. What remains is the "Ask.com" folder > "Updater" folder > "Updater.exe". I have not checked my registry yet, but if there is something there, I want it gone too! As to why I can't delete that updater: my laptop asks me for permission (says I need to be admin) whenever I try to delete anything from the Ask.com folder, or the folder itself. I have googled, and I came across and followed the instructions from "Scott McClenning" in this post. It does not really work. When I say "not really", I mean this error message pops up every time I try:

        An error occurred applying attributes to the file:
        C:\Program Files\Ask.com
        Access is denied.

    How can I gain access? I AM the admin on this computer. And please don't ask me to download too many things for my computer; it adds to my frustration. Just in case you are wondering, I got this from FormatFactory when I updated it to 2.70. I should not have done so. Update: Now, after I restarted my computer, I got the "EVERYONE" group in, and it has Full Control with every box ticked except for the last one (Special). When I try to delete the folder and the .exe file, the error keeps popping up as I click "try again", and only goes away when I click "cancel".
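
    If the permission dialogs keep winning, a hedged last resort from an elevated Command Prompt: stop the process, take ownership, grant yourself rights, and remove the folder:

        taskkill /im Updater.exe /f
        takeown /f "C:\Program Files\Ask.com" /r /d y
        icacls "C:\Program Files\Ask.com" /grant Administrators:F /t
        rd /s /q "C:\Program Files\Ask.com"

    A running Updater.exe is a likely reason the delete was refused even with full control on the folder.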

    Read the article
