Search Results

Search found 17278 results on 692 pages for 'directory conventions'.

  • Specify a custom dictionary for FxCop and Visual Studio source analysis

    - by Marko Apfel
    Renaming the default custom dictionary from CustomDictionary.xml to another name - for instance FxCop.CustomDictionary.xml - needs some additional changes to work in the involved applications. Visual Studio Team System code analysis: for Visual Studio Team System code analysis, this file should be added as a link to all projects, with its Build Action set to CodeAnalysisDictionary. Build target: in a build target, the command-line tool FxCopCmd should be called with the /dictionary parameter: <Target Name="FxCop"> <Exec Command="&quot;$(ProjectDir)..\..\build\FxCop\FxCopCmd.exe&quot; /file:&quot;$(TargetPath)&quot; /project:&quot;$(ProjectDir)..\EsriDE.SfgPraxair.FxCop&quot; /directory:&quot;$(ProjectDir)..\..\lib\Esri.ArcGIS&quot; /directory:&quot;$(ProjectDir)..\..\lib\Microsoft&quot; /dictionary:&quot;$(ProjectDir)..\FxCop.CustomDictionary.xml&quot; /out:&quot;$(OutDir)..\$(ProjectName).FxCopReport.xml&quot; /console /forceoutput /ignoregeneratedcode"> </Exec> <Message Text="FxCop finished." /> </Target> FxCop GUI (standalone application): the FxCop GUI has no option to specify a custom file name, but you can add a hint in the FxCop project file. Open this file and look for the line: <CustomDictionaries SearchFxCopDir="True" SearchUserProfile="True" SearchProjectDir="True" /> Then change it to: <CustomDictionaries SearchFxCopDir="True" SearchUserProfile="True" SearchProjectDir="True"> <CustomDictionary Path="FxCop.CustomDictionary.xml"/> </CustomDictionaries> Ready :-)
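    For the "added as a link" step, the project-file entry would look roughly like the following sketch - a hypothetical .csproj fragment; the relative path is an assumption:

        <!-- reference the shared dictionary as a link; the item type doubles
             as the CodeAnalysisDictionary build action -->
        <ItemGroup>
          <CodeAnalysisDictionary Include="..\FxCop.CustomDictionary.xml">
            <Link>FxCop.CustomDictionary.xml</Link>
          </CodeAnalysisDictionary>
        </ItemGroup>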

    Read the article

  • How to configure a custom error page in Plesk 9.3 for a non-existent folder?

    - by Junior Mayhé
    I'm trying to configure Plesk to show website visitors custom error HTML. The hosted site is an ASP.NET site that shows its custom errors through the error403.aspx and error404.aspx files. To comply with Plesk, I've created error_docs with the required files like forbidden.html, etc. When a user navigates to http://mysite.com/a_missing_page.aspx, the visitor is redirected to error404.aspx correctly. But when a user navigates to a non-existent directory, http://mysite.com/a_missing_folder/, the site shows the standard IIS 404 page. Plesk has Custom error documents activated in the web hosting settings, and the ASP.NET error pages defined in web.config show fine, but it seems Plesk won't show its custom HTML error documents. The bottom line is setting up a custom error page for a directory. Is it possible to do this using Plesk, or do I have to change it manually in IIS?
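    On IIS 7 or later, one way to cover requests that never reach ASP.NET (such as a missing folder) is the httpErrors section, which works at the IIS level rather than through customErrors. A minimal sketch, assuming IIS 7+; the error page path is illustrative:

        <!-- web.config: httpErrors catches static/directory 404s too -->
        <system.webServer>
          <httpErrors errorMode="Custom" existingResponse="Replace">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" path="/error404.aspx" responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>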

    Read the article

  • apache 2.4 php-fpm proxypassmatch for pretty urls

    - by tubaguy50035
    I have a URL like http://newsymfony.dev/app_dev.php/_profiler/5080653d965eb. I would like to send this script to PHP-FPM. I currently have this as my vhost: <VirtualHost *:80> ServerName newsymfony.dev DocumentRoot /home/COMPANY/nwalke/www/newsymfony.dev/web/ ErrorLog /var/log/apache2/error_log LogLevel info CustomLog /var/log/apache2/access_log combined <Directory /home/COMPANY/nwalke/www/newsymfony.dev/web/> AllowOverride All Require all granted DirectoryIndex app.php </Directory> ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9003/home/COMPANY/nwalke/www/newsymfony.dev/web/$1 </VirtualHost> If I browse to app_dev.php it works just fine. But if I do app_dev.php/_profiler/5080653d965eb I get a 404 and the request never gets sent to FPM. How can I alter my ProxyPassMatch to pass anything with .php in the URL? I'm trying to do this with Symfony, but I'm pretty sure it applies to everything.
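    A commonly suggested tweak (a sketch, not guaranteed for every setup) is to let the regex also capture the PATH_INFO part after ".php", so profiler-style URLs match too:

        # capture "script.php" plus any trailing /path/info and forward both
        ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9003/home/COMPANY/nwalke/www/newsymfony.dev/web/$1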

    Read the article

  • Creating hard drive backup images efficiently

    - by Arrieta
    We are in the process of pruning our directories to recover some disk space. The 'algorithm' for the pruning/backup process consists of a list of directories and, for each one of them, a set of rules, e.g. 'compress *.bin', 'move *.blah', 'delete *.crap', 'leave *.important'; these rules change from directory to directory but are well known. The compressed and moved files are stored in a temporary file system, burned onto a Blu-ray, tested within the Blu-ray, and, finally, deleted from their original locations. I am doing this in Python (basically an os.walk loop with a dictionary holding the rules for each extension in each folder). Do you recommend a better methodology for pruning file systems? How do you do it? We run on Linux.
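    As a rough illustration of the approach described above - a minimal sketch with hypothetical directory names and rule tables, not the poster's actual script:

        import fnmatch, gzip, os, shutil

        RULES = {"/data/results": {"*.bin": "compress", "*.blah": "move", "*.crap": "delete"}}
        STAGING = "/tmp/staging"  # stand-in for the temporary filesystem (must exist)

        for root, dirs, files in os.walk("/data/results"):
            rules = RULES.get(root, {})
            for name in files:
                path = os.path.join(root, name)
                for pattern, action in rules.items():
                    if fnmatch.fnmatch(name, pattern):
                        if action == "delete":
                            os.remove(path)
                        elif action == "move":
                            shutil.move(path, os.path.join(STAGING, name))
                        elif action == "compress":
                            with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                                shutil.copyfileobj(src, dst)
                        break  # first matching rule wins; '*.important' files are simply left alone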

    Read the article

  • Magento on Zend Server (Win7) installation error

    - by czerasz
    I'm trying to install Magento for the first time. I've created the database with the name "project". In my C:\Zend\Apache2\conf\httpd.conf I added at the end: <Directory "C:\Zend\Apache2\htdocs\project"> Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> In my ZendServer/Server Setup/Extensions, PDO_MySQL, simplexml, mcrypt, hash, GD, DOM, iconv, curl and SOAP are on. In C:\Zend\ZendServer\etc\php.ini I set: safe_mode = Off ;<-- was set to off ... memory_limit = 512M; Maximum amount of memory a script may consume (128MB) After the "Configuration" step of the Magento installation (with Use Web Server (Apache) Rewrites enabled) I get: Internal Server Error. My database is full of tables (that should be OK). My Zend Server shows: 27-Oct 06:55 6 Severe Slow Request Execution (Absolute) http://localhost/project/index.php/install/wizard/installDb/ Critical Open 27-Oct 06:55 4 Fatal PHP Error C:\Zend\Apache2\htdocs\project\lib\Varien\Db\Adapter\Pdo\Mysql.php Critical Open 27-Oct 06:55 5 Slow Function Execution curl_exec Warning Open 27-Oct 06:55 5 Slow Request Execution (Absolute) http://localhost/project/index.php/install/wizard/configPost/ What can be wrong?

    Read the article

  • How to install PHP, Pear, PECL, and APC with Homebrew on Mac OS X?

    - by Andrew
    I'm trying to install APC for PHP 5.3 in the easiest way possible. I love Homebrew so I started down that route. I was able to install PHP 5.3.6 with this command: brew install https://github.com/adamv/homebrew-alt/raw/master/duplicates/php.rb --with-mysql I think this is supposed to install PHP, PEAR, and PECL. It seems to install these just fine. Now when I try to install APC: $ pecl install apc downloading APC-3.1.9.tgz ... Starting to download APC-3.1.9.tgz (155,540 bytes) .................................done: 155,540 bytes Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in PackageFile.php on line 305 Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305 Fatal error: require_once(): Failed opening required 'Archive/Tar.php' (include_path='/usr/local/Cellar/php/5.3.6/lib/php') in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305 How can I fix this?
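    The missing Archive/Tar.php means the PEAR installation itself is incomplete (Archive_Tar is what PEAR/PECL use to unpack packages). One commonly suggested repair - a sketch, with the Cellar path taken from the error output above - is to reinstall PEAR into that prefix and retry:

        curl -O http://pear.php.net/install-pear-nozlib.phar
        php install-pear-nozlib.phar -d /usr/local/Cellar/php/5.3.6/lib/php
        pecl install apc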

    Read the article

  • How to compile VTK on a Mac Air

    - by Ilan Tal
    I am trying to compile VTK on a Mac after successfully doing so on Linux. I am using CMake 2.8.9 and for a long time I had a problem where it couldn't find jni.h. If anyone else has the problem, the answer is to have JAVA_INCLUDE_PATH2 point to "-framework JavaVM". After that it successfully finds JNI and does a "Configure" followed by "Generate". According to the instructions I have the next step should be easy: in a terminal run "make -j 2". I get a message "-bash: make: command not found". Strange.... I am in the vtkBuild directory, which seems the most reasonable place. I even tried the parent directory where I have the VTK source but that didn't work either. I ran a make under Linux and had no troubles, so what do I need to do with the Mac? Thanks, Ilan
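    "make: command not found" on a fresh Mac usually just means the developer toolchain isn't installed; the fix, sketched below, depends on the OS X version (assumption: Xcode-era tooling):

        xcode-select --install   # newer releases: installs the Command Line Tools, including make
        # older releases: install "Command Line Tools" from Xcode > Preferences > Downloads
        make -j 2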

    Read the article

  • Can I use Apache 2.0 ErrorDocument and Mod Alias to store error pages outside the DocumentRoot?

    - by Pete
    I'd like to store my nicely designed set of http error documents outside the DocumentRoot. Alias /errors /data/opt/apache-httpd-2.0.63/htdocs/errorpages/ <Directory "/data/opt/apache-httpd-2.0.63/htdocs/errorpages/"> order deny,allow allow from all </Directory> ErrorDocument 500 /errors/500.html ErrorDocument 404 /errors/404.html ErrorDocument 401 /errors/default.html ErrorDocument 403 /errors/default.html ErrorDocument 502 /errors/default.html ErrorDocument 503 /errors/default.html However, when I do this, all I get is "The requested URL /errors/default.html resulted in an error." Is it possible to use mod_alias and the ErrorDocument directive together like this in Apache 2.0?
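    One detail worth checking - a sketch of a known mod_alias gotcha, not a confirmed diagnosis: Alias matches literally, and the documentation recommends matching trailing slashes on both arguments. Since the filesystem path above ends in "/", the URL prefix should too:

        Alias /errors/ /data/opt/apache-httpd-2.0.63/htdocs/errorpages/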

    Read the article

  • Unexpected Access Denied error while accessing EFS encrypted file

    - by pozi
    I am getting an Access Denied error when I try to access some files. The ACL is OK, all ACEs are inherited, I have full access to these files, and I am the owner of these files. The ACEs are exactly the same as on other files in the same directory which are accessible without problems (double-checked through the Security tab in the file properties and the cacls command). The files are EFS encrypted; however, I should have access to them, because they were encrypted by the same user account I am trying to access (decrypt) them with. The EFS settings are exactly the same as on other files in the same directory which are also encrypted and accessible without problems (double-checked through the cipher command and the efsdump command (SysInternals)). In the ProcMon utility (SysInternals) I am getting an Access Denied entry while accessing these files. The files are not in use (locked); I checked with the Unlocker utility. Until now I thought I understood NTFS ACLs and EFS mechanisms fairly well, but now I am completely stuck and do not know how to access these files. Any thoughts?
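    When the ACLs genuinely match, the remaining suspect is the decryption certificate itself. A couple of checks worth running - a sketch with a hypothetical filename; "cipher /c" assumes Vista or later, older systems would use the efsinfo resource-kit tool:

        REM lists the certificate thumbprints that can decrypt the file
        cipher /c problemfile.doc
        REM lists your current personal certificates for comparison
        certutil -user -store My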

    Read the article

  • Best practices for re-IP'ing / migrating servers and applications

    - by warren
    Some of this question is highly application-specific, but what approaches do you take when looking to migrate applications from one server/platform to another, and servers from one network segment to another? For applications that can't be re-IP'd (many exist in this category), the general answer is to nuke and pave (or extend a clusterable application, then remove the segment that needs to be "moved"). For "normal" applications (httpd, mail, directory services, etc.), what checks do you perform before, during, and after a move to ensure the health of the migrated app/server? An example with Apache: back up the httpd conf directory; change the httpd conf files to use the new IP address of the server; change (or add) the IP of the server; restart Apache; verify the web server still serves pages; reboot the server; verify the environment comes back up healthy.
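    The Apache checklist above, expressed as commands - a sketch with illustrative paths and addresses, not a drop-in script:

        OLD=10.0.0.5; NEW=10.0.1.5
        cp -a /etc/httpd/conf /root/httpd-conf.bak             # backup first
        grep -rl "$OLD" /etc/httpd/conf | xargs sed -i "s/$OLD/$NEW/g"
        ip addr add "$NEW/24" dev eth0                         # add the new IP
        apachectl configtest && apachectl restart
        curl -sI "http://$NEW/" | head -n 1                    # still serving pages?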

    Read the article

  • What Linux permissions are needed for www?

    - by Xeoncross
    I know that 777 is full read/write/execute permission for owner/group/other. So this doesn't seem to be needed, as it gives random users full permissions. What permissions need to be set on /var/www so that source control like git or svn; normal users in a group like "websites" or added to "www-data"; servers like apache or lighttpd; and PHP/Perl/Ruby can all read, create, and run files there? If I'm correct, Ruby and PHP scripts are not "executed" directly - they are passed to an interpreter. So there is no need for execute permission on files in /var/www. Therefore, it seems like the correct permission would be chmod -R 1660, which would: make all files shareable by these four entities; make all files non-executable by mistake; block everyone else from the directory entirely; and set the permission mode to "sticky" for all future files. Is this correct? Update: I just realized that files and directories might need different permissions - I was talking about files above, so I'm not sure what the directory permissions would need to be.
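    For what it's worth, the update is on the right track: directories do need the execute (search) bit even when files don't. One commonly seen arrangement - a sketch under those assumptions, not a universal answer - uses group ownership plus setgid directories:

        chgrp -R www-data /var/www
        find /var/www -type d -exec chmod 2770 {} \;   # rwxrws---: traversal needs x; setgid keeps the group
        find /var/www -type f -exec chmod 0660 {} \;   # rw-rw----: no execute bit on files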

    Read the article

  • Crontab -e gives me error messages

    - by DNA
    I get a bunch of error messages when I run crontab -e. Here are the error messages. And here is my crontab file, under `/usr/bin/': # /etc/crontab: system-wide crontab # Unlike any other crontab you don't have to run the `crontab' # command to install the new version when you edit this file # and files in /etc/cron.d. These files also have username fields, # that none of the other crontabs do. SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin # m h dom mon dow user command 17 * * * * root cd / && run-parts --report /etc/cron.hourly 25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ) 47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly ) 52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ) 30 * * * * root rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/ # I notice that the last task ('rsync') NEVER RUNS! Why is this happening? What did I do wrong? Running Ubuntu 11.10/Bash. I have read this... Am I missing a shebang? And I don't know if my anacron jobs run. Edit 1: In light of Masi's comment, I commented out lines 17 through 25 of my crontab file with #. Now when I run sudo crontab -e, all I get is: /usr/bin/crontab: 11: 17: not found /usr/bin/crontab: 12: 25: not found (gedit:4301): Gtk-WARNING **: Attempting to store changes into `/root/.local/share/recently-used.xbel', but failed: Failed to create file '/root/.local/share/recently-used.xbel.GOHVBW': No such file or directory (gedit:4301): Gtk-WARNING **: Attempting to set the permissions of `/root/.local/share/recently-used.xbel', but failed: No such file or directory What in the world?
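    One well-known pitfall that fits these symptoms, shown as a sketch: the system crontab format above includes a user field ("root"), but a per-user crontab edited with crontab -e must not have one - cron would otherwise try to run the user field as part of the command. In a user crontab, the rsync line would simply be:

        30 * * * * rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/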

    Read the article

  • Mac and Windows 7 file sharing with a specific user

    - by all-R
    Hi guys, I'm trying to share a specific directory with my Windows 7 computer, but I want it to use a specific user that I created on my Mac to connect. I saw this tutorial: http://www.trickyways.com/2010/06/how-to-access-mac-files-from-windows-7/ which is exactly what I want to do, but it isn't working. For some reason, it never prompts me for a username/password when I try to connect to my Mac from Windows 7. On top of that, when I set the permission "No Access" for the "Everyone" user on my Mac, my Windows computer simply doesn't see the directory. If I set the permission to "Read/Write" or "Read only" it works. I simply don't want everyone in my workgroup to be able to read my files. I want to create specific users on my Mac and share with just the people I want... Any thoughts?

    Read the article

  • Strategy for managing lots of pictures for a website

    - by Nate
    I'm starting a new website that will (hopefully) have a lot of user-generated pictures. I'm trying to figure out the best way to store and serve these pictures. The CMS I'm using (Umbraco) has a media library that puts a folder on the server for each image. Inside of there you can have different sizes of that same image. That folder has an ID on it, and the database has additional information for that image along with the ID of the folder. This works great for small sites, but what if the pictures get up to 10,000, 100,000 or 1,000,000? It seems like the lookup on the directory would take a long time to find the correct folder. I'm on Windows 2008, if that makes a difference. I'm not so worried about load; I can load-balance my server pretty easily and replicate the images across the servers. The site won't have a lot of users on it either, but it could have a lot of pics. Thanks. -Nate EDIT: After some thought, I think I'm going to create a directory for each user under a root image folder, then have each user's pictures under that. I would be pretty stoked if I had even 5,000 users, so that shouldn't be too bad of a linear lookup. If it does get slow I will break it down into folders like /media/a/adam/image123.png. If it ever gets really big I will expand the above method to build a bigger tree. That would take a LOT of content though.
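    The bucketing scheme from the EDIT, as a minimal sketch (hypothetical names, any language would do):

        import os

        def media_path(root, username, filename):
            bucket = username[0].lower()            # first letter keeps each directory small
            return os.path.join(root, bucket, username, filename)

        print(media_path("/media", "adam", "image123.png"))   # -> /media/a/adam/image123.png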

    Read the article

  • Dummy/default page for Apache

    - by Ency
    I'm trying to set up a default page for my Apache 2, for the following cases: the user is accessing http://IP_Address instead of the hostname; the requested protocol (HTTP/HTTPS) is not available (e.g. only https://domain.com exists). Currently I've got something like this: <VirtualHost eserver:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/local/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> </VirtualHost> I think it works well; I'm trying to do a similar thing for HTTPS, but it does not work. <VirtualHost eserver:443> SSLCertificateKeyFile /etc/apache2/ssl/dummy.key SSLCertificateFile /etc/apache2/ssl/dummy.crt SSLProtocol all SSLCipherSuite HIGH:MEDIUM ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn ServerSignature Off </VirtualHost> My default site is placed in sites-enabled as the first one (000-default). I do not care about certificate validity when accessing the default page; my goal is simply to show the default page whenever one of those cases applies.
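    One likely gap in the 443 block above - sketched here as an assumption, since the full config isn't shown - is that SSL is never actually switched on and no DocumentRoot is set, so the certificate directives alone do nothing:

        <VirtualHost eserver:443>
            SSLEngine on
            SSLCertificateKeyFile /etc/apache2/ssl/dummy.key
            SSLCertificateFile /etc/apache2/ssl/dummy.crt
            DocumentRoot /var/www/local/
        </VirtualHost>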

    Read the article

  • How do I tar dot files but not dot directories

    - by bjackfly
    The following tar command will exclude all dot files and dot directories: tar -cvzf /media/bjackfly/bkup/bkup.gz --exclude '.*' --one-file-system /home/bjackfly In my case I want the dot files in the home directory (.vimrc, .bashrc, etc.) to be backed up, but not the dot directories /.config, /.cache, /.eclipse, etc. Any Linux gurus with a command for this, or do I need to pipe a find into a tar, or run two different tar commands (which is non-ideal) - one for the dot files in the home directory and one for everything else?
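    One way to get that in a single command - a sketch using GNU find/tar: let find select the top-level entries, keeping dot files but skipping dot directories, and feed the null-separated list to tar:

        find /home/bjackfly -mindepth 1 -maxdepth 1 ! \( -type d -name '.*' \) -print0 \
          | tar -czvf /media/bjackfly/bkup/bkup.gz --null -T - --one-file-system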

    Read the article

  • Losing internet connectivity on server after installing LogMeIn Hamachi (with server set as gateway node)

    - by Kim Jong-Un
    Our domain controller (SBS 2003) completely lost internet and network connectivity yesterday after I remotely installed LogMeIn Hamachi on it and set it to be a gateway node - in an attempt to create a VPN link between the server and a remote site. I had to go into the office to resolve the problem as, unsurprisingly, my own remote access to the server was also lost. I was only able to restore network connectivity by deleting a virtual network adapter Hamachi created when making the server the gateway node (called "Hamachi bridge", I believe), then rebooting the server. This is a repeatable problem: every time I try to get this to work, it just takes the server offline. Why would this bridge affect regular TCP/IP connectivity on the NIC in this way? I have tried a "hub-and-spoke" configuration between the server and our PC at a remote site (server set as hub, remote site as spoke). This caused no such problems with general internet connectivity, and file transfer worked well between the two computers. However, there was a DNS issue with the VPN between the two sites - resulting in Active Directory not being able to communicate between them (I could not log on using domain user accounts at the remote site if they were not already cached on that machine). I only tried a "gateway" network because LogMeIn support told me: "If you can get the Active Directory to work it would only be through a "Gateway" network type with the server acting as the Gateway Node. You would configure the gateway settings on the server in the Hamachi client on that machine to push whatever IP's/DNS settings you prefer and at that point AD would be able (all things being equal) communicate to the client node when it attaches. We do not have any ActiveDirectory configuration info as that's outside the scope of our support. I hope this helps." It would be fantastic if I could get Active Directory to work over a Hamachi VPN connection, without worrying about the server going offline in this way. Does anyone have any ideas how I should proceed, or any theories as to what is going on when I try to use the "gateway" network type? I want to try to narrow down what is going on here.

    Read the article

  • Collocation in Code

    - by Dan McGrath
    Quite some time ago I remember reading an article from 'Joel on Software' that mentioned that collocation of information in code was important. By collocation, I mean that relevant information about the code is present where the code is. I'm currently writing an article that has a small bit in it about collocation, so I went searching for sources and found the quote in the article 'Making Wrong Code Look Wrong': "In order to make code really, really robust, when you code-review it, you need to have coding conventions that allow collocation. In other words, the more information about what code is doing is located right in front of your eyes, the better a job you'll do at finding the mistakes. When you have code that says..." For me, collocation isn't just about the code itself, but the tool used to view the code. If it can help with the 'collocation factor' (term coined by me?) I believe it can help with the programmer's productivity. Take for example the modern IDEs that show you the variable's type by hovering over it. Are there any other articles written about collocation in code, and/or are there other terms that this is known by?

    Read the article

  • Remove IP address from the URL of website using apache

    - by sapatos
    I'm on an EC2 instance and have a domain domain.com linked to the EC2 nameservers, and it is happily serving my pages if I type domain.com in the URL. However, when a page is served it resolves the URL to 1.1.1.10/directory/page.php. Using Apache I've set up the following VirtualHost, following the examples provided at http://httpd.apache.org/docs/2.0/dns-caveats.html Listen 80 NameVirtualHost 1.1.1.10:80 <VirtualHost 1.1.1.10:80> DocumentRoot /var/www/html/directory ServerName domain.com # Other directives here ... <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$"> Header set Cache-Control "max-age=290304000, public" </FilesMatch> </VirtualHost> However, I'm not getting any changes to how the URL is displayed. This is the only VirtualHost configured on this site, and I've confirmed it's the one being used, as I've managed to break it a number of times while experimenting with the configuration. The Route 53 entries I have are: domain.com A 1.1.1.10 domain.com NS ns-11.awsdns-11.com ns-111.awsdns-11.net ns-1111.awsdns-11.org ns-1111.awsdns-11.co.uk domain.com SOA ns-11.awsdns-11.com. awsdns-hostmaster.amazon.com. 1 1100 100 1101100 11100
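    The usual suspect for this symptom (a sketch, not a confirmed diagnosis) is self-referential URLs - e.g. the automatic trailing-slash redirect for /directory - being built from the address the client used rather than from ServerName. Two belt-and-braces additions for the vhost:

        UseCanonicalName On
        # and/or bounce anything that still arrives addressed to the bare IP:
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^1\.1\.1\.10$
        RewriteRule ^ http://domain.com%{REQUEST_URI} [R=301,L]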

    Read the article

  • 403 Forbidden when trying to download file that was uploaded using SSH

    - by Simon Hartcher
    I have FTP access to an Apache server on Linux to upload files so that they can be downloaded from the web. I was recently granted SSH access for extra permissions and figured it would be quicker to download files directly to the server, instead of downloading them to my machine and then FTPing them to the server. When I downloaded a file over SSH on the server and then placed it in the public_html directory, it was not visible from the web. The permissions (from SSH and the FTP client) were the same as all the other files that are visible, but it was not visible in the directory listing, and if I typed the filename into my browser I would get a 403 error. Clearly, when I FTP a file to the server something else happens that makes it web-visible, which I am not currently privy to. What am I missing that is causing the file to be invisible from the web?
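    A few places where "the permissions look the same" can still differ, worth comparing between a working file and the broken one - a diagnostic sketch with hypothetical filenames:

        ls -l file.zip workingfile.zip   # owner and group, not just the mode bits
        ls -Z file.zip 2>/dev/null       # SELinux context, if the server enforces it
        getfacl file.zip 2>/dev/null     # extended ACLs that ls -l does not show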

    Read the article

  • Configuring Weblogic Server 10.3.6 from 32-bit mode to 64-bit mode

    - by Ekta Malik
    This post pertains to the configuration of WebLogic Server from 32-bit mode to 64-bit mode on Solaris OS - just in case you have WLS 10.3.6 running in 32-bit mode while the JDK being used is installed for 64-bit mode. [On Solaris OS, a 64-bit JDK installation comprises installing the 32-bit JDK followed by a patch for the 64-bit JDK.] Verification of the mode being used: one can verify the mode of WebLogic Server by checking the commonEnv.sh script located at $MIDDLEWARE_HOME/wlserver_10.3/common/bin, where $MIDDLEWARE_HOME refers to the install directory of Middleware - look for the patterns SUN_ARCH_DATA_MODEL and JAVA_USE_64BIT in the file; for 32-bit mode, the parameters would appear as shown below: SUN_ARCH_DATA_MODEL="32" JAVA_USE_64BIT=false Alternatively, check the server console logs for which JDK is used during start-up, or check which JDK is used by the running WebLogic Server process. Configuration steps: take a backup of the commonEnv.sh script located at $MIDDLEWARE_HOME/wlserver_10.3/common/bin; modify the commonEnv.sh script so that, for 64-bit mode, the parameters read: SUN_ARCH_DATA_MODEL="64" JAVA_USE_64BIT=true Then restart the WebLogic Server. One can confirm that the JDK being used is 64-bit by looking at the WebLogic console logs during server start-up or by looking at the running process.
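    The "check the running process" step, as a sketch of Solaris commands (the pid is whatever the WebLogic JVM reports):

        ps -ef | grep '[j]ava'              # find the WebLogic JVM's pid
        pflags <pid> | head -n 1            # shows the process data model (_ILP32 vs _LP64)
        $JAVA_HOME/bin/java -d64 -version   # confirms a 64-bit JVM is actually installed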

    Read the article

  • Copying files to zfs mountpoint doesn't work - the files aren't actually copied to the other filesystem

    - by user113904
    I have 3 x 4 TB disks in a NAS that I want to group together and access as if they were one whole 'unit' of some kind. I also have a 250GB disk containing the OS - this is full of films and TV shows currently. I thought ZFS sounded good, so after installing the PPA I created a raidz zpool: sudo zpool create store raidz /dev/sdb /dev/sdc /dev/sdd and set the mountpoint to /mnt/store: sudo zfs set mountpoint=/mnt/store /store I checked it was successful - I think it was: sudo zfs list NAME USED AVAIL REFER MOUNTPOINT store 266K 7.16T 170K /mnt/store Then I wanted to move over a whole load of files from my home directory. I went to where the to-be-copied folder was (called media) and entered: sudo cp -R * /mnt/store cp: cannot create directory `/mnt/store/media': No space left on device It seems like it's not copying over to the new filesystem I made (or thought I did). I've never really done this type of thing until a few days ago, so I may be running before I can walk... is this not the right way to copy files across? I've only used Windows before, so the idea of mountpoints is a bit mind-boggling. I'm using XBMCbuntu 12 beta 2.0, which is based on 12.04. I will retry with the normal Ubuntu 12.04 desktop to see if that's the problem. Thanks for the help!
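    Checks worth running before the copy - a sketch: if the pool isn't actually mounted, /mnt/store is just an ordinary directory on the (already full) 250GB root disk, which would produce exactly this "No space left on device" error:

        zfs get mounted,mountpoint store   # is the dataset really mounted at /mnt/store?
        df -h /mnt/store                   # should report the 7T pool, not the root filesystem
        sudo zfs mount store               # mount it if it isn't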

    Read the article

  • Cron not able to successfully change background

    - by Solenoid
    I'm running 12.04 with a custom XML background (a modification of Day of Ubuntu) that changes based on the time of day. I've noticed that there's a significant delay between when the changes are scheduled to take place in the XML file and when they actually show up on the background. I've also noticed that when I resume from suspend I don't get the correct background image either. I've found that cycling the wallpaper manually will fix this, and I've written a script to automate the process. If I execute the script manually it works fine. However, when I schedule the script to run in cron, cron doesn't change the background. To make sure that the script was being run properly by cron, I had it create a directory in my home folder after running the background change, and the directory is created successfully, so I know cron is running and executing the script. My script: #!/bin/bash sleep 5 gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU2.xml sleep 1 gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml sleep 1 mkdir /home/zak/iscronworking exit Is cron just not able to access gsettings? The job is in my user crontab, so it shouldn't be running as root. Alternatively, is there any way to make Precise play nicely with XML wallpaper?
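    The usual explanation (a sketch, assuming a single GNOME session; the session process name varies by release) is that gsettings talks to dconf over the session D-Bus, which cron jobs don't inherit - the script can borrow the address from the running session before calling gsettings:

        #!/bin/bash
        PID=$(pgrep -u "$LOGNAME" gnome-session | head -n 1)
        export $(grep -z DBUS_SESSION_BUS_ADDRESS "/proc/$PID/environ" | tr -d '\0')
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml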

    Read the article

  • Understanding the Linux Root

    - by Zac
    I've been using Linux (Ubuntu) for about 2 weeks now and am still struggling with some basic concepts surrounding the root user: (1) Some terminal operations (such as making subdirectories inside an FHS directory such as /opt) require me to prefix the command with sudo - why? I guess what I'm choking on is: if I'm already logged in as a valid system user, why do I have to be a superuser/root in order to modify things that the sysadmin has already deemed me worthy of accessing? (2) Is there a GUI (GNOME, KDE) equivalent to sudo? Is there a way to assume a superuser role in a graphical context, rather than from inside a new shell? (3) I can't access the /root directory logged in as myself... but I installed the system to begin with and was never asked to create a root account! How do I log in as root and gain access to /root?!? Thanks for all feedback & input!

    Read the article
