Search Results

Search found 18333 results on 734 pages for 'temporary directory'.


  • Virtual Host under MacOSX not working

    - by David Casillas
    I have set up a virtual host for the Mac OS X Apache installation. These are my steps:

    1. Edit /private/etc/apache2/httpd.conf, removing the comment from:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    2. Edit /private/etc/apache2/extra/httpd-vhosts.conf and add:

        <VirtualHost *:80>
            ServerName test.local
            DocumentRoot "/Users/myusername/Sites/Test/public"
            <Directory "/Users/myusername/Sites/Test/public">
                Options Indexes FollowSymLinks Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    3. Edit /private/etc/hosts and add:

        127.0.0.7 test.local

    4. Restart Apache.

    But the virtual host does not work. To further isolate the problem I checked the same configuration with MAMP, and the virtual host worked right, so the configuration files should be fine. What could be wrong?
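
    A hedged first check (my suggestion, not part of the original question): ask Apache which virtual hosts it actually loaded, and confirm the hostname resolves to the address given in /etc/hosts:

        # Show the parsed virtual host configuration:
        apachectl -S
        # Confirm what test.local resolves to (it should match the hosts entry):
        ping -c 1 test.local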

    Read the article

  • MSR Issue on 12.1 Enterprise Controllers

    - by Owen Allen
    We've noticed a problem with MSR initialization and synchronization on Enterprise Controllers that are using Java 7u45. If you're running into the issue, these jobs fail with Java errors. Java 7u45 is bundled with Oracle Solaris 11.1 SRU 12, so if you're using that version, or if you plan to use it, you should be aware of this issue. There's a simple fix. You can do the fix before upgrading to SRU 12, but you can't do it before you install the Enterprise Controller.

    First, log on to the Enterprise Controller system and stop the EC using the ecadm command. This command is in the /opt/SUNWxvmoc/bin directory on Oracle Solaris systems and in the /opt/sun/xvmoc/bin directory on Linux systems:

        ecadm stop -w

    Then run this command to fix the issue:

        cacaoadm set-param java-flags=`cacaoadm get-param -v java-flags -i oem-ec | sed 's/Xss256k/Xss384k/'` -i oem-ec

    And then restart the EC:

        ecadm start -w

    Once you apply this fix, you should be set.
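
    As a sanity check after the restart (my suggestion, not part of the original post), the same get-param call used in the fix can confirm the new stack size is in place:

        # Should now show -Xss384k rather than -Xss256k:
        cacaoadm get-param -v java-flags -i oem-ec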

    Read the article

  • Apache2 syntax error, can't access 000-default

    - by enrique2334
    I have been using Apache2 and Webmin with my Raspberry Pi. After a restart and reinstallations, Apache won't start:

        > sudo /etc/init.d/apache2 restart
        apache2: Syntax error on line 268 of /etc/apache2/apache2.conf: Could not open configuration file /etc/apache2/sites-enabled/000-default: No such file or directory
        Action 'configtest' failed.
        The Apache error log may have more information.
        failed!

    The file 000-default is there but unopenable; its permissions are root:root. My apache2.conf file looks like this (bottom half):

        # ErrorLog: The location of the error log file.
        # If you do not specify an ErrorLog directive within a <VirtualHost>
        # container, error messages relating to that virtual host will be
        # logged here. If you *do* define an error logfile for a <VirtualHost>
        # container, that host's errors will be logged there and not here.
        #
        ErrorLog ${APACHE_LOG_DIR}/error.log

        #
        # LogLevel: Control the number of messages logged to the error_log.
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        #
        LogLevel debug

        # Include module configuration:
        Include mods-enabled/*.load
        Include mods-enabled/*.conf

        # Include list of ports to listen on and which to use for name based vhosts
        Include ports.conf

        #
        # The following directives define some format nicknames for use with
        # a CustomLog directive (see below).
        # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
        #
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        # Include of directories ignores editors' and dpkg's backup files,
        # see the comments above for details.

        # Include generic snippets of statements
        Include conf.d/

        # Include the virtual host configurations:
        Include sites-enabled/

        <VirtualHost *:80>
            DocumentRoot /var/www
            <Directory /var/www>
                allow from all
                Options +Indexes
            </Directory>
            ServerName IMASERVER
        </VirtualHost>

    Does anyone know the cause of this?
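
    A hedged diagnostic (not from the original question): under sites-enabled the entries are normally symlinks into sites-available, so a file that is "there and unopenable" may be a dangling link; that is worth ruling out:

        # If this is a symlink, check that its target still exists:
        ls -l /etc/apache2/sites-enabled/000-default
        # Re-create the link from sites-available if the target is missing:
        sudo a2ensite 000-default
        sudo apache2ctl configtest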

    Read the article

  • How can I tweak this split-tar-gz-archive script?

    - by Sai
    I came across this shell script that can be used to split an entire directory across multiple compressed files. I am interested in making a minor tweak to it. Let's say I have n GB of files in a directory, and one particular folder inside it is m MB. I do not want the contents of that folder to be split across multiple tar files; I want the m MB folder to fit within a single compressed file, and the remaining (n GB - m MB) split across multiple files. I am not sure whether this is possible with that script, so I was looking forward to some suggestions. Though the script is well documented, it is quite complex for me to understand.
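
    While the script itself isn't shown here, a minimal sketch of the idea under discussion, assuming GNU tar and split (the paths and sizes are hypothetical): pack the must-stay-whole folder into its own archive, then archive everything else with that folder excluded and split the stream into fixed-size chunks:

        # Archive the folder that must not be split, on its own:
        tar -czf keep-whole.tar.gz -C /data important-folder
        # Archive the rest, excluding that folder, split into 512 MB pieces:
        tar -czf - -C /data --exclude='./important-folder' . \
            | split -b 512M - rest.tar.gz.part-
        # Restore the split half later by concatenating the pieces:
        # cat rest.tar.gz.part-* | tar -xzf - -C /restore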

    Read the article

  • How to update CoffeeScript?

    - by Tetsu
    I got the following error when I tried to watch coffee scripts with coffee -o js -cw coffee:

        /usr/local/lib/node_modules/coffee-script/lib/coffee-script/command.js:321
                throw e;
                ^
        Error: watch Unknown system errno 28
            at errnoException (fs.js:636:11)
            at FSWatcher.start (fs.js:663:11)
            at Object.watch (fs.js:691:11)
            at /usr/local/lib/node_modules/coffee-script/lib/coffee-script/command.js:287:27
            at Object.oncomplete (/usr/local/lib/node_modules/coffee-script/lib/coffee-script/command.js:100:11)

    I have no idea what is going on with this error. I checked the versions: coffee -v is 1.6.1 and node -v is v0.6.12. According to the official site (http://coffeescript.org/) the latest version is 1.6.3, so I wanted to update coffee with npm update -g coffee-script, but this also fails:

        npm WARN [email protected] package.json: bugs['name'] should probably be bugs['url']
        npm http GET https://registry.npmjs.org/coffee-script
        npm http 304 https://registry.npmjs.org/coffee-script

    How can I update CoffeeScript?

    Edit 2013/10/11: In my coffee script directory there is only one file, box_wrapper.coffee:

        $ ->
          $("body").children().wrap ->
            "<div id='#{$(@).attr "id"}_box' class='wrapper'/>"

    Edit 2013/10/16: I tried to re-install coffee, so I've done this:

        $ sudo npm -g rm coffee
        npm WARN Not installed in /usr/local/lib/node_modules coffee
        $ coffee -v
        CoffeeScript version 1.6.1

    I can't remove coffee. And I tried also like this:

        $ sudo apt-get remove npm
        $ npm -v
        -bash: /usr/bin/npm: No such file or directory
        $ sudo apt-get install npm
        $ npm -v
        1.1.4
        $ sudo npm -g install coffee
        # I omit a lot of `GET` parts.
        npm http 304 https://registry.npmjs.org/mkdirp/0.3.4
        npm ERR! error installing [email protected]
        npm http 304 https://registry.npmjs.org/assertion-error/1.0.0
        npm http 304 https://registry.npmjs.org/growl
        npm http 304 https://registry.npmjs.org/jade/0.26.3
        npm http 304 https://registry.npmjs.org/diff/1.0.2
        npm http 304 https://registry.npmjs.org/mkdirp/0.3.5
        npm http 304 https://registry.npmjs.org/glob/3.2.1
        npm http 304 https://registry.npmjs.org/ms/0.3.0
        npm ERR! error rolling back [email protected] Error: UNKNOWN, unknown error '/usr/local/lib/node_modules/coffee/node_modules/express'
        npm ERR! error installing [email protected]
        npm ERR! EEXIST, file already exists '/usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules'
        npm ERR! File exists: /usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules
        npm ERR! Move it away, and try again.
        npm ERR!
        npm ERR! System Linux 3.2.0-54-generic-pae
        npm ERR! command "node" "/usr/bin/npm" "-g" "install" "coffee"
        npm ERR! cwd /home/ironsand
        npm ERR! node -v v0.6.12
        npm ERR! npm -v 1.1.4
        npm ERR! path /usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules
        npm ERR! fstream_path /usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules/___debug.npm
        npm ERR! fstream_type Directory
        npm ERR! fstream_class DirWriter
        npm ERR! code EEXIST
        npm ERR! message EEXIST, file already exists '/usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules'
        npm ERR! errno {}
        npm ERR! fstream_stack /usr/lib/nodejs/fstream/lib/writer.js:161:23
        npm ERR! fstream_stack Object.oncomplete (/usr/lib/nodejs/mkdirp.js:34:53)
        npm ERR!
        npm ERR! Additional logging details can be found in:
        npm ERR!     /home/ironsand/npm-debug.log
        npm not ok

    And npm-debug.log is a blank file.

    Read the article

  • Symbolic link not allowed or link target not accessible: /var/www on Ubuntu 11.04

    - by Jamie Hutber
    I am getting a 403 when I access http://mayfieldafc.local/. Looking in the Apache logs, I see:

        [Wed Nov 16 12:32:59 2011] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www

    I have what I believe to be the correct permissions set on /var/www: hutber (my user) can create and delete files, and can also execute as program on this folder. In Mayfield's vhost it's:

        <Directory /var/www/mayfieldafc/docroot>
            Options +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    I am pulling my hair out not being able to work on my sites with my work Ubuntu install. I know of nothing else that could be affecting this. So, any ideas?
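
    A hedged diagnostic (not from the original question): the error names the link target, so walking every component of the path with namei shows exactly where a permission or a symlink breaks down:

        # -l prints owner/permissions for each path component, following symlinks:
        namei -l /var/www/mayfieldafc/docroot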

    Read the article

  • Kubuntu login problem

    - by Balázs Mária Németh
    I have a problem when trying to log in to my Kubuntu 10.10. The login screen shows up, I type my password, a blank screen with my desktop background picture appears, and then it throws me back to the login screen. If I choose to log in from the console, I can do that, and by typing startx I can log in just fine, but then I cannot shut down the computer from the K menu, nor are my settings remembered for the next time I log in. I have my home directory mounted from a different partition, but I tried creating a new user account and could log in without any kind of problem. The home directory is not encrypted, at least not to my knowledge. Some log files: Xorg.0.log, dmesg.out, kdm.log, xsession.errors

    Read the article

  • How did Ubuntu know my WPA Key?

    - by nebukadnezzar
    I've recently moved my /home to another computer (keeping all configuration files), with a fresh installation of Ubuntu 10.10. After installation I installed wicd and ndiswrapper to get my Internet connection up and running. However, after changing /etc/network/interfaces from

        auto lo
        iface lo inet loopback

    to

        auto wlan0
        iface wlan0 inet dhcp

    and running sudo /etc/init.d/networking restart to bring wlan0 up, wicd just suddenly connected to my modem, without my supplying any information about the modem whatsoever. Of course wicd creates a local directory at ~/.wicd/, but that directory is empty, and the temporary global configuration at /var/lib/wicd/configurations/ didn't exist due to the fresh installation of Ubuntu. So what's the deal? Where did wicd get the ESSID and the WPA key? There hasn't been any activity in this question, but it's still open (and even worth some rep)!
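
    One place worth checking (an assumption about wicd's file layout, not something from the original question): besides ~/.wicd/ and /var/lib/wicd/configurations/, wicd keeps per-network settings under /etc/wicd/, which would explain the behavior if that directory was carried over or restored:

        # Does a saved network profile already exist system-wide?
        grep -A 3 -i 'essid' /etc/wicd/wireless-settings.conf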

    Read the article

  • Problem with installing programs

    - by Brian Buck
    I am unable to install programs on my Ubuntu 10.10 system. They download OK, but when I attempt to install them, the following message is displayed:

        AN ERROR OCCURRED WHILE OPENING THE ARCHIVE
        END-OF-CENTRAL-DIRECTORY SIGNATURE NOT FOUND etc.......
        ZIPINFO: CANNOT FIND ZIPFILE DIRECTORY IN etc......

    As I am new to Ubuntu and also fairly "green" as far as computer terminology is concerned, I have no idea what this means and don't have a clue how to fix it. Can you help please? Many thanks, Brian Buck
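
    For what it's worth (a hedged suggestion, not from the original question): that message comes from the zip tools and usually means the downloaded archive is truncated or corrupt, which can be checked before installing:

        # Test the archive's integrity without extracting it (path is hypothetical):
        unzip -t ~/Downloads/some-program.zip
        # Or compare its checksum against the one published on the download page:
        md5sum ~/Downloads/some-program.zip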

    Read the article

  • WebLogic JDBC Use of Oracle Wallet for SSL

    - by Steve Felts
    Introduction

    Secure Sockets Layer (SSL) can be used to secure the connection between the middle tier "client", WebLogic Server (WLS) in this case, and the Oracle database server. Data between WLS and the database can be encrypted. The server can be authenticated, so you have proof that the database can be trusted, by validating a certificate from the server. The client can be authenticated, so that the database only accepts connections from clients that it trusts.

    Similar to the discussion in an earlier article about using the Oracle wallet for database credentials, the Oracle wallet can also be used with SSL to store the keys and certificates. By using it correctly, clear-text passwords can be eliminated from the JDBC configuration, and client/server configuration can be simplified by sharing the wallet across multiple datasources.

    There is a very good Oracle technical white paper on using SSL with the Oracle thin driver at http://www.oracle.com/technetwork/database/enterprise-edition/wp-oracle-jdbc-thin-ssl-130128.pdf [LINK1]. The link http://www.oracle.com/technetwork/middleware/weblogic/index-087556.html [LINK2] describes how to use WebLogic Server with Oracle JDBC Driver SSL. The information in this article is a guide to the steps needed for the various available options; use the links above for details.

    SSL from the driver to the database server is basically turned on by specifying a protocol of "tcps" in the URL. However, there is a fair amount of setup needed. Also remember that there is an overhead in performance.

    Creating the wallets

    The common use cases are:

    1. "data encryption and server-only authentication", requiring just a trust store, or
    2. "data encryption and authentication of both tiers" (client and server), requiring a trust store and a key store.

    It is recommended to use the auto-login wallet type so that clear-text passwords are not needed in the datasource configuration to open the wallet. The store type for an auto-login wallet is "SSO" (Single Sign On), not "JKS" or "PKCS12" as in [LINK2]. The file name is "cwallet.sso".

    Wallets are created using the orapki tool. They need to be created based on the usage (encryption and/or authentication). This is discussed in detail in [LINK1] in Appendix B or in the Advanced Security Administrator's Guide of the Database documentation.

    Database Server Configuration

    It is necessary to update the sqlnet.ora and listener.ora files with the directory location of the wallet using WALLET_LOCATION. These files also indicate whether or not SSL_CLIENT_AUTHENTICATION is being used (true or false). The Oracle Listener must also be configured to use the TCPS protocol. The recommended port is 2484.

        LISTENER =
          (ADDRESS_LIST=
            (ADDRESS=(PROTOCOL=tcps)(HOST=servername)(PORT=2484)))

    WebLogic Server Classpath

    The WebLogic Server CLASSPATH must have three additional security files:

        $MW_HOME/modules/com.oracle.osdt_cert_1.0.0.0.jar
        $MW_HOME/modules/com.oracle.osdt_core_1.0.0.0.jar
        $MW_HOME/modules/com.oracle.oraclepki_1.0.0.0.jar

    One way to do this is to add them to the PRE_CLASSPATH environment variable for use with the standard WebLogic scripts.

    Setting the Oracle Security Provider

    It's necessary to enable the Oracle PKI provider on the client side. This can either be done statically by updating the java.security file under the JRE, or dynamically by setting it in a WLS startup class using:

        java.security.Security.insertProviderAt(new oracle.security.pki.OraclePKIProvider(), 3);

    See the full example of the startup class in [LINK2].

    Datasource Configuration

    When creating a WLS datasource, set the PROTOCOL in the URL to tcps, as in the following:

        jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=host)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=myservice)))

    For encryption and server authentication, use the datasource connection properties:

        javax.net.ssl.trustStore=location of wallet file on the client
        javax.net.ssl.trustStoreType="SSO"

    For client authentication, use the datasource connection properties:

        javax.net.ssl.keyStore=location of wallet file on the client
        javax.net.ssl.keyStoreType="SSO"

    Note that the driver connection properties for the wallet require a file name, not a directory name.

    Active GridLink ONS over SSL

    For completeness, there is another SSL usage for WLS datasources. The communication with the Oracle Notification Service (ONS), for load-balancing information and node up/down events, can use SSL also.

    Create an auto-login wallet and use the wallet on the client and server. The following is a sample sequence to create a test wallet for use with ONS:

        orapki wallet create -wallet ons -auto_login -pwd ONS_Wallet
        orapki wallet add -wallet ons -dn "CN=ons_test,C=US" -keysize 1024 -self_signed -validity 9999 -pwd ONS_Wallet
        orapki wallet export -wallet ons -dn "CN=ons_test,C=US" -cert ons/cert.txt -pwd ONS_Wallet

    On the database server side, it's necessary to define the wallet file directory in the file $CRS_HOME/opmn/conf/ons.config and run onsctl stop/start.

    When configuring an Active GridLink datasource, the connection to the ONS must be defined. In addition to the host and port, the wallet file directory must be specified. By not giving a password, an SSO wallet is assumed.

    Summary

    To use SSL with the Oracle thin driver without any clear-text passwords, use an SSO Oracle Wallet. SSL support in the Oracle thin driver is available starting in 10g Release 2.
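
    To make the server-only authentication case above concrete, a minimal client-side sequence might look like this sketch (the wallet path, password, and certificate file name are hypothetical; the white paper in [LINK1] has the authoritative steps):

        # Create an auto-login (SSO) wallet to act as the client trust store:
        orapki wallet create -wallet /u01/wallets/client -auto_login -pwd WalletPass123
        # Import the database server's CA certificate so the server can be trusted:
        orapki wallet add -wallet /u01/wallets/client -trusted_cert -cert server_ca.crt -pwd WalletPass123
        # The datasource then points at the cwallet.sso file itself, not the directory:
        #   javax.net.ssl.trustStore=/u01/wallets/client/cwallet.sso
        #   javax.net.ssl.trustStoreType=SSO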

    Read the article

  • JDeveloper and Upgrading Your JDK on Ubuntu

    - by Duncan Mills
    One little gotcha: if you, as I did recently, upgrade your JDK on Ubuntu, then you may have to make sure you reflect that change in a couple of places for JDeveloper to stay happy. Assuming that you've installed from the jar version of the JDeveloper installer, the JDK that you specified at install time will be recorded in the .jdev_jdk file in your home directory. However, be aware that this is not the only reference to the absolute path of the JDK. When you run the embedded WebLogic for the first time, the .jdeveloper/system11.1.1.3.37.56.60/DefaultDomain/bin/startWebLogic.sh script will be created, and associated with that, the setDomainEnv.sh script in the same directory. So, if you do want to change the JDK location, be sure to change this file as well (or, of course, do everything with symbolic links).
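
    A sketch of the symlink approach mentioned in passing above (the JVM paths and alias name are hypothetical examples): point both files at a stable path once, then retarget only the link on each JDK upgrade:

        # One-time: create a stable alias and give that path to the JDeveloper installer.
        sudo ln -s /usr/lib/jvm/jdk1.6.0_21 /usr/lib/jvm/jdev-jdk
        # After each JDK upgrade, retarget the alias; .jdev_jdk and setDomainEnv.sh
        # keep working because they reference /usr/lib/jvm/jdev-jdk, which never changes.
        sudo ln -sfn /usr/lib/jvm/jdk1.6.0_24 /usr/lib/jvm/jdev-jdk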

    Read the article

  • Hosting multiple client websites on a single account

    - by Bhavesh Gangani
    I'm a web designer and I currently have only a few clients I build websites for. I have an unlimited hosting account and I want to host their websites in my account without a reseller account (it actually isn't needed, for cost reasons). My clients' only need is FTP access to their personal directory. So, as I asked, is it possible to give them separate phpMyAdmin access in this setup? As far as I know, this is done with "addon" domains pointing at a directory in my hosting account via cPanel; am I right? Or is there another solution for it besides a reseller account?

    Read the article

  • Unity does not use the categories from the .desktop files

    - by Melissa Newman
    I installed the educational version of Ubuntu with Unity. This is for kids. The most important applications are the ones that the description says are specially added for kids. Trying to find them in the applications view is a pain. They are organized in the main menu, but Unity does not use the main menu information for anything. Bottom line: I am now going to reinstall Ubuntu and NOT include Unity. The panels feature is nice, but there needs to be some ability to organize the applications, either with a menu or a directory structure that is read. The .desktop files indicate categories, like Education. Why does Unity not use this information?
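
    For reference, this is the metadata being discussed: a minimal sketch of a .desktop file (the application shown is just an illustration), whose Categories field is what a menu-based launcher groups entries by:

        [Desktop Entry]
        Type=Application
        Name=GCompris
        Comment=Educational suite for children
        Exec=gcompris
        Categories=Education;KidsGame;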

    Read the article

  • IIS Not Accepting Login Credentials

    - by Dale Jay
    I have an ASP.NET web form using Microsoft's boilerplate Active Directory login page, set up exactly as suggested (see http://msdn.microsoft.com/en-us/library/ms180890%28v=vs.80%29.aspx). Windows Authentication is activated at the "Default Website" and "MyWebsite" levels, and Domain\This.User is given "Allow" access to the site. After entering the valid credentials for This.User on the web form, a popup window appears asking me to enter my credentials yet again. Despite entering valid credentials for This.User (after attempting both the Domain\This.User and This.User formats), it rejects the credentials and returns an unauthorized-user page. The Active Directory user This.User is valid, the IP address of the AD server has been verified, and SPNs have been set up for the server. Any thoughts as to what may be causing this? I can post code if needed.

    Read the article

  • Migrating from GlassFish 2.x to 3.1.x

    - by alexismp
    With clustering now available in GlassFish since version 3.1 (our Spring 2011 release), a good number of folks have been looking at migrating their existing GlassFish 2.x-based clustered environments to a more recent version to take advantage of Java EE 6, our modular design, improved SSH-based provisioning, and enhanced HA performance. The GlassFish documentation set is quite extensive and has a dedicated Upgrade Guide. It lists a number of small changes such as file layout on disk (mostly due to modularity), some option changes (Grizzly, Shoal), the removal of node agents (using SSH instead), a new JPA default provider name, etc. There is even a migration tool (glassfish/bin/asupgrade) to upgrade existing domains. But really the only thing you need to know is that each module in GlassFish 3 and beyond is responsible for doing its part of the upgrade job, which means that the migration is as simple as copying a 2.x domain directory to the domains/ directory and starting the server with asadmin start-domain --upgrade. Binary-compatible products eligible for such upgrades include Sun Java System Application Server 9.1 Update 2 as well as versions 2.1 and 2.1.1 of Sun GlassFish Enterprise Server.
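
    Concretely, the upgrade path described above amounts to something like the following sketch (the install locations and the domain name are hypothetical):

        # Copy the existing 2.x domain into the 3.1 installation's domains directory:
        cp -r /opt/glassfish2/domains/domain1 /opt/glassfish3/glassfish/domains/
        # Start it with the upgrade flag; each module migrates its own configuration:
        /opt/glassfish3/bin/asadmin start-domain --upgrade domain1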

    Read the article

  • How To Get Browser To Open From Same Folder?

    - by Bad Learner
    I don't know how to make the title any more clear, so let me give an example. When I want to add an image to an article on my blog, I click the "Browse" button in the image uploader, and then upload the image to the blog. The problem is, when I want to upload another image to my blog, I want the browser to look in the same directory, but it always takes me to the "Recently Used" directory. Is there a way to change this? I hope I have made myself clear enough. EDIT: I only use Chrome, so I am referring to it.

    Read the article

  • g++ cannot find include files (qt3)

    - by Allan
        allan@allan-VirtualBox:~/blackjack_for_the_hopelessly_luckless$ make
        g++ -c -pipe -g -Wall -W -O2 -D_REENTRANT -DQT_NO_DEBUG -DQT_THREAD_SUPPORT -DQT_SHARED -DQT_TABLET_SUPPORT -I/usr/share/qt3/mkspecs/default -I. -I. -I/usr/include/qt3 -o advicewindow.o advicewindow.cpp
        advicewindow.cpp:32:19: fatal error: QWidget: No such file or directory
        compilation terminated.
        make: *** [advicewindow.o] Error 1

    qt3 was installed using apt-get. Header files are located in /usr/include/qt3/. Is there a g++ config file or something I need to update? I'm new to compiling from source and not sure what to do. The Makefile was created using qmake from the project file. Files in the include directory are all lower case; should I change the code in advicewindow.cpp to qwidget.h? Any help appreciated. Thanks.
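
    A quick way to confirm the suspicion at the end of the question (a hedged diagnostic, not from the original post): Qt 3 ships lowercase headers, so the Qt 4-style <QWidget> include has nothing to resolve against under /usr/include/qt3:

        # List the widget header as installed by the qt3 packages:
        ls /usr/include/qt3 | grep -i '^qwidget'
        # Typically prints "qwidget.h" - so '#include <qwidget.h>' is the Qt 3 spelling,
        # while '#include <QWidget>' only exists from Qt 4 onwards.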

    Read the article

  • What are some client-side sitemap generators?

    - by BHare
    Most of the sitemap generators I've found scan my internal files and base the sitemap on that. However, using Apache and htaccess, I have many aliases; for example, /brand/model/photos/ is empty directory-wise, but on the web it is redirected to a Zenphoto gallery using htaccess. http://www.xml-sitemaps.com/ does what I need, but I'd like more control over it, and I need it to run dynamically, weekly or daily, and to allow specific change frequencies based on the file/directory. I also want to be able to make a "pretty" sitemap HTML file. Counting the photo gallery I have around 300 links; without the photo gallery, more like 100. I have MySQL and PHP 5.3 installed.

    Read the article

  • Why does quickly package --extras fail (where quickly package doesn't)?

    - by Pablo
    When I attempt to use quickly package --verbose --extras on my application, I get these errors at the end:

        sed -i "s|__soundboard_data_directory__ =.*|__soundboard_data_directory__ = '/opt/extras.ubuntu.com/soundboard/share/soundboard/'|" debian/soundboard/opt/extras.ubuntu.com/soundboard/soundboard*/soundboardconfig.py
        sed: can't read debian/soundboard/opt/extras.ubuntu.com/soundboard/soundboard*/soundboardconfig.py: No such file or directory
        make[1]: *** [override_dh_install] Error 2
        make[1]: Leaving directory `/home/pablo/soundboard'
        make: *** [binary] Error 2
        dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit status 2

    I haven't a clue what is wrong here. When I run quickly package --extras on a clean template, it runs fine. soundboardconfig.py is an unmodified appnameconfig.py from the template. I'm not sure if my full source code is needed for this, but it can be provided. EDIT: Forgot to mention that quickly package creates a working package; only --extras fails.

    Read the article

  • Can't copy from hfs+ external

    - by Janine
    I've spent days and days trying to figure this out, but still can't. I think I've read every article on this and tried all there is to try, and I still can't copy any file from my Mac-formatted HFS+ external drive. Sorry if there's still an article I've missed. I have disabled journaling and tried all the hfsprogs commands I could find, but whenever I click on a folder on the external drive and try to copy it to my home directory, I get this: "The folder xxx cannot be handled because you don't have permission to read its content." I then found an article about ignoring this by copying files through the Terminal. But when I try to run the sudo cp -r command in the Terminal with my external drive path, I always get 'no such file or directory'. Does anyone have another suggestion for me? Thanks in advance!
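
    A hedged sketch of the Terminal route (the device, mount point, and folder names are hypothetical; note that 'no such file or directory' from cp is often just an unquoted path containing spaces):

        # See how (and where) the drive is actually mounted:
        mount | grep -i hfs
        # If reads are still refused, remounting read-only with the hfsplus 'force'
        # option sometimes helps on journaled volumes:
        sudo mkdir -p /mnt/macdrive
        sudo umount /dev/sdb2
        sudo mount -t hfsplus -o ro,force /dev/sdb2 /mnt/macdrive
        # Quote paths that contain spaces when copying:
        sudo cp -r "/mnt/macdrive/My Photos" "$HOME/"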

    Read the article

  • Update from Ola Hallengren: Target multiple devices during SQL Server backup

    - by Greg Low
    Ola has produced another update of his database management scripts. If you haven't taken a look at them, you should. At the very least, they'll give you good ideas about what to implement and how others have done so. The latest update allows targeting multiple devices during backup. This is available in native SQL Server backup and can be helpful with very large databases. Ola's scripts now support it as well. Details are here:

        http://ola.hallengren.com/sql-server-backup.html
        http://ola.hallengren.com/versions.html

    The following example shows it backing up to 4 files on 4 drives, one file on each drive:

        EXECUTE dbo.DatabaseBackup
            @Databases = 'USER_DATABASES',
            @Directory = 'C:\Backup, D:\Backup, E:\Backup, F:\Backup',
            @BackupType = 'FULL',
            @Compress = 'Y',
            @NumberOfFiles = 4

    And this example shows backing up to 16 files on 4 drives, 4 files on each drive:

        EXECUTE dbo.DatabaseBackup
            @Databases = 'USER_DATABASES',
            @Directory = 'C:\Backup, D:\Backup, E:\Backup, F:\Backup',
            @BackupType = 'FULL',
            @Compress = 'Y',
            @NumberOfFiles = 16

    Ola mentioned that you can now back up to up to 64 drives.

    Read the article

  • vgaswitcheroo isn't working in Ubuntu 12.04

    - by Forbidden404
    I have always used vgaswitcheroo to turn off my discrete graphics card; I made a script to turn it off at boot. But now vgaswitcheroo isn't working any more, and I don't know why. Look:

        forbidden404@forbidden404-Inspiron-N5110:~$ ls -l /sys/kernel/debug/vgaswitcheroo/switch
        ls: cannot access /sys/kernel/debug/vgaswitcheroo/switch: Permission denied
        forbidden404@forbidden404-Inspiron-N5110:~$ grep -i switcheroo /boot/config-*
        /boot/config-3.2.0-20-generic:CONFIG_VGA_SWITCHEROO=y
        /boot/config-3.2.0-23-generic:CONFIG_VGA_SWITCHEROO=y
        /boot/config-3.2.0-24-generic:CONFIG_VGA_SWITCHEROO=y
        forbidden404@forbidden404-Inspiron-N5110:~$ sudo su
        [sudo] password for forbidden404:
        root@forbidden404-Inspiron-N5110:/home/forbidden404# echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
        bash: /sys/kernel/debug/vgaswitcheroo/switch: No such file or directory
        root@forbidden404-Inspiron-N5110:/home/forbidden404# ls -l /sys/kernel/debug/vgaswitcheroo/switch
        ls: cannot access /sys/kernel/debug/vgaswitcheroo/switch: No such file or directory

    And I'm not using any proprietary driver, OK?
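
    A hedged first check (not from the original post): the vgaswitcheroo directory only exists under debugfs, so it is worth confirming that debugfs is actually mounted before digging further:

        # Is debugfs mounted where vgaswitcheroo expects it?
        mount | grep debugfs
        # If not, mount it (this is the conventional mount point):
        sudo mount -t debugfs none /sys/kernel/debug
        # Writing with plain sudo needs tee, because a shell redirect
        # would otherwise run unprivileged:
        echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch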

    Read the article

  • How can I keep directories in sync

    - by Guillaume Boudreau
    I have a directory, dirA, that users can work in: they can create, modify, rename, and delete files & sub-directories in dirA. I want to keep another directory, dirB, in sync with dirA. What I'd like is a discussion on finding a working algorithm that would achieve the above, with the limitations listed below.

    Requirements:

    1. Something asynchronous - I don't want to stop file operations in dirA while I work in dirB.
    2. I can't assume that I can just blindly rsync dirA to dirB on a regular interval - dirA could contain millions of files & directories, and terabytes of data. Completely walking the dirA tree could take hours.

    Those two requirements make this really difficult. Being asynchronous means that when I start working on a specific file from dirA, it might have moved a lot since it appeared. And the second limitation means that I really need to watch dirA, and work on the atomic file operations that I notice.

    Current (broken) implementation:

    1. Log all file & directory operations in dirA.
    2. Using a separate process, read that log, and 'repeat' all the logged operations in dirB.

    Why it is broken:

        echo 1 > dirA/file1
        # Allow the 'log reader' process to create dirB/file1:
        #   log = "write dirA/file1"; action = cp dirA/file1 dirB/file1; result = OK
        echo 1 > dirA/file2
        mv dirA/file1 dirA/file3
        mv dirA/file2 dirA/file1
        rm dirA/file3
        # End result: file1 contains '1'
        # 'log reader' process starts working on the 4 above file operations:
        #   log = "write file2";        action = cp dirA/file2 dirB/file2; result = failed: there is no dirA/file2
        #   log = "rename file1 file3"; action = mv dirB/file1 dirB/file3; result = OK
        #   log = "rename file2 file1"; action = mv dirB/file2 dirB/file1; result = failed: there is no dirB/file2
        #   log = "delete file3";       action = rm dirB/file3; result = OK
        # End result in dirB: no more files!

    Another broken example:

        echo 1 > dirA/dir1/file1
        mv dirA/dir1 dirA/dir2
        # 'log reader' process starts working on the 2 above file operations:
        #   log = "write file1";       action = cp dirA/dir1/file1 dirB/dir1/file1; result = failed: there is no dirA/dir1/file1
        #   log = "rename dir1 dir2";  action = mv dirB/dir1 dirB/dir2; result = failed: there is no dirB/dir1
        # End result in dirB: nothing!
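
    For the "log all operations" half of the design, a minimal sketch of event capture with inotifywait (from the inotify-tools package; the paths are illustrative, and this deliberately does not solve the rename race shown above - cookie-matched moved_from/moved_to pairs and sequence numbers are what a full solution would need on top):

        # Record every relevant dirA event with a timestamp, as a journal
        # for the separate reader process to replay against dirB:
        inotifywait -m -r \
            -e create -e modify -e delete -e moved_from -e moved_to \
            --format '%T %e %w%f' --timefmt '%s' dirA |
        while read ts event path; do
            printf '%s\t%s\t%s\n' "$ts" "$event" "$path" >> /var/log/dirA-ops.log
        done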

    Read the article
