Search Results

Search found 28288 results on 1132 pages for 'home directory'.

  • PHP composer question

    - by kdub
    Just getting started with Composer, and I have a couple of questions. When I use Composer to add a dependency, the dependency gets added to my project's vendor directory. The newly added package comes not only with the source code for that package, but with all the files the package's developer needs to test it and publish it to the Packagist repo (composer.json, .travis.yml, LICENSE, README.md, etc.). For my project, do I need to keep these publishing-related files in the vendor directory? Can I clean the package's folder structure up a little? I added the package Slim (the micro framework), which nests its source files three directories deep upon installation: ../vendor/slim/slim/Slim/(source files). Is it worth moving these files up to the top Slim directory, like ../vendor/Slim/(source files)? Or will this ruin the integrity of the package?
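
    As a quick illustration of why the vendor layout matters (a sketch; the package name is real, the rest assumes Composer's standard generated autoloader, which maps namespaces to exactly those vendor/slim/slim/Slim paths, so relocating the files would break class loading):

        # add the dependency; Composer records an autoload mapping that points
        # at the vendor/slim/slim directory it just created
        composer require slim/slim
        # the project consumes the package through the generated autoloader
        php -r "require 'vendor/autoload.php'; var_dump(class_exists('Slim\Slim'));"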

  • Allow Firefox 8.0 access to /proc/[0-9]*/fd/?

    - by cvt76
    Today I discovered that Firefox 8.0 on Ubuntu 11.10 wants read access to /proc/[0-9]*/fd/. I have made a custom usr.bin.firefox AppArmor profile, so I have spent many hours testing, but Firefox never wanted read access to the /proc/[0-9]*/fd/ directory before, and now it suddenly wants it every time I browse the web. Neither the original AppArmor Firefox profile nor any of the abstractions allow read access to /proc/[0-9]*/fd/, so I do not understand why this is suddenly happening. Does it sound reasonable that Firefox wants to read the /proc/[0-9]*/fd/ directory, or is my system defective?
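
    If the access turns out to be legitimate, a rule like the following could be added (a sketch: the rule syntax is standard AppArmor, but the local override file and profile name assume the stock Ubuntu layout):

        # allow read on the directory Firefox is probing, then reload the profile
        echo '/proc/[0-9]*/fd/ r,' | sudo tee -a /etc/apparmor.d/local/usr.bin.firefox
        sudo apparmor_parser -r /etc/apparmor.d/usr.bin.firefox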

  • Silverlight 4 + RIA Services - Ready for Business: Consuming Data in the Silverlight Client

    To continue our series, let's see where the fun comes in: my look at how easy it is to consume the service from the Silverlight client. First, just to help you understand what is happening behind the covers, let's look at a code-behind solution. In View\Home.xaml, put a simple DataGrid on the form:

        <sdk:DataGrid Name="dataGrid1" Height="152" Width="692" />

    Then add these lines of code to Home.xaml.cs:

        var context = new DishViewDomainContext();
        this.dataGrid1.ItemsSource...

  • Nginx + PHP-FPM on CentOS 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway. I've already checked my logs; there's nothing in there (http://i.imgur.com/iRq3ksb.png), but I did see the error quoted in the title ("unable to read what child say: Bad file descriptor") in the /var/log/php-fpm/error.log file. Side note: both nginx and php-fpm have been configured to run under a local account called www-data, and the folders referenced below exist on the server. Here are the configurations I've created/updated so far; can someone take a look and see where the error might be?

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache information about FDs; frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and another from within the kernel,
            # faster than read() + write()
            sendfile on;

            # send headers in one piece; better than sending them one by one
            tcp_nopush on;

            # don't buffer data sent; good for small data bursts in real time
            tcp_nodelay on;

            # server will close connection after this time
            keepalive_timeout 60;

            # number of requests a client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close connections on non-responding clients; frees up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if client stops responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over the network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / {
                try_files $uri $uri/ @handler;
                expires 30d;
            }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. { return 404; }

            ## Magento uses a common front handler
            location @handler { rewrite / /index.php; }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    /etc/php-fpm.d/www-data.conf is my php-fpm pool config. I've got a file at /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
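
    The pool file's contents are not quoted above, so the following is only a sketch of what a pool matching the vhost's upstream socket might look like; every value below is an assumption, not the asker's actual configuration:

        ; /etc/php-fpm.d/www-data.conf (hypothetical reconstruction)
        [www-data]
        user = www-data
        group = www-data
        ; must match the upstream used by fastcgi_pass in the vhost
        listen = /var/run/php-fcgi-www-data.sock
        ; the socket must be readable and writable by the nginx worker user
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3

    A mismatch between the socket's ownership or permissions and the user the nginx workers run as is a common cause of 502s in exactly this setup.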

  • chmod 700 and htaccess deny from all enough?

    - by John Jenkins
    I would like to protect a public directory from public view; none of the files will ever be viewed online. I chmodded the directory to 700 and created an .htaccess file containing "deny from all". Is this enough security, or can a hacker still gain access to the files? I know some people will say that hackers can get into anything, but I just want to make sure there isn't anything else I can do to make it harder to hack. To clarify: I am asking whether chmod 700 and "deny from all" alone are enough security to prevent hackers from getting my files. Thanks.
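
    For reference, the two measures described, as commands (the directory path is a placeholder; the Require line is the Apache 2.4 equivalent of the older 2.2 syntax used in the question):

        # owner-only permissions on the directory
        chmod 700 /var/www/example/private
        # Apache 2.2 syntax, as in the question
        echo "deny from all" > /var/www/example/private/.htaccess
        # Apache 2.4+ uses this instead
        echo "Require all denied" > /var/www/example/private/.htaccess

    Note that the .htaccess rule only takes effect if Apache's AllowOverride setting permits it for that directory.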

  • TechEd 2010: First Time In New Orleans

    We've teamed up with Habitat For Humanity to build a new home for a deserving family in New Orleans. The wall-raising ceremony for the home will be this Friday, June 11th. If you're at TechEd and would like to help, come by the DevExpress booth on Aisle 2600 at Booth 13 and let us know. Check out these pictures of beautiful New Orleans, the city that is hosting Microsoft TechEd 2010. This is my first time visiting this historic city, and it's been fantastic. The pictures below represent just...

  • Compiz setting migration from 12.04 to 12.10

    - by Maksim
    I just migrated my Ubuntu 12.04 home folder to a freshly installed Ubuntu 12.10, and again I have a problem with Compiz: it has none of my custom settings! How is it possible that a full copy of the home directory did not transfer the settings? The trouble seems to have started when Compiz began playing with digits in its config folder names, like .compiz-1 versus .compiz. I have no idea when I am supposed to use the -1 variant and when not. I have renamed my Compiz folders in .gconf and .compiz, which did not help. How do I get my settings back, and why is this mess happening?
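
    A quick way to see which candidate locations actually hold profile data (the path list is an assumption built from the directory names mentioned above, plus the .config variants newer Compiz versions use):

        # list whichever Compiz config directories exist, old- and new-style names alike
        ls -d ~/.compiz ~/.compiz-1 ~/.config/compiz ~/.config/compiz-1 2>/dev/null
        # gconf-based settings live under this tree
        ls -d ~/.gconf/apps/compiz* 2>/dev/null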

  • How do I install penmountlpc?

    - by ændrük
    I would like to install the penmountlpc touchscreen driver in Ubuntu 11.10 on a Dialogue Flybook A33i. When I try installing it from the source packed in penmountlpc-source_1.1_all.deb, I receive the following build error (see also the full build log):

        # Install the module
        cp penmountlpc.o debian/penmountlpc-modules-3.0.0-12-generic/lib/modules/3.0.0-12-generic/misc
        cp: cannot create regular file `debian/penmountlpc-modules-3.0.0-12-generic/lib/modules/3.0.0-12-generic/misc': No such file or directory
        make[1]: *** [binary-modules] Error 1
        make[1]: Leaving directory `/usr/src/modules/penmountlpc'
        make: *** [kdist_build] Error 2

    How can I resolve this problem?
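
    The cp fails because the packaging directory it targets does not exist yet. One workaround worth trying (an assumption, not a verified fix: the package's rules file may simply be missing a mkdir step) is to create the directory by hand before rebuilding:

        # run inside the module build tree shown in the log
        cd /usr/src/modules/penmountlpc
        sudo mkdir -p debian/penmountlpc-modules-3.0.0-12-generic/lib/modules/3.0.0-12-generic/misc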

  • automount smb share customization

    - by Douda
    As a new Linux user (on Ubuntu 12.10), I have tried several tweaks with SMB shares. I followed this tutorial to permanently mount 4 SMB shares from my local NAS. To summarize, I edited /etc/fstab and added a line like:

        //servername/sharename /media/windowsshare cifs credentials=/home/ubuntuusername/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0

    and created a credentials file, ~/.smbcredentials, in my home folder for security reasons (explained in the tutorial). It works perfectly; the shares are automounted on each reboot. But when I log on, I always get 4 file manager windows opened, one for each share. Is it possible to stop these windows from opening on every reboot? I guess it's related to X, or needs an explicit way to deny these graphical mounts, but I have no clue how to proceed. Thank you for your time.
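
    For reference, the credentials file format used by mount.cifs (the values here are placeholders):

        # create ~/.smbcredentials with placeholder values, then lock it down
        printf 'username=NASUSER\npassword=NASPASSWORD\n' > ~/.smbcredentials
        chmod 600 ~/.smbcredentials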

  • Can I speed up File Sync for Ubuntu One?

    - by tapan
    I am using Ubuntu 11.04 on both my home laptop and my work laptop. I just wanted to sync the work folders on my home laptop to my office laptop, which is ~300 MB of data. This would normally be a short download, but Ubuntu One is taking forever to sync it. Any ideas what could cause this? I am not behind a firewall or anything of that sort, and I have not checked the "limit bandwidth" box in the preferences.

  • how to add programs to ubuntu without internet access

    - by captainandcoke
    I don't have internet access at my home, and it takes me about half an hour to ride my bike to the library. I have downloaded .deb files to try to install on my home computer, but every one I have downloaded says it can't install because it depends on package X. The next day I download package X, and it requires package Y. Is there any way to find out what ALL the sub-dependencies are for a .deb file? I have tried to boot from USB or an external hard drive on the library computers, but the security settings prevent this. Also, I do not know anyone with a Linux computer.
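
    One offline starting point (this only lists the direct dependencies recorded in the .deb itself, so walking the full tree means repeating it per package; the second command assumes the home machine still has apt package lists from the install media or an earlier update):

        # print the control information of a downloaded package, including its Depends line
        dpkg -I package.deb | grep -i depends
        # if local package lists are available, this walks the whole dependency tree
        apt-cache depends --recurse --no-recommends --no-suggests packagename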

  • Lubuntu 14.04: Problem starting lxsession-default-apps

    - by user278179
    I have a problem: I can't use lxsession-default-apps on Lubuntu 14.04, because it just tells me "The database is updating, please wait". If I try to run lxsession-default-apps from a terminal, I get this output:

        ** Message: utils.vala:30: config_path_directory: /home/USER/.config/lxsession-default-apps
        ** Message: desktop-files-backend.vala:171: test config_path: /home/USER/.config/lxsession-default-apps/settings.conf
        ** Message: desktop-files-backend.vala:237: Scanning folder: /usr/share/applications
        ** Message: desktop-files-backend.vala:278: Start scanning
        ** Message: desktop-files-backend.vala:257: Scanning folder: /usr/share/app-install/desktop
        ** Message: desktop-files-backend.vala:278: Start scanning
        Error: list_files failed: No such file or directory
        ** Message: desktop-files-backend.vala:333: Finishing scanning
        ** Message: desktop-files-backend.vala:189: Signal finish scanning with mode: write
        ** Message: desktop-files-backend.vala:333: Finishing scanning

    Any help would be appreciated. Thanks. Regards.
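
    The failure appears right after the scan of /usr/share/app-install/desktop begins, so one guess (an assumption, not a confirmed fix) is that this directory is missing; it is normally provided by the app-install-data package:

        # check whether the folder the scanner chokes on actually exists
        ls -d /usr/share/app-install/desktop
        # if it is missing, installing this package should recreate it
        sudo apt-get install app-install-data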

  • Microsoft Office 2013 Takes New Approach

    You can check out an article from Computerworld for a good look at the questions and answers about the new software. For instance, you've probably noticed that I'm not giving the full name. That's because Microsoft seems to be using several names. If you go the traditional route and pay the one-time upfront fee for the shrink-wrapped edition, it's Office 2013. There's also a tablet version called Office Home and Student 2013 RT - but that won't include the iPad, or at least not at first. The consumer preview, which I'll be linking to in a minute, is dubbed Office 365 Home Premium. There ...

  • XUbuntu: Open file browser via "run command" menu

    - by mbelow
    In older Xubuntu versions, if I entered a path to a directory in the "run command" dialog, a Thunar window opened showing that directory. I found that very handy: to open, e.g., /tmp, I just pressed WindowsKey+R, entered /tmp, and pressed Enter, and there I was. This was also great for URLs (ftp, http, etc.). Unfortunately, since 12.04 this doesn't work anymore; it seems the "run command" dialog is now integrated with the program finder. It still works when I type "thunar ftp://..." or "thunar /tmp", but it's a bit tedious now. Is there a way to restore the old behavior? Note: I'm running a German localization of Xubuntu, so I hope I translated "run command" and "program finder" correctly.
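
    One possible workaround (a sketch: both tools are standard on Xfce-based systems, though zenity may need installing, and binding the script to WindowsKey+R is left to the keyboard-shortcut settings): a tiny wrapper that prompts for a path or URL and hands it to the preferred handler:

        #!/bin/sh
        # prompt for a path or URL, then let exo-open pick the right application
        # (Thunar for directories, the preferred browser for http/ftp URLs)
        target=$(zenity --entry --title="Open" --text="Path or URL:") && exo-open "$target"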

  • Files power_profile and power_method missing on Ubuntu 12.04 after clean install

    - by Nikola
    OK, here is the problem. I am using GNOME Shell, Ubuntu 12.04, kernel 3.2.0-32-generic-pae, and the proprietary drivers for my ATI card (installed via "Additional Drivers"). The laptop is an HP ProBook 4310s, and I want to control power_profile and power_method because my GPU temperature is high. Before I reinstalled Ubuntu 12.04, I used a .sh script on startup to write to those files, and everything worked like a charm; but now they are missing, and I can't create them. This is what I get when I try to create the directories:

        mkdir: cannot create directory `/sys/class/drm': No such file or directory

    How can I get them back? If you need more information, just ask and I will give it.
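
    For reference, the startup method described would have looked something like this (a sketch; these sysfs files belong to the open-source radeon driver's power management interface, so it is an assumption worth checking whether they exist at all under the proprietary fglrx driver):

        # select profile-based power management on the first GPU (open radeon driver)
        echo profile | sudo tee /sys/class/drm/card0/device/power_method
        # then pick the low-power profile
        echo low | sudo tee /sys/class/drm/card0/device/power_profile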

  • HIB Games (Aquaria & Penumbra) cannot find libGL.so.1 even though it exists

    - by aberration
    I'm trying to play some Humble Indie Bundle (HIB) games, but I'm getting errors with Aquaria and Penumbra: Overture that are related to the libGL.so.1 file. Aquaria gives this error on launch:

        Message: SDL_GL_LoadLibrary Error: Failed loading libGL.so.1

    And Penumbra: Overture gives this error on launch:

        ./penumbra.bin: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory

    I know that the file libGL.so.1 does exist (in /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1). From past errors like this, I'm guessing that you need to symlink the library into another directory, but I can't figure out which one.
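
    One thing worth checking before symlinking anything (an assumption: many HIB games ship as 32-bit binaries, which cannot load the 64-bit libGL found above and need the i386 build instead):

        # confirm whether the game binary is 32-bit or 64-bit
        file ./penumbra.bin
        # if it is 32-bit, install the 32-bit Mesa libGL via multiarch
        sudo apt-get install libgl1-mesa-glx:i386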

  • User script at logout

    - by GUI Junkie
    The problem: I'm sharing a directory with my wife. I've placed us both in a 'shared' group, and the directory belongs to the 'shared' group as well. Whenever one of us creates a file, the file belongs to user:user instead of user:shared. The solution: I can run sudo chown, but my wife can't, so I want to run a script when I log out of the session. If I understand correctly, the startup scripts go in /etc/init.d/ and the runlevel scripts go in /etc/rc0.d/, where 0 is the runlevel (0-6). Do the runlevel scripts execute only on exit/logout? Do these depend on the user, that is, can I run it only for my user (not so important in this case, mind)? Should I place the script somewhere else? Also, I imagine that the script will be run by root, so there's no need for sudo within the script; is that correct?
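
    A per-user alternative (a sketch, with the directory path assumed): ~/.bash_logout runs when a login shell exits, and chgrp does not need root as long as the files are your own and you belong to the target group. Setting the setgid bit on the directory (chmod g+s) is another standard way to make new files inherit the 'shared' group in the first place.

        # ~/.bash_logout: hand the shared tree to the shared group; files owned
        # by the other user will fail the chgrp, so those errors are silenced
        chgrp -R shared /home/shared 2>/dev/null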

  • /etc/apt/apt.conf gets cleared every time I change proxy settings under settings->network->Network proxy

    - by Muriuki David
    I use proxy settings at work, but when I get home, my network connection uses no proxy. Every time I get home and set the proxy under Settings > Network > Network Proxy to "None", the file /etc/apt/apt.conf gets cleared, and the following morning I have to edit the file and type the proxy line in again, or at least copy-paste it from a backup file. How can I avoid this situation? It's tiring. How can I make the proxy settings GUI write to this file so that apt-get and Software Center work when I set the proxy under network settings?
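
    For reference, the apt proxy line in question typically looks like this (proxy host and port are placeholders). Keeping it in a dedicated fragment under /etc/apt/apt.conf.d/ rather than in /etc/apt/apt.conf itself may also keep the GUI from clearing it, though that behavior is an assumption worth testing:

        # write the proxy setting to its own fragment instead of apt.conf
        echo 'Acquire::http::Proxy "http://proxy.example.com:8080/";' | \
            sudo tee /etc/apt/apt.conf.d/95proxy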

  • Creating basic ACPI event makes the system unusably slow

    - by skerit
    I want to change a few settings on my laptop when I switch to battery power, so I created a new event in /etc/acpi/events/cust-battery. It looks like this:

        event=battery
        action=/home/skerit/power.sh

    I put a simple command in the power.sh file:

        echo This is a test >> /home/skerit/powertest

    Now, when I tail this file, it shows "This is a test" 4-5 times upon switching to battery power. However, the system becomes totally unstable: it slows down significantly, I can't change anything in the terminal, and the terminal and certain parts of the screen (like the GNOME system monitor applet) go blank from time to time. What can be the cause of that? It's a simple echo that gets executed a few times!
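
    One way to see what is actually firing (a sketch; acpid substitutes the full event text for %e, which would show whether the 'battery' pattern matches far more events than expected):

        # /etc/acpi/events/cust-battery: pass the matched event text to the script
        event=battery.*
        action=/home/skerit/power.sh %e

        # /home/skerit/power.sh: record which event arrived and when
        echo "$(date) $*" >> /home/skerit/powertest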

  • Reinstall Ubuntu on Custom Partitions

    - by Forerunner117
    I am attempting to reinstall Ubuntu 13.04 without losing my installed software and /home docs. I have read countless threads on this same topic, but nothing seems to apply to my situation. When I originally installed, I had created a separate partition for /home, but I am now unsure which partition that was. Based on the picture below, where should I be installing the new copy? Also, will I run into problems given that I am now running 13.10 and want to put 13.04 back on it? Should I grab 12.04 or 13.10 for this reinstall? Picture: http://i.stack.imgur.com/FL2SY.png (Note: I am performing this reinstall due to a complete muck-up of my Unity/Compiz settings and configuration, resulting in no desktop. I've done my best to resolve that problem first before resorting to this.)
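
    Two quick ways to identify which partition currently holds /home (run from the existing system, or from a live session with the root partition mounted):

        # show filesystems, labels and mount points for all block devices
        lsblk -f
        # the installed system's own record of what it mounts where
        grep /home /etc/fstab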

  • How to Update Remote Diagnostic Agent (RDA) to Latest Version?

    - by Daniel Mortimer
    Remote Diagnostic Agent (RDA) 4.28 was released on 12th June. Full details can be found in this My Oracle Support document: RDA 4 Release Notes [ID 414970.1]. From a Fusion Middleware Core Component, Install and Administration perspective, this latest release does not offer any significant new features or changes. However, despite the lack of Fusion Middleware specific new features in version 4.28, Remote Diagnostic Agent still comes highly recommended. It is an incredibly useful problem-solving / troubleshooting aid. Support engineers dealing with Service Requests often request RDA output, as it collects just about everything you might need to get a view of the state and configuration of the host operating system, network setup and Fusion Middleware components. To find out more, take a look at:

    Running RDA Against Oracle Fusion Middleware 11g [ID 853437.1]
    Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1]

    Note: While the latter document looks at RDA from the perspective of WebLogic Server, much of the advice given in the videos can be applied to other Fusion Middleware products.

    OK, let's get back on track with the topic suggested by the title. If you are already familiar with Remote Diagnostic Agent, you may ask the question: 'How do I keep my RDA at the latest version?' The answer is in "Running RDA Against Oracle Fusion Middleware 11g [ID 853437.1]". To quote: There are two methods:

    1. Upgrade RDA via OCM (Oracle Configuration Manager). Refer to the advice given in: Remote Diagnostic Agent (RDA) Upgrade README [ID 1309034.1]

    OR

    2. Manually download and upgrade to the latest version. To quote from Remote Diagnostic Agent (RDA) 4 - FAQ [ID 330363.1]:

    +++++++++++++++++++++++++++++++++++++++++++++++++++++++
    How do I upgrade my RDA 4.x installation from the prior release? The most simplest and reliable way to upgrade your RDA installation is delete or move your old installation to a new location. Then install the new release into the location you had the prior release installed. If you want to reuse you old setup.cfg file, you can place the older version into the new <rda> directory and it will try to upgrade your setup.cfg to the new features. A second approach is to install the latest RDA into another directory, then if needed copy the old setup.cfg file to the new RDA directory. When the new RDA is run for the first time, it will try to upgrade your setup.cfg to the new features.
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++

    The upgrade method via Oracle Configuration Manager is nice because it allows RDA to be auto-updated whenever a new release of RDA is made available (which, roughly speaking, is every 3 months). However, it does require you to install and configure Oracle Configuration Manager in addition to RDA. A quick guide to Fusion Middleware 11g and OCM can be found in this support document: Configuring OCM in Oracle Fusion Middleware 11g? A Quick and Easy Guide [ID 1096871.1]

  • Installing a Brother printer in Ubuntu 12.04 (MFC-J6510DW)

    - by Gelterfinger
    I am trying to install a Brother MFC-J6510DW printer on Ubuntu 12.04. I have tried various ways, but nothing works for me. I have downloaded the drivers from the Brother Solutions Center (solutions.brother.com). I tried to install from the System Settings > Printing pull-down menu; there I get the message "failed to read PPD file", and it says something about a missing asterisk in column 1, line 1. In the terminal I get the message "cannot find file or directory". Under localhost:631/printers, I get the message "no such file or directory". I tried to install the file I downloaded from the Brother Solutions Center through the Ubuntu Software Center; there I get the message "the package is of bad quality". Help! I downloaded mfcj6510dwlpr-3.0.0-1.i386.deb and mfcj6510dwcupswrapper-3.0.0-1.i386.deb. I also downloaded linux-brprinter-installer-1.0.0-1.gz, but this does not help either.
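
    Brother's own install notes for these drivers usually go through dpkg on the command line rather than the Software Center; a sketch using the file names above (the lpr driver first, then the cupswrapper; --force-all is what Brother's instructions typically suggest, so use it with care):

        # install the LPR driver, then the CUPS wrapper, from the download directory
        sudo dpkg -i --force-all mfcj6510dwlpr-3.0.0-1.i386.deb
        sudo dpkg -i --force-all mfcj6510dwcupswrapper-3.0.0-1.i386.deb
        # check that CUPS now knows about the printer
        lpstat -p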

  • Why are Back In Time snapshots so large?

    - by Chethan S.
    I just backed up the contents of my home partition onto my external hard drive using Back In Time. I browsed to the backed-up contents on the external drive, and under Properties it showed me the size as 9.6 GB. I had read that for subsequent snapshots, Back In Time does not back up everything again but creates hard links to the older content and saves only the newer content, so I wanted to test it. I copied two small files into my home partition and ran 'Take Snapshot' again. The operation completed within a minute: first it checked the previous snapshot, assessed the changes, detected the two new files, and synced them. After this, when I browsed to the backed-up contents, I was surprised to see the newer and older backups taking up 9.6 GB each. Isn't this a waste of hard drive space? Or did I interpret something wrongly?
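
    This is the expected illusion with hard-link snapshots: each snapshot lists every file at full size, but unchanged files share the same inode, so the data exists on disk only once. du makes this visible because it counts each inode once per invocation (the snapshot paths below are placeholders):

        # each snapshot alone reports ~9.6 GB
        du -sh /media/external/backup/snapshot-1
        # both together report only slightly more than one, revealing the shared data
        du -sh /media/external/backup/snapshot-1 /media/external/backup/snapshot-2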

  • Adding your website to free web directories as a link building strategy

    - by Man
    It's been two months since I launched a website. I recently ran into some websites that list directories of other websites; some examples are http://www.addgoogleurl.com/ and webdirectorieslist.com, etc. I was talking with my colleague, and he says that adding the URL of my website to these kinds of directories will negate the effect of my other organic, real links. Does Google score links from these web directory sites positively or negatively? Do you have any source to refer to for your answer? I found a similar question asked before on webmasters.SE, but it asks about many links from a single website.
