Search Results

Search found 17278 results on 692 pages for 'directory conventions'.

  • Emacs not loading my (.emacs) configuration file

    - by ant2009
    I am using Emacs on Ubuntu 9.04. My Emacs configuration file, named .emacs, lives in the ~/.emacs.d directory and holds some basic configuration. However, every time I start Emacs it never loads my configuration, and I have to keep applying it manually, e.g. M-x transient-mark-mode. My .emacs file is listed below:

        ;; Emacs customization file path
        (add-to-list 'load-path "~/emacs.d")
        ;; Use font lock mode
        (global-font-lock-mode t)
        ;; Highlight cursor line
        (global-hl-line-mode t)
        ;; Highlight selected region
        (transient-mark-mode t)

    I want to add to this configuration instead of manually applying entries. Many thanks for any advice.
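
    A plausible cause, offered as a hedged sketch rather than a confirmed diagnosis: Emacs reads ~/.emacs (or ~/.emacs.d/init.el) at startup, but not a file named .emacs inside ~/.emacs.d, and the quoted load-path entry is missing a dot (~/emacs.d vs. ~/.emacs.d). Assuming that is the issue:

        # Move the init file to a location Emacs actually reads at startup
        mv ~/.emacs.d/.emacs ~/.emacs
        # or, equivalently, keep it inside ~/.emacs.d under the expected name
        mv ~/.emacs.d/.emacs ~/.emacs.d/init.el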

  • Distributing my application inside a Debian virtual machine image: how to meet GPL obligations?

    - by bdk
    I have a Linux application I've developed, and I have created a standalone VMware image that people can download to try out the application without needing to install and configure a Linux server. I created this VMware image by starting with a base Debian system, installing a bunch of packages, and then configuring all the packages and daemons my application depends on. On boot, the VMware image goes straight into an X server running only my application and no window manager, so it's more of a "virtual appliance" than a normal Linux desktop environment. Users will generally never see a command prompt or any application other than my own. (I have a handle on the licensing issues of my application itself.) Now I would like to distribute this image, but I'm not sure how to meet my obligations under the GPL and the other licenses the various Debian components are released under. As I understand it, I have two primary obligations to meet:

    1. Providing copyright and license information for each component I use. As I understand it, all the information I am required to present is located in the /usr/share directory in Debian, but since my users will generally never touch a console or terminal, they will never see it. Does providing a text file containing a concatenation of all the files inside /usr/share meet this obligation?

    2. Making source code available for all components I distribute. Since I am not creating the image from source but from binary packages, I can't provide the actual source code that results in exactly my image being generated. Does providing an FTP mirror, plus an offer to send that mirror on DVDs, of the Debian source debs for all the packages I use meet this obligation?

    Is there anything else I'm required to do to legally distribute this image?
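
    As a hedged sketch of the first point (a mechanical aid, not legal advice): Debian keeps each package's license text in /usr/share/doc/<package>/copyright, so a single document can be assembled with something like:

        # Concatenate every installed package's copyright file into one text file
        find /usr/share/doc -name copyright -print0 | xargs -0 cat > third-party-licenses.txt
        # Record the exact package list, to fetch matching source packages for the mirror
        dpkg-query -W -f='${Package}=${Version}\n' > package-manifest.txt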

  • Having problems with Grub2 booting Ubuntu from my External Hard Drive

    - by anonymous
    I installed Ubuntu on my external hard drive, but it won't boot on my laptop. What do I do? I did some reading and traced the source of the problem to GRUB 2. Apparently, GRUB 2 isn't using the device's UUID, and uses the Linux device name instead (/dev/sdf2). This means that whenever I plug my external HDD into a system that has a different number of drives connected to it, I won't be able to boot without editing the boot command. I don't understand it too well, but that's what I got from what I read. Is there any way to fix this?
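
    A hedged sketch of one common fix, assuming the Ubuntu partition really is /dev/sdf2 and grub.cfg is regenerated from within that installation: make sure GRUB identifies the root filesystem by UUID and rebuild its configuration:

        sudo blkid /dev/sdf2        # note the partition's UUID
        # in /etc/default/grub, GRUB_DISABLE_LINUX_UUID must be unset or "false"
        sudo update-grub            # regenerates grub.cfg with root=UUID=... entries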

  • CentOS cron job running a PHP file

    - by user50946
    I have a PHP file called test.php, set to run at the fifth minute of every hour. Whenever I run the file manually (by going to the path in a web browser) it works fine, but when the cron job tries to run it I get an error message. My cron job is:

        #### Delete Records
        5 * * * * /var/www/html/phpsysinfo/cronUpdateLeadBucketOnEnergycAlliance.php

    My PHP file (path: /var/www/html/phpsysinfo/phpfile) is:

        <?php
        require("dbconnect.php");
        $sql = mysql_query("DELETE FROM list where status <> 'LEAD'") or die(mysql_error());
        ?>

    and the error that I get is:

        /var/www/html/phpsysinfo/phpFile.php: line 1: ?php: No such file or directory
        /var/www/html/phpsysinfo/phpFile.php: line 2: syntax error near unexpected token `"dbconnect.php"'
        /var/www/html/phpsysinfo/phpFile.php: line 2: `require("dbconnect.php");

    Thanks.
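
    The errors suggest the shell, not PHP, is interpreting the script: cron executes the file directly, and without an interpreter line the shell chokes on <?php. A hedged sketch of two standard fixes (the /usr/bin/php path is an assumption; check it with `which php`):

        # Option 1: invoke the PHP CLI explicitly in the crontab entry
        5 * * * * /usr/bin/php /var/www/html/phpsysinfo/cronUpdateLeadBucketOnEnergycAlliance.php
        # Option 2: add "#!/usr/bin/php" as the script's first line, then
        chmod +x /var/www/html/phpsysinfo/cronUpdateLeadBucketOnEnergycAlliance.php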

  • How do I fix this error on Windows Server 2003: "The application failed to initialize properly (0xc0000022)"?

    - by Sharon
    Opening one of the programs from a user's desktop, I get the application error above. The program is stored on the server, and its icon is then put on the user's desktop (this is how I was told to do it), but it does not run the application. I don't know anything about group policies and such, and can just about manage to add users in Active Directory, and that is it. We just have a folder into which we drop the program icons. Any ideas? I must be doing something wrong, as the icon doesn't always show up on their desktops either. What is the simplest way to do this? Thanks.

  • Ubuntu is not detecting my Android device

    - by user3514160
    I am new to Android. I have just downloaded and installed the Android SDK, but when I run the application from Eclipse, my device is not detected. I googled and was led to the following as my solution, but that also didn't work. Here's the 51-android.rules:

        SUBSYSTEMS=="usb", ATTR{idProduct}=="0bb4", ATTR{idProduct}=="0c03", MODE="0666", GROUP="plugindev", OWNER="<username>"

    After that I rebooted my laptop and ran this command:

        username@laptopname:~/Android/adt-bundle/sdk/platform-tools$ adb devices

    The output I get is:

        * daemon not running. starting it now on port 5037 *
        * daemon started successfully *
        List of devices attached
        ????????????    no permissions

    EDIT:

        crazydeveloper@crazydeveloper:~$ lsusb
        Bus 002 Device 004: ID 0bb4:0c03 HTC (High Tech Computer Corp.)
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 003: ID 04f2:b337 Chicony Electronics Co., Ltd
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        crazydeveloper@crazydeveloper:~$ ls -l /dev/bus/usb/004/
        ls: cannot access /dev/bus/usb/004/: No such file or directory

    Edit 2: After the answer submitted, here's the output that I got:

        crazydeveloper@crazydeveloper:~$ ls -l /dev/bus/usb/002
        total 0
        crw-rw-r--  1 root root    189, 128 May  7 09:45 001
        crw-rw-r--+ 1 root root    189, 129 May  7 09:45 002
        crw-rw-rw-  1 root plugdev 189, 130 May  7 09:48 003

    I am using a Micromax Canvas 2.2 A114, Android version 4.2.2. Please help me. Thanks.
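
    Two details in the quoted rule look off, offered as a hedged sketch rather than a confirmed fix: 0bb4 is HTC's vendor ID, so the first match should be idVendor rather than a second idProduct, and the standard group name is plugdev (which the ls -l output above shows exists), not plugindev:

        # /etc/udev/rules.d/51-android.rules
        SUBSYSTEM=="usb", ATTR{idVendor}=="0bb4", ATTR{idProduct}=="0c03", MODE="0666", GROUP="plugdev"
        # reload the rules, then restart adb and replug the device
        sudo udevadm control --reload-rules
        adb kill-server && adb devices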

  • Nginx HTTPS redirects causing loop

    - by Ben Chiappetta
    I've been banging my head against the wall trying to figure this out, so if anyone can help I'd appreciate it. My nginx conf has three different redirect loops, and I haven't been able to get any of the three to work right. The three problem areas are: redirecting the memcache directory to SSL, redirecting the accounts directory to SSL, and redirecting SSL to www if non-www.

    nginx.conf:

        user nginx;
        worker_processes 1;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            error_log /var/log/nginx/error.log notice;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            proxy_set_header X-Url-Scheme $scheme;
            #gzip on;
            rewrite_log on;
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/default.conf:

        server {
            listen 80;
            server_name <redacted>.net;
            rewrite ^(.*) http://www.<redacted>.net$1;
        }
        server {
            listen 80;
            server_name www.<redacted>.net;
            set_real_ip_from 192.168.30.4;
            set_real_ip_from 192.168.30.5;
            set_real_ip_from 192.168.30.10;
            real_ip_header X-Forwarded-For;
            #charset koi8-r;
            access_log /var/log/nginx/host.access.log main;
            root /var/www/html;
            index index.php index.html index.htm;
            location =/memcache {
                rewrite ^/(.*)$ https://$server_name$request_uri? permanent;
            }
            location /accounts {
                rewrite ^/(.*)$ https://$server_name$request_uri? permanent;
            }
            #error_page 404 /404.html;
            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
            }
            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
                try_files $uri = 404;
            }
            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            location ~ /\.ht {
                deny all;
            }
        }

    conf.d/ssl.conf:

        # HTTPS server
        server {
            listen 443;
            server_name <redacted>.net;
            rewrite ^(.*) https://www.<redacted>.net$1;
        }
        server {
            listen 443 default_server ssl;
            server_name www.<redacted>.net;
            set_real_ip_from 192.168.30.4;
            set_real_ip_from 192.168.30.5;
            set_real_ip_from 192.168.30.10;
            real_ip_header X-Forwarded-For;
            proxy_set_header X-Forwarded_Proto https;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
            proxy_set_header X-Forwarded-Ssl on;
            set $https_enabled on;
            ssl_certificate <redacted>.crt;
            ssl_certificate_key <redacted>.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_ciphers HIGH:!aNULL:!MD5;
            ssl_prefer_server_ciphers on;
            root /var/www/html;
            index index.php index.html index.htm;
            location /memcache {
                auth_basic "Restricted";
                auth_basic_user_file $document_root/memcache/.htpasswd;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
                include /etc/nginx/fastcgi_params;
                try_files $uri = 404;
            }
        }
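
    One pattern worth checking, offered as a hedged sketch (example.net stands in for the redacted domain): a bare `return 301` with an explicit host avoids any ambiguity about what $server_name expands to mid-rewrite, and note that `location =/memcache` is an exact match, so it never fires for /memcache/ or anything beneath it, unlike the prefix match used in the SSL server:

        server {
            listen 80;
            server_name example.net;
            # single unconditional hop to the canonical https host
            return 301 https://www.example.net$request_uri;
        }

    Comparing $scheme (or the X-Forwarded-Proto header, if a proxy terminates SSL upstream) before redirecting is the usual way to rule a loop in or out.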

  • Sharing accounts between multiple computers running Ubuntu Linux

    - by john
    My school has a computer lab full of machines running Red Hat Linux. They have it set up so that you can log into any computer in the lab and it automatically loads your desktop, home directory, etc., which makes all computers in the lab look the same to you, regardless of which one you're using. I have two computers at home running Ubuntu Linux. Could I do this same thing with my computers at home? What is it called, and where do I find documentation on how to set it up? Thanks!
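
    A hedged sketch of the usual ingredients (server and export names are assumptions): a central account database, typically NIS or LDAP, combined with home directories exported over NFS, so every client mounts the same /home:

        # /etc/fstab on each client: mount shared home directories from a central box
        fileserver:/export/home  /home  nfs  defaults  0  0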

  • tar incremental backup is backing everything up, every time

    - by Cyclic
    I made an incremental backup about 10 months ago (on Jan 27, 2013), creating a .snar metadata file. Now, when I try to make an incremental backup using

        tar --create --file=dropbox_incremental_1.tar --listed-incremental=dropbox_0.snar Dropbox

    the command just re-backs up everything. I'm not an expert on Unix timestamps, but I noticed that virtually all of my directory timestamps are way more recent than the last time their contents changed. My actual files look like this:

        Access: 2013-03-12 19:04:51.000000000 -0500
        Modify: 2012-09-30 15:10:47.000000000 -0500
        Change: 2013-03-12 19:04:51.306209672 -0500

    The Modify timestamp seems correct, but the files were definitely not changed (at least not by anything I know of) at the times they claim. These files still go into the incremental archive. What's happening here? Is there a way to tell tar to look at the Modify timestamp? Isn't that what it's supposed to be doing?
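
    A hedged reading of those timestamps: GNU tar's listed-incremental mode also keys on the device and inode numbers recorded in the .snar file, so if the files were copied, restored, or moved to another filesystem (which would also explain the mass ctime change), every inode looks new and everything is re-archived. If only modification times matter, a date-based snapshot is one workaround:

        # archive only files whose mtime is newer than the previous backup date
        tar --create --file=dropbox_newer.tar --newer-mtime='2013-01-27' Dropbox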

  • Best practice for creating an FTP administrator account on vsftpd

    - by jtd
    Background: my manager would like me to create an administration account for our FTP server. When logged in via FTP, it should instantly display all of the users' home directories, and it should be able to modify any directory or file in any way. What would be the best way to go about this? I planned on chrooting this FTP admin to /home, but I don't know how to properly handle the permissions. Maybe make a group called ftp_admins and chgrp the /home folder? But then wouldn't that affect the users accessing their own folders? Any help is appreciated.
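
    A hedged sketch of one approach (the account name is an assumption): since vsftpd chroots a local user to that user's home directory, an admin account whose home directory is /home itself lands exactly where the question wants, without chgrp-ing /home and disturbing the regular users:

        sudo useradd -d /home -M ftpadmin    # the admin's "home" is /home itself
        sudo passwd ftpadmin
        # in /etc/vsftpd.conf, keep local users chrooted:
        #   chroot_local_user=YES

    Write access to other users' files is a separate permissions question; granting it broadly (e.g. via a shared group) is worth thinking through carefully.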

  • Can a working Tomcat 6 webapp be turned into a usable .war file?

    - by Bill Cole
    Problem: I have a working webapp on a FreeBSD 8.1 Tomcat 6 test server that I need to move to a production system. The developer who last touched it (and had root on that server) has moved on and isn't helpful, and the running app seems to have been deployed from a CVS server that is now unavailable. My thinking is that I would like to find a way to wrap the working webapp into a proper .war so that I can deploy it on a pristine host and (after testing) send the existing system to a very deep bit bucket. But I'm not having luck finding a way to do that. I'm a sysadmin, not a developer, and don't work much with Tomcat systems, so I may be (and likely am) overlooking something blindingly simple. I gather that I may be able to just tar up the deployed directory and untar it on the new machine, but I have a nagging feeling that there are pitfalls in that.
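
    For what it's worth, a deployed directory can usually be repackaged with the JDK's jar tool, since a .war is just a zip archive with a particular layout; a hedged sketch (the paths and webapp name are assumptions):

        cd /usr/local/tomcat/webapps/myapp    # hypothetical exploded-webapp directory
        jar -cvf /tmp/myapp.war .
        # copy /tmp/myapp.war into the new host's webapps/ and Tomcat expands it

    One caveat: context configuration kept outside the webapp (e.g. conf/Catalina/localhost/myapp.xml) won't be captured this way and has to be moved separately.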

  • Client-side certificates in client browsers, with a Unix server for management

    - by user146253
    We currently run Unix dedicated servers for everything (web cluster, database, FTP, batch, ...) except for a Microsoft Active Directory Certificate Services box. The sole purpose of this Windows machine is to provide client-side certificates for our clients' browsers: all our clients are required to install a client-side certificate in order to be able to access our website. Is there an alternative in the Unix space? The purpose is to make sure only the approved hardware of an approved client can access our website. I'm open to any solution that provides this level of security, but we are talking about thousands of certified computers, so please factor that into a proposed solution. Optionally, we would also like to be able to revoke access. With regards.
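
    On the Unix side this is typically a private CA that issues client certificates, which any TLS-terminating web server can then require; a hedged sketch with plain OpenSSL (file names are assumptions, and at thousands of clients a CA front end such as easy-rsa or EJBCA is worth considering):

        # one-time: CA key and self-signed CA certificate
        openssl req -x509 -newkey rsa:2048 -keyout ca.key -out ca.crt -days 3650 -nodes
        # per client: key + CSR, signed by the CA
        openssl req -newkey rsa:2048 -keyout client.key -out client.csr -nodes
        openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
            -out client.crt -days 365
        # bundle for browser import; revocation is handled by maintaining a CRL
        openssl pkcs12 -export -in client.crt -inkey client.key -out client.p12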

  • How to package static content outside of web application?

    - by chinto
    Our web application has static content packaged as part of the WAR. We have been planning to move it out of the project and host it directly on Apache, to achieve the following objectives:

    - The static content is getting too big, bloating the EAR and slowing deployment across nodes; we want faster deployment times.
    - Take the load off the application server.
    - Host the static content on a subdomain, allowing some browsers (IE) to load resources simultaneously.
    - Give us the option of further caching, such as Apache mod_cache, apart from the cache headers we send to browsers.

    We use yuicompressor-maven-plugin to aggregate and minimize JS files. My question is: how do I package and manage this static content outside of the web application? My current options are a new Maven war project, still using the same plugin for aggregation and compression, or a plain directory in SVN, using the YUI/Google compressor directly. Or is there a better technology out there for managing static content as a project?
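
    For the serving side, a hedged Apache vhost sketch (names and paths are assumptions; mod_expires must be enabled) showing the subdomain plus long-lived cache headers:

        <VirtualHost *:80>
            ServerName static.example.com
            DocumentRoot /var/www/static
            ExpiresActive On
            ExpiresDefault "access plus 30 days"
        </VirtualHost>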

  • Is it possible to push DNS search suffixes from the DNS server to clients?

    - by Mark
    Our Active Directory, Windows-server-based intranet used to be called "intranet", and DNS worked fine for Windows machines and iPads/Android devices. We have changed the name to "apps.intranet", and it still works for Windows machines, but no longer for iPads/Android devices. I think this is because our Windows clients are configured to append .company.com when searching DNS, making the lookup fully qualified (this search-suffix list is pushed to the PCs via AD group policies). I must admit, though, I don't know why it worked with just "intranet"! Does anyone know if it's possible to get DNS to "tell" the iPads/Android devices to append .company.com, or how we can make it work some other way (while still using the multi-label, non-qualified DNS names)? Thanks!
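
    One avenue, offered as a hedged sketch: DNS itself has no mechanism for pushing suffixes, but DHCP does, via option 119 (the domain search list); note that Android's DHCP client has historically ignored this option, so it may only help the iPads. On a dnsmasq-based DHCP server it looks like:

        # dnsmasq.conf: advertise a DNS search list through DHCP option 119
        dhcp-option=option:domain-search,company.com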

  • Correct format for a datetime appended to a filename

    - by jhayes
    I'm trying to set up a batch file to execute a set of stored procs and dump the output to a timestamped text file. I'm having problems finding the correct format for the timestamp. Here is what I'm using:

        osql.exe -S <server> -E -Q "EXEC <stored procedure>" -o "c:\filename_%date:~-0,10%_%time:~-0,10%.txt"

    The error I get is:

        Cannot open output file - x:\filename_Thu 06/25/_16:26:43.1.txt
        No such file or directory

    I can't find the documentation, and I've played around with it but can't find the correct format.
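
    Two things stand out, offered as a hedged sketch: %date:~-0,10% starts the substring at offset 0 (hence "Thu 06/25/" in the error), and the raw expansions contain slashes and colons, which are illegal in Windows file names. Assuming a locale where %date% looks like "Thu 06/25/2009":

        rem Build a filename-safe stamp like 06-25-2009_16-26 (locale-dependent)
        set stamp=%date:~-10,2%-%date:~-7,2%-%date:~-4,4%_%time:~0,2%-%time:~3,2%
        osql.exe -S <server> -E -Q "EXEC <stored procedure>" -o "c:\filename_%stamp%.txt"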

  • SIMPLEST way to set up password protection for a static site, with basic admin UI?

    - by Joseph Turian
    I have a static site, and I would like the simplest approach to password-protecting a directory, with a basic admin UI for adding and removing users. I will have so few users that I don't care about performance, and I don't care whether it's PHP or Django or whatever; I just want a complete software package. Apache basic auth isn't good, because you can't log out, and there's no UI for adding users. I tried putting everything behind Django auth and serving the files through Django, but Chrome treats all my text/css headers as text/plain, so no stylesheets load. I can't use mod_xsendfile on my server, because I can't reconfigure Apache to add new modules, and I think that approach is overkill anyway. I could try configuring Nginx's X-Accel-Redirect, but that requires implementing all the Django auth code myself, and I'd prefer an existing solution; this is my backup plan, however. Is there a packaged piece of software that implements authentication with basic admin for a static site?
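
    For the backup plan, a hedged sketch of the nginx half of X-Accel-Redirect (paths are assumptions): the auth application responds with an X-Accel-Redirect header naming the protected path, and nginx then serves the file itself with correct MIME types:

        location /protected/ {
            internal;                       # reachable only via X-Accel-Redirect
            alias /var/www/static-site/;    # where the real files live
        }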

  • Process running on login: can't find it in AD or the login batch scripts

    - by tombull89
    Hello. I'm trying to deploy some classroom control software (NetSupport School) to some of the machines on our network, but for some reason, after you log off and restart the computer, any user who logs on ends up re-installing the software while logging on. I spent two hours on the phone with the company's support, and we eventually nailed it down to (most likely) a setting in Active Directory or in login.bat (drive mappings and settings), but we can't find anything in those that would say "run this installer at logon". Is there anywhere else on the system that would set something like this? Server 2003/XP. Ta!
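
    One more place to look, as a hedged suggestion: Group Policy software installation (an assigned or published MSI package) re-installs at each logon when deployed per-user, and it lives outside login.bat. The policies actually applied to an affected user and machine can be dumped for inspection with:

        gpresult /v > C:\gpresult.txt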

  • How can I automatically convert all source code files in a folder (recursively) to a single PDF with syntax highlighting?

    - by Bentley4
    I would like to convert the source code of a few projects into one printable file, to save on a USB stick and print out easily later. How can I do that?

    Edit: First off, I want to clarify that I only want to print the non-hidden files and directories (so no contents of .git, for example). To get a list of all non-hidden files in non-hidden directories in the current directory, you can run

        find . -type f ! -regex ".*/\..*" ! -name ".*"

    as seen in the answer to this thread. As suggested in that same thread, I tried making a PDF of the files with the command

        find . -type f ! -regex ".*/\..*" ! -name ".*" ! -empty -print0 | xargs -0 a2ps -1 --delegate no -P pdf

    but unfortunately the resulting PDF file is a complete mess.
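
    A hedged alternative sketch, assuming enscript and Ghostscript are installed: enscript does per-file syntax highlighting and writes PostScript to stdout, and ps2pdf converts the stream into a single PDF:

        find . -type f ! -regex ".*/\..*" ! -name ".*" ! -empty -print0 \
          | xargs -0 enscript -E --color -o - \
          | ps2pdf - code.pdf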

  • Mac failing (failed?) hard drive - is all hope lost?

    - by Daniel
    It's a 500 GB Seagate laptop hard drive that came with my MacBook Pro, with an Apple partition format. It has already been replaced, and I now have it external, connected via a SATA/USB adapter. I'm trying to get just a few files that I worked on while out of town when it crashed (and thus did not have my Time Machine backup drive). The drive will not mount, but OS X's Disk Utility detects it and can read the capacity, model number, and even the name of the partition, which leads me to believe all hope may not be lost. Failed attempts so far:

    - Disk Utility verify+repair says the drive cannot be repaired and that I should back up immediately (lovely).
    - DiskWarrior says it cannot rebuild the directory due to hardware failure.
    - Data Rescue quick and deep scans failed immediately.
    - PhotoRec says "error reading sector" for every sector (at least for the few minutes I let it run before closing it to explore other options).

    What else can I try here? Again, I'm just looking for a few small files (Python scripts, to be specific), not a full recovery.
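
    A hedged suggestion before any further scans: image the failing drive once with GNU ddrescue and point the recovery tools at the image instead of the hardware (the device name below is an assumption; check it first):

        sudo ddrescue -f -n /dev/disk2 rescue.img rescue.log    # fast pass, skip bad areas
        sudo ddrescue -f -r3 /dev/disk2 rescue.img rescue.log   # retry the bad sectors

    The log file lets the copy resume across runs, and tools like PhotoRec can then scan rescue.img without stressing the disk further.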

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I learn about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems that nowadays there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, and an entire code repository as all belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know I could put all of those into a single folder/directory, but what if they already exist in disparate directories and need to stay there? I know there are programs out there that can do this, but why isn't it in the filesystem? Something else that would be nice is some kind of relational feature, like you get with RDBMSes. I understand that this was supposed to be part of Vista/7, but it fell off the feature list too. Sure, any program can store a binary file with any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?

  • Upgraded to Ubuntu 12.04 from 10.04 and have to transfer a database from PostgreSQL 8.4 to 9.1

    - by Stpn
    I upgraded a server running a Rails application from Ubuntu 10.04 to 12.04 and now cannot connect to the PostgreSQL database. Here is the error message from the Rails app:

        could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    Also, pg_ctl start is not recognized as a command.

    EDIT: It turns out my database is on PostgreSQL 8.4 and my server is now running 9.1, so all the database files and configs are for 8.4. How can I transfer them? Just straight copy from the old pg_hba.conf?
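
    On Ubuntu the postgresql-common wrappers can migrate a cluster across major versions; a hedged sketch (the default cluster name "main" is an assumption):

        sudo pg_dropcluster --stop 9.1 main   # drop the empty 9.1 cluster the upgrade created
        sudo pg_upgradecluster 8.4 main       # dump the 8.4 cluster and reload it into 9.1

    A straight copy of the data directory won't work across major versions, though hand-edited settings in pg_hba.conf are worth porting over afterwards.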

  • Do I need to recompile PHP to make use of CURL API?

    - by amn
    I have both Apache and PHP set up manually, albeit the latter without cURL. There is a jungle of instructions and explanations on extensions for PHP, so I have a very straightforward question: what do I need to do to enable cURL in a more dynamic way? I resent the idea of static linking; in fact, I hate and avoid static linking like the plague. Is it possible to make my Apache and PHP understand that there is cURL in town? I can compile cURL if necessary. Package management may be out of the question, because I built PHP myself (I am on Ubuntu, and it does not provide PHP without Suhosin and a whole lot more, so I removed it and built PHP myself). The whole slew of related questions simply proposes installing the "php5-curl" package, which is exactly the one thing I cannot do, since it installs into a completely unrelated directory that my PHP does not even seem to link against.
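
    A hedged sketch of the dynamic route (paths are assumptions): the curl extension can be built as a shared object against an existing PHP build using phpize, with no static relinking of PHP itself:

        cd php-src/ext/curl        # the ext/curl directory of the matching PHP source
        phpize                     # generate build files for this PHP installation
        ./configure --with-curl
        make && sudo make install  # installs curl.so into the extension directory
        # then add to php.ini:  extension=curl.so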

  • What to do when 'dpkg --configure -a' fails with too many errors?

    - by rudivonstaden
    During an upgrade from Lucid (10.04) to Precise (12.04), the X session froze, and I have been trying to recover the upgrade to get a stable system. I have performed the following steps:

    - Used ssh to log in to the stalled system over the network.
    - Checked the contents of the /var/log/dist-upgrade directory. There was no activity in main.log, apt.log, or term.log.
    - top showed that the process 'precise' was using about 3% CPU, but I could find no evidence that the upgrade process was still doing anything. 'dpkg' did not show up in top, but it came up with pgrep dpkg | xargs ps.
    - Killed the 'dpkg' and 'precise' processes.
    - Tried to recover the upgrade by running sudo fuser -vki /var/lib/dpkg/lock; sudo dpkg --configure -a. This was partially successful (some packages were configured), but it failed with the message "Processing was halted because there were too many errors." I ran the same command a few times, and each time some packages were configured but others failed.
    - Tried running sudo apt-get -f install. It fails with errors similar to dpkg's.

    The current situation is that dpkg --configure -a and sudo apt-get -f install fail with two kinds of error. Dependency issues, e.g.:

        dpkg: dependency problems prevent configuration of cifs-utils:
         cifs-utils depends on samba-common; however:
          Package samba-common is not configured yet.
        dpkg: error processing cifs-utils (--configure):
         dependency problems - leaving unconfigured

    and resource conflicts, e.g.:

        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable

    Additionally, there are references to potential boot problems, so I'm not keen to reboot without fixing the install first:

        dpkg: too many errors, stopping
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic
        cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
        cryptsetup: WARNING: could not determine root device from /etc/fstab

    So my question is, how do I get a working install when dpkg --configure -a fails?
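
    A hedged sequence that often unsticks a half-configured upgrade (no guarantees; each round may configure a few more packages): first find and stop whatever holds the debconf lock, then iterate:

        sudo fuser -v /var/cache/debconf/config.dat   # identify the lock holder
        sudo dpkg --configure -a                      # repeat until the error count stops shrinking
        sudo apt-get -f install
        sudo apt-get update && sudo apt-get dist-upgrade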

  • Adding text to the beginning and end of a number of files?

    - by John Feminella
    I have a number of files in a directory hierarchy. For each file, I'd like to add "abcdef" at the beginning, on its own line, and "ghijkl" at the end, on its own line. For example, if the files initially contained:

        # one/foo.txt
        apples
        bananas

        # two/three/bar.txt
        coconuts

    then afterwards, I'd expect them to contain:

        # one/foo.txt
        abcdef
        apples
        bananas
        ghijkl

        # two/three/bar.txt
        abcdef
        coconuts
        ghijkl

    What's the best way to do this? I've gotten as far as:

        # put stuff at start of file
        find . -type f -print0 | xargs -0 sed -i 's/.../abcdef/g'
        # put stuff at end of file
        find . -type f -print0 | xargs -0 sed -i 's/.../ghijkl/g'

    but I can't seem to figure out what to put in the ellipses.
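
    No s/// expression can address "start of file", but sed's insert and append commands can; a hedged sketch assuming GNU sed (both -i and the one-line i/a forms are GNU extensions):

        # insert "abcdef" before line 1 and append "ghijkl" after the last line, in place
        find . -type f -print0 | xargs -0 sed -i -e '1i abcdef' -e '$a ghijkl'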

  • Recurring network issues at the same time every day

    - by Peter Turner
    Something has been happening on my company's network at 9:30 every day. I'm not the sysadmin, but he's not a Server Fault guy, so I'm not privy to every aspect of the network; I can ask questions if follow-up is needed. The symptoms are the following:

    - Sluggish network and download speeds (I don't notice it, but others do).
    - 3Com phones start ringing without anyone on the other end.

    We've got the following exposed to the public: ports for a web server, a few other ports for communicating with our clients for tech support, and a VPN, with a Cisco ASA blocking everything else. It's a smallish network (fewer than 50 computers/VMs on at any time) with an Active Directory server and a few VM servers, and we host our own mail server too. I'm thinking the problem is internal, but what's a good way to figure out where it's coming from?
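
    A hedged starting point: since the window is predictable, a packet capture scheduled just before 9:30 can be inspected afterwards for the offending flows (the interface name is an assumption):

        # crontab: capture 5 minutes of traffic starting at 9:28 each weekday
        28 9 * * 1-5 /usr/sbin/tcpdump -i eth0 -G 300 -W 1 -w /var/tmp/morning.pcap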
