Search Results

Search found 13318 results on 533 pages for 'svn config'.


  • Ubuntu 13.04 client cannot connect to Raspbian samba share

    - by envoyweb
    I have an Ubuntu 13.04 client trying to connect to a server running Raspbian with samba and samba-common-bin installed. I can see my share, but when I try to log in I get this error: "Unable to access location: Failed to write windows share: Cannot allocate memory." I have installed ntfs-3g for the USB hard drive, which already auto-mounts on the server, so I never had to create a directory or edit fstab. testparm on the server reports the following:

        [global]
        workgroup = ENVOYWEB
        server string = %h server
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        idmap config * : backend = tdb

        [homes]
        comment = Home Directories
        valid users = %S
        create mask = 0700
        directory mask = 0700
        browseable = No

        [printers]
        comment = All Printers
        path = /var/spool/samba
        create mask = 0700
        printable = Yes
        print ok = Yes
        browseable = No

        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers

        [BigDude]
        comment = Sharing BigDude's Files
        path = /media/BigDude/
        valid users = @users
        read only = No
        create mask = 0755

    testparm on the client, which is running Ubuntu, is as follows:

        [global]
        workgroup = ENVOYWEB
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        idmap config * : backend = tdb

        [printers]
        comment = All Printers
        path = /var/spool/samba
        create mask = 0700
        printable = Yes
        print ok = Yes
        browseable = No

        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers
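
    A quick way to narrow down whether the failure is on the server or the client side is to bypass the file manager and talk to the share directly (a sketch; "raspbian-server" and "youruser" are placeholders for your hostname and Samba account):

        # list shares and try an authenticated directory listing from the client
        smbclient -L //raspbian-server -U youruser
        smbclient //raspbian-server/BigDude -U youruser -c 'ls'

        # watch memory pressure and kernel messages on the Pi while a write fails
        free -m
        dmesg | tail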

  • Ubuntu 12.04 nomodeset fixes boot problem but causes screen resolution to get stuck

    - by Thunder
    I've been searching the Ask Ubuntu forum for the past three days trying to figure out what's going on with my system, and I have tried a lot of things but to no avail. So I will explain my situation, tell you what I have tried, and hope someone can help me :) I have an HP Workstation xw4100, Pentium 4 3.00 GHz, 1.5 GB RAM, NVIDIA Quadro4 380 XGL graphics card. It came with Windows XP, and I set it up (with Wubi) to dual-boot with Ubuntu 12.04. After installation I had the problem so many people have had of booting to a black screen (mine was actually booting to the basic terminal shell), which is fixed by adding nomodeset to the GRUB boot options. When I do that, my screen resolution becomes stuck at 1280x768 (as opposed to 1366x768 before adding nomodeset; also, when running XP the best resolution is 1280x720). When I go to "Additional Drivers" it doesn't show any proprietary drivers, so I installed them manually using these commands:

        sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update
        sudo apt-get install nvidia-current

    But after rebooting, that made the graphics even worse (now stuck at 800x600). So I tried to configure the drivers with sudo nvidia-xconfig, but that simply created an empty xorg.conf file. I found one place where a guy gave information to manually put into the xorg.conf file, but that had no effect at all. Lastly, I tried to install previous versions of the NVIDIA drivers, but they wouldn't even fully install. So now I have just re-installed Ubuntu 12.04, and I either need a better solution to the first problem (nomodeset) or to get the nouveau driver correctly configured to work with my NVIDIA graphics. Thanks for your help ahead of time!
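
    While the driver situation is being sorted out, the missing 1366x768 mode can sometimes be added by hand with xrandr, since nomodeset stops the kernel driver from probing the panel's native modes (a sketch; the output name VGA-0 is an assumption, check yours with xrandr -q, and note cvt rounds 1366 up to 1368):

        cvt 1366 768
        # feed the Modeline values that cvt prints to --newmode
        xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
        xrandr --addmode VGA-0 "1368x768_60.00"
        xrandr --output VGA-0 --mode "1368x768_60.00"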

  • Setting up dual monitors, Xorg.conf issues

    - by JTS
    I just got a new computer (a W520 with an nVidia GF106 [Quadro 2000] graphics card) and installed Ubuntu on it using Wubi. I have everything working, so I wanted to set it up to use two monitors with an extended screen. I figured I had to edit xorg.conf, but the file didn't exist. So I tried to create it by booting in recovery mode and executing Xorg -configure, but I am getting these errors:

        (EE) Failed to load module "vmwgfx" (module does not exist, 0)
        (EE) vmware: Please ignore the above warnings about not being able to load module/driver vmwgfx
        (++) Using config file: "/root/xorg.conf.new"
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        (EE) [drm] No DRICreatePCIBusID symbol
        Number of created screens does not match number of detected devices.
        Configuration failed.
        ddxSigGiveUp: Closing log

    Any idea how I can get Xorg -configure to work, so that I have an xorg.conf file I can edit to enable TwinView? EDIT: Another way I could ask the same question is: why can't I boot with an xorg.conf file generated by nvidia-xconfig? Is there something in the generated xorg.conf file that might need editing?
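
    For reference, the TwinView setup that nvidia-xconfig is meant to produce boils down to a couple of options in the Screen section, and a minimal hand-written xorg.conf along these lines can be easier to debug than a fully generated one (a sketch; the MetaModes resolutions are placeholders for your two monitors):

        Section "Device"
            Identifier "nvidia"
            Driver     "nvidia"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "nvidia"
            Option     "TwinView" "True"
            Option     "MetaModes" "1920x1080, 1920x1080"
        EndSection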

  • Ping access to a nested VM

    - by Shivaramakrishnan
    I have a problem with communication between the host and a nested VM. This is my layout: I have KVM installed on the host machine (a single NIC with a public IP) running Ubuntu. On top of this I have a VM running Ubuntu, and I have installed KVM in this VM too. Inside that I have a VM running a web server. I am able to ping the host from this web server VM and ssh into it, but pinging from the host to the web server VM is unsuccessful. The VM on the host (named L1hyp) was created using virt-manager and has the IP 192.168.122.8. The virtual switch interface created on the host is in the default config (NATed); its IP is 192.168.122.1. This VM also has a virtual switch interface in the default config (NATed); its IP is 192.168.100.1. The web server VM, created on top of L1hyp, has the IP 192.168.100.186. The web server VM uses 192.168.100.1 as its default gateway, and L1hyp uses 192.168.122.1 as its default gateway. From the host:

        ping 192.168.122.8   - SUCCEEDS
        ping 192.168.122.1   - SUCCEEDS
        ping 192.168.100.1   - SUCCEEDS
        ping 192.168.100.186 - FAILS ("Destination Host Unreachable" from 192.168.122.1)

    But there is a route to the 192.168.100.0/24 subnet from the host, and pinging 192.168.100.1 succeeds. From the web server VM:

        ping 192.168.100.1 - SUCCEEDS
        ping 192.168.122.1 - SUCCEEDS

    SSH from the web server VM to the host also succeeds. Can anyone help me figure out what needs to be modified to get two-way communication between the host and the web server VM? I have been pondering this problem for over a week now.
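
    Since the 192.168.100.0/24 subnet sits behind NAT on L1hyp, the host has no return path to the web server VM itself. A static route through L1hyp is the usual first thing to try (a sketch using the addresses above; libvirt's NAT rules on L1hyp may still masquerade or drop forwarded traffic, in which case its iptables FORWARD chain needs adjusting too):

        # on the host: send traffic for the nested subnet via L1hyp
        sudo ip route add 192.168.100.0/24 via 192.168.122.8

        # on L1hyp: confirm forwarding is enabled and watch the pings arrive
        sysctl net.ipv4.ip_forward
        sudo tcpdump -i any icmp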

  • Do you know how to move the Team Foundation Server cache?

    - by Martin Hinshelwood
    There are a number of reasons why you may want to change the folder in which you store the TFS cache. It can take up "some" amount of room, so moving it to another drive can be beneficial. This is the source control cache that TFS uses to cache data from the database. Moving the cache is pretty easy and should allow you to organise your server space a little more efficiently. You may also get a (small) performance improvement by putting it on another drive.

    1. Create a new directory to store the cache, e.g. "d:\TfsCache\".
    2. Give the local TFS WPG group full control of the directory. (Figure: you need to use the App Tier service WPG.)
    3. In the application tier web.config (~\Application Tier\Web Services\web.config), add the following setting to the appSettings section:

        <appSettings>
          ...
          <add key="dataDirectory" value="D:\TfsCache" />
          ...
        </appSettings>

    The app pool will automatically recycle, and Team Web Access will start using the new location. If you then download a file (not via a proxy), a folder with a GUID should be created immediately in the directory from step 1. If the folder doesn't appear, you probably don't have the permissions set up properly.

  • How to get httpd to forward to multiple Tomcats for different URLs, including /?

    - by Nick Foote
    OK, so I've got multiple Tomcat instances set up on several AJP ports. I also have Apache httpd listening on port 8090 (because I've got another app already using 8080 at the moment). I've successfully mapped URLs such as mydomain.com:8090/demo and mydomain.com:8090/preprod to their respective Tomcat instances using JkMount and the following vhost config:

        <VirtualHost *:8090>
            JkMount /preprod* preprod
            JkMount /demo* demo
        </VirtualHost>

    But I also want the "root" address to map to another Tomcat instance, which will become live/production; i.e. I want mydomain.com:8090/ to map to a third Tomcat instance. At the moment nothing happens or changes if I just add the following line to the above config:

        JkMount /* rootwar

    If I browse to mydomain.com:8090 I just get the same boring Apache httpd landing page letting me know it's running (i.e. index.html in httpd/htdocs). Is it possible to use JkMount to direct the "root" address to a Tomcat instance? I can see that a rule like /* will also match URLs like mydomain.com/preprod, but I was hoping the rules are applied in order, so that if /* appears at the end it effectively means "if it's not one of the other environments, direct to root/production". Just to be clear, I'm trying to set up the following:

        mydomain.com:8090/preprod --> myApp running in tomcat1
        mydomain.com:8090/demo    --> myApp running in tomcat2
        mydomain.com:8090         --> myApp running in tomcat3
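
    For what it's worth, mod_jk resolves overlapping mounts by longest matching prefix rather than by rule order, so a catch-all and more specific mounts can coexist; JkUnMount can also carve exceptions out of the catch-all. A sketch of that layout (worker names as in the question):

        <VirtualHost *:8090>
            # catch-all: anything not matched below goes to production
            JkMount /* rootwar
            # longer prefixes win over /*
            JkMount /preprod* preprod
            JkMount /demo* demo
        </VirtualHost>

    If the Apache welcome page still appears for /, it is worth checking with apachectl -S that this VirtualHost is actually the one answering on port 8090.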

  • Cannot log in to Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed the Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard. Everything went fine, but when I run the server I cannot log in with the username and password from the DATABASES config in local_settings.py. Here's my config:

        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.mysql',
                'NAME': 'dashboarddb',
                'USER': 'nova',
                'PASSWORD': 'nova',
                'HOST': 'localhost',
                'default-character-set': 'utf8'
            },
        }

    When I run the Dashboard server and enter the username and password, it returns this error in the browser:

        Unable to find the server at mykeystoneurl (HTTP 400)

    And on the command line:

        DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
        DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
        Validating models...
        0 errors found
        Django version 1.3.1, using settings 'openstack_dashboard.settings'
        Development server is running at http://0.0.0.0:8888/
        Quit the server with CONTROL-C.
        Request returned failure status.
        Traceback (most recent call last):
          File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request
            body = json.loads(body)
          File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
            return _default_decoder.decode(s)
          File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
            raise ValueError("No JSON object could be decoded")
        ValueError: No JSON object could be decoded
        [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735

    I also tried logging in as "admin" with the password "password" or "secrete", but that didn't work. What's wrong? Thank you!
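
    The literal "mykeystoneurl" in the error suggests the Keystone endpoint placeholder in local_settings.py was never filled in; the DATABASES block only backs Django's session storage, while login credentials are checked against Keystone. For horizon of that era the relevant settings look roughly like this (a sketch; point the host at wherever your Keystone service actually runs):

        OPENSTACK_HOST = "127.0.0.1"
        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"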

  • What is the `ServerName` attribute for apache2 and what does it do?

    - by freddydoggie
    I do not know what this config setting means. Does it mean that it registers a domain name? Is it like DNS? Here is what I have for my apache2 default config:

        ServerName staugie.org
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks Indexes MultiViews
            AllowOverride All
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride All
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>

    Also, is there any way to register a free domain through the Apache foundation?
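
    ServerName does not register anything: it only tells Apache which hostname this server (or virtual host) should answer to, so that name-based virtual hosting and self-referential redirects work correctly. Mapping the name to your IP address is DNS's job and happens entirely outside Apache, at your registrar or DNS provider. A minimal name-based pair looks like this (a sketch; example.org and the DocumentRoot paths are placeholders):

        <VirtualHost *:80>
            ServerName staugie.org
            DocumentRoot /var/www/staugie
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example.org
            DocumentRoot /var/www/example
        </VirtualHost>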

  • Can I delete the generic kernel if I use lowlatency?

    - by user206049
    I currently can't upgrade my release because there is not enough space on /boot. I have just the one kernel version there, but I seem to have both the generic and lowlatency flavours. uname -r shows 3.8.0-32-lowlatency, and ls -lah /boot shows:

        -rw-r--r-- 1 root root 899K Oct  2 00:00 abi-3.8.0-32-generic
        -rw-r--r-- 1 root root 899K Oct  7 09:27 abi-3.8.0-32-lowlatency
        -rw-r--r-- 1 root root 152K Oct  2 00:00 config-3.8.0-32-generic
        -rw-r--r-- 1 root root 152K Oct  7 09:27 config-3.8.0-32-lowlatency
        drwxr-xr-x 3 root root 2.0K Jan  1  1970 efi
        drwxr-xr-x 5 root root 1.0K Oct 22 10:05 grub
        -rw-r--r-- 1 root root  32M Oct 22 09:51 initrd.img-3.8.0-32-generic
        -rw-r--r-- 1 root root  32M Oct 22 10:05 initrd.img-3.8.0-32-lowlatency
        drwxr-xr-x 2 root root  12K Feb 25  2013 lost+found
        -rw-r--r-- 1 root root 173K Dec  5  2012 memtest86+.bin
        -rw-r--r-- 1 root root 175K Dec  5  2012 memtest86+_multiboot.bin
        -rw------- 1 root root 3.0M Oct  2 00:00 System.map-3.8.0-32-generic
        -rw------- 1 root root 3.0M Oct  7 09:27 System.map-3.8.0-32-lowlatency
        -rw------- 1 root root 5.2M Oct  2 00:00 vmlinuz-3.8.0-32-generic
        -rw------- 1 root root 5.2M Oct  7 09:27 vmlinuz-3.8.0-32-lowlatency

    So what can I do to allow me to update? Apparently I need 174 MB on /boot and am 40 MB short.
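
    Since uname -r shows the lowlatency kernel is the one actually running, the generic flavour can be removed through the package manager rather than by deleting files out of /boot, which keeps dpkg and GRUB consistent (a sketch; confirm the exact package names with the first command before purging):

        dpkg -l 'linux-image-*' 'linux-headers-*'
        sudo apt-get purge linux-image-3.8.0-32-generic linux-headers-3.8.0-32-generic
        sudo update-grub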

  • Restrictive routing best practices for Google App Engine with python?

    - by Aleksandr Makov
    Say I have a simple structure:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/profile', 'pages.profile'),
            (r'/dashboard', 'pages.dash'),
        ], debug=True)

    Basically all pages require authentication except for the login. If a visitor tries to reach a restricted page and isn't authorized (or lacks privileges), he gets redirected to the login view. The question is about the routing design: should I check the auth and ACL privileges in each of the modules (pages.profile and pages.dash in the example above), or just pass all requests through a single routing mechanism?

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/.+', 'router')
        ], debug=True)

    I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive at the server config level, but I don't know how it works or how I can tie it in with my ACL logic, and, what's worse, I cannot estimate the time needed to get it running. Besides, it appears to provide only two user groups: admin and user. In any case, this is the configuration I use:

        handlers:
        - url: /favicon.ico
          static_files: static/favicon.ico
          upload: static/favicon.ico
        - url: /static/*
          static_dir: static
        - url: .*
          script: main.app
          secure: always

    Or am I missing something here, and ACL can be set in the config file? Thanks.
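
    A common webapp2 pattern keeps the fine-grained routes but centralises the check in a base handler that every protected page inherits, so neither per-module checks nor a catch-all router is needed (a sketch; the role lookup via a cookie and the ROLE_RANK ordering are placeholders for your own session and ACL code):

        import webapp2

        # hypothetical role ordering for the ACL check
        ROLE_RANK = {'user': 1, 'admin': 2}

        class SecuredHandler(webapp2.RequestHandler):
            required_role = 'user'  # handlers override this as needed

            def current_role(self):
                # placeholder: read the role from wherever your app keeps the session
                return self.request.cookies.get('role')

            def dispatch(self):
                role = self.current_role()
                if role is None:
                    return self.redirect('/')  # anonymous -> login view
                if ROLE_RANK.get(role, 0) < ROLE_RANK[self.required_role]:
                    return self.abort(403)     # authenticated but under-privileged
                return super(SecuredHandler, self).dispatch()

        class Dash(SecuredHandler):
            required_role = 'user'

            def get(self):
                self.response.write('dashboard')

        app = webapp2.WSGIApplication([(r'/dashboard', Dash)], debug=True)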

  • How to install Tor (Web Browser) in Ubuntu 12.10?

    - by Zignd
    I would like to install Tor, but I'm having some problems. I know someone will say "this question is an exact duplicate of How to install tor?", but it's not, because the other question cannot be applied to Ubuntu 12.10, as the deb command is not available any more. I did some research, and even the resources at Tor's official website cannot be applied to Ubuntu 12.10. I tried to use the deb command (as the above question says: deb http://deb.torproject.org/torproject.org <DISTRIBUTION> main) and the terminal says deb: command not found, and when I try to install it, it says E: Unable to locate package deb. I've also tried the ppa:ubun-tor PPA, but it's not compatible with Quantal Quetzal, because it's too old. I've also tried sudo apt-get install tor, but no browser icon shows up after installation, and if I run the command tor in the terminal I get the following error message:

        Nov 26 10:59:25.731 [notice] Tor v0.2.3.22-rc (git-4a0c70a817797420) running on Linux.
        Nov 26 10:59:25.731 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
        Nov 26 10:59:25.731 [notice] Read configuration file "/etc/tor/torrc".
        Nov 26 10:59:25.737 [notice] Initialized libevent version 2.0.19-stable using method epoll (with changelist). Good.
        Nov 26 10:59:25.737 [notice] Opening Socks listener on 127.0.0.1:9050
        Nov 26 10:59:25.737 [warn] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
        Nov 26 10:59:25.737 [warn] Failed to parse/validate config: Failed to bind one of the listener ports.
        Nov 26 10:59:25.737 [err] Reading config failed--see warnings above.

    Thanks in advance.
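
    For context, that deb ... line is an APT source entry, not a shell command: it belongs in /etc/apt/sources.list (or a file under /etc/apt/sources.list.d/), with <DISTRIBUTION> replaced by the release codename, which for 12.10 is quantal. A sketch following the Tor project's published instructions of that period (the key ID is theirs, not mine; note also that the log above suggests a Tor daemon is in fact already running on port 9050):

        # the "deb" line goes in a file, not on the command line
        echo "deb http://deb.torproject.org/torproject.org quantal main" | sudo tee /etc/apt/sources.list.d/tor.list

        # import the repository signing key, then install
        gpg --keyserver keys.gnupg.net --recv 886DDD89
        gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -
        sudo apt-get update
        sudo apt-get install tor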

  • Compiling custom kernel 3.7.x lowlatency on Ubuntu 12.04

    - by FlabbergastedPickle
    All, I have a peculiar problem trying to compile a lowlatency flavour of the latest 3.7 kernel. I retrieved the pre-patched source from Launchpad using bzr, compiled it with the usual make-kpkg using the current config file (plus defaults for the remaining options), installed the kernel and booted into it. Everything works except for the fglrx and wl drivers that I had to install in the original 12.04 lowlatency kernel. So I tried recompiling these and succeeded with both (no errors were reported): the wl driver required a minor adjustment to a system.h include, while the latest fglrx 12.11 beta 11 (released yesterday, Dec. 3rd, 2012) compiled without a hitch.

    Yet when I try to modprobe either module (both having in common the fact that they were built after the kernel, fglrx as a deb and wl via the usual make/make install), I get "FATAL: no MODULENAME module found" (MODULENAME being either wl or fglrx). The graphics driver watermark shows 3D crossed out and "for testing purposes" (or "unsupported hardware", I can't remember), and neither fglrx nor wl is loaded. More mysteriously, dmesg shows no attempt on the kernel's behalf to load the said drivers, even though they are clearly in the right /lib/modules/KERNEL_VERSION folder. How is this possible? Has something fundamentally changed in the 3.7 kernel that would prevent modprobing these?

    I know that a driver-signing option was merged recently, but as far as I could tell the kernel config file generated by the build process had it disabled. On the other hand, while building the wl driver I did get a warning that the driver was not signed... Then again, even if the kernel disallowed loading those modules, shouldn't dmesg reflect that? Any thoughts on this one are most appreciated.
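
    When modprobe reports FATAL: module not found even though the .ko files sit under /lib/modules, the usual suspects are a stale module dependency index or a vermagic mismatch between module and running kernel. A couple of checks that would narrow it down (a sketch; substitute the path that find prints into the modinfo call):

        # rebuild the module index for the running kernel
        sudo depmod -a

        # locate the modules and compare their vermagic against uname -r
        find /lib/modules/$(uname -r) -name wl.ko -o -name fglrx.ko
        modinfo /path/printed/by/find | grep vermagic
        uname -r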

  • How can I make brightness control work?

    - by Semen
    I have Xubuntu 12.04 with the latest updates on a Toshiba Satellite A210-15K laptop:

        semen@bloknot:~$ uname -a
        Linux bloknot 3.2.0-24-generic #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    ...and I can't adjust the LCD brightness. I tried to use the fnfx tool:

        semen@bloknot:~$ fnfx-client
        FnFX Client v0.3 (c) 2003, 2004 Timo Hoenig <[email protected]>
        fatal error: Could not open "/home/semen/.fnfxrc". Please make sure that the default config is accessible.

    ...and xbacklight:

        semen@bloknot:~$ xbacklight
        No outputs have backlight property

    ...and I tried to add the acpi_osi=Linux and acpi_backlight=vendor parameters to the GRUB config, but nothing happens. I suppose the .fnfxrc file should exist after installation, or after the first launch of the fnfx daemon; shouldn't it?

        semen@bloknot:~$ cat /sys/class/backlight/toshiba/brightness
        -5
        semen@bloknot:~$ cat /sys/class/backlight/toshiba/max_brightness
        7
        semen@bloknot:~$ echo 6 | sudo tee /sys/class/backlight/toshiba/brightness
        [sudo] password for semen:
        6

    ...but the brightness stays the same. Help, please. P.S. Excuse my poor English. UPD: I have spent two days trying to solve this boring problem, but I can't. So if I want Linux to work correctly, I'll have to choose another distro.
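
    For reference, those kernel parameters only take effect if they are added to /etc/default/grub and the GRUB config is regenerated before rebooting (a sketch; try acpi_backlight=vendor and acpi_osi=Linux one at a time, since they can interact):

        # in /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=vendor"

        # then apply and reboot
        sudo update-grub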

  • Problem with missing JSON functions on PHP 5.2.6 / Plesk 8.4

    - by Drachenviech
    I have a vserver running openSUSE 10.3, Apache 2 and Plesk 8.4. I can update/upgrade neither: it is apparently not recommended to upgrade openSUSE 10.3 (and an update to the EOL 10.4 does not seem to make much sense), and Plesk fails to update no matter what version I try (it even fails to upgrade to 8.4.1). Still, I can live with that somehow, primarily because I don't have the time to do a fresh remote install on the vserver. What really is a problem is that although the installed PHP is 5.2.6, it has no zip library and no JSON functions. The first is probably because PHP was not compiled with --enable-zip. The second is a big mystery, though. As I understand it, JSON support always comes with PHP unless it is compiled with the --disable-json configure option. That is, however, not the case here, and the json extension module is just not there. I even tried to enable it with extension=json.so, with no luck either. The configure options of my PHP (as shipped with Plesk 8.4) are:

        '../configure' '--prefix=/usr' '--datadir=/usr/share/php5' '--mandir=/usr/share/man' '--bindir=/usr/bin' '--with-libdir=lib' '--includedir=/usr/include' '--sysconfdir=/etc/php5/apache2' '--with-config-file-path=/etc/php5/apache2' '--with-config-file-scan-dir=/etc/php5/conf.d' '--enable-libxml' '--enable-session' '--with-mm' '--with-pcre-regex=/usr' '--enable-xml' '--enable-simplexml' '--enable-spl' '--enable-filter' '--disable-debug' '--enable-inline-optimization' '--disable-rpath' '--disable-static' '--enable-shared' '--program-suffix=5' '--with-pic' '--with-gnu-ld' '--with-system-tzdata=/usr/share/zoneinfo' '--with-apxs2=/usr/sbin/apxs2' '--disable-all' '--disable-cli'

    As I understand it, PECL is not an option with 5.2.6. Or am I mistaken? Even if I were not, the openSUSE repository only goes as far as PHP 5.2.4. The openSUSE install even came without zypper, which I had to install manually. So is there a way to get the zip library and JSON running in PHP 5.2.6 without having to recompile the binary?
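
    Incidentally, the '--disable-all' near the end of that configure line is a likely culprit: it turns off all optional extensions, including json, unless they are explicitly re-enabled. Short of recompiling, the usual workaround is a userland shim that only kicks in when the native functions are absent (a sketch using PEAR's Services_JSON, the common pure-PHP implementation of that era; the include path is an assumption):

        <?php
        // define json_encode/json_decode only if the extension is missing
        if (!function_exists('json_encode')) {
            require_once 'Services/JSON.php';  // pear install Services_JSON

            function json_encode($value) {
                $json = new Services_JSON();
                return $json->encode($value);
            }

            function json_decode($value, $assoc = false) {
                $json = new Services_JSON($assoc ? SERVICES_JSON_LOOSE_TYPE : 0);
                return $json->decode($value);
            }
        }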

  • polipo E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by ICXC
        @me:/home$ sudo apt-get install polipo
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          polipo
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 198 kB of archives.
        After this operation, 799 kB of additional disk space will be used.
        Get:1 http://sy.archive.ubuntu.com/ubuntu/ precise/universe polipo amd64 1.0.4.1-1.1 [198 kB]
        Fetched 198 kB in 2s (97.5 kB/s)
        Selecting previously unselected package polipo.
        (Reading database ... 169595 files and directories currently installed.)
        Unpacking polipo (from .../polipo_1.0.4.1-1.1_amd64.deb) ...
        Processing triggers for doc-base ...
        Processing 1 added doc-base file...
        Processing triggers for man-db ...
        Processing triggers for install-info ...
        Processing triggers for ureadahead ...
        Setting up polipo (1.0.4.1-1.1) ...
        Starting polipo: Couldn't open config file /etc/polipo/config: 2.
        invoke-rc.d: initscript polipo, action "start" failed.
        dpkg: error processing polipo (--configure):
          subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
          polipo
        E: Sub-process /usr/bin/dpkg returned an error code (1)
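
    The actual failure is the init script's "Couldn't open config file /etc/polipo/config: 2" (errno 2, "No such file or directory"); everything after that is just dpkg reporting the post-install script's exit status. Creating an empty config file and re-running the configure step usually lets the package finish installing (a sketch):

        sudo mkdir -p /etc/polipo
        sudo touch /etc/polipo/config
        sudo dpkg --configure polipo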

  • StarterSTS 1.1 CTP – ActAs Support

    - by Your DisplayName here!
    Due to popular demand, I added identity delegation (aka ActAs) support to StarterSTS. To give this feature a try, first download the new bits and add an enableActAs = true setting to startersts.config. You then have to configure which user account is allowed to delegate, as well as the target realm to delegate to. This is done in usermappings.config, e.g.:

        <userMappings xmlns="http://www.thinktecture.com/configuration/usermappings">
          <users>
            <user name="middletier">
              <mappings>
                <mapping type="ActAs"
                         value="https://server/service.svc" />
              </mappings>
            </user>
          </users>
        </userMappings>

    Please use the forum for any feedback. Thanks!

  • What to do when 'dpkg --configure -a' fails with too many errors?

    - by rudivonstaden
    During an upgrade from lucid (10.04) to precise (12.04), the X session froze, and I have been trying to recover the upgrade to get a stable system. I have performed the following steps:

    1. Used ssh to log in to the stalled system over the network.
    2. Checked the contents of the /var/log/dist-upgrade directory. There was no activity in main.log, apt.log or term.log. top showed that the 'precise' process was using about 3% CPU, but I could find no evidence that the upgrade process was still doing anything. 'dpkg' did not show up in top, but it came up with pgrep dpkg | xargs ps.
    3. Killed the 'dpkg' and 'precise' processes.
    4. Tried to recover the upgrade by running sudo fuser -vki /var/lib/dpkg/lock; sudo dpkg --configure -a. This was partially successful (some packages were configured), but it failed with the message "Processing was halted because there were too many errors." I ran the same command a few times, and each time some packages were configured but others failed.
    5. Tried running sudo apt-get -f install. It fails with errors similar to dpkg's.

    The current situation is that dpkg --configure -a and sudo apt-get -f install fail with two kinds of error. Dependency issues, e.g.:

        dpkg: dependency problems prevent configuration of cifs-utils:
          cifs-utils depends on samba-common; however:
          Package samba-common is not configured yet.
        dpkg: error processing cifs-utils (--configure):
          dependency problems - leaving unconfigured

    And resource conflicts, e.g.:

        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable

    Additionally, there is a reference to potential boot problems, so I'm not keen to reboot without fixing the install first:

        dpkg: too many errors, stopping
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic
        cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
        cryptsetup: WARNING: could not determine root device from /etc/fstab

    So my question is: how do I get a working install when dpkg --configure -a fails?
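
    Since the two failure modes feed each other (the debconf lock aborts package configuration, and each unconfigured package then blocks its dependents), it can help to clear the lock and configure packages in dependency order rather than all at once (a sketch grounded in the errors above):

        # find whatever still holds the debconf database lock
        sudo fuser -v /var/cache/debconf/config.dat

        # configure the dependency first, then its dependents, then the rest
        sudo dpkg --configure samba-common
        sudo dpkg --configure cifs-utils
        sudo dpkg --configure -a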

  • For an inexperienced VPS administrator, is Nginx a suitable alternative to Apache?

    - by James
    I couldn't think of the best way to word the title, so if somebody wants to edit it to something more appropriate, I'd be grateful ;) I'm what I would consider an inexperienced user/administrator when it comes to running my VPS. I can get by with a few CLI commands, I can set up Webmin and I can set up yum repos, but beyond the very basic stuff I'm out of my depth. So far I'm running Apache. I don't know it particularly well, but I can get by with editing httpd.conf if I'm told what to edit. I've heard good things about Nginx, and that it's not as resource-hungry as Apache. I'd like to give it a go, but I can't find any information about its suitability for administrators like me, with little experience of sysadmin or web server config. Webmin now has support for Nginx, so getting it installed and running probably won't be too much of a problem. What I'm wondering is: from a site administrator perspective, is running Nginx as transparent as running Apache? I.e., at the moment I can just throw up WordPress and Drupal sites without having much to worry about or having to make any config changes to Apache. Would Nginx be as transparent?
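
    One concrete difference worth knowing up front: nginx has no .htaccess equivalent, so the rewrite rules that WordPress and Drupal ship for Apache have to be expressed in the server config instead. A minimal WordPress-style server block looks like this (a sketch; the server name, paths and PHP-FPM socket are placeholders that vary by distro):

        server {
            listen 80;
            server_name example.com;
            root /var/www/example.com;
            index index.php;

            # front-controller rewrite replaces the .htaccess rules
            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }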

  • X server startx error?

    - by Chris
        root@thewhitewox:~# startx
        X.Org X Server 1.7.6
        Release Date: 2010-03-17
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-29-server i686 Ubuntu
        Current Operating System: Linux thewhitewox 2.6.18-238.9.1.el5.028stab089.1 #1 SMP Thu Apr 14 14:06:01 MSD 2011 i686
        Kernel command line: quiet
        Build Date: 20 October 2011 03:05:54PM
        xorg-server 2:1.7.6-2ubuntu7.10 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.16.4
        Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 15 00:50:32 2011
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"

        Fatal server error:
        xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        Please also check the log file at "/var/log/Xorg.0.log" for additional information.
        ddxSigGiveUp: Closing log

    What does this mean, and how can I fix it?
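
    The 028stab string in the kernel version indicates an OpenVZ container, which typically has no console device for X to open, hence the missing /dev/tty0. Recreating the node is a quick test, though a headless X server is usually the more practical route inside a container (a sketch; "yourapp" is a placeholder for whatever needs the display):

        # recreate the console device node (character device, major 4, minor 0)
        sudo mknod /dev/tty0 c 4 0

        # or run X against a virtual framebuffer that needs no console at all
        sudo apt-get install xvfb
        Xvfb :1 -screen 0 1024x768x16 &
        DISPLAY=:1 yourapp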

  • Suggestion for setting web application parameters

    - by user40730
    I'm creating a web application in GWT, using the MVP pattern with activities and places. I have an XML config file containing some parameters to be used by the application. The content of this XML file is sent to the client using an HttpRequest, and I'm using a singleton class to hold the information from the XML file. Right now the application fetches the data when the user starts it from the home page, and that is working well. Now, since I'm using activities and places, a user can bookmark a page and start the application at any other page (place). And here comes the problem: since I'm using some of the information from the XML file to set up some UI widgets, I have to check whether the XML config file was read and the application already has the parameters (I do this by checking the singleton class). But the XML file is read via an HttpRequest, so I get errors because the application needs some parameters to initialize some UI widgets, and those parameters aren't ready in time. I was thinking of using a synchronous request to fix the problem, but that seems complicated and not recommended. So I'd like to hear some other suggestions. Thanks.
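
    One way to avoid racing the request is to treat the config as an explicit startup dependency: fetch it in onModuleLoad() and only build the places/activities UI inside the completion callback, so every entry point (bookmarked or not) waits for it (a sketch; ConfigHolder and startApp are placeholder names for your singleton and your existing bootstrap code):

        // imports: com.google.gwt.http.client.*, com.google.gwt.user.client.Window
        RequestBuilder rb = new RequestBuilder(RequestBuilder.GET, "/config.xml");
        try {
            rb.sendRequest(null, new RequestCallback() {
                public void onResponseReceived(Request req, Response resp) {
                    ConfigHolder.get().parse(resp.getText()); // hypothetical singleton
                    startApp();                               // now safe to build widgets
                }
                public void onError(Request req, Throwable e) {
                    Window.alert("Could not load configuration");
                }
            });
        } catch (RequestException e) {
            Window.alert("Could not send configuration request");
        }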

  • Nvidia GeForce GT 520 on Intel DH61WW, Ubuntu 12.04

    - by j goseeped
    Hi people, I hope you can help a little bit; I appreciate your time. I have a desktop with an i7 2600, 8 GB DDR3 RAM, an Intel DH61WW board and an Nvidia GeForce GT 520 with 2 GB DDR3. I have just installed 64-bit Ubuntu 12.04 (kernel 3.2.0-23-generic), and I want to set up two 22" Samsung LED monitors and get the video card working.

    1) I downloaded and installed NVIDIA driver 295.59 (and also tried 302.17): apt-get update and upgrade, apt-get install build-essential linux-headers-$(uname -r), apt-get remove --purge nvidia*, apt-get remove --purge xserver-xorg-video-nouveau, then added the following to /etc/modprobe.d/blacklist.conf:

        blacklist vga16fb
        blacklist nouveau
        blacklist rivafb
        blacklist nvidiafb
        blacklist rivatv

    ...then sh NVIDIA.run, sudo service lightdm start, reboot, nvidia-xconfig.
    2) After rebooting I get 800x600, and nvidia-settings says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server."
    3) I changed xorg.conf a little to set up the resolution properly.
    4) I get no image on the monitor and no options in NVIDIA X Server Settings.

        lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 520] (rev a1)

        egrep -i 'glx|nvidia' /var/log/Xorg.0.log
        [    12.005] (II) LoadModule: "glx"
        [    12.005] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
        [    12.575] (II) Module glx: vendor="NVIDIA Corporation"
        [    12.585] (II) NVIDIA GLX Module  302.17  Tue Jun 12 16:22:45 PDT 2012
        [    12.585] (II) Loading extension GLX
        [    13.037] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)
        [    13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event10)
        [    13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event9)

        glxinfo | grep direct
        Xlib: extension "GLX" missing on display ":0.0".
        Error: couldn't find RGB GLX visual or fbconfig

    Sorry, my English is not very good. Thanks, guys.
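
    Since both the Intel IGP (00:02.0) and the GT 520 (01:00.0) are visible, a useful first check is which kernel driver actually claimed each device; "(EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)" usually means X is not running on the nvidia driver at all (a sketch):

        # show the "Kernel driver in use:" line for each GPU
        lspci -k | grep -EA3 'VGA|3D'

        # the nvidia module should be loaded and nouveau absent
        lsmod | grep -E 'nvidia|nouveau'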

  • Allow non-sudo group to control Upstart job

    - by Angle O'Saxon
    I'm trying to set up an Upstart job to run on system startup that can also be started/stopped by members of a group other than sudo. With a previous version, I used update-rc.d and scripts stored in /etc/init.d/ to get this working by adding %Group ALL = NOPASSWD: /etc/init.d/scriptname to my sudoers file, but I can't seem to get an equivalent working for Upstart. I tried adding %Group ALL = NOPASSWD: /sbin/initctl start jobname to the sudoers file, but trying to run the command start jobname produces this error:

        start: Rejected send message, 1 matched rules; type="method_call", sender=":1.21" (uid=1000 pid=5148 comm="start jobname ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")

    As near as I can tell, that's a complaint that my user account isn't given the power to send 'Start' messages in the D-Bus config file for Upstart. I haven't been able to find any information on how to edit that file to give a group permission to access a specific service; does such an option exist? Is there a way to edit the sudoers file so I can run the job without editing the D-Bus config file? Or am I better off just sticking with the previous version?
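
    The rejection does come from Upstart's D-Bus policy rather than from sudo (the sudoers entry never comes into play unless the command is actually run through sudo). Granting a group the Start/Stop methods at the D-Bus layer can be done with a drop-in policy file under /etc/dbus-1/system.d/, modelled on the stock Upstart.conf (a sketch; "mygroup" is a placeholder and the file name is arbitrary):

        <!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
          "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
        <busconfig>
          <policy group="mygroup">
            <allow send_destination="com.ubuntu.Upstart"
                   send_interface="com.ubuntu.Upstart0_6.Job"
                   send_member="Start"/>
            <allow send_destination="com.ubuntu.Upstart"
                   send_interface="com.ubuntu.Upstart0_6.Job"
                   send_member="Stop"/>
          </policy>
        </busconfig>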

  • Stop Time Sync Between Host and Guest - VirtualBox

    - by KoopaTroopa
    So I've been googling, and I've tried the following commands. I want to stop the virtual machine from picking up the correct time and have it stick with whatever time it last recorded. The host operating system is Ubuntu 12.04 and the guest is Windows XP. I've turned time syncing off in XP so it won't do that when I connect to the Internet. However, it does appear to take the time from Ubuntu and set it as its own.

        VBoxManage setextradata XP11 “VBoxInternal/TM/WarpDrivePercentage” 200
        vboxmanage setextradata XP11 "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" "1"

    Neither command has worked: the time is always set to exactly the same as Ubuntu's. I've located the ExtraData entry in the VirtualBox XML file. It lists both changes the commands above were meant to make, but of course it still hasn't stopped the time updating:

        <ExtraData>
          <ExtraDataItem name="GUI/LastGuestSizeHint" value="1152,864"/>
          <ExtraDataItem name="GUI/LastNormalWindowPosition" value="74,52,1152,911"/>
          <ExtraDataItem name="VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" value="1"/>
          <ExtraDataItem name="&#x201C;VBoxInternal/TM/WarpDrivePercentage&#x201D;" value="200"/>
        </ExtraData>
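
    Worth noting: in the XML above, the WarpDrivePercentage key was stored with curly quotes (&#x201C;/&#x201D;) embedded in its name, so VirtualBox will never match that setting; the first command was evidently pasted with smart quotes. Re-running it with plain ASCII quotes, with the VM powered off, should store it under the correct key (a sketch):

        VBoxManage setextradata XP11 "VBoxInternal/TM/WarpDrivePercentage" 200

        # verify what actually got stored
        VBoxManage getextradata XP11 enumerate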

  • Run script at user login as root, with a catch

    - by tubaguy50035
    I'm trying to run a PHP script as root on user login. The PHP script adds a Samba share to the Samba config, hence the need for root privileges. The only issue here is that the user doesn't exist yet: this system is integrated with Active Directory, so when a user logs in for the first time, a home directory is created for them under /home/DOMAIN/username. I've found this question, and that seems like the correct way to get what I want, but I'm having trouble with the syntax, since I don't know the user's name. Would it be something like this?

        ALL ALL=(ALL) NOPASSWD: /home/DOMAIN/*/createSambaShare.php

    This doesn't seem to work as it is currently. Anyone have any ideas, or a "scripted" way to add a Samba share on user login? Since I've made other changes to /etc/skel, I just added the bash necessary to run the PHP script to the .profile in there. This then gets copied to the "new" user's home and tries to run the PHP script. But it fails, because these are not privileged users. Changing permissions on the PHP script will not help: it needs to be run via sudo because it opens the Samba config file for writing, and letting any user run the PHP script directly would result in a PHP error. The homes Samba directive doesn't work for my use case; I need the Samba share to exist once the user exists on the server, even when they're not logged in.
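
    An alternative to the sudoers wildcard (which is hard to make safe while the script lives under each user's home) is to hook session creation itself with pam_exec, which runs as root on every login and exposes the logging-in user as PAM_USER in the environment (a sketch; the wrapper path is an assumption, and the script should live somewhere root-owned rather than in the user's home):

        # /etc/pam.d/common-session: add one line
        session optional pam_exec.so /usr/local/bin/create-samba-share.sh

        # /usr/local/bin/create-samba-share.sh
        #!/bin/sh
        # pass the logging-in user to the existing PHP script
        exec /usr/bin/php /usr/local/bin/createSambaShare.php "$PAM_USER"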

  • maxItemsInObjectGraph limit required to be changed for server and client

    - by Michael Freidgeim
    We have a WCF service that is expected to return a huge XML payload. It worked fine in testing, but in production it failed with the error: "Maximum number of items that can be serialized or deserialized in an object graph is '65536'. Change the object graph or increase the MaxItemsInObjectGraph quota."

    The MSDN article about the dataContractSerializer XML configuration element correctly describes the default of the maxItemsInObjectGraph attribute as 65536, but the documentation for the DataContractSerializer.MaxItemsInObjectGraph property and the DataContractJsonSerializer.MaxItemsInObjectGraph property talks about Int32.MaxValue, which causes confusion, in particular because Google shows the property articles before the configuration articles.

    When we changed the value in the WCF service configuration, it didn't help, because the same change must ALSO be made on the client. There are similar posts:

    http://stackoverflow.com/questions/6298209/how-to-fix-maxitemsinobjectgraph-error/6298356#6298356 : "You need to set the MaxItemsInObjectGraph on the dataContractSerializer using a behavior on both the client and service."
    http://devlicio.us/blogs/derik_whittaker/archive/2010/05/04/setting-maxitemsinobjectgraph-for-wcf-there-has-to-be-a-better-way.aspx
    http://stackoverflow.com/questions/2325321/maxitemsinobjectgraph-ignored/4455209#4455209 : "I had forgot to place this setting in my client app.config file."
    http://stackoverflow.com/questions/9191167/maximum-number-of-items-that-can-be-serialized-or-deserialized-in-an-object-grap
    http://stackoverflow.com/questions/5867304/datacontractjsonserializer-and-maxitemsinobjectgraph?rq=1 : it seems that DataContractJsonSerializer.MaxItemsInObjectGraph has an actual default of 65536, because there is no configuration element for the JSON serializer, yet it still complains about the limit.

    I believe MS should clarify the property documentation regarding the default limit, and make the error messages more specific, to distinguish server-side from client-side failures. Note that as a workaround it's possible to use the commonBehaviors section, which can be defined only in machine.config:

        <commonBehaviors>
          <behaviors>
            <endpointBehaviors>
              <dataContractSerializer maxItemsInObjectGraph="..." />
            </endpointBehaviors>
          </behaviors>
        </commonBehaviors>
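
    For completeness, the per-application version of the fix is an endpoint behavior; the same fragment goes in the service's web.config and the client's app.config (a sketch; the behavior name is arbitrary, and "MyService"/"IMyService" are placeholders for your service and contract):

        <system.serviceModel>
          <behaviors>
            <endpointBehaviors>
              <behavior name="largeGraph">
                <dataContractSerializer maxItemsInObjectGraph="2147483647" />
              </behavior>
            </endpointBehaviors>
          </behaviors>
          <services>
            <service name="MyService">
              <endpoint address="" binding="basicHttpBinding"
                        contract="IMyService" behaviorConfiguration="largeGraph" />
            </service>
          </services>
        </system.serviceModel>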
