Search Results

Search found 12569 results on 503 pages for 'root plist'.


  • Possible to recover the MySQL root password with sudo access to the server?

    - by jonathonmorgan
    I've inherited development for a website on VPS hosting, and have login info for a user with sudo privileges, but don't have the password for the MySQL root user. After digging around a little, it looks like the only way to fix this is to stop MySQL (something like this: http://waoewaoe.wordpress.com/2010/02/03/recover-reset-mysql-root-password/). But because the website it's serving is currently in production, I'm hoping you guys can enlighten me about any potential consequences (or let me know if there's typically a file where the password would be accessible).
    a) During the time MySQL is stopped, information in the database won't be accessible, right -- even by other users?
    b) Will resetting the root password have any impact on other users after MySQL has restarted? Will their usernames/passwords still be valid?
    The current application is using an account with limited privileges to read/write to the database, and while 5 minutes of downtime in the middle of the night would probably go unnoticed, half a day while I tie up loose ends and figure out what I screwed up will land me in hot water. Thanks in advance for your help!
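
    For reference, the reset procedure described in the linked article looks roughly like this (a sketch; exact service names and the SQL syntax vary by distro and MySQL version):

        # stop the running server -- the database is unreachable for everyone while it is down
        sudo /etc/init.d/mysql stop
        # restart it with grant checking disabled
        sudo mysqld_safe --skip-grant-tables &
        # set a new root password; other accounts and their grants are untouched
        mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('newpass') WHERE User='root'; FLUSH PRIVILEGES;"
        # stop the unprotected instance and restart normally
        sudo /etc/init.d/mysql stop && sudo /etc/init.d/mysql start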


  • What is the best practice in regards to building composite dtos off of an aggregate root with domain

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object off of an aggregate root, and would love to hear people's thoughts on this. For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that has either attributes or full DTOs of its objects. What I'm struggling with is trying to figure out where that assembly should happen.
    I can see it going on the domain object of the aggregate root, as there is some business logic associated with it. The benefit of this approach, from what I've deduced thus far, is that it should keep the inevitable business logic from bleeding outside of the domain object. It also allows for private methods that take care of tasks that would be more complex for an external builder. The downsides are that the domain object becomes much more entrenched in the application's workflow and represents much more than just the domain object. It could also become very large in the scenario where you need multiple composite DTOs.
    Alternatively, I could see it belonging to some form of transfer object assembler, where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto). Outside of that, the builder would handle the construction and deconstruction of composite DTOs. The downside is the one mentioned above: I fear this will easily lead developers unfamiliar with DDD to bleed a ton of business logic into the assembler, which is what I desperately want to avoid. Any thoughts would be greatly appreciated.


  • Xcode compile error: Can't find an (old) file I used to have

    - by Carol
    This is what happens when I try to compile my iPhone app with Xcode v3.1.4. What in the world does it all mean? (And how do I fix it?)

        Processing /Users/carol/Documents/MyApp/build/Release-iphonesimulator/MyApp.app/Info.plist TabBarDemo2-Info.plist
        cd /Users/carol/Documents/MyApp
        setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
        TabBarDemo2-Info.plist -genpkginfo /Users/carol/Documents/MyApp/build/Release-iphonesimulator/MyApp.app/PkgInfo -expandbuildsettings -format binary -o /Users/carol/Documents/MyApp/build/Release-iphonesimulator/MyApp.app/Info.plist
        error: The file “TabBarDemo2-Info.plist” does not exist.
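
    The error itself suggests the project still references an Info.plist named after the old template (TabBarDemo2). A hedged way to confirm where that stale name lives (the .xcodeproj path is an assumption based on the paths above):

        # find every reference to the old plist name in the project file
        grep -rn "TabBarDemo2-Info.plist" /Users/carol/Documents/MyApp/*.xcodeproj/project.pbxproj

    then point the target's "Info.plist File" (INFOPLIST_FILE) build setting at the plist the project actually contains.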


  • Why does `rpm` show 3 httpd packages, and which one provides the real httpd?

    - by Stefan Lasiewski
    I ran yum update on my CentOS 5 webserver a few days ago. Today I just noticed that I have 3 httpd-* RPMs! How can I end up with three RPMs for httpd? (My other servers only have one httpd RPM.) I want to make sure that my server has a patched, updated version of /usr/sbin/httpd. How can I tell which one of these packages provides the httpd binary at /usr/sbin/httpd?

        [root@node1 ~]# rpm -q httpd
        httpd-2.2.3-76.el5.centos
        httpd-2.2.3-78.el5.centos
        httpd-2.2.3-83.el5.centos
        [root@node1 ~]# /usr/sbin/httpd -V | grep version
        Server version: Apache/2.2.3
        [root@node1 ~]# rpm -q httpd-2.2.3-76.el5.centos --list | grep -w /usr/sbin/httpd
        /usr/sbin/httpd
        /usr/sbin/httpd.event
        /usr/sbin/httpd.worker
        [root@node1 ~]# rpm -q httpd-2.2.3-78.el5.centos --list | grep -w /usr/sbin/httpd
        /usr/sbin/httpd
        /usr/sbin/httpd.event
        /usr/sbin/httpd.worker
        [root@node1 ~]# rpm -q httpd-2.2.3-83.el5.centos --list | grep -w /usr/sbin/httpd
        /usr/sbin/httpd
        /usr/sbin/httpd.event
        /usr/sbin/httpd.worker
        [root@node1 ~]# rpm -q --provides httpd | grep -w httpd
        config(httpd) = 2.2.3-76.el5.centos
        httpd-mmn = 20051115
        httpd = 2.2.3-76.el5.centos
        config(httpd) = 2.2.3-78.el5.centos
        httpd-mmn = 20051115
        httpd = 2.2.3-78.el5.centos
        config(httpd) = 2.2.3-83.el5.centos
        httpd-mmn = 20051115
        httpd = 2.2.3-83.el5.centos

    Update: Answering Mark Wagner's questions:

        [root@node1 ~]# rpm -q -f /usr/sbin/httpd
        httpd-2.2.3-76.el5.centos
        httpd-2.2.3-78.el5.centos
        httpd-2.2.3-83.el5.centos
        [root@node1 ~]# rpm -V httpd-2.2.3-83.el5.centos
        S.5..... c /etc/logrotate.d/httpd
        S.5..... c /etc/rc.d/init.d/httpd
        ....L... /var/www
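
    Duplicate entries like this usually mean a yum transaction was interrupted mid-upgrade: only one copy of the files is actually on disk (normally from the newest package), but the RPM database still lists the older ones. A hedged cleanup sketch, assuming the yum-utils package is installed:

        # finish any half-completed transaction, then drop the stale duplicate db entries
        yum-complete-transaction
        package-cleanup --cleandupes
        rpm -q httpd   # should now report a single package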


  • SSHing through an HTTP proxy

    - by Siler
    Typical scenario: I'm trying to SSH through a corporate HTTP proxy to a remote machine using corkscrew, and I get:

        ssh_exchange_identification: Connection closed by remote host

    Obviously, there are a lot of reasons this might be happening - the proxy might not allow this, the remote box might not be running sshd, etc. So, I tried to tunnel manually via telnet:

        $ telnet proxy.evilcorporation.com 82
        Trying XX.XX.XX.XX...
        Connected to proxy.evilcorporation.com.
        Escape character is '^]'.
        CONNECT myremotehost.com:22 HTTP/1.1

        HTTP/1.1 200 Connection established

    So, unless I'm mistaken... it looks like the connection is working. Why, then, doesn't it work via corkscrew?

        ssh -vvv [email protected] -p 22 -o "ProxyCommand corkscrew proxy.evilcorporation.com 82 myremotehost.com 22"
        OpenSSH_6.6, OpenSSL 1.0.1f 6 Jan 2014
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Executing proxy command: exec corkscrew proxy.evilcorporation.com 82 myremotehost.com 22
        debug1: permanently_set_uid: 0/0
        debug1: permanently_drop_suid: 0
        debug1: identity file /root/.ssh/id_rsa type -1
        debug1: identity file /root/.ssh/id_rsa-cert type -1
        debug1: identity file /root/.ssh/id_dsa type -1
        debug1: identity file /root/.ssh/id_dsa-cert type -1
        debug1: identity file /root/.ssh/id_ecdsa type -1
        debug1: identity file /root/.ssh/id_ecdsa-cert type -1
        debug1: identity file /root/.ssh/id_ed25519 type -1
        debug1: identity file /root/.ssh/id_ed25519-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.6p1 Ubuntu-2ubuntu1
        ssh_exchange_identification: Connection closed by remote host
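
    One way to take ssh out of the picture entirely is to run corkscrew by hand - it relays plain stdin/stdout, so if the tunnel works the server's version banner should appear (a sketch):

        # run interactively; if the CONNECT goes through, the server's banner
        # ("SSH-2.0-OpenSSH_...") is printed, then Ctrl-C out
        corkscrew proxy.evilcorporation.com 82 myremotehost.com 22

    If no banner appears here either, the proxy is accepting the CONNECT but dropping the non-HTTP traffic that follows - which would be consistent with the telnet test above succeeding only up to the 200 response.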


  • /dev/input/uinput Device appears to be 'broken'

    - by Adam Luchjenbroers
    I'm trying to set up Pystromo so that I can remap the keys on my Belkin N52TE gamepad. Pystromo basically captures the keystrokes and then outputs the remapped keystrokes to the uinput device. However, at the moment it simply swallows the input and outputs absolutely nothing. I've tracked the issue to something being wrong with my uinput device, with the smoking gun being:

        # ls -l /dev/input/uinput
        crw-rw---- 1 root plugdev 10, 223 Dec 31 2009 /dev/input/uinput
        # cat /dev/input/uinput
        cat: /dev/input/uinput: No such device

    The uinput module is loaded, and can be clearly seen via lsmod. Has anyone seen this before, or can anyone think of something worth attempting?

    Current setup: Gentoo Linux, kernel 2.6.32 (Gentoo Sources 2.6.32-r1), HP DV7 laptop.

    Output - dmesg: dmesg | grep uinput prints nothing, and no new lines appear if I run modprobe -r uinput && modprobe uinput. Yet the uinput module can clearly be seen when running lsmod:

        # lsmod | grep uinput
        uinput 6200 0

    lsusb:

        # lsusb
        Bus 005 Device 003: ID 050d:0200 Belkin Components
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 002: ID 1532:0101 Razer USA, Ltd
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 002: ID 5986:0143 Acer, Inc
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 002: ID 03f0:171d Hewlett-Packard Wireless (Bluetooth + WLAN) Interface [Integrated Module]
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    lsusb -v: PasteBin

    Update: Hmm, updating evdev and hal seems to have partially fixed it. /dev/input/uinput still can't be accessed, but Pystromo is now remapping keys successfully. I'm a little mystified about what's going on here; it seems that my understanding of how all this works is flawed. Since I've posted a bounty, I'll leave this here for someone to post an explanation of how user-space input devices work under the hood.
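
    A hedged observation: "No such device" on a node that exists usually means the node's major/minor numbers no longer match what the kernel registered. On many systems udev creates the uinput node at /dev/uinput rather than /dev/input/uinput, so the old node may simply be stale (a sketch):

        # see where the loaded module actually registered its node
        ls -l /dev/uinput /dev/input/uinput
        # major:minor the kernel assigned to uinput (should match the node, e.g. 10:223)
        cat /sys/class/misc/uinput/dev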


  • What exactly is a daemon? (How can I run a root command from an Apache-bound script that runs as the www-data user?)

    - by user224235
    I am trying to run this command from a WSGI script:

        service httpd restart

    The problem is this command can only be run by root, and Apache uses the www-data user. It has been said the solution is to use a daemon process. I suppose the idea is to write the command to a file that will be executed by a script running as root. It's difficult to understand why they would call this a daemon process and try to scare me; perhaps it should have been called a proxy process.
    When I got the idea that this was a proxy process, I thought about adding a line to /var/spool/cron/root so that cron would execute the command for me as root. But of course this means I have to get the system time, add 1 second to it, and write it into that line - and my script demands output instantly.
    So I suppose I need to create a daemon process that works like cron. In other words, it is a bash script that executes whatever command it finds in a plain file. But will this daemon process be running a while loop 24/7, every second? Would that not waste resources? It only needs to activate itself and check for a command to execute when there is actually a command to be executed. In PHP and other programming languages, running a while loop when there is nothing to execute could waste server resources, so why should a daemon process constantly be listening for anything? I only want it to listen and execute when it is needed; I do not need a process that is constantly listening.
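
    For what it's worth, a common alternative to a homemade watcher daemon is a narrow sudoers rule, so the web user may run exactly one privileged command and nothing else (a sketch; the service path and web user name depend on your distro):

        # added via visudo, e.g. in /etc/sudoers.d/web-restart
        www-data ALL=(root) NOPASSWD: /sbin/service httpd restart

    The WSGI script then calls sudo /sbin/service httpd restart and gets its output immediately, with no polling loop. A classic daemon, by contrast, avoids the busy-wait problem by blocking on a named pipe or socket: it sleeps inside a read until a command arrives, consuming no CPU in the meantime.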


  • rpmbuild gives seg fault

    - by Deepti Jain
    I am trying to build an RPM using the rpmbuild tool. I have source code whose build produces around 30 GB of binaries. This software, for which I am making the RPM, has dozens of executables. When I copy only the binaries of a single executable (e.g. init), my RPM builds successfully. But when I dump the entire build into the RPM, rpmbuild does everything but gives a seg fault at the end. Here is my spec file:

        # This is a sample spec file for wget
        %define _topdir /root/mywget
        %define name source
        %define release 1
        %define version 1.12
        %define _builddir /root/mywget/BUILD/glenlivet
        %define _buildrootdir /root/mywget/BUILDROOT
        %define _buildroot /root/mywget/BUILDROOT
        %define _sourcedir /root/mywget/SOURCES

        BuildRoot: %{_buildroot}
        Summary: GNU source
        License: GPL
        Name: %{name}
        Version: %{version}
        Release: %{release}
        Source: %{name}-%{version}.tar.gz
        Prefix: /usr
        Group: Development/Tools

        %description
        The GNU sample program downloads files from the Internet using the command-line.

        %prep
        %setup -q -n glenlivet

        %build
        cd %{_builddir}
        make all

        %install
        rm -rf %{_buildrootdir}
        mkdir -p %{_buildrootdir}/bin
        cp -p -r %{_builddir}/build/obj-x64/* %{_buildrootdir}/bin/

        %files
        %defattr(-,root,root)
        /bin/*

    If I only copy some of the binaries (let's say one utility and its dependent binaries), it works fine. But when I try to copy the entire build, I get a seg fault. I get the seg fault after rpmbuild has executed the %prep, %build, and %install sections. rpmbuild also processes my source file:

        Processing files: source-1.12-1
        Finding Provides:
        Finding Requires:
        Finding Supplements:
        Provides:......
        Requires:......
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Segmentation fault

    Any clue what is going wrong, or where rpmbuild fails? Thanks in advance.
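
    One thing worth ruling out (an assumption, not a confirmed diagnosis): classic RPM payloads are cpio archives, and older rpm versions have hard 4 GB limits on individual files and on the package payload as a whole, so a 30 GB payload can push the tooling well past what it was tested with. A quick check:

        # any single file over 4 GB in the build root?
        find /root/mywget/BUILDROOT -type f -size +4G -exec ls -lh {} \;
        # total payload size
        du -sh /root/mywget/BUILDROOT

    If the totals really are that large, splitting the software into several sub-packages is the usual workaround.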


  • Mounted NFS directory not writable by Apache / PHP

    - by phpfour
    Need some help here with NFS. Here's what I have (all servers running CentOS 5.6 with SELinux):

        172.17.20.1 - Primary server with static IP. Varnish redirects requests to the web servers.
        172.17.20.2 - Web server 1
        172.17.20.3 - Web server 2

    The application residing on the web servers is running Drupal, and I need both of them to share the same files directory. I have created a folder on 172.17.20.1 called /var/nfs, as the root user. Here is my /etc/exports content:

        /var/nfs 172.17.20.2(rw,sync,no_root_squash) 172.17.20.3(rw,sync,no_root_squash)

    On both web servers (172.17.20.2/3), I have it mounted like below:

        [root@web2 ~]# mount
        ...
        172.17.20.1:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,sync,hard,intr,addr=172.17.20.1)

    On all the servers, I've added the user apache to the root group to get the desired write access:

        [root@main ~]# cat /etc/group
        root:x:0:root,apache
        ....
        apache:x:48:

        [root@web1 ~]# cat /etc/group
        root:x:0:root,apache
        ....
        apache:x:48:

    Despite all this, when I try to write files into the /mnt/nfs/var/nfs folder from Drupal/PHP, it cannot write to it. I even tried with a simple PHP upload script, but it doesn't work, so the problem is not with Drupal. Any help you guys can give is much appreciated. I've spent hours and hours on it, without any success :( Thanks in advance.
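
    A hedged observation: adding apache to the root group only helps if /var/nfs actually grants write permission to its group, and a directory created by root typically comes up as root:root with mode 755 (group gets read/execute only). On the NFS server, something like this should make the export group-writable (a sketch; the listings above show apache is uid/gid 48 on these boxes, and numeric uids must match across NFSv3 clients):

        # on 172.17.20.1
        chgrp apache /var/nfs
        chmod 775 /var/nfs
        ls -ld /var/nfs    # expect drwxrwxr-x root apache

    SELinux is the other usual suspect on CentOS; a temporary setenforce 0 followed by a retry of the upload tells you quickly whether it is involved.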


  • Slow wifi on Ubuntu 12.04 with the ath9k wifi driver

    - by lunar
    For the last couple of days my wifi connection has been extremely slow. I am pretty sure that it is caused by the driver. Can this be improved?

        iwconfig:
        lo        no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:"MyWiFi"
                  Mode:Managed  Frequency:2.437 GHz  Access Point: 00:18:68:FE:7B:C7
                  Bit Rate=58.5 Mb/s   Tx-Power=15 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=48/70  Signal level=-62 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:6960   Missed beacon:0
        eth0      no wireless extensions.

        sudo lshw -class network:
        *-network
             description: Wireless interface
             product: AR9285 Wireless Network Adapter (PCI-Express)
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: wlan0
             version: 01
             serial: 74:f0:6d:34:c2:4e
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=ath9k driverversion=3.2.0-31-generic-pae firmware=N/A ip=192.168.1.2 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
             resources: irq:17 memory:d7400000-d740ffff
        *-network
             description: Ethernet interface
             product: AR8131 Gigabit Ethernet
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:06:00.0
             logical name: eth0
             version: c0
             serial: 48:4b:38:78:f6:ae
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI firmware=N/A latency=0 link=no multicast=yes port=twisted pair
             resources: irq:51 memory:d3800000-d383ffff ioport:8000(size=128)

        lsusb:
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 04f2:b1bb Chicony Electronics Co., Ltd
        Bus 001 Device 004: ID 0b05:1788 ASUSTek Computer, Inc.

        lspci:
        00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 18)
        00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 18)
        00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18)
        00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06)
        00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
        00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06)
        00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 06)
        00:1c.3 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 4 (rev 06)
        00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 06)
        00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 06)
        00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a6)
        00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 06)
        00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 06)
        00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 06)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 425M] (rev a1)
        03:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
        04:00.0 USB controller: Fresco Logic Device 1400 (rev 01)
        06:00.0 Ethernet controller: Atheros Communications Inc. AR8131 Gigabit Ethernet (rev c0)
        ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 05)
        ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 05)
        ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 05)
        ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 05)
        ff:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 05)
        ff:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 05)

        rfkill list all:
        0: hci0: Bluetooth
                Soft blocked: yes
                Hard blocked: no
        1: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
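
    Since the iwconfig output above already shows Power Management:off, the other widely reported ath9k workaround on 12.04-era kernels is disabling the driver's hardware crypto (a sketch, not a guaranteed fix):

        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
        sudo modprobe -r ath9k && sudo modprobe ath9k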


  • How does the Trash Can work, and where can I find official documentation, reference, or specification for it?

    - by MestreLion
    When trying to manage the trash can from mounted NTFS volumes, I ended up reading FreeDesktop.org's reference on it. Poking around and doing some tests, I realized Ubuntu/Gnome does not follow the specs 100%. Here's why:
    For non-/ partitions, it always uses <driveroot>/.Trash-<uid>. It never used <driveroot>/.Trash/<uid>, even when I created it in advance. While this works, it's annoying: if I have 15 users, I end up with 15 /.Trash-xxx folders in my drive, while the other approach would still give a single folder (with 15 sub-folders). That "pollution" in my drives is very unpleasant. And the spec says "If an $topdir/.Trash directory is absent, an $topdir/.Trash-$uid directory is to be used". Well, it IS present, so why does it never use it?
    Root trash does not work, at least not out of the box. Open nautilus as root and click on trash; it gives an error. Try to delete any file, and it says "it can't move to trash". OK, I know this can be fixed by creating /root/.local/share. But the spec says "A “home trash” directory SHOULD be automatically created for any new user. If this directory is needed for a trashing operation but does not exist, the implementation SHOULD automatically create it, without any warnings or delays.". Why the error, then? Bug?
    Why must I change /etc/fstab entries for mounted volumes, adding options like uid and gid, if the volumes are already mounted as RW for everyone?
    These are just some examples of deviation from the standard. So, the question is: "If Ubuntu does not adhere 100% to the spec, HOW exactly does the trash work? WHERE can I find a technical reference for Ubuntu's implementation of the trash?" By the way: if Ubuntu does happen to follow the specs, please tell me what I am doing wrong, especially regarding the /.Trash-<uid> vs /.Trash/<uid> issue. Thanks!
    EDIT: Some more info:
    If a given fs has no support for the sticky bit (VFAT, NTFS), it probably doesn't have support for permissions either (at least VFAT surely doesn't). So what prevents one user from purging or restoring other users' .Trash-xxx? If one can read/write his own trash, one can do the same for the whole drive, including other users' trashes, correct? Or does Gnome have some kind of "extra" protection on .Trash-xxx folders on VFAT/NTFS filesystems? If Linux can "emulate" file permissions on NTFS mounts by editing the /etc/fstab uid and gid options, can it also "emulate" the sticky bit? I would really prefer to use the /.Trash/xxx format...
    For the root issue: for the / partition, I can use the trash as root, and it goes to /root/.local/share/Trash. But if I click on Nautilus' "Trash" (as root), I get an error. Don't you? So files are correctly trashed, but I can't access them. All I can do is manually "purge" them (by deleting files in /root/.local/share/Trash), but restoring would be very tricky (opening info files and manually moving files, etc.).
    For non-/ partitions (or at least for VFAT/NTFS), I cannot even use the trash as root: it does not create a .Trash-0 folder; it simply says "Cannot trash, want to permanently delete?" Why?
    About fstab: I use it for permanent mounts of my NTFS partitions. I have several, and if not "pre-mounted" they really clutter the desktop and/or Nautilus. I'd rather have them pre-mounted, integrated in my fs, at mounts like /data, /windows/xp, /windows/vista, and so on, and leave /media and its "mount/unmount" flexibility just for truly removable drives.
    So, if Ubuntu/Gnome truly follows the spec, is there any way to fix the root issues and to "emulate" the sticky bit for (at least) my fstab'ed NTFS fixed partitions?
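
    A possibly relevant detail from the FreeDesktop trash spec itself (worth checking, not a confirmed answer): implementations may use $topdir/.Trash only if it exists, has the sticky bit set, and is not a symlink; otherwise they fall back silently to $topdir/.Trash-$uid. Since NTFS/VFAT cannot store a sticky bit, a pre-created .Trash there would fail that check. On a filesystem that supports it, the shared folder is prepared like this (a sketch):

        sudo mkdir /media/disk/.Trash
        sudo chmod 1777 /media/disk/.Trash   # world-writable, sticky bit set
        ls -ld /media/disk/.Trash            # expect drwxrwxrwt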



  • Backup SQL Database Federation

    - by Herve Roggero
    One of the amazing features of Windows Azure SQL Database is the ability to create federations in order to scale your cloud databases. Until now, however, there were very few options available to back up federated databases. In this post I will show you how Enzo Cloud Backup can help you back up and restore your federated databases easily. You can restore federated databases into SQL Database, or even onto SQL Server (as regular databases). Generally speaking, you will need to perform the following steps to back up and restore the federations of a SQL Database:
    1. Back up the federation root
    2. Back up the federation members
    3. Restore the federation root
    4. Restore the federation members
    These actions can be automated using the built-in scheduler of Enzo Cloud Backup, the command-line utilities, or the .NET Cloud Backup API provided, giving you complete control over how you want to perform your backup and restore operations.
    Backing up federations
    Let's look at the tool used to back up federations. You can explore your existing federations by using the Enzo Cloud Backup application, as shown below. As you can see, the federation root and the various federations available are shown in separate tabs for convenience. You would first need to back up the federation root (unless you intend to restore a federation member on a local SQL Server database and you don't need what's in the federation root). The steps are similar to those for backing up a federation member, so let's proceed to backing up a federation member. You can click on a specific federation member to view the database details by clicking on the tab that contains your federation member. You can see the size currently consumed and a summary of its content at the bottom of the screen. If you right-click on a specific range, you can choose to back up the federation member. This brings up a window with the details of the federation member already filled out for you, including the value of the member that is used to select the federation member. Notice that the list of federations includes "Federation Root", which is what you need to select to back up the federation root (you can also do that directly from the root database tab). Once you provide at least one backup destination, you can begin the backup operation. From this window, you can also schedule this operation as a job and perform the operation entirely in the cloud. You can also "filter" the connection, so that only the specific member value is backed up (this will back up all the global tables, and only the records for which the distribution value is the one specified). You can repeat this operation for every federation member in your federation.
    Restoring federations
    Once backed up, you can restore your federations easily. Select the backup device using the tool, then select Restore. The following window will appear. From here you can create a new root database. You can also view the backup properties, showing you exactly which federations will be created. Under the Federations tab, you can select how the federations will be created. I chose to recreate the federations and let the tool perform all the SPLIT operations necessary to recreate the same number of federation members. Other options include creating the first federation member only, or not creating the federation members at all. Once the root database has been restored and the federation members have been created, you can restore the federation members you previously backed up. The screen below shows you how to restore a backup of a federation member into a specific federation member (the details of the federation member are provided to make it easier to identify).
    Conclusion
    This post gave you an overview of how to back up and restore federation roots and federation members. The backup operations can be set up once, then scheduled daily.


  • Good ways to restart all the computers in a remote cluster?

    - by vgm64
    I have a cluster that I manage, and from time to time I get emails from each node (and the head node) begging to be restarted after an automatic upgrade. Currently, my best solution so far is a shell script like:

        $> cat cluster_reboot.sh
        ssh [email protected] reboot
        ssh [email protected] reboot
        ssh [email protected] reboot
        ssh [email protected] reboot
        ssh [email protected] reboot
        ssh [email protected] reboot

    I end up just typing the root password six times, but it works, I guess. Is there a better way? Can I force the head node to reboot the computers for me?
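
    A hedged sketch of the usual improvement - install an SSH key once, then loop (the hostnames here are placeholders):

        # one-time: push the head node's public key to each node
        for h in node01 node02 node03 node04 node05 node06; do
            ssh-copy-id root@"$h"
        done
        # from then on, reboots need no password
        for h in node01 node02 node03 node04 node05 node06; do
            ssh root@"$h" reboot
        done

    Tools like pdsh or clusterssh do the same fan-out with less typing on larger clusters.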


  • Dynamips Virtual Router not pinging external host after bridging

    - by maiky
    I have set up a virtual router image with dynamips, and set up bridging between the tap0 interface of the virtual router, eth1, and br0 with these commands:

        [root@cisco_host]# brctl addbr br0
        [root@cisco_host]# ifconfig br0 up
        [root@cisco_host]# ifconfig eth1 0.0.0.0
        [root@cisco_host]# brctl addif br0 eth1
        [root@cisco_host]# ifconfig br0 192.168.0.100 netmask 255.255.255.0 up

    In the dynagen configuration I have:

        f0/0 = NIO_tap:tap0

    So:

        [root@cisco_host]# brctl addif br0 tap0
        [root@cisco_host]# ifconfig tap0 up

        router(config)#int fa0/0
        router(config-if)#ip address 192.168.0.101 255.255.255.0
        router(config-if)#no shutdown

    After the configuration above, I was expecting that from inside the router I could ping an external machine with IP 192.168.0.1. From the host I can actually ping the external host 192.168.0.1, as well as 192.168.0.100 and 192.168.0.101. So what am I missing here? tap0 is bridged with br0, and in turn with eth1. So why is the router not pingable from 192.168.0.1, and vice-versa?
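
    Two quick checks that narrow this kind of problem down (a sketch):

        # confirm br0 really contains both eth1 and tap0
        brctl show br0
        # watch whether the router's pings make it onto the tap device at all
        tcpdump -n -i tap0 icmp

    If the ICMP requests show up on tap0 but not on eth1 (tcpdump -n -i eth1 icmp), the bridge is dropping them; if they never reach tap0, the problem is on the dynamips side.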


  • Few Google Checkout Questions

    - by Richard Knop
    I am planning to integrate a Google Checkout payment system on a social networking website. The idea is that members can buy "tokens" for real money (which are sort of the website currency), and then they can buy access to some extra content on the website, etc. What I want to do is create a Google Checkout button that takes a member to the checkout page, where he pays with his credit or debit card. I want Google Checkout to notify my server whether the purchase of tokens was successful (i.e. whether the credit/debit card was charged), so I can update the local database. The website is coded in PHP/MySQL. I have downloaded the sample PHP code from here: code.google.com/p/google-checkout-php-sample-code/wiki/Documentation
    I know how to create a Google Checkout button, and I have also placed the responsehandlerdemo.php file on my server. This is the file Google Checkout is supposed to send its response to (of course I set the path to the file in my Google merchant account). Now, in the response handler file there is a switch block with several case statements. Which one means that the payment was successful, so I can add tokens to the member's account in the local database?

        switch ($root) {
          case "request-received": { break; }
          case "error": { break; }
          case "diagnosis": { break; }
          case "checkout-redirect": { break; }
          case "merchant-calculation-callback": {
            // Create the results and send it
            $merchant_calc = new GoogleMerchantCalculations($currency);
            // Loop through the list of address ids from the callback
            $addresses = get_arr_result($data[$root]['calculate']['addresses']['anonymous-address']);
            foreach ($addresses as $curr_address) {
              $curr_id = $curr_address['id'];
              $country = $curr_address['country-code']['VALUE'];
              $city = $curr_address['city']['VALUE'];
              $region = $curr_address['region']['VALUE'];
              $postal_code = $curr_address['postal-code']['VALUE'];
              // Loop through each shipping method if merchant-calculated shipping
              // support is to be provided
              if (isset($data[$root]['calculate']['shipping'])) {
                $shipping = get_arr_result($data[$root]['calculate']['shipping']['method']);
                foreach ($shipping as $curr_ship) {
                  $name = $curr_ship['name'];
                  // Compute the price for this shipping method and address id
                  $price = 12;          // Modify this to get the actual price
                  $shippable = "true";  // Modify this as required
                  $merchant_result = new GoogleResult($curr_id);
                  $merchant_result->SetShippingDetails($name, $price, $shippable);
                  if ($data[$root]['calculate']['tax']['VALUE'] == "true") {
                    // Compute tax for this address id and shipping type
                    $amount = 15;       // Modify this to the actual tax value
                    $merchant_result->SetTaxDetails($amount);
                  }
                  if (isset($data[$root]['calculate']['merchant-code-strings']['merchant-code-string'])) {
                    $codes = get_arr_result($data[$root]['calculate']['merchant-code-strings']['merchant-code-string']);
                    foreach ($codes as $curr_code) {
                      // Update this data as required to set whether the coupon is valid, the code and the amount
                      $coupons = new GoogleCoupons("true", $curr_code['code'], 5, "test2");
                      $merchant_result->AddCoupons($coupons);
                    }
                  }
                  $merchant_calc->AddResult($merchant_result);
                }
              } else {
                $merchant_result = new GoogleResult($curr_id);
                if ($data[$root]['calculate']['tax']['VALUE'] == "true") {
                  // Compute tax for this address id and shipping type
                  $amount = 15;         // Modify this to the actual tax value
                  $merchant_result->SetTaxDetails($amount);
                }
                $codes = get_arr_result($data[$root]['calculate']['merchant-code-strings']['merchant-code-string']);
                foreach ($codes as $curr_code) {
                  // Update this data as required to set whether the coupon is valid, the code and the amount
                  $coupons = new GoogleCoupons("true", $curr_code['code'], 5, "test2");
                  $merchant_result->AddCoupons($coupons);
                }
                $merchant_calc->AddResult($merchant_result);
              }
            }
            $Gresponse->ProcessMerchantCalculations($merchant_calc);
            break;
          }
          case "new-order-notification": { $Gresponse->SendAck(); break; }
          case "order-state-change-notification": {
            $Gresponse->SendAck();
            $new_financial_state = $data[$root]['new-financial-order-state']['VALUE'];
            $new_fulfillment_order = $data[$root]['new-fulfillment-order-state']['VALUE'];
            switch ($new_financial_state) {
              case 'REVIEWING': { break; }
              case 'CHARGEABLE': {
                //$Grequest->SendProcessOrder($data[$root]['google-order-number']['VALUE']);
                //$Grequest->SendChargeOrder($data[$root]['google-order-number']['VALUE'],'');
                break;
              }
              case 'CHARGING': { break; }
              case 'CHARGED': { break; }
              case 'PAYMENT_DECLINED': { break; }
              case 'CANCELLED': { break; }
              case 'CANCELLED_BY_GOOGLE': {
                //$Grequest->SendBuyerMessage($data[$root]['google-order-number']['VALUE'],
                //  "Sorry, your order is cancelled by Google", true);
                break;
              }
              default: break;
            }
            switch ($new_fulfillment_order) {
              case 'NEW': { break; }
              case 'PROCESSING': { break; }
              case 'DELIVERED': { break; }
              case 'WILL_NOT_DELIVER': { break; }
              default: break;
            }
            break;
          }
          case "charge-amount-notification": {
            //$Grequest->SendDeliverOrder($data[$root]['google-order-number']['VALUE'],
            //  <carrier>, <tracking-number>, <send-email>);
            //$Grequest->SendArchiveOrder($data[$root]['google-order-number']['VALUE']);
            $Gresponse->SendAck();
            break;
          }
          case "chargeback-amount-notification": { $Gresponse->SendAck(); break; }
          case "refund-amount-notification": { $Gresponse->SendAck(); break; }
          case "risk-information-notification": { $Gresponse->SendAck(); break; }
          default:
            $Gresponse->SendBadRequestStatus("Invalid or not supported Message");
            break;
        }

    I guess that case 'CHARGED' is the one, am I right? Second question: do I need an SSL certificate to receive responses from Google Checkout? According to this, I do: groups.google.com/group/google-checkout-api-php/browse_thread/thread/10ce55177281c2b0 - but I don't see it mentioned anywhere in the official documentation. Thank you.


  • Mysql install and remove issues

    - by Matt
    I installed MySQL on Ubuntu Server and I don't know what went wrong... it didn't set up a MySQL root user, so I tried to uninstall and start over, and now I can't uninstall. I tried this:

        apt-get remove php5-mysql
        apt-get remove mysql-server mysql-client
        apt-get autoremove

    But when I do ps aux | grep mysql:

        root      6066  0.0  0.0   1772   540 pts/1  S   03:21  0:00 /bin/sh /usr/bin/mysqld_safe
        mysql     7065  0.0  0.6  58936 11900 pts/1  Sl  03:33  0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
        root      7066  0.0  0.0   2956   688 pts/1  S   03:33  0:00 logger -t mysqld -p daemon.error
        root     22804  0.0  0.0   3056   780 pts/1  R+  04:14  0:00 grep mysql

    So I killed the processes and then tried to reinstall like this:

        apt-get -f install
        sudo apt-get install mysql-server mysql-client
        sudo mysqladmin -u root -h localhost password 'root'

    But I get this:

        mysqladmin: connect to server at 'localhost' failed
        error: 'Access denied for user 'root'@'localhost' (using password: NO)'

    I'm confused... I keep installing and uninstalling MySQL with the same result. Any ideas?
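
    A hedged sketch of the usual clean-slate approach on Debian/Ubuntu - purge (not just remove) so the config and data go too, which makes the next install re-run the root-password setup:

        sudo apt-get purge mysql-server mysql-client mysql-common php5-mysql
        sudo apt-get autoremove
        # WARNING: destroys all databases and config -- only if you have no data worth keeping
        sudo rm -rf /var/lib/mysql /etc/mysql
        sudo apt-get install mysql-server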


  • How can I recreate root dnsNode objects and their RootDNSServers folder in AD after they are deleted?

    - by TonyD
    A few days ago I was trying to permanently remove root hints from my DNS server. After much ado, I decided to go a different route and am now trying to put everything back as it was. During the original process, I opened ADUC, clicked View > Advanced Features, and then browsed to System > MicrosoftDNS and deleted the folder RootDNSServers. Now, in ADUC, I cannot create a folder here to replace the one I deleted. I can run adsiedit and load DomainDNSZones for my domain. Under there, I see MicrosoftDNS > RootDNSServers, with all of the objects still inside of it. Is there a way for me to undo what I did? Can I recreate these objects in ADUC? Can I do something else to cause them to show back up there? Thanks!


  • Symlink - Permission Denied

    - by John Smith
    I'm facing an interesting problem with plenty of Permission Denied outputs when using symlinks.
    Linux: Slackware 13.1
    Directory with symlink:

        root@Tower:/var/lib# ls -lah
        drwxr-xr-x  8 root root  0 2012-12-02 20:09 ./
        drwxr-xr-x 15 root root  0 2012-12-01 21:06 ../
        lrwxrwxrwx  1 ntop ntop 21 2012-12-02 20:09 ntop -> /mnt/user/media/ntop6/

    Symlinked directory:

        root@Tower:/mnt/user/media# ls -lah
        drwxrwx--- 1 nobody users 1.4K 2012-12-02 19:28 ./
        drwxrwx--- 1 nobody users  128 2012-11-18 16:06 ../
        drwxrwxrwx 1 ntop   ntop   320 2012-12-02 20:22 ntop6/

    What I have done: I have used chown -h ntop:ntop on the ntop directory in /var/lib. Just to be sure, I have chmod 777 on both directories.
    Permission denied actions:

        root@Tower:/var/lib# sudo -u ntop mkdir /var/lib/ntop/test
        mkdir: cannot create directory `/var/lib/ntop/test': Permission denied

    Any ideas?
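
    A hedged reading of the listings above: to traverse the symlink, user ntop needs execute (search) permission on every directory in the resolved path, and /mnt/user/media is drwxrwx--- nobody:users - so unless ntop is a member of the users group, the kernel denies traversal before the wide-open ntop6/ is ever reached. Two quick checks (a sketch):

        # show owner/permissions for every component of the resolved path
        namei -l /var/lib/ntop/test
        # if ntop isn't in the users group, add it
        usermod -aG users ntop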


  • Custom XML Serialization, how to write custom root element?

    - by Beta033
    I'm probably just doing this wrong, I know. I'm using custom serialization, and when the XML is generated it's putting the class name as the root element. Example:

        <MyClassName>
          <MyIntendedRootNode>
            <ObjectType>
              <Property1/>
              <Property2/>
        ...

    I'm invoking the serialization by calling xmlserializer.Serialize(writer, Me), so I'm sure that has something to do with it. I've tried putting XmlRoot onto the class, but I think, as VB is compiling this partial class with its aspx page, it's either overwriting this attribute or ignoring it entirely. Ideally I'd just like to tell it to throw away everything it has and use a different root element. Does anybody else do this besides me? Thanks


  • Issue with clipping rectangles and back to front rendering

    - by Milo
    Here is my problem. My rendering algorithm renders from back to front. But logically, clipping rectangles need to be applied from front to back. Hence why the following does not work:

        void AguiWidgetManager::recursiveRender(const AguiWidget *root)
        {
            // recursively calls itself to render widgets from back to front
            AguiWidget* nonConstRoot = (AguiWidget*)root;
            if (!nonConstRoot->isVisable())
            {
                return;
            }
            // push the clipping rectangle
            if (nonConstRoot->isClippingChildren())
            {
                graphicsContext->pushClippingRect(nonConstRoot->getClippingRectangle());
            }
            if (nonConstRoot->isEnabled())
            {
                nonConstRoot->paint(AguiPaintEventArgs(true, graphicsContext));
                for (std::vector<AguiWidget*>::const_iterator it = root->getPrivateChildBeginIterator();
                     it != root->getPrivateChildEndIterator(); ++it)
                {
                    recursiveRender(*it);
                }
                for (std::vector<AguiWidget*>::const_iterator it = root->getChildBeginIterator();
                     it != root->getChildEndIterator(); ++it)
                {
                    recursiveRender(*it);
                }
            }
            else
            {
                nonConstRoot->paint(AguiPaintEventArgs(false, graphicsContext));
                for (std::vector<AguiWidget*>::const_iterator it = root->getPrivateChildBeginIterator();
                     it != root->getPrivateChildEndIterator(); ++it)
                {
                    recursiveRenderDisabled(*it);
                }
                for (std::vector<AguiWidget*>::const_iterator it = root->getChildBeginIterator();
                     it != root->getChildEndIterator(); ++it)
                {
                    recursiveRenderDisabled(*it);
                }
            }
            // release clipping rectangle
            if (nonConstRoot->isClippingChildren())
            {
                graphicsContext->popClippingRect();
            }
        }

    I could of course go to the top of the tree, then apply clipping rectangles inward until I get to the currently rendered widget, but that would involve lots of clipping rectangles at 60 frames per second. I want to minimize calls to pushing and popping rectangles. What could I do? Thanks


  • Unable to re-attach screen session on FreeBSD

    - by Michael
    I have a screen session that I am unable to re-attach to. I have tried kill -CHLD 6859, with zero success. Is there anything else I can try to get this session re-attached?

        q4# screen -ls
        No Sockets found in /tmp/screens/S-root.
        q4# ls -la /tmp/screens/S-root/
        total 8
        drwx------  2 root wheel 512 May 26 12:52 .
        drwxr-xr-x  4 root wheel 512 Feb 26  2013 ..
        prwx------  1 root wheel   0 May 26 10:14 6859.pts-0.q4
        q4# ps uax 6859
        USER  PID %CPU %MEM   VSZ   RSS TT STAT STARTED    TIME COMMAND
        root 6859  0.0  1.2 84732 50444 ?? Ss   2Jan13  34:06.71 screen -h 9999
        q4# screen -r
        There is no screen to be resumed.
        q4# whoami
        root
        q4#
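
    A couple of hedged things to try (no guarantee - behavior depends on how the screen port was built):

        # force screen to detach the session from wherever it thinks it is attached
        screen -D -r 6859.pts-0.q4
        # point screen explicitly at the socket directory it listed above
        env SCREENDIR=/tmp/screens/S-root screen -r

    If the screen binary has been upgraded since the session started in January 2013, the old session process may simply be incompatible with the new binary, in which case the session contents may not be recoverable.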


  • What would be the time complexity of counting the number of all structurally different binary trees?

    - by ktslwy
    Using the method presented here: http://cslibrary.stanford.edu/110/BinaryTrees.html#java

        12. countTrees() Solution (Java)
        /**
         For the key values 1...numKeys, how many structurally unique
         binary search trees are possible that store those keys?
         Strategy: consider that each value could be the root.
         Recursively find the size of the left and right subtrees.
        */
        public static int countTrees(int numKeys) {
            if (numKeys <= 1) {
                return (1);
            }
            else {
                // there will be one value at the root, with whatever remains
                // on the left and right each forming their own subtrees.
                // Iterate through all the values that could be the root...
                int sum = 0;
                int left, right, root;
                for (root = 1; root <= numKeys; root++) {
                    left = countTrees(root - 1);
                    right = countTrees(numKeys - root);
                    // number of possible trees with this root == left*right
                    sum += left * right;
                }
                return (sum);
            }
        }

    I have a sense that it might be n(n-1)(n-2)...1, i.e. n!
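
    For reference (a sketch of the math, not part of the original post): the value the code returns is the Catalan number,

        C_0 = 1, \qquad C_n = \sum_{r=1}^{n} C_{r-1}\, C_{n-r} = \frac{1}{n+1}\binom{2n}{n} \sim \frac{4^n}{n^{3/2}\sqrt{\pi}},

    while the running time of the memoization-free recursion satisfies T(n) = \sum_{r=1}^{n} \big[ T(r-1) + T(n-r) \big] + O(n), which telescopes to T(n) = 3\,T(n-1) + O(1) and so is \Theta(3^n): exponential, but well short of n!.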

