Search Results

Search found 3690 results on 148 pages for 'apt mirror'.


  • Software Installation Failure!

    - by NIKOS ANTONIOU
    I get the same error whenever I try to install software on my laptop. For example, I want to install Pavucontrol, so I open the terminal and type sudo apt-get install pavucontrol, and my terminal output is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          libgconfmm-2.6-1c2 libglademm-2.4-1c2a libpulse-mainloop-glib0
          padevchooser paman paprefs pavumeter pulseaudio-module-zeroconf
        The following NEW packages will be installed:
          libgconfmm-2.6-1c2 libglademm-2.4-1c2a libpulse-mainloop-glib0
          padevchooser paman paprefs pavucontrol pavumeter pulseaudio-module-zeroconf
        0 upgraded, 9 newly installed, 0 to remove and 172 not upgraded.
        1 not fully installed or removed.
        Need to get 0B/345kB of archives.
        After this operation, 2044kB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LANG = "el_GR.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        dpkg: `ldconfig' not found on PATH.
        dpkg: 1 expected program(s) not found on PATH.
        NB: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    What is the problem and how do I fix it?
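
    The two dpkg complaints point at a broken root environment rather than at pavucontrol itself: the locale tool is missing and root's PATH lacks the sbin directories, so dpkg cannot find ldconfig. A minimal recovery sketch, assuming those are the only problems (the locale name comes from the output above; on newer releases the locale/ldconfig tools live in libc-bin rather than libc6):

        sudo -i                          # get a root shell with a sane PATH
        export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        apt-get install --reinstall libc6 locales   # restore the missing locale/ldconfig tools
        locale-gen el_GR.UTF-8           # regenerate the locale the perl warning names
        dpkg --configure -a              # finish the half-configured package
        apt-get install pavucontrol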

    Read the article

  • Trying to build/install patched gtk3-engines-oxygen to test bugfix, get shared changelog.Debian.gz is different from other instances of package

    - by andlabs
    I want to just quickly test the patch in this bug report to gtk3-engines-oxygen so it can go upstream. I could test it either temporarily or permanently; I would just like to do it. I currently have the package installed. So far, I've tried:

        $ mkdir /tmp/o            # keep everything self-contained
        $ cd /tmp/o
        $ apt-get source gtk3-engines-oxygen
        $ cd oxygen-gtk3-1.3.5/
        $ patch -p1 < /path/to/patchfile
        $ dpkg-source --commit    # to make debuild happy (name 'layout'; just save the default; this is a test)
        $ debuild -us -uc         # bypass signature checks
        $ sudo debi ../oxygen-gtk3_1.3.5-0ubuntu1_amd64.changes

    According to some people on #ubuntu-packaging, this is what I have to do. It's this last step that's the problem; I'm getting:

        (Reading database ... 503333 files and directories currently installed.)
        Preparing to unpack gtk3-engines-oxygen_1.3.5-0ubuntu1_amd64.deb ...
        Unpacking gtk3-engines-oxygen:amd64 (1.3.5-0ubuntu1) over (1.3.5-0ubuntu1) ...
        dpkg: error processing archive gtk3-engines-oxygen_1.3.5-0ubuntu1_amd64.deb (--install):
         trying to overwrite shared '/usr/share/doc/gtk3-engines-oxygen/changelog.Debian.gz',
         which is different from other instances of package gtk3-engines-oxygen:amd64
        Errors were encountered while processing:
         gtk3-engines-oxygen_1.3.5-0ubuntu1_amd64.deb
        debi: debpkg -i failed

    What's going on? How do I fix it? Or am I doing this completely wrong (and ergo so are they)? I'm using Kubuntu 14.04 amd64. Thanks.
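
    For what it's worth, this dpkg error usually means another architecture's instance of the same package (the i386 copy on a multiarch system) still carries the original changelog, which no longer matches the rebuilt one. Two hedged ways out, assuming that diagnosis (not confirmed from the post):

        # either drop the foreign-arch copy before installing the rebuilt .deb:
        sudo apt-get remove gtk3-engines-oxygen:i386
        sudo debi ../oxygen-gtk3_1.3.5-0ubuntu1_amd64.changes

        # or give the test build its own version so dpkg treats it as an upgrade:
        dch --local +test "Test build with the patch from the bug report"
        debuild -us -uc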

    Read the article

  • Autojump in 12.04 doesn't work

    - by hnasarat
    https://launchpad.net/ubuntu/+source/autojump I installed with apt-get, checked the man page, and added . /usr/share/autojump/autojump.sh to my .bashrc, like it says. When I cd around the filesystem, nothing gets added to ~/.local/share/autojump. I then tried adding . /usr/share/autojump/autojump.bash instead, but that didn't work either. autojump -a ~/Dropbox properly creates the file ~/.local/share/autojump/autojump.txt, but running j Drop<TAB> doesn't autocomplete to j ~/Dropbox/ as it should. However, j<TAB> does autocomplete to j ~/Dropbox. I know my bash completion is working, since it works for git, dd, and others. I know there's a newer version in the repositories set for Quantal; perhaps that would work? I don't know how to install that version, though. I've used autojump with Mac Homebrew (and it installed without any issue), so I know what functionality is missing. In general I'm really annoyed that I can't get this working; I've spent hours on it! Needless to say, help would be very appreciated.
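
    A few sanity checks, assuming the problem is the hook never loading in interactive shells (an untested sketch; the paths are the ones the man page names):

        type j                               # should report a shell function if the hook loaded
        grep -n autojump ~/.bashrc           # confirm which hook line is present
        . /usr/share/autojump/autojump.sh    # load the hook by hand in this shell
        cd /tmp && cd ~                      # move around, then check the database:
        ls -l ~/.local/share/autojump/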

    Read the article

  • How to get automatic upgrades to work on Ubuntu Server?

    - by J. Pablo Fernández
    I followed the documentation for enabling automatic upgrades on Ubuntu servers, but it's not really updating anything at all. My /etc/apt/apt.conf.d/50unattended-upgrades looks almost like the default:

        // Automatically upgrade packages from these (origin, archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "Ubuntu karmic-security";
            "Ubuntu karmic-updates";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        //  "vim";
        //  "libc6";
        //  "libc6-dev";
        //  "libc6-i686";
        };

        // Send email to this address for problems or packages upgrades
        // If empty or unset then no email is sent, make sure that you
        // have a working mail setup on your system. The package 'mailx'
        // must be installed or anything that provides /usr/bin/mail.
        Unattended-Upgrade::Mail "[email protected]";

        // Automatically reboot *WITHOUT CONFIRMATION* if a
        // the file /var/run/reboot-required is found after the upgrade
        //Unattended-Upgrade::Automatic-Reboot "false";

    The directory /var/log/unattended-upgrades/ is empty. Running /etc/init.d/unattended-upgrades start is not very nice:

        root@mozart:~# /etc/init.d/unattended-upgrades start
        Checking for running unattended-upgrades:
        root@mozart:~#

    Something seems to be broken, but I'm not sure why. I have pending updates and they are not being applied:

        root@mozart:~# aptitude safe-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        The following packages will be upgraded:
          linux-libc-dev
        1 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B/743kB of archives. After unpacking 4096B will be used.
        Do you want to continue? [Y/n/?]

    On all the servers I have, unattended upgrades seem to have been disabled:

        root@mozart:~# apt-config shell UnattendedUpgradeInterval APT::Periodic::Unattended-Upgrade
        root@mozart:~#

    Any ideas what I am missing?
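
    The empty apt-config output at the end is the telling part: 50unattended-upgrades only configures what may be upgraded, while a separate periodic setting tells APT to actually run the upgrader. A sketch of the usually-missing piece, assuming the conventional file name (20auto-upgrades; 10periodic works too):

        printf '%s\n' \
          'APT::Periodic::Update-Package-Lists "1";' \
          'APT::Periodic::Unattended-Upgrade "1";' \
          | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
        sudo unattended-upgrade --dry-run --debug   # confirm a run would now do something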

    Read the article

  • 301 redirects mirrored domain

    - by Dave
    I'm redesigning a site for a friend on my localhost. His old site is .asp-based and we're replacing it with a WordPress site on LAMP hosting. The old site sits on domain A and also has another domain, domain B, parked on top of it, mirroring it. Google has picked up domain B for most of his search engine results, while Yahoo, Bing, etc. have picked up domain A. The plan is to 301 redirect the old pages of his site on domain A to the new WordPress versions and park domain B on top of it like before. My question is, will this work? If not, what would be a better way to approach it? We'd prefer not to lose any of the search engine listings in the redesign, and the search engines don't appear to have penalized him for duplicate content. Thanks very much in advance!
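
    Parking domain B on the rebuilt site should carry the redirects over, since Apache serves the same rewrite rules for both hostnames. A minimal mod_rewrite sketch for the new host's .htaccess, assuming Apache with mod_rewrite and illustrative page names:

        RewriteEngine On
        # map each old .asp URL onto its WordPress replacement, permanently:
        RewriteRule ^about\.asp$ /about/ [R=301,L]
        RewriteRule ^products\.asp$ /products/ [R=301,L]
        # domain B is parked on the same virtual host, so it serves identical 301s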

    Read the article

  • Upgrading only certain packages via the getdeb repo

    - by intuited
    I'm a bit confused about how getdeb.net works now. The last time I got a package from there was a while ago; at that point the procedure was that you would just download a .deb for each package that you wanted to install/upgrade and then install it using dpkg -i. However the inexorable march of progress has lent its trumpets to this system as well, and getdeb installs are now done via their repo, which is registered with apt in /etc/apt/sources.list.d, after you install a single package that makes the changes to the apt database. I've installed that package, and I've discovered that aptitude dist-upgrade now wants to upgrade a lot of packages on my system that weren't ready for upgrades prior to the installation of the getdeb package. If I rename the file /etc/apt/sources.list.d/getdeb.list to something with a different extension, then do aptitude update && aptitude dist-upgrade, it stops wanting to upgrade packages. So I gather that the default behaviour is now to upgrade all packages to the version available at getdeb. This is not particularly appropriate, since these packages are not as well tested as the officially released versions. Is there a config setting somewhere that will prevent upgrading packages to versions from the getdeb repo unless this action is specifically selected? I'd like to be able to pick and choose what packages are upgraded via getdeb.
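
    APT pinning does exactly this: with a priority below 100, getdeb versions are never chosen automatically but can still be requested per package. A sketch, assuming the repo's origin host is archive.getdeb.net (verify with apt-cache policy; the package name and version below are placeholders):

        sudo tee /etc/apt/preferences.d/getdeb <<'EOF'
        Package: *
        Pin: origin archive.getdeb.net
        Pin-Priority: 50
        EOF
        apt-cache policy somepackage                       # shows which version each repo offers
        sudo apt-get install somepackage=1.2.3-1~getdeb1   # pull one package from getdeb by exact version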

    Read the article

  • Can't install mplayer or vlc on ubuntu

    - by mirko4
    I am trying to install MPlayer or VLC player on Ubuntu Feisty, but I can't do it. I try with apt-get:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run `apt-get -f install' to correct these:
        The following packages have unmet dependencies:
          mplayer: Depends: libasound2 (> 1.0.16) but 1.0.13-1ubuntu5 is to be installed
                   Depends: libavcodec51 (>= 0.svn20080206-8) but it is not going to be installed or
                            libavcodec-unstripped-51 (>= 0.svn20080206-8) but it is not installable
                   Depends: libavformat52 (>= 0.svn20080206-8) but it is not going to be installed or
                            libavformat-unstripped-52 (>= 0.svn20080206-8) but it is not installable
                   Depends: libavutil49 (>= 0.svn20080206-8) but it is not going to be installed or
                            libavutil-unstripped-49 (>= 0.svn20080206-8) but it is not installable
                   Depends: libcaca0 (>= 0.99.beta14-1) but 0.99.beta11.debian-2build1 is to be installed
                   Depends: libcdparanoia0 (>= 3.10.2+debian) but 3.10+debian~pre0-4build1 is to be installed
                   Depends: libcucul0 (>= 0.99.beta14-1) but 0.99.beta11.debian-2build1 is to be installed
                   Depends: libfaad0 (>= 2.6.1) but it is not going to be installed
                   Depends: libfribidi0 (>= 0.10.9) but 0.10.7-4build1 is to be installed
                   Depends: libgif4 (>= 4.1.6) but it is not going to be installed
                   Depends: libjack0 (>= 0.109.2) but it is not going to be installed
                   Depends: liblzo2-2 but it is not going to be installed
                   Depends: libopenal1 but it is not going to be installed
                   Depends: libpostproc51 (>= 0.svn20080206-8) but it is not going to be installed or
                            libpostproc-unstripped-51 (>= 0.svn20080206-8) but it is not installable
                   Depends: libspeex1 (>= 1.2~beta3-1) but 1.1.12-3 is to be installed
                   Depends: libsvga1
                   Depends: libswscale0 (>= 0.svn20080206-8) but it is not going to be installed or
                            libswscale-unstripped-0 (>= 0.svn20080206-8) but it is not installable
                   Depends: mplayer-skin
          python-apt: Depends: libapt-inst-libc6.7-6-1.1
                      Depends: libapt-pkg-libc6.7-6-4.6
          scim-gtk2-immodule: Depends: libscim8c2a (>= 1.4.6) but 1.4.4-7ubuntu1 is to be installed
          scim-modules-socket: Depends: libscim8c2a (>= 1.4.6) but 1.4.4-7ubuntu1 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    I tried apt-get -f install, but it doesn't work either. What should I do? Please help me!
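
    Every "but X is to be installed" line above pairs a Feisty-era library with a package built against something newer, which usually means the sources list mixes releases. A diagnostic sketch using stock APT tools (nothing here changes the system):

        sudo apt-get update
        apt-cache policy mplayer libasound2     # which repository provides each candidate version
        grep -rh '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
        # every deb line should name the same release (feisty); a line from a
        # newer release or a third-party repo would explain these exact conflicts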

    Read the article

  • Another website is mirroring and ranks above my site in search results

    - by Marlboro Goodluck
    There is a site of ill repute known as thedirty which has completely mirrored my site and now has links appearing on Google at the #1 spot using my content. I checked my log files and noticed that this site has been crawling mine for some time; it also has 10,000 links from its site to mine. I have blocked user access referred from this site and reported it as web spam to Google already. I also disavowed the domain. How are they getting top links in Google (even overtaking mine) with such nefarious tactics? What are the steps to completely eliminating an issue like this?

    Read the article

  • NFS server generating "invalid extent" on EXT4 system disk?

    - by Stephen Winnall
    I have a server running Xen 4.1 with Oneiric in the dom0 and each of the 4 domUs. The system disks of the domUs are LVM2 volumes built on top of an mdadm RAID1. All the domU system disks are EXT4 and are created using snapshots of the same original template. 3 of them run perfectly, but one (called s-ub-02) keeps on being remounted read-only. A subsequent e2fsck results in a single "invalid extent" diagnosis:

        e2fsck 1.41.14 (22-Dec-2010)
        /dev/domu/s-ub-02-root contains a file system with errors, check forced.
        Pass 1: Checking inodes, blocks, and sizes
        Inode 525418 has an invalid extent
            (logical block 8959, invalid physical block 0, len 0)
        Clear<y>? yes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        /dev/domu/s-ub-02-root: 77757/655360 files (0.3% non-contiguous), 360592/2621440 blocks

    The console typically shows the following errors for the system disk (xvda2):

        [101980.903416] EXT4-fs error (device xvda2): ext4_ext_find_extent:732: inode #525418: comm apt-get: bad header/extent: invalid extent entries - magic f30a, entries 12, max 340(340), depth 0(0)
        [101980.903473] EXT4-fs (xvda2): Remounting filesystem read-only

    I have created new versions of the system disk. The same thing always happens. This, and the fact that the disk is ultimately on a RAID1, leads me to preclude a hardware disk error. The only obvious distinguishing feature of this domU is the presence of nfs-kernel-server, so I suspect that. Its exports file looks like this:

        /exports/users          192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
        /exports/media/music    192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
        /exports/media/pictures 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
        /exports/opt            192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)

    /exports/users and /exports/opt are LVM2 volumes from the same volume group as the system disk. /exports/media is an EXT2 volume. (There is an issue where clients see /exports/media/pictures as a read-only volume, which I mention for completeness.) With the exception of the read-only problem, the NFS server appears to work correctly under light load for several hours before the "invalid extent" problem occurs. There are no helpful entries in /var/log. All of a sudden, no more files are written, so you can see when the disk was remounted read-only, but there is no indication of what the cause might be. Can anyone help me with this problem? Steve
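
    One way to separate a software cause from a storage one is to check the volume from the dom0 with the domU shut down first. A diagnostic sketch only, using the LVM names from the post (the domain name is a guess based on the volume name):

        sudo xm shutdown s-ub-02                      # stop the domU before touching its disk
        sudo e2fsck -f /dev/domu/s-ub-02-root         # forced full check from the dom0
        sudo tune2fs -l /dev/domu/s-ub-02-root | egrep -i 'state|features'
        # if the extent errors only recur while nfsd is under load, repeat the
        # same workload with nfs-kernel-server stopped to confirm the suspect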

    Read the article

  • dpkg crashing while trying to install a package

    - by Jonathan
    While attempting to install a package via apt-get, the following happens. The first error I get is:

        E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

    And if I run that command, the box spins out of control and I get the following in /var/log/syslog:

        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398546] ------------[ cut here ]------------
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398552] WARNING: at /build/buildd/linux-3.0.0/arch/x86/xen/multicalls.c:182 xen_mc_flush+0x1b3/0x1c0()
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398561] Modules linked in: acpiphp
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398568] Pid: 31063, comm: java Tainted: G D W 3.0.0-14-virtual #23-Ubuntu
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398576] Call Trace:
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398580]  [<c0648265>] ? printk+0x2d/0x2f
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398586]  [<c0150462>] warn_slowpath_common+0x72/0xa0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398593]  [<c0104883>] ? xen_mc_flush+0x1b3/0x1c0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398599]  [<c0104883>] ? xen_mc_flush+0x1b3/0x1c0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398605]  [<c01504b2>] warn_slowpath_null+0x22/0x30
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398611]  [<c0104883>] xen_mc_flush+0x1b3/0x1c0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398617]  [<c0104e7a>] ? xen_extend_mmu_update+0x4a/0x70
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398624]  [<c0106565>] xen_set_pud_hyper+0x75/0x80
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398630]  [<c01065b9>] xen_set_pud+0x49/0x60
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398636]  [<c0132105>] pud_populate+0x45/0x60
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398642]  [<c0208a24>] __pmd_alloc+0x74/0x90
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398648]  [<c0208cb7>] handle_mm_fault+0x277/0x2c0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398655]  [<c065f45b>] do_page_fault+0x15b/0x4a0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.401923]  [<c020ba24>] ? remove_vma+0x44/0x60
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.401923]  [<c020d9b6>] ? sys_mmap_pgoff+0x106/0x1c0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.401923]  [<c065f300>] ? vmalloc_fault+0x190/0x190
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.401923]  [<c065c79f>] error_code+0x67/0x6c
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.401923] ---[ end trace 0b105e2a179ad013 ]---

    Read the article

  • Tomcat6 Manager Webapp is 404 on apt-get install on Ubuntu 10.10

    - by Noel
    http://localhost:8080/manager/html gives a 404 error on an apt-get install of tomcat6 (6.0.28 on JVM 1.6.0_20-b20 on 2.6.35-27-generic amd64). http://localhost:8080/host-manager/html works and lists one Host name, localhost.

        $ cat /usr/share/tomcat6/conf/tomcat-users.xml
        <tomcat-users>
          <role rolename="admin"/>
          <role rolename="manager"/>
          <user username="tomcatuser" password="Password1" roles="admin,manager"/>
        </tomcat-users>

        $ cat /usr/share/tomcat6/conf/Catalina/localhost/manager.xml
        <Context path="/manager"
                 docBase="/usr/share/tomcat6-admin/manager"
                 antiResourceLocking="false" privileged="true" />
        <role name="manager"/>
        <user name="manager" password="Password1" roles="manager"/>
        <user name="tomcatuser" password="Password1" roles="manager"/>

    Those two files are the only documentation I've seen on how to set up the Manager webapp, and they seem to be compliant with the requirements.
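
    Since host-manager comes from the same tomcat6-admin package and loads fine, one likely culprit is manager.xml itself: a Context file should contain only the single <Context> element, and the trailing <role>/<user> elements (which belong in tomcat-users.xml) leave it malformed, so the context may never deploy. A hedged check, assuming the stock Ubuntu packaging and the paths from the post:

        # trim manager.xml down to just the <Context .../> element, keep the
        # role/user definitions in tomcat-users.xml only, then:
        sudo service tomcat6 restart
        tail -n 50 /var/log/tomcat6/catalina.out   # any deployment error shows up here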

    Read the article

  • upgrade glibc on RHEL4 without breaking anything

    - by SpliFF
    I have a static build of wkhtmltopdf which requires glibc 2.4:

        wkhtmltopdf: /lib/tls/libc.so.6: version `GLIBC_2.4' not found (required by wkhtmltopdf)

    I have apt installed with the DAG repos. Other than that, the server is pretty stock standard except for ColdFusion MX7. My question is: is it safe to just "apt update glibc"? Will the updated glibc clobber the old one or will they co-exist? Should I "apt upgrade" the whole server? I'm pretty sure everything else (Apache 2, Postgres 8, etc.) will handle the upgrade, but ColdFusion concerns me due to its proprietary nature.
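
    Replacing glibc in place on RHEL4 is about the riskiest single upgrade there is, since everything on the system links against it, and a third-party repo will not carry a libc built for that exact distribution anyway. A sketch of safer first steps, assuming the DAG apt port accepts the same simulate flag as Debian's apt (the /opt path is a placeholder):

        rpm -q glibc                  # what is installed now
        apt-get -s install glibc      # -s simulates: inspect what would be pulled in, install nothing
        # a usually-safer alternative: run the binary against a private newer glibc
        # instead of replacing the system one:
        /opt/newglibc/lib/ld-linux.so.2 --library-path /opt/newglibc/lib ./wkhtmltopdf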

    Read the article

  • Nothing happens when trying to upgrade from Linux Mint 12 to 13

    - by Ares
    I am trying to upgrade from Linux Mint 12 to 13 using apt-get. After running the following commands, nothing seems to happen. I am fairly new to Linux; what am I doing wrong?

        user@olympus /etc/default $ sudo apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Calculating upgrade... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

        user@olympus /etc/default $ sudo apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
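
    Nothing happens because apt-get only upgrades within the release the sources point at, and the box is already current for Mint 12. Mint does not support release upgrades through dist-upgrade the way Ubuntu does; the usual unofficial route (back up first) is repointing the sources, sketched here assuming the standard codenames (Mint 12 = lisa on oneiric, Mint 13 = maya on precise):

        sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
        sudo sed -i 's/lisa/maya/g; s/oneiric/precise/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get dist-upgrade     # expect a large, riskier upgrade this time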

    Read the article

  • Best way to replicate / mirror 100s of databases in SQL 2005

    - by mrwayne
    Hi, I currently host around 400-500 SQL Server 2005 databases of varying sizes (1-10 GB each). I am aware of most of the different methods available and the general pros and cons of mirroring, log shipping, replication and clustering, but I am not aware of how well they tend to perform at the scale I have specified (400-500 unique databases). Does anyone have any good advice on what is likely the best method for having the ability to fail over to another server with this sort of setup? Failover does not need to be immediate; I'm just looking for something better than taking backups every day and moving them to storage. I'm preferably looking for something that also makes it easy to manage the databases in bulk (as opposed to one at a time). Thanks for your input!

    Read the article

  • Dual Monitor difficulties (VirtualBox ubuntu host) - rdesktop sessions mirror

    - by rukus5
    I am running an Ubuntu 9.10 host with a Windows guest and need to extend the guest's Windows desktop onto the second monitor (otherwise I will have to convert to a dual-boot situation, because this is a work-furnished computer; please HELP!!). Current situation: the Windows guest is running with VRDP enabled and connections succeed. Guest Additions are running, VirtualBox is set to 2 monitors, and I see two monitors in the display settings. However, connecting via two different rdesktop sessions mirrors the display, even though the guest Windows display settings are set to extend the desktop. Is there an rdesktop option to signify to VirtualBox that a session is for the second display? I need the second connection to be the second display. Any ideas?

    Read the article

  • WGet or cURL: Mirror Site from http://site.com And No Internal Access

    - by alharaka
    I have tried wget -m, wget -r, and a whole bunch of variations. I am getting some of the images on http://site.com, one of the scripts, and none of the CSS, even with the fscking -p parameter. The only HTML page is index.html and there are several more referenced, so I am at a loss. curlmirror.pl on the cURL developers' website does not seem to get the job done either. Is there something I am missing? I have tried different levels of recursion with only this URL, but I get the feeling I am missing something. Long story short, some school allows its students to submit web projects, but they want to know how they can collect everything for the instructor who will grade it, instead of him going to all the externally hosted sites. UPDATE: I think I figured out the issue. I thought the links to the other pages were in the index.html page that downloaded. I was way off. Turns out the footer of the page, which has all the navigation links, is handled by a JavaScript file, Include.js, which reads JLSSiteMap.js and some other JS files to do page navigation and the like. As a result, wget does not pick up the other dependencies, because a lot of this crap is not handled in the page markup. How can I handle such a website? This is one of several problem cases. I assume little can be done if wget cannot parse JavaScript.
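
    For the static part of a site, the flag set below is the usual baseline for a faithful local mirror; no wget option will follow links that only exist after JavaScript runs, so JS-driven navigation like this needs either a crawler that executes scripts or an explicit page list (flags are standard GNU wget):

        wget --mirror --page-requisites --convert-links --adjust-extension \
             --no-parent http://site.com/
        # for the JS-generated links, feed wget the page list by hand instead:
        wget --input-file=pages.txt --page-requisites --convert-links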

    Read the article

  • Mirror a RAID0 volume

    - by Ghostrider
    I have two SSDs running in RAID0. The capacity and speed are just great. I use Windows Home Server to do incremental daily backups, which is fine and well, and I've successfully restored from these backups. However, when one of the disks physically died, I was stuck without a working system until the replacement arrived so that I could restore the array from backup. WHS restoration takes about 5 hours, which basically means I lose an entire day in the process. Is it possible to set up a kind of recovery volume for the RAID array? Use a single mechanical HDD that would be updated with an exact clone of the RAID array on a daily basis. That way, if the array goes offline for some reason, I can just boot from the mechanical HDD, lose some performance, but still be able to work. The machine in question runs Windows 7. Creating RAID 0+1 is not an option because of the high price of the SSDs and the fact that it still doesn't protect against failure of the RAID controller. Is there any way this can be set up?

    Read the article

  • setting up a proxy to mirror an SSH SOCKS connection

    - by aresnick
    I have two remote machines, remote1 and remote2. remote2 is only running sshd, and I can't run anything else on it. remote1 is a full-fledged server to which I have complete access. I can run a SOCKS proxy on remote2 via:

        ssh -f -N -D *:8080 me@remote2

    which lets me expose a SOCKS proxy on port 8080 on remote1. I'd like to authenticate this so that the proxy isn't sitting open. How can I do this? It seems like I should be able to use delegate, but I can't even seem to get its HTTP proxy functionality working. When I run:

        delegated -r -P8081 SERVER=http PERMIT="*:*:*" REMITTABLE="*"

    I can't even get it to work on port 8081. Anyway, I was hoping someone could point me in the right direction to let me authenticate access to the SOCKS proxy connection. That is, I want to be able to point my browser's proxy at remote1 and browse the internet through the SSH SOCKS proxy/tunnel to remote2. squid doesn't support a SOCKS parent =( Thanks!
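
    If delegate keeps misbehaving, OpenSSH alone can supply the authentication: bind the SOCKS end to loopback on remote1 and let each user reach it through their own authenticated tunnel (standard OpenSSH flags, no extra software; usernames are placeholders):

        # on remote1 -- keep the SOCKS proxy bound to loopback only:
        ssh -f -N -D 127.0.0.1:8080 me@remote2
        # on each client -- authenticate to remote1 and forward the port home:
        ssh -f -N -L 8080:127.0.0.1:8080 you@remote1
        # then point the browser's SOCKS proxy at localhost:8080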

    Read the article

  • Mirror a Dropbox repository in Sharepoint and restrict access

    - by Dan Robson
    I'm looking for an elegant way to solve the following problem: My development team uses Dropbox for sharing documents amongst our immediate group. We'd like to put some of those documents into a SharePoint repository for the larger group to be able to access, as granting Dropbox access to the group at large is not ideal. However, we'd like to continue to be able to propagate changes to the SharePoint site simply by updating the files in Dropbox on our local client machines, and also vice versa: users granted access on SharePoint that update files in that workspace should be able to save their files and have the changes appear automatically on our client PCs. I've already organized the folders so that in Dropbox, there exists a SharePoint folder that looks something like this:

        SharePoint
        ----Team
        --------Restricted Access Folders
        ----Organization
        --------Open Access Folders

    The Dropbox master account and the SharePoint master account are both set up on my file server. Unfortunately, Dropbox doesn't seem to allow syncing of folders anywhere above the \Dropbox\ part of the file system's hierarchy, or all I would have to do is find where the SharePoint repository is maintained locally, and I'd be golden. So it seems I have to do some sort of two-way synchronization between the Dropbox folder on the file server and the SharePoint folder on the file server. I messed around with Microsoft SyncToy, but it seems to be lacking in the area of real-time updating, and as much as I love rsync, I've had nothing but bad luck with it on Windows; again, it has to be kicked off manually or through Task Scheduler, and I just have a feeling that if I go down that route, it's only a matter of time before I get conflicts all over the place in either Dropbox, SharePoint, or both. I really want something that's going to watch both folders, and when one item changes, the other automatically updates in "real time". It's quite possible I'm going down the entirely wrong route, which is why I'm asking the question. For simplicity's sake, I'll restate the goal: to be able to update Dropbox and have it viewable on the SharePoint site, or to update the SharePoint site and have it viewable in Dropbox. And since I'm a SharePoint noob, I'll also need help hiding the "Team" subfolder from everyone not in a specific group in AD.

    Read the article

  • Windows 7 - cancel mirror synchronisation

    - by Chris W
    I've got basic OS-managed disk mirroring set up in Windows 7 for a couple of volumes. After a power failure, the mirrors are currently resyncing. These are only small volumes of data, but the sync has not completed after more than 24 hours. Is there any way to stop this, as it's driving me nuts? I need to get the machine back to a usable state to get some work done, but it's a bit of a dog whilst this sync is going on. I've tried removing the mirrors, but it won't let me do that whilst the re-sync is in progress.

    Read the article

  • apt-get install phpmyadmin on debian doesn't install /etc/phpmyadmin/apache.conf

    - by Christian Nikkanen
    I'm trying to install phpmyadmin on my webserver, using this guide: http://www.howtoforge.com/ubuntu_debian_lamp_server I did that once and it worked like a dream, but I hated the look of phpmyadmin (maybe the oldest layout ever) and decided to delete it. Not knowing that removal is done with apt-get remove phpmyadmin, I ran rm * in the phpmyadmin directory and thought that was it. However, as I can't find the Debian build of phpmyadmin anywhere, I want to install it again, but when I add Include /etc/phpmyadmin/apache.conf to /etc/apache2/apache2.conf and restart Apache, it gives me this error:

        apache2: Syntax error on line 73 of /etc/apache2/apache2.conf:
        Could not open configuration file /etc/phpmyadmin/apache.conf: No such file or directory
        Action 'configtest' failed.
        The Apache error log may have more information.
        failed!

    No matter what I try, I always get this error, and phpmyadmin isn't there.
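
    Deleting a package's files by hand leaves dpkg believing its conffiles still exist, so a plain reinstall will not bring /etc/phpmyadmin/apache.conf back. The standard recovery on Debian packaging:

        sudo apt-get purge phpmyadmin       # clear dpkg's record of the old conffiles
        sudo apt-get install phpmyadmin     # reinstall them fresh
        sudo dpkg-reconfigure phpmyadmin    # re-run the apache2 integration prompt
        sudo apache2ctl configtest          # the Include line should now resolve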

    Read the article

  • apticron, apt-get dist-upgrade and aptitude

    - by Kai
    I'm confused. On my Debian server, I've been getting the daily "updates available" message from apticron, and I normally just use aptitude to install the upgrades. Today I got a message showing two upgrades, but they don't show up in aptitude. When I do an apt-get dist-upgrade, they show up as "NEW" packages to be installed; aptitude dist-upgrade seems to ignore them. Can anyone explain why this is happening and how to get rid of the messages? (It doesn't seem like I really need the new packages.)
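
    The resolvers simply disagree: apt-get dist-upgrade will install brand-new packages to satisfy changed dependencies, while aptitude's plain upgrade will not. A quick way to see exactly what each would do, using only simulation flags (the package name is a placeholder):

        apt-get -s dist-upgrade            # -s simulates; lists the NEW packages and why
        aptitude -s full-upgrade           # aptitude's equivalent of dist-upgrade
        apt-cache rdepends somepackage     # which existing package now pulls the new one in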

    Read the article

  • Robocopy Mirror Backup gone awry

    - by Aznfin
    I have created a simple batch file script for running Robocopy. It is set to make a backup of my user account folder to my external hard drive. Here's the parameters for Robocopy: ROBOCOPY "C:\Users\Finnly" "F:\Backups\Finnly (Backup)" /ZB /COPY:DAT /DCOPY:T /MIR /256 /MT:32 /XF *.log *.log* *.dat *.tmp *.temp *.old "ntuser*" "SyncToy*" "UpgKit.txt" ".recently-used.xbel" /XD ".gimp-2.6" ".thumbnails" ".VirtualBox" "AppData" "Application Data" "Adobe" "Camtasia Studio" "Cookies" "CyberLink" "DivX Movies" "DVD Architect Pro 5.0 Projects" "dwhelper" "GTA San Andreas User Files" "Lightroom" "Local Settings" "NetHood" "PrintHood" "Scripts" "temp" "Templates" "The KMPlayer" "Tracing" /R:3 /W:10 /V /TS /FP /ETA /LOG+:F:\Backups\Sync.log /TEE For some reason when I run it, it backs up the files and then it seems to back them up again. The size of my user account directory is 18.3 GB but the backup of it occupies over 30 GB. After reading the contents of the log generated, it is obvious that it's copying files more than once. Why is this happening? I'm running Windows Seven Home Premium 64-bit.

    Read the article
