Search Results

Search found 14772 results on 591 pages for 'full trust'.

Page 148 of 591

  • Free web-based software for team collaboration/documentation

    - by Jason Antman
    Looking for some advice here, as my search has turned out to be pretty fruitless. My group (9 people - SAs, programmers, and two network guys) is looking for some sort of web tool to... ahem... "facilitate increased collaboration" (we didn't use a buzzword generator, I swear). At the moment we have a unified ticketing system that's braindead, but it is here to stay for political/logistical reasons. We've got 2 wikis ("old" and "new"), neither of which fulfills our needs, and they are therefore not used very often. We're looking for a free (as in both cost and open source) web-based tool.
    Management side: wants to be able to track project status, who's doing what, whether deadlines are being met, etc. Doesn't want a full-fledged "project management" app, just something where we can update "yeah this was done" or "waiting for Bob to configure the widgets". TeamBox (www.teambox.com) was suggested, but it seems almost too gimmicky, and doesn't meet any of the other requirements.
    Non-management side:
    - flexible, powerful wiki for all documentation (i.e. includes good tables, easy markup, syntax highlighting, etc.)
    - good full-text search of everything (i.e. type in a hostname and get every instance anyone ever uttered that name)
    - task lists or to-do lists, hopefully able to be grouped into a number of "projects"
    - file uploads
    - RSS or Atom feeds, email alerts of updates
    We're open to doing some customization (adding some features, notification/feeds, searching, SVN integration, etc.) but need something F/OSS that will run under Apache. My conundrum is that most of the choices I've found so far fall into one of these categories:
    - project management/task tracking with poor wiki/documentation/knowledge base support
    - wiki with no task tracking support
    - ticketing system with everything else bolted on (we already have one that we're stuck with)
    - code-centric application (we do little "development", mostly SA work)
    Any suggestions? Or, lacking that, any comments on which software would be easiest to add the missing features to (hopefully ending up with something that actually looks good and works well)?

    Read the article

  • Apache rewrite rules and special characters

    - by Massimo
    I have a server where some files have an actual %20 in their name (they are generated by an automated tool which handles spaces this way, and I can't do anything about it); this is not a space: it's "%" followed by "2" followed by "0". On this server there is an Apache web server, and there are some web pages which link to those files, using their names in URLs like http://servername/file%20with%20a%20name%20like%20this.html; those pages are also generated by the same tool, so I (again!) can't do anything about that. A full search-and-replace on all files, pages and URLs is out of the question here.
    The problem: when Apache gets called with a URL like the one above, it (correctly) translates the "%20"s into spaces, and then of course it can't find the files, because they don't have actual spaces in their names. How can I solve this? I discovered that by using a URL like http://servername/file%2520name.html it works nicely, because then Apache translates "%25" into a "%" sign, and thus the correct filename gets built. I tried using an Apache rewrite rule, and I can successfully replace spaces with hyphens with a syntax like this: RewriteRule (.*)\ (.*) $1-$2
    The problem is that when I try to replace them with a "%2520" sequence, it just doesn't happen. If I use RewriteRule (.*)\ (.*) $1%2520$2 then the resulting URL is http://servername/file520name.html; I've tried "%25" too, but then I only get a "5"; it looks like the initial "%2" gets discarded somewhere.
    The questions: How can I build such a rule to replace spaces with "%2520"? Is this the only way I can deal with this issue (other than a full search-and-replace which, as I said, can't be done), or do you have any better idea?
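    For reference, a hedged sketch of how such a rule might be written (untested here; it assumes that "%2" in a RewriteRule substitution is parsed as a RewriteCond back-reference, which would explain the disappearing characters, and that a backslash escapes the percent sign as documented):
      RewriteEngine On
      # Replace one space per pass and loop with [N] until none are left.
      # \% emits a literal "%" instead of being treated as a back-reference;
      # NE stops Apache from re-escaping the substitution if the rule ever redirects.
      RewriteRule ^(.*)\ (.*)$ $1\%2520$2 [N,NE]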

    Read the article

  • CentOS security for lazy admins

    - by Robby75
    I'm running CentOS 5.5 (basic LAMP with Parallels Power Panel and Plesk) and have thus far neglected security (because it's not my full-time job, there is always something more important on my to-do list). My server does not contain any secret data and no lives depend on it - basically what I want is to make sure it does not become part of a botnet; that is "good enough" security in my case. Anyway, I don't want to become a full-time paranoid admin (constantly watching and patching everything because of some obscure problem), and I also don't care about most security problems like DoS attacks or problems that only exist when using some arcane settings.
    I'm in search of a "happy medium": for example, a list of known important problems in the default installation of CentOS 5.5 and/or a list of security problems that have actually been exploited - not the typical endless list of buffer overflows that "may be" a problem in some special case. The problem I have with the usually recommended approaches (joining mailing lists, etc.) is that the really important problems (something where an exploit exists, that is exploitable in a common setup, and where the attacker can do something really useful - i.e. not a DoS) are completely and utterly swamped by millions of tiny security alerts that surely are important for high-security servers, but not for me. Thanks for all suggestions!

    Read the article

  • how to properly set environment variables

    - by avorum
    I've recently started using Windows (having used Ubuntu up until now) and I find myself unable to properly set environment variables. Whenever I set them they don't seem to work. I've been going to Start > "Edit environment variables for your account" and editing the PATH value in the upper half of the GUI. Here's what I've got so far:
      ;C:\Chocolatey\bin;C:\tools\mysql\current\bin;C:\Program Files (x86)\Git\bin;C:\Program Files\MySQL\MySQL Server 5.6\bin\;C:\Python33\Scripts;
    These are each the parent directories of the executables I'd like to be able to run by name from CMD, but mysql, git, and pip aren't being recognized. Am I doing something wrong syntactically, or at a general understanding level? I'd like to be able to run these commands without having to specify the full path to the executables every time.
    EDIT: The full PATH extracted from CMD:
      PATH=C:\Python33\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP\bin\x86;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\GTK2-Runtime\bin;C:\Program Files\WIDCOMM\Bluetooth Software\;C:\Program Files\WIDCOMM\Bluetooth Software\syswow64;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files (x86)\Common Files\Acronis\SnapAPI\;C:\Program Files (x86)\Java\jre7\bin;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit\;C:\Program Files (x86)\Microsoft SDKs\TypeScript\;C:\Program Files (x86)\MySQL\MySQL Utilities 1.3.4\; ;C:\Chocolatey\bin;C:\tools\mysql\current\bin
    I'm being forced to use Windows by my work environment; I don't enjoy the state of affairs.
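    A few hedged diagnostics that may narrow this down (assuming a stock Windows 7 cmd.exe; where.exe and setx.exe ship with the OS). Note that PATH edits only affect consoles opened after the change, and a stray leading ";" in the user value is worth removing:
      :: open a NEW command prompt first, then check what actually resolves
      echo %PATH%
      where mysql
      where git
      where pip
      :: setx rewrites the *user* PATH permanently; avoid setx PATH "%PATH%;..." because
      :: that would bake the machine PATH into the user PATH and duplicate entries
      setx PATH "C:\Chocolatey\bin;C:\tools\mysql\current\bin;C:\Program Files (x86)\Git\bin;C:\Program Files\MySQL\MySQL Server 5.6\bin;C:\Python33\Scripts"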

    Read the article

  • Sync two Exchange accounts, or Read-Only access to subfolders

    - by cpgascho
    This is really two questions in one. The situation is as follows: I am running SBS 2008 with Exchange 2007. There is a shared account which has subfolders to keep track of the progress of jobs that are coming into the company (i.e. sales). I need to give other people in the company read access to this mailbox, not full control. When I give read-only access to the root, other users can only see the Inbox and not the subfolders; permissions have to be applied to each folder.
    One solution I have considered is creating a secondary mailbox that everyone could have full access to, which would have a one-way sync from the sales mailbox to the secondary mailbox. Then people could see what was happening without messing up the main mailbox by accident (at worst they would mess up the secondary mailbox). Ideally I could find a way to propagate the READ ONLY permissions to all the subfolders. I have tried using PFDavAdmin to do this but have not been able to get it to connect successfully from Windows 7 to Exchange 2007.
    Any idea on how to:
    1. Propagate permissions (get PFDavAdmin to work?!)
    2. Sync mailboxes
    3. Some other solution?
    Thanks, Chris

    Read the article

  • SQL Server 2008 R2 - Cannot create database snapshot

    - by Chris Diver
    Server: Windows Server 2008 R2 X64 Enterprise. SQL: SQL Server 2008 R2 Enterprise X64. I have a default SQL Server instance, and the SQL Server service account is running as a domain user. I am trying to create a database snapshot in the directory where the mdf files are stored. The T-SQL syntax is correct. The file system is NTFS. The error message I get is:
      Msg 1823, Level 16, State 2, Line 1
      A database snapshot cannot be created because it failed to start.
      Msg 5119, Level 16, State 1, Line 1
      Cannot make the file "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestDB.ss" a sparse file. Make sure the file system supports sparse files.
    The local SQLServerMSSQLUser$db$MSSQLSERVER group has Full Control permission on the folder where I am trying to create the snapshot. I can fix the problem in two ways, neither of which is suitable:
    1. Add the SQL Server service (domain) account to the local Administrators group and restart the SQL service.
    2. Grant the local SQLServerMSSQLUser$db$MSSQLSERVER group Full Control on E:\
    I have tried changing the owner of the DATA directory to SQLServerMSSQLUser$db$MSSQLSERVER to no avail. I have no issue creating a new database. Why can I not create a snapshot by giving permission only on the DATA folder?
    Update 23/09/2010: I have tried mrdenny's suggestion with no luck (but learned something new in the process). I suspect the problem may be due to the fact that the domain is a Windows 2000 domain running in mixed mode. I had to install hotfix KB976494 for Server 2008 R2, as the SQL Server 2008 R2 installer would not verify the service account correctly with the domain. I noticed that Server 2000 isn't a supported operating system for SQL 2008 R2, but I cannot find anything that would suggest it shouldn't work in a 2000 domain. I removed the test server from the domain and changed the service accounts to the local service account, and I still have the same issue. I will try to re-install the server without joining the domain and without the hotfix, and see if the issue persists.
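    For context, a hedged illustration of the kind of statement involved (the logical file name and paths below are placeholders, not taken from the failing server):
      -- Create a snapshot of TestDB next to its data files; the .ss file is created
      -- as an NTFS sparse file, which is where the Msg 5119 error originates.
      CREATE DATABASE TestDB_Snapshot
      ON ( NAME = TestDB_Data,   -- logical data file name of the source database
           FILENAME = 'E:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestDB.ss' )
      AS SNAPSHOT OF TestDB;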

    Read the article

  • Do registry issues with Win7 persist through a recovery from a system image?

    - by user59089
    So I need a bit of advice, please; here's my situation: I have 1) a system image on a brand new external 1 TB SATA drive, which I managed to capture successfully before 2) my primary system drive went down. I realize this is a fairly simple matter of buying a new primary drive and performing the recovery to the fresh disk... however, the issue is that I believe Win7 was also having some significant issues of its own - basically, Update was unable to install updates, and Backup kept ditching the auto-backup schedule. I'd been trying to address those issues while my system was still working, but it was so fruitless that I'm convinced a Win7 re-install would be best. Now I'm concerned that if I was in fact having what I believe are likely registry-related issues before, these will persist through a recovery - would that likely be correct?
    I'm mainly worried about recovering my files, so if I did a full recovery from the image, should I then be able to access my individual files, copy them manually to an external drive, and do a full re-install of Win7 afterwards? Sorry if this seems obvious, but I've never done a recovery before and I'm just trying to make sure there are no red flags with what I have in mind...

    Read the article

  • What is the fastest way to resize a large partition?

    - by Jook
    Due to a new HDD configuration I am currently handling larger backup/resize tasks with partitions of around 900GB, which are 70-90% full.
    Some background: the first thing I noticed was that the Acronis/Western Digital TrueImage was extremely slow while running under Windows 7, even on high priority. Creating a normal backup of 650GB of data (900GB partition) would have taken 3 days! The same task done with the boot-CD version of this Acronis release took about 2 hours (SATA3 copy from one disk to another, both around 110MB/s).
    Now, after doing all my backups, I wanted to remove some obsolete partitions and resize the leftovers to the full HDD size. Of course, this usually takes quite some time - in this case, extending this 900GB partition to 931GB (30GB+ from the front, 1GB+ from the end) will take around 6 hours (using GParted)! Had I known that earlier, I would have just restored the image. But no - first it showed a reasonable time of 1:45h and 0 of 1 operations, but after finishing the 1:45h it started again, only this time with 4h to go, still 0 of 1 operations, but now it was copying instead of moving.
    Question: why does it have to be this slow to resize a partition? I am asking for a good explanation. This has bugged me since I started partitioning - why does it need to copy all the data around, can't it just stay in place?!

    Read the article

  • MSE updating fails, no warning or error message.

    - by WebDevHobo
    I'm running Windows 7 Ultimate, 32-bit. For the last couple of days, MSE has failed to update, remaining stuck at version 1.75.119. I presume that an error log or event log is created somewhere, but I don't know where to find it. It just says "connection failed". Tried it at home, at work and at friends' places, but it never works. I have restarted the computer a lot of times now and checked for Microsoft Updates in general, but nothing shows up.
    EDIT: I've opened a bounty for this, because I really don't know what to do anymore. The oldest answer here (the long post) did not work. Besides this problem, I'm having trouble using MSI installers too. I've had to add the SYSTEM group to a lot of folders and give it full control, but shouldn't SYSTEM already be there? Also, I had to remove the "read-only" attribute from the ProgramData and Users folders, add the SYSTEM group there too and give it full control. Only then will the MSI install work, and even then it says I don't have the rights to create a shortcut on the desktop. I don't know what I need to modify, or where, for that. I'm mentioning this because I don't know how MSE updates, but if it uses MSI files to do that, it might explain things. The SYSTEM group remains added, but every time I take away the read-only attribute, click OK and check the settings again, read-only is still active... That's all I know. Screenshot, all those updates were manual:

    Read the article

  • Robocopy failure with Windows Server 2008 Scheduled Task

    - by CC
    So I have a batch script for robocopy. Running this from the command line does exactly what I want:
      robocopy "D:\SQL Backup" \\server1\Backup$\daily /mir /s /copyall /log:\\lmcrfs4g\NavBackup$\robocopyLog.txt /np
    Then I create a Scheduled Task in Windows Server 2008. If I set up the task to use my Domain Admin account, great. But I'm trying to get it to run as a separate domain account for Scheduled Tasks. If I use that account, folders get created, but files aren't copied. I get the following error:
      2011/02/17 15:41:48 ERROR 1307 (0x0000051B) Copying NTFS Security to Destination Directory D:\SQL Backup\folder\
      This security ID may not be assigned as the owner of this object.
    I've verified that my domain\Scheduled Tasks account has Full Control NTFS permissions on both the source and destination, and Full Control sharing on my hidden \\server1\Backup$ share. Just for giggles, I've tried adding the domain account to the local Administrators group on both servers. That works fine, but it seems like a lot of privileges just to copy files. Any ideas on what I'm missing?
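    One hedged thing to try: /copyall is shorthand for /copy:DATSOU, and the O (owner) and U (auditing) parts need the backup/restore privileges that local Administrators hold by default - which would fit both ERROR 1307 and the fact that adding the account to Administrators "fixes" it. Copying only data, attributes, timestamps and ACLs sidesteps that requirement:
      robocopy "D:\SQL Backup" \\server1\Backup$\daily /mir /s /copy:DATS /log:\\lmcrfs4g\NavBackup$\robocopyLog.txt /np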

    Read the article

  • VMware and Windows Activation

    - by Peter M
    Yesterday I installed Slysoft's Virtual CloneDrive in order to mount an iso for some software installation on my host system (XP Pro SP3) This morning I fired up VMware and made a linked clone of an existing XP vm in order to do some software testing. This is the sort of thing that I do all the time, and the base XP vm that I clone was activated a long long time ago. The surprise today was that the newly cloned vm was no longer activated and XP cited major changes in hardware as the reason. I repeated the test with a full clone of the base system and got the same message. I then started up my base vm and it seemed to be activated, yet another vm (which I fully cloned from the base vm a long time ago) now started reporting that XP was not activated. At this point I guessed that Virtual DriveClone might have been the source of my hardware differences so I uninstalled it and rebooted. After this I made a new linked clone and full clone of the base vm and XP did not complain about not being activated. So I seem to be back to where I was before installing Virtual DriveClone with the exception that that one particular XP vm continues to complain about activation (even though 4 or 5 other XP vm's are fat and happy) Now to my questions: Why would adding Virtual CloneDrive to the host system affect XP activation on the vm's? From their point of view I would have thought that the environment had not changed as I had not enabled any new hard drives in their systems. Or is adding a hard drive to the host system enough to upset XP activation? Since this event, one of my fully cloned vm's is still reporting that XP is not activated even though I have removed Virtual CloneDrive. Is there anyway to convince XP that it is on the same system as yesterday? Or are my only options to do an activation or restore the vm from a previous backup?

    Read the article

  • Windows 7 Folder Redirection (GPO)

    - by Kev
    I have been fighting this issue for a day or two now, so I am looking for some insight. I am taking over admin duties in a domain of 800 users, and the previous admins really did not employ many GPO settings for the clients of the domain. In each site there is a location on the file server where "Home" folders were manually created, e.g. \\server\home\enduser. Whenever a user got a machine, the admin would manually right-click on the "My Documents" folder and enter the path to the home folder.
    We are planning to start putting Windows 7 machines on the network, and I want to automate as much as I can of everything that was not done in the past. Since everyone has existing "Home" folders, I have been fighting to get Folder Redirection to work on a new Windows 7 machine (in a test OU). I am getting all kinds of errors and I can't get the Windows 7 "Documents" folder to redirect to the users' EXISTING home folders. As I stated earlier, all of the home folders were (and still are) manually created on the file server and are set with the following security permissions:
    - Domain Admins - Full Control
    - euser (end user) - Modify (everything but Full)
    Can someone point me in the right direction on the proper settings to put in the Folder Redirection GPO to get this to work with the existing home folders?
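    A hedged outline of the redirection settings that usually matter when home folders already exist and were created by an admin account (so the end user is not the NTFS owner of the folder):
      User Configuration > Policies > Windows Settings > Folder Redirection > Documents
        Setting:      Basic - Redirect everyone's folder to the same location
        Target:       "Redirect to the user's home directory" (the home folder path is already
                      set on the account), or "Redirect to the following location" with a path
                      like \\server\home\%USERNAME%
        Settings tab: UNTICK "Grant the user exclusive rights to Documents" - with this ticked,
                      the client checks that the user owns the target folder, and admin-created
                      folders typically fail that check
                      TICK "Move the contents of Documents to the new location" only if local
                      documents should actually be moved into the existing share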

    Read the article

  • uWSGI cannot find "application" using Flask and Virtualenv

    - by skyler
    Using uWSGI to serve a simple WSGI app (a simple "Hello, World"), my configuration works, but when I try to run a Flask app, I get this in uWSGI's error logs:
      current working directory: /opt/python-env/coefficient/lib/python2.6/site-packages
      writing pidfile to /var/run/uwsgi.pid
      detected binary path: /opt/uwsgi/uwsgi
      setuid() to 497
      your memory page size is 4096 bytes
      detected max file descriptor number: 1024
      lock engine: pthread robust mutexes
      uwsgi socket 0 bound to TCP address 127.0.0.1:3031 fd 3
      Python version: 2.6.6 (r266:84292, Jun 18 2012, 14:18:47) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)]
      Set PythonHome to /opt/python-env/coefficient/
      *** Python threads support is disabled. You can enable it with --enable-threads ***
      Python main interpreter initialized at 0xbed3b0
      your server socket listen backlog is limited to 100 connections
      *** Operational MODE: single process ***
      added /opt/python-env/coefficient/lib/python2.6/site-packages/ to pythonpath.
      unable to find "application" callable in file /var/www/coefficient/flask.py
      unable to load app 0 (mountpoint='') (callable not found or import error)
      *** no app loaded. going in full dynamic mode ***
      *** uWSGI is running in multiple interpreter mode ***
    Note in particular this part of the log:
      unable to find "application" callable in file /var/www/coefficient/flask.py
      unable to load app 0 (mountpoint='') (callable not found or import error)
      *** no app loaded. going in full dynamic mode ***
    This is my Flask app:
      from flask import Flask
      app = Flask(__name__)
      @app.route("/")
      def hello():
          return "Hello, World, from Flask!"
    Before I added my virtualenv's pythonpath to my configuration file, I was getting an ImportError for Flask. I believe I solved that (I'm not receiving errors about it anymore), and here is my complete configuration file:
      uwsgi:
        #socket: /tmp/uwsgi.sock
        socket: 127.0.0.1:3031
        daemonize: /var/log/uwsgi.log
        pidfile: /var/run/uwsgi.pid
        master: true
        vacuum: true
        #wsgi-file: /var/www/coefficient/coefficient.py
        wsgi-file: /var/www/coefficient/flask.py
        processes: 1
        virtualenv: /opt/python-env/coefficient/
        pythonpath: /opt/python-env/coefficient/lib/python2.6/site-packages
    This is how I start uWSGI, from an rc script:
      /opt/uwsgi/uwsgi --yaml /etc/uwsgi/conf.yaml --uid uwsgi
    And if I try to view the Flask program in a browser, I get this:
      uWSGI Error
      Python application not found
    Any help is appreciated.
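    A hedged sketch of the two changes that usually clear this particular error: uWSGI looks for a callable named "application" by default, while the Flask object here is called "app", and a module named flask.py can also shadow the real flask package on import. The renamed file below is a hypothetical name:
      uwsgi:
        socket: 127.0.0.1:3031
        daemonize: /var/log/uwsgi.log
        pidfile: /var/run/uwsgi.pid
        master: true
        vacuum: true
        wsgi-file: /var/www/coefficient/coefficient_app.py   # a renamed copy of flask.py
        callable: app                                        # name of the Flask object in that file
        processes: 1
        virtualenv: /opt/python-env/coefficient/
        pythonpath: /opt/python-env/coefficient/lib/python2.6/site-packages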

    Read the article

  • Desktop virtualization

    - by gurpal2000
    Is there currently a proper Type 1 "desktop" hypervisor, either free or not? This is just for tinkering around at home on some beefy Phenom machines. Basically I want to be able to run, say, 2 OSes on the same PC, but without loading Windows or a heavy flavour of Linux first, and then use a hotkey to switch between them. I should get full performance out of them. So do I need something better than VMware Workstation and/or VirtualBox? I think these are "Type 2". I already run VMware Workstation and VBox, but is there a more performant solution? I saw a YouTube video from Citrix where a laptop was running XP and Vista; with the touch of a hotkey they could switch between them. There was no visible underlying OS (though there might be a hypervisor). I have access to a Citrix XenDesktop 3 Enterprise Edition evaluation. I realise this isn't for desktops, but can I achieve my goal (geekiness)? If I use the free XenServer 5.5.0, how do my client PCs access Windows/Linux/whatever from the XenServer? Is it via a thin-client RDP-type application? If so, is there one for both Windows and Linux? Also, if I do use XenServer, can I use USB in either direction? What is Citrix Receiver - can I use that for (3)? If so, is there some hotkey I can configure? Whatever client is used to access the server software (whether it be on a different server or local), can I get full OpenGL/DirectX acceleration? What about Xen? I tried the Xen LiveCD but have no clue how to configure it. As you can see, much confusion. Any help/pointers welcome. Cheers.

    Read the article

  • Mac OS Leopard: SyncServer process constantly using 100% CPU

    - by macca1
    I am running Leopard, upgraded from Tiger. I've noticed that every once in a while the SyncServer process starts up and eats all the CPU; the fans start going at full blast and the laptop slows down to a crawl. I need to force-quit the process from Activity Monitor to get it under control. It disappears for a while, but eventually gets started again. I do have an iPhone that I sync as well, so I'm wondering if SyncServer might be an Apple process checking for my phone being plugged in.
    Edit: Tried iSync and the manual resetsync as suggested, but got this output:
      Vince-2:~ vince$ /System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl full
      2010-03-12 08:03:50.230 perl[176:10b] SyncServer is unavailable: exception when connecting: connection timeout: did not receive reply
      PerlObjCBridge: NSException raised while sending reallyResetSyncData to NSObject object
        name: "ISyncServerUnavailableException"
        reason: "Can't connect to the sync server: NSPortTimeoutException: connection timeout: did not receive reply ((null))"
        userInfo: ""
        location: "/System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl line 16"
      ** PerlObjCBridge: dying due to NSException
      Vince-2:~ vince$
    And while that ran, SyncServer started spinning up to 95-100% just like it always does.

    Read the article

  • Minimum permissions needed to create a user Home Folder in Windows Active Directory

    - by Jim
    We would like the Help Desk to have the responsibility of creating user home folders instead of our 2nd-level support. The Help Desk global group is already an Account Operator, so in Active Directory they are able to edit all user attributes just fine. The problem is figuring out the minimum level of permissions needed on the file server to create the home share, without giving them access to everyone's home shares. If they open AD Users and Computers, open the properties for a user, enter \\home\users\%username% in the Profile tab and then click OK, they get the following error:
      The \\home\users\username home folder was not created because you do not have create access on the server. The user account has been updated with the new home folder value, but you must create the directory manually after obtaining the required access right.
    Right now I have given the Help Desk group Full Control on the root folder only (no files or subdirectories). The directory is actually created, but the permissions on the newly created folder only show Administrators with Full Control, and no permissions for the configured user account. It sure sounds like I'd have to make the Help Desk local admins on the file servers, which is what I'd like to avoid - especially since the file servers are a large cluster hosting much, much more than the entire org's home share structure.
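    A hedged icacls sketch of the "create but don't browse" pattern (server, share and group names are placeholders, and the specific-rights letters should be double-checked against icacls /? before use). The idea is to give the Help Desk group only list/traverse plus "create folders" on the root folder itself, and let CREATOR OWNER pick up full control of whatever folder the technician creates underneath:
      icacls \\fileserver\users /grant "DOMAIN\HelpDesk":(RD,AD,X,RA,RC)
      icacls \\fileserver\users /grant "CREATOR OWNER":(OI)(CI)(IO)F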

    Read the article

  • Zenoss No space left on device Error

    - by Pastelinux
    Site Error
    An error was encountered while publishing this resource. Sorry, a site error occurred.
      Traceback (innermost last):
        Module ZPublisher.Publish, line 231, in publish_module_standard
        Module ZPublisher.Publish, line 165, in publish
        Module Zope2.App.startup, line 211, in __call__
        Module Products.ZenUI3.browser, line 105, in __call__
        Module Products.Five.browser.pagetemplatefile, line 60, in __call__
        Module zope.pagetemplate.pagetemplate, line 115, in pt_render
        Module zope.tal.talinterpreter, line 271, in __call__
        Module zope.tal.talinterpreter, line 343, in interpret
        Module zope.tal.talinterpreter, line 858, in do_defineMacro
        Module zope.tal.talinterpreter, line 343, in interpret
        Module zope.tal.talinterpreter, line 533, in do_optTag_tal
        Module zope.tal.talinterpreter, line 518, in do_optTag
        Module zope.tal.talinterpreter, line 513, in no_tag
        Module zope.tal.talinterpreter, line 343, in interpret
        Module zope.tal.talinterpreter, line 620, in do_insertText_tal
        Module Products.PageTemplates.Expressions, line 203, in evaluateText
        Module Products.PageTemplates.Expressions, line 222, in _handleText
        Module zope.component._api, line 174, in queryUtility
        Module zope.component.registry, line 165, in queryUtility
        Module ZODB.Connection, line 834, in setstate
        Module ZODB.Connection, line 884, in _setstate
        Module ZEO.ClientStorage, line 815, in load
        Module ZEO.cache, line 143, in call
        Module ZEO.cache, line 607, in store
      IOError: [Errno 28] No space left on device
    I went in to check my server through Zenoss today and it looks like the server is somehow full, yet when I look at it, it's only 85% full:
      unclebob:~# df -h
      Filesystem                                Size  Used Avail Use% Mounted on
      /dev/mapper/unclebob--vg0-unclebob--root  1.9G  1.5G  335M  82% /
      tmpfs                                     471M     0  471M   0% /lib/init/rw
      udev                                       10M  820K  9.2M   9% /dev
      tmpfs                                     471M     0  471M   0% /dev/shm
      overflow                                  1.0M  1.0M     0 100% /tmp
      /dev/hde1                                 942M   36M  859M   5% /boot
      unclebob:/tmp# df -i
      Filesystem                                Inodes IUsed  IFree IUse% Mounted on
      /dev/mapper/unclebob--vg0-unclebob--root  121920 54844  67076   45% /
      tmpfs                                     120489     3 120486    1% /lib/init/rw
      udev                                      120489  1520 118969    2% /dev
      tmpfs                                     120489     1 120488    1% /dev/shm
      overflow                                  120489    14 120475    1% /tmp
      /dev/hde1                                  61312    33  61279    1% /boot
    It looks like there are these two (hidden) directories: .ICE-unix/ and .X11-unix/. I'll remove those. Any idea what they may be? Any ideas on a fix? It probably has something to do with Zenoss.
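    The "overflow" line in the df output is worth a second look. A hedged reading: when / is nearly full at boot time, Debian-style init scripts mount a tiny 1MB tmpfs called "overflow" on /tmp, and the ZEO client cache then runs out of space there even though the root filesystem itself still has room. After freeing some space on /, something like this should put /tmp back on the root filesystem:
      umount /tmp        # or: umount overflow
      df -h /tmp         # verify /tmp is no longer the 1.0M "overflow" mount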

    Read the article

  • What does this mean: "SATP VMW_SATP_LOCAL does not support device configuration"?

    - by Jason Tan
    Can anyone tell me what this means in ESXi 5.1: "SATP VMW_SATP_LOCAL does not support device configuration"? I've googled it and I get a lot of results, but as yet all the pages that contain the string are discussing other matters. The storage array is an HDS HUS-VM and the hosts are HP b460c G8 blades with Flex Fabric and Flex Fabric VCs, which I am in the process of commissioning and would like to get started on the right foot - i.e. error and warning free!
      naa.600508b1001c56ee3d70da65f071da23
         Device Display Name: HP Serial Attached SCSI Disk (naa.600508b1001c56ee3d70da65f071da23)
         Storage Array Type: VMW_SATP_LOCAL
         Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
         Path Selection Policy: VMW_PSP_FIXED
         Path Selection Policy Device Config: {preferred=vmhba0:C0:T0:L1;current=vmhba0:C0:T0:L1}
         Path Selection Policy Device Custom Config:
         Working Paths: vmhba0:C0:T0:L1
         Is Local SAS Device: true
         Is Boot USB Device: false
    This is the same LUN:
      ~ # esxcli storage core device list -d naa.60060e80132757005020275700000016
      naa.60060e80132757005020275700000016
         Display Name: HITACHI Fibre Channel Disk (naa.60060e80132757005020275700000016)
         Has Settable Display Name: true
         Size: 204800
         Device Type: Direct-Access
         Multipath Plugin: NMP
         Devfs Path: /vmfs/devices/disks/naa.60060e80132757005020275700000016
         Vendor: HITACHI
         Model: OPEN-V
         Revision: 5001
         SCSI Level: 2
         Is Pseudo: false
         Status: degraded
         Is RDM Capable: true
         Is Local: false
         Is Removable: false
         Is SSD: false
         Is Offline: false
         Is Perennially Reserved: false
         Queue Full Sample Size: 0
         Queue Full Threshold: 0
         Thin Provisioning Status: unknown
         Attached Filters: VAAI_FILTER
         VAAI Status: supported
         Other UIDs: vml.020001000060060e801327570050202757000000164f50454e2d56
         Is Local SAS Device: false
         Is Boot USB Device: false
      ~ #
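    For what it's worth, the message itself appears to be informational: VMW_SATP_LOCAL simply has no per-device options to configure, so there is nothing to set for the local SAS disk. A couple of hedged commands for checking which claim rule and path policy the HITACHI LUN ended up with (verify option spellings with --help on your build):
      esxcli storage nmp satp rule list | grep -i hitachi
      esxcli storage nmp device list -d naa.60060e80132757005020275700000016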

    Read the article

  • Windows 7 - User profile corrupted on standby/hibernate

    - by Dogbert
    I have a friend who uses Windows 7 for her home PC. She has a RAID1 array using up-to-date Intel Matrix Storage drivers, and the entire array is backed up to a separate internal SATA HDD via Acronis True Image every night. Over the weekends, she lets her machine go into suspend after 4 hours of inactivity, and then into hibernation after 6 hours of inactivity. Her Acronis backup does nightly incremental backups and a full backup every Saturday night. She also has AVG Free Antivirus installed, which does full scans every Monday.
    So far, on two occasions, her user profile has been corrupt on Sunday morning. I couldn't find any solution that allowed me to repair her profile, so I end up having to recreate it (as suggested by the MS Knowledge Base, http://windows.microsoft.com/en-us/windows7/fix-a-corrupted-user-profile - yep, no actual fix, just clobber the whole thing), then copy over her data, recreate her Outlook profile, reconfigure all third-party applications, etc. It's a real nightmare and takes 4 hours each time.
    Are there any suggestions on how to resolve this profile corruption? It was happening even before the RAID/Acronis solution was in place, but I thought I'd provide as much information as possible.

    Read the article

  • SQL 2008 Backups to UNC Share Failing 0xC002F210

    - by Matty Brown
    This problem is driving me NUTS!! We take backups of all of our production databases to a network share, which are then backed up to tape nightly:
    - 8pm Mon-Fri: full backup, followed by a log backup
    - 7am-7pm Mon-Fri, at half-hour intervals: log backup
    Our backups have been working in this manner since we migrated from SQL Server 2000 Standard to 2008, 3 years ago. Recently, the first log backup on Mondays has been failing - not every time, but almost every time! The rest of the week we've had no problems. I guess the issue may have something to do with the size of the log backup that's attempted after a weekend of no backups.
    Now onto the issue I need a fix for: all this week, every full backup of our two biggest databases has failed (both backups < 1GB compressed). There's plenty of disk space on the source and destination servers. I'm guessing the issue has to do with the amount of time it takes to complete the backups of these databases, and/or the size of the backup files required to complete them. Changing the backup destination to local storage works fine (and very, very fast in comparison). From the job history, I can find a few hints as to what the problem could be:
      Code: 0xC002F210 (always this code, but a mix of the following descriptions)
      "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'SetEndOfFile' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally."
      "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'FlushFileBuffers' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally."
    Please help save my hair and sanity!!
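    A hedged workaround sketch, on the assumption that OS error 64 here is the usual "the specified network name is no longer available" (i.e. the SMB session to the share drops during a long write): back up to local disk first, then copy the finished file to the UNC path in a separate job step, so a network hiccup can no longer abort the backup itself. Names and paths below are placeholders:
      -- job step 1: back up locally (compression assumes an edition that supports it)
      BACKUP DATABASE [BigDatabase]
      TO DISK = N'D:\SQLBackupStaging\BigDatabase.bak'
      WITH COMPRESSION, INIT, STATS = 10;
      -- job step 2 (CmdExec): copy the file to the share, with retries
      -- robocopy D:\SQLBackupStaging \\drserver\SQLBackups BigDatabase.bak /z /r:5 /w:30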

    Read the article

  • Web Application Publishing on Citrix with Restricted Access

    - by Kanini
    We have a Citrix setup enabling users to access our applications from home. Basically, they log in to our site using Windows Authentication. Once they are successfully logged in, they see the following icon:
    - Desktop - Full Screen (which gives them the desktop as they would see it when logging in at our office)
    We now have a requirement to publish a web application, hxxp://ourlibrary, on Citrix with the following security requirements (the application is already accessible if users launch the desktop, open IE within it and navigate to the site):
    - When they are successfully authenticated to our site, they should see the Internet Explorer icon only, NOT the Desktop - Full Screen icon.
    - On clicking the icon, Internet Explorer should open and automatically navigate to hxxp://ourlibrary
    - They should not be able to access any other URL, such as Google, Hotmail, etc.
    - They should not be able to use File > Open and browse
    - They should not be able to use File > Save and browse
    In effect, they should be able to view the site and that should be it. Any ideas on how to accomplish the security side? We have already published the application.
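    One hedged building block: Internet Explorer's kiosk-mode switch launches it full screen with no address bar or menus, so publishing a command line like the one below (path assumed; adjust for 32/64-bit) gets part of the way there. It is not a security boundary on its own - blocking other URLs and File > Open / File > Save would still need IE zone restrictions or Group Policy applied to the published session:
      "C:\Program Files\Internet Explorer\iexplore.exe" -k hxxp://ourlibrary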

    Read the article

  • Looking for suitable backup solution Mac OS X to offsite Centos 6 server 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently:
    - An on-site file server (Mac OS X Server) used by GFX designers, with a working set of 1TB of data.
    - An offsite server with 2TB of available storage (CentOS 6).
    - The Mac OS X Server rsyncs its data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...).
    - The Mac OS X Server also does a full data backup to LTO-4 tape on a 10-day recycle (Mon-Fri for 2 weeks).
    - rsync pushes about 60GB of file changes a day.
    The problem: the onsite tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape. A full backup is incredibly slow, and it's a pain in the backside getting people to remember to change the tape - it often gets forgotten.
    The quick solution: buy an LTO-5 drive and tapes. However, this has been turned down because of the cost...
    What I would like:
    - Something that works the way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day.
    - Data that is sent is compressed and goes over SSH.
    - Something that keeps a 14-day retention but doesn't keep duplicate data. As an example, if I have 1TB of working data and 60GB of changes are made each day, I'd expect around 1.84TB of data to be stored on the offsite server.
    - It must work with the Mac OS X server and the CentOS 6 server.
    - It must not cost an arm and a leg - it has to be cheaper than buying an LTO-5 drive with tapes (around £1500).
    - It must be able to be set up to run autonomously.
    - It should have some sort of control panel that allows an admin to easily restore a file/folder.
    Any recommendations?
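    One approach that matches most of these requirements is rsync's hard-link snapshot pattern. A hedged sketch (host names, paths and schedule are placeholders; the date arithmetic uses the BSD syntax found on OS X):
      #!/bin/bash
      # Daily snapshot: only changed files cross the wire (compressed, over SSH);
      # unchanged files are hard-linked against yesterday's snapshot, so 14 days of
      # retention costs roughly "working set + 14 days of changes", not 14 full copies.
      SRC="/Volumes/Projects/"
      DEST_HOST="backup@offsite.example.com"
      DEST_ROOT="/backup/gfx"
      TODAY=$(date +%Y-%m-%d)
      YESTERDAY=$(date -v-1d +%Y-%m-%d)   # on Linux: date -d yesterday +%Y-%m-%d

      rsync -az --delete -e ssh \
            --link-dest="$DEST_ROOT/$YESTERDAY" \
            "$SRC" "$DEST_HOST:$DEST_ROOT/$TODAY/"

      # prune snapshots older than 14 days on the CentOS box (e.g. from cron there):
      # find /backup/gfx -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +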

    Read the article

  • Mail.app doesn't detect sender in Address Book

    - by CoreSandello
    I don't understand how 'smart addresses' in Mail.app work. Recently I noticed that for some emails I don't see the person's full name in the 'From' column. I started to dig into this behavior and found that I have a few contacts in my Address Book that are not recognized by Mail.app. Here's how it looks: I have a person in the Address Book with the email entry filled in and the first/last name filled in (localized). I have an incoming email from that person (from the email address specified in the Address Book), but the first/last name in the email itself doesn't match the one in the Address Book (e.g. the 'From' field in the email looks like 'John [work] <[email protected]>' while the Address Book entry is 'John Smith' (localized, in Russian)). Mail.app doesn't recognize that this mail originates from that person in the Address Book: if I click on the 'From' field, it suggests adding the sender to the Address Book, while for other emails I get a 'Show in Address Book' menu entry (especially for ones with a full localized name in the 'From' field).
    I'm wondering whether this behavior is correct or whether I'm missing something. I'm using Snow Leopard and Mail 4.0; my system language is set to English, if that matters. I'd like some clarification on this Mail.app behavior: whether it is fixable or not (and if it is, I'd like to see the fix). By the way, is it possible to match a sender's address against an Address Book entry in filter rules? That would be great, because then I could create rules like 'move all mail from that person to that folder' without specifying the exact source address.
    Thanks, Ivan.

    Read the article

  • backup of TFS with DPM 2007 - different backup times

    - by user46516
    I have DPM set up to back up my TFS server every 30 minutes, the reason being that it's a far better interface than the quirky SQL backup interface. I also do a full backup nightly using a SQL maintenance job. My thinking is that I would use DPM to restore my databases if I lost them, and the nightly full backup would be a "just in case" the DPM restore doesn't work.
    Thinking a little harder about this setup today, I realized that the DPM backups of the individual databases happen in different 30-minute windows, i.e. one happens at 13:30, another at 13:34, etc. Would this difference in time be a problem when it comes to restoring the TFS server? If I restore the databases and they are from different times, will this create corruption, with pointers in one database pointing to missing items in another? Do the databases even rely on each other, or are they completely independent? Lastly, how would the SQL (log) backup cope with this?

    Read the article

  • Anonymous access to SMB share hosted on Server 2008 R2 Enterprise

    - by bwerks
    Hi all, first off, I have read through this post and a whole slew of non-SF posts which seem to address the same or a similar problem, but I was still unable to fix my issue. I've got three machines in this situation:
    - a domain-joined server running Server 2008 R2 Enterprise ("share server")
    - a domain-joined workstation running XP Pro SP3 ("test server")
    - a domain-unjoined test server running Server 2003 R2 SP2 ("workstation")
    The share server is exposing a share on the network that the test server must access - it's a Source/Symbol Server share for our debugging purposes. I believe Visual Studio simply accesses the share with its own credentials in this case, meaning that the share must be accessible anonymously, since the test server isn't joined to the domain and there's no opportunity to supply domain authentication. I've attempted a lot of things to avoid the authentication window when accessing the share:
    - I've enabled the Guest account on the share server and given Guest full sharing/NTFS permissions for the share.
    - I've given ANONYMOUS LOGON full sharing/NTFS permissions for the share.
    - I've added my share to "Network access: Shares that can be accessed anonymously" in the Local Security Policy (LSP).
    - I've disabled "Network access: Restrict anonymous access to Named Pipes and Shares" in LSP.
    - I've enabled "Network access: Let Everyone permissions apply to anonymous users" in LSP.
    - I've added ANONYMOUS LOGON to "Access this computer from the network" in LSP.
    - I've added the Guest account to "Access this computer from the network" in LSP.
    - I've attempted to provision the share using the Share and Storage Management MMC snap-in.
    Unfortunately, when I attempt to access the share from the test server, I still see the prompt and I'm forced to enter "Guest" manually. I also tried this workflow using the local administrator account on a workstation, and the same thing happens both with and without XP Simple File Sharing enabled. Any idea why I'm getting these results, or what I should have done differently?

    Read the article
