Search Results

Search found 14439 results on 578 pages for 'folder customization'.


  • Does Antivirus2009 or Antivirus360 automatically install on your computer and if so how?

    - by sergey
    I run Firefox on Vista, and unfortunately I got tricked (through a deceptive Google result) into visiting a page showing one of those fake "Your Computer Has all of this Spyware on it!" warnings. I tried to close the tab manually, but it threw up an "Are you sure you want to navigate away" JavaScript alert (HATE THOSE). So I clicked "OK," the tab closed, and then I closed Firefox altogether and rebooted.

    Before I could close the tab, it did prompt me to download a file, but of course I chose not to, and checking my downloads folder, nothing new is there. Also, even if I *did* download it, *I* would still have to choose to run it by double-clicking it for it to install itself, right? Also, I ran Malwarebytes and Windows Defender and both said everything was fine.

    From this I would normally believe I am safe, but I have read everywhere that this thing "automatically installs" itself and that it is a bitch to get rid of. Is it really possible for this thing to dig in if you are running Firefox and didn't choose to download it or run it after downloading?

    Read the article

  • Expire Windows Server users a set time after their first login instead of defining a fixed expiration date

    - by smhnaji
    We want to give some Windows Server users remote access to our server so they can download from a special folder on it. The licenses we give to users are time-based: there should be 1-month, 2-month, ..., 1-year, ... licenses.

    CURRENT SITUATION (WHAT I DON'T WANT): When users are created and added to the OS, a fixed expiration date is set.

    WHAT I WANT: The user's expiration date should be calculated automatically from the first login, because the user might not need the account right when he purchases the license. In other words: today, a one-month license purchased on Jan 1st runs until Feb 1st whether or not the user ever logs in, so he cannot show up on Feb 5th and start using it, because it has already expired. What I want instead is that when he first comes on Feb 5th and begins using it, the license runs until March 5th.

    CLARIFICATION (update after MDMarra's comment): The working environment is Windows Server 2012, and by 'user' I mean native Windows Server users. Whenever a new person purchases a license from me, I create the account manually using the net user command, like this:

        net user ali pass /add /expires:2013-12-25
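
    One way to approximate this with plain local accounts is a small scheduled script that waits for the first recorded logon and only then stamps the expiry, using the same net user /expires: switch as above. Below is a minimal, untested sketch: the 30-day term, the account list, and the C:\licenses bookkeeping folder are all hypothetical, and it assumes the WinNT ADSI provider's LastLogin property is populated once the user has logged on at least once.

        # Sketch: run periodically as a scheduled task on the server.
        $licenseDays = 30                      # hypothetical license length
        $accounts    = @('ali')                # hypothetical licensed account names
        $stateDir    = 'C:\licenses'           # hypothetical bookkeeping folder
        if (-not (Test-Path $stateDir)) { New-Item -ItemType Directory -Path $stateDir | Out-Null }

        foreach ($name in $accounts) {
            $marker = Join-Path $stateDir "$name.expiry"
            if (Test-Path $marker) { continue }              # expiry already stamped for this user

            $user = [ADSI]"WinNT://$env:COMPUTERNAME/$name,user"
            $raw  = $null
            try { $raw = $user.LastLogin.Value } catch { }   # property is absent until the first logon
            if (-not $raw) { continue }                      # no logon recorded yet

            $expires = ([datetime]$raw).AddDays($licenseDays).ToString('yyyy-MM-dd')
            net user $name /expires:$expires                 # same command the accounts were created with
            Set-Content -Path $marker -Value $expires        # remember that this account has been stamped
        }

    Note that /expires: parses the date according to the server's locale settings, so keep whatever format already works in the manual command above.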

    Read the article

  • nginx 301 redirect to subfolder on primary domain

    - by 187j3x1
    Sorry for my poor English. I just set up WordPress on my VPS; so far it's the only thing on the site. For SEO reasons I think it is better to redirect the whole primary domain to the blog folder. The primary domain is example.com and WordPress lives at example.com/blog, so what I want is to rewrite both www.example.com and example.com to example.com/blog. I googled some scripts, made some changes, and pasted this into the nginx config file:

        #301 redirect www to non-www
        server {
            server_name www.example.com;
            location = / {
                rewrite ^/(.*) http://example.com/$1 permanent;
            }
        }

        #301 non-www to subfolder
        server {
            server_name example.com;
            location = / {
                rewrite ^/(.*) http://example.com/blog$1 permanent;
            }
        }

    It works to some degree: requests are successfully redirected to example.com/blog, but then I get a 404 Not Found error. If I only make nginx redirect www to example.com/blog, I can access the blog page fine. So I know there is something wrong in the non-www-to-subfolder block, but I don't know how to fix it :(

    Read the article

  • File Server Resource Manager attempting to access quota.xml on System Reserved partition?

    - by pmellett
    I've got a new install of Server 2008 R2 that is designed to be our quota server for user home directories and shared areas. I installed FSRM and set up a few quotas to try out. They worked fine, but at some point over the weekend the FSRM console stopped loading the quota screen and now gives the following error, with Event ID 8228:

        File Server Resource Manager was unable to access the following file or volume:
        '\\?\Volume{73649de6-7f04-11e1-a344-005056b10310}\System Volume Information\SRM\quota.xml'.
        This file or volume might be locked by another application right now, or you might need
        to give Local System access to it.

    I have removed and reinstalled the FSRM role service, cleared the \System Volume Information\SRM folder on each volume, and am on the verge of just starting again. I'd rather not, since then I would have to set up all my NTFS permissions again.

    Since it looks like the service is trying to access the System Reserved partition, which I assume won't have any files it could possibly need, how do I remove the System Reserved partition as a volume to be monitored by the quota service? (I am not aware of having configured it that way originally, though!)

    Read the article

  • Verifying the legitimacy of tasks in Task Scheduler

    - by Eyad
    Is there a way to know the source and legitimacy of the tasks in Task Scheduler on Windows Server 2008 and 2003? Can I check whether a task was added by Microsoft (i.e. from SCCM) or by a third-party application? For each task in Task Scheduler, I want to verify that it was not created by a third-party application: I only want to allow standard Microsoft tasks and disable all other, non-standard tasks.

    I have created a PowerShell script that goes through all the XML files in the C:\Windows\System32\Tasks directory, and I am able to read all the task XML files successfully, but I am stuck on how to validate the tasks. Here is the script for your reference:

        Function TaskSniper() {
            # Getting all the files in the Tasks folder
            $files = Get-ChildItem "C:\Windows\System32\Tasks" -Recurse | Where-Object {!$_.PSIsContainer};

            [Xml] $StandardXmlFile = Get-Content "Edit Me";

            foreach($file in $files) {
                # constructing the file path
                $path = $file.DirectoryName + "\" + $file.Name

                # reading the file as an XML doc
                [Xml] $xmlFile = Get-Content $path

                # DS SEE: http://social.technet.microsoft.com/Forums/en-US/w7itprogeneral/thread/caa8422f-6397-4510-ba6e-e28f2d2ee0d2/
                # (get-authenticodesignature C:\Windows\System32\appidpolicyconverter.exe).status -eq "valid"

                # Display something
                $xmlFile.Task.Settings.Hidden
            }
        }

    Thank you
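
    Building on the Get-AuthenticodeSignature idea already referenced in the script's comments, one possible validation step is to pull the executable each task actually launches (the Actions/Exec/Command element of the task XML) and check whether that binary carries a valid signature. A rough sketch, meant as a starting point rather than a complete whitelisting solution:

        foreach ($file in $files) {
            [Xml] $xmlFile = Get-Content $file.FullName

            # The scheduled-task schema stores the launched binary under Actions/Exec/Command
            $command = $xmlFile.Task.Actions.Exec.Command
            if (-not $command) { continue }                            # e.g. COM-handler tasks have no Exec action

            # Strip surrounding quotes and expand environment variables such as %windir%
            $exe = [Environment]::ExpandEnvironmentVariables($command.Trim('"'))
            if (-not (Test-Path $exe)) { continue }

            $sig = Get-AuthenticodeSignature -FilePath $exe
            New-Object PSObject -Property @{
                Task      = $file.FullName
                Command   = $exe
                SigStatus = $sig.Status
                Signer    = $sig.SignerCertificate.Subject
            }
        }

    Note this only tells you whether the target binary is signed and by whom; a third-party product can still schedule a signed Microsoft executable, so treat it as one signal rather than proof of origin.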

    Read the article

  • Django apache + mod_wsgi with virtualenv

    - by ArgsKwargs
    I have some questions about running multiple Django sites on a VPS. The server uses OpenPanel to automatically create VirtualHosts within apache2. My ideal setup is multiple virtualenvs with different dependencies installed, so the Python dist-packages directory isn't contaminated across Django sites. For example:

        /home/user/virtualenv1
        /home/user/virtualenv2

    My Django applications reside under /var/www, for example:

        /var/www/djangosite1
        /var/www/djangosite2

    Reading the OpenPanel docs, the best approach seemed to be to create a django.conf file inside the mydomain.com.inc folder, which looks something like this:

        /etc/apache2/openpanel.d/mydomain.com.inc/django.conf

        DocumentRoot /var/www/djangosite1/project
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
        <Directory /var/www/djangosite1/project>
            Order allow,deny
            Allow from all
        </Directory>
        Alias /static /var/www/djangosite1/project/static-root

    My problem is that this setup seems unable to find the virtualenv's site-packages, so none of the dependencies available in the given virtualenv are recognized. Also, commenting out this line doesn't seem to break or change anything:

        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages

    For example:

        > service apache2 start
        ImportError: No module named South

    When I install South outside the virtualenv, everything works.

    Read the article

  • What's the best / easiest way to combine two mailboxes on Exchange 2007?

    - by jmassey
    I've found this and this (2) (sorry, the maximum hyperlink limit for new users is 1, apparently), but they both seem targeted at much more complex cases than what I'm trying to do, and I just want to make sure I'm not missing a better approach.

    Here's the scenario: 'Alice' has retired. 'Bob' has taken over Alice's position. Bob was already with the organization in a different but related position, so he already has his own Exchange account with mail, calendars, etc. that he needs to keep. I need to get all of Alice's old mail, calendar entries, and so on merged into Bob's existing items. Ideally, I don't want all of Alice's stuff in a separate 'recovery' folder that Bob would have to switch back and forth to when looking at older items; I want it all merged straight into Bob's current Inbox and Calendar.

    I'm assuming (read: hoping) that there's a better way to do this than fiddling with permissions and exporting to and then importing from a .pst. The Office version is 2007 for everybody who uses Exchange, if that helps. Exchange is version 8.1.

    What (preferably step by step - I'm new to Exchange) is the best way to do this? I can't imagine this is an uncommon scenario, but my google-fu has failed me; everything on the subject seems geared towards far more complex scenarios.

    (2): http://technet.microsoft.com/en-us/library/bb201751%28EXCHG.80%29.aspx
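
    For what it's worth, Exchange 2007 does ship a cmdlet for moving one mailbox's contents into another: Export-Mailbox. The line below is an untested sketch with made-up mailbox names; it has to be run from the Exchange Management Shell with the appropriate export permissions, and note that Export-Mailbox requires a target folder, so Alice's items land under a folder inside Bob's mailbox rather than being interleaved directly into his Inbox and Calendar (they can be moved into place afterwards):

        # Hypothetical identities; -TargetFolder is required for mailbox-to-mailbox exports
        Export-Mailbox -Identity "Alice" -TargetMailbox "Bob" -TargetFolder "Alice (old)"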

    Read the article

  • Hadoop is not able to find JAVA_HOME properly

    - by Shekhar
    I am trying to run Hadoop on my Ubuntu OS. I have set the JAVA_HOME variable in my ~/.bashrc file to /usr/lib/jvm/jdk1.7.0_01/, but when I run the hadoop namenode -format command it fails with the following errors:

        shekhar@ubuntu:/usr$ hadoop namenode -format
        Warning: $HADOOP_HOME is deprecated.

        /host/Shekhar/Softwares/hadoop-1.0.0/bin/hadoop: line 321: /usr/jdk1.7.0_01/bin/java: No such file or directory
        /host/Shekhar/Softwares/hadoop-1.0.0/bin/hadoop: line 387: /usr/jdk1.7.0_01/bin/java: No such file or directory

    Hadoop tries to locate the java command at the /usr/jdk1.7.0_01/bin/ path; clearly it has somehow dropped the /lib/jvm part, and I am not able to understand why or how this is happening. My echo $PATH command gives the following output:

        shekhar@ubuntu:/usr$ echo $PATH
        /usr/lib/jvm/jdk1.7.0_01/bin:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/lib/jvm/jdk1.7.0_01/bin:/host/Shekhar/Softwares/hadoop-1.0.0/bin

    If I run the which java command I get:

        shekhar@ubuntu:/usr$ which java
        /usr/lib/jvm/jdk1.7.0_01/bin/java

    and echo $JAVA_HOME returns:

        shekhar@ubuntu:/usr$ echo $JAVA_HOME
        /usr/lib/jvm/jdk1.7.0_01

    I would like to know why Hadoop is picking up the JAVA_HOME path incorrectly. Please help...

    Read the article

  • MacBook Pro with OS X 10.6.3 (Snow Leopard): Wi-Fi network connection breaks after a few minutes

    - by Yanick Landry
    I have a MacBook Pro with OS X 10.6.3 (Snow Leopard). After connecting to a Wi-Fi network, the connection "breaks" after a few minutes. What I mean by "breaking" is that no request gets through (everything times out), whether it is loading a web page, connecting to a shared folder, connecting to my local router at 192.168.0.1, or pinging anything. When the connection is in this broken state, the Network Settings panel still shows an active IP, which I can successfully ping.

    I have this problem at home with a D-Link DI-624 router and at work with a D-Link WBR-2310, both with updated firmware. I thought DHCP was the issue, so I tried assigning a fixed IP address (192.168.0.166). It connects successfully, but after a few minutes the connection still breaks. The workaround I'm currently using is to disable the AirPort (from the network icon in the menu bar), wait a few seconds, and re-enable it. It then quickly reconnects, but the connection still breaks again after a few minutes.

    I tried Googling my problem, but I don't think I can find the right keywords! It's my first question here, so sorry if I don't respect some rules.

    Read the article

  • What is causing occasional white windows on my Mac?

    - by user63333
    Hello. I'm having a very strange problem with my Mac lately. When I'm working in an app and a new window, pane, or sheet is displayed, sometimes it comes up completely white. Once an app starts having this problem, it will keep bringing up a blank screen for that particular window (although other windows work fine). After the app is relaunched, the window is fine again. What's very strange is that although the interface turns completely white, its functions are still available, so I have to "navigate blindly" around the interface until I can relaunch. This occurs throughout the operating system.

    Screenshots (descriptions only, since new users are not allowed to post images - just imagine blank white interface elements):

    - What happened when I tried opening the File menu in the Lightroom app.
    - What happened on Lynda.com (in Firefox) after selecting the "Software..." dropdown. (All other dropdowns were fine; reloading the page fixed it.)
    - When I was decompressing a file, The Unarchiver launched and opened a white window. It still decompressed the file.
    - What happened one time when I opened Finder (with TotalFinder) to my Downloads folder.

    This is something I've never seen before, and it just started happening lately. What could be the problem? Thanks for your help.

    Read the article

  • SSL Connection Error

    - by toffee.beanns
    I have purchased a Comodo SSL cert and have submitted the Certificate Signing Request (CSR) generated by my server to the SSL management site. It returned three files:

        AddTrustExternalCARoot.crt
        PositiveSSLCA2.crt
        www_mydomainname_com.crt

    I have uploaded them to my /etc/ssl/ssl-certs folder, updated my virtual host in sites-available, and restarted accordingly:

        NameVirtualHost 107.167.120.195:80  #sample ip address
        NameVirtualHost 107.167.120.195:443 #sample ip address

        ......... #normal http virtual host (working well)

        <VirtualHost 107.167.120.195:443>
            ServerAdmin [email protected]
            ServerName mydomainname.com
            ServerAlias www.mydomainname.com
            DocumentRoot /var/www/mydomainname
            SSLEngine on
            SSLCertificateFile /etc/ssl/ssl-certs/www_mydomainname.com.crt
            SSLCertificateKeyFile /etc/ssl/ssl-certs/server.key
            SSLCertificateChainFile /etc/ssl/ssl-certs/PositiveSSLCA2.crt
        </VirtualHost>

    I have also run 'a2enmod ssl' and it's enabled. This is the error I get when I access the page over HTTPS in Chrome:

        SSL connection error
        Error code: ERR_SSL_PROTOCOL_ERROR
        Unable to make a secure connection to the server. This may be a problem with the server,
        or it may be requiring a client authentication certificate that you don't have.

    I have also checked my Apache log files, and there is an error saying that the Common Name (CN) does not match the server name:

        RSA server certificate CommonName (CN) `www.mydomainname.com' does NOT match server name!?

    as well as:

        Invalid method in request \x16\x03\x01

    What should I do?

    Read the article

  • Updated my WAMP server and MySQL is eating up 580 MB of memory

    - by Jon
    I updated my dev box's WampServer, and along with updating PHP and Apache, MySQL was updated to 5.6.12. After doing that, I copied the data folder from my old (5.1.36) install to the new one, and now MySQL takes up 580 MB of memory, which is way too much, since I'm the only person using it (locally) and there are only 20 or so databases on it, none of which have MEMORY tables. How can I get this down to a decent amount?

    My my.ini:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    Database info:

        Storage Engine   Data Size   Index Size   Total Size
        InnoDB           48.00 KB    0.00 B       48.00 KB
        MEMORY           0.00 B      0.00 B       0.00 B
        MyISAM           163.64 MB   122.49 MB    286.13 MB
        Total            163.69 MB   122.49 MB    286.18 MB

    Read the article

  • USB Virus, "Program too big to fit in memory"?

    - by ApprenticeHacker
    I got an installer for a piece of software from a friend via USB. What was weird was that although my friend and I have the same OS and the same brand of laptop (mine a newer model, though I don't think that quite matters), the installer ran properly for him but not for me. It just showed a command-line window and exited. I ran it via a batch file in which I included "pause" after the command, which is where the "Program too big to fit in memory" message in the title appeared.

    Later my friend called me and told me that the USB stick was the problem. It had some kind of virus on it that corrupted every executable and folder on it, and renamed all the files inside the folders to some weird jargon. He tried another USB stick and the installer worked fine. The friend has now (unfortunately) gone back to his city and I can't get the installer from him again.

    My questions are: Is there any way to repair the installer executable and run it? And do you think the virus has infected my PC? (I have run a system scan with my antivirus and it showed nothing, but I'm still worried.)

    Read the article

  • Missing access log for virtual host on Plesk

    - by Cummander Checkov
    For some reason I don't understand, after creating a new virtual host / domain in Plesk a few months back, I cannot seem to find its access log. I noticed this when running /usr/local/psa/admin/sbin/statistics. The host in question is being scanned:

        Main HTML page is 'awstats.<hostname_masked>-http.html'.
        Create/Update database for config "/opt/psa/etc/awstats/awstats.<hostname_masked>.com-https.conf" by AWStats version 6.95 (build 1.943)
        From data in log file "-"...
        Phase 1 : First bypass old records, searching new record...
        Searching new records from beginning of log file...
        Jumped lines in file: 0
        Parsed lines in file: 0
        Found 0 dropped records,
        Found 0 corrupted records,
        Found 0 old records,
        Found 0 new qualified records.

    So basically no access logs have been parsed or found. I then went on to check whether I could find the log myself. I looked in /var/www/vhosts/<hostname_masked>.com/statistics/logs, but all I find there is error_log.

    Does anybody know what is wrong here and perhaps how I could fix it? Note: in the <hostname_masked>.com/conf/ folder I keep a custom vhost.conf file, which however contains only some rewrite conditions plus a directory statement with php_admin_flag and php_admin_value settings. None of them are related to logging, though.

    Read the article

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force Windows to create profiles for members of one Active Directory group in a different folder from members of another Active Directory group?

    The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive, so that every time you restart the machine the disk is identical to the state it was in when you froze it. This is a bit different from restoring an image, in that changes are never really written to disk permanently in the first place. It has a few advantages over images: faster recovery times, and it's easy to thaw the machine for a few minutes to perform maintenance such as Windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots.

    One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. Since students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as the public computers. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers.

    The solution I would like to use is to configure Windows via Group Policy (or even manually, if that's the way I'll have to do it) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

    Read the article

  • Why can't I see all of the client certificates available when I visit my web site locally on Windows 7 IIS 7?

    - by Jay
    My team has recently moved to Windows 7 for our developer machines, and we are attempting to configure IIS for application testing. Our application requires SSL and client certificates in order to authenticate.

    What I've done: I have configured IIS to require SSL and to require (I also tried "accept") certificates under SSL Settings. I have created the https binding and set it to the proper server certificate. I've installed all the root and intermediate chain certificates for the soft certificates properly in the Current User and Local Machine stores.

    The problem: When I browse to the web site, the SSL connection is established and I am prompted to choose a certificate. The issue is that the only certificate offered is one created by my company that would be invalid for use in the application. I am not given the soft certificates that I have installed using MMC and IE. We are able to use the soft certs from our development machines against the Windows 2008 servers that host the application.

    What I did: I have attempted to copy the root CA into every folder location, for both the Current User and Local Machine stores, that the company certificate's root is in.

    My questions: Could I be mishandling the certs anywhere else? Could there be a local or group policy that is blocking the other certs from use? What (if anything) should be done differently on Windows 7 compared to 2008 with regards to IIS? Thanks for your help.
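
    One thing worth checking, offered as a hedged guess rather than a confirmed diagnosis: during the TLS handshake IIS sends a list of trusted issuers, and the Windows certificate-selection prompt only shows client certificates that chain to a CA in that list. If the soft certificates' CA is being filtered out on the Windows 7 machine, a common workaround is to stop sending the issuer list via the SCHANNEL registry settings, for example:

        # Run in an elevated PowerShell prompt on the IIS machine, then reboot for SCHANNEL changes to apply.
        # SendTrustedIssuerList = 0 tells Schannel not to send the CA list, so the browser offers all client certs.
        $key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL'
        New-ItemProperty -Path $key -Name 'SendTrustedIssuerList' -Value 0 -PropertyType DWord -Force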

    Read the article

  • batch-copy files with an error log for missing permissions

    - by sc911
    Hi *,

    I'm searching for a tool to batch-copy files that supports the following:

    - copy files from a network share
    - report any errors
    - show errors only, or filter the log down to errors
    - don't stop on an error
    - also report when a file or folder could not be copied due to missing permissions
    - if possible, a queue where new jobs can be added while a copy is running

    I tried the following tools:

    - TeraCopy: takes a long time just to calculate the duration and size of the job, and does not report errors due to missing permissions (it doesn't even add those files to the copy queue).
    - Karen's Replicator: does not report errors due to missing permissions.
    - xcopy: does a great job when using the right parameters and piping the output to a file (in the German localization, xcopy /k /r /e /i /s /c /h SOURCE TARGET > LOGFILE 2>&1 will do the job; opening the logfile in IE gives you a great monitor). But queuing jobs is not possible (OK, you can join them all in a batch file, but you cannot queue jobs while another one is running - hm, now thinking of a batch script that loops through a file with the source/target config...).

    To be continued. Which tools do you use? Tell me!

    Thx
    sc911
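
    Since the question is really about copy tooling, it may be worth noting that robocopy (built into Vista/Server 2008 and later, and available for XP/2003 from the resource kit) covers most of these points: it keeps going after errors, logs them, and records access-denied failures. A hedged sketch of the kind of invocation meant here (the paths and log name are made up), plus a quick way to filter the log down to errors afterwards from PowerShell:

        # Copy a share recursively; retries disabled so permission errors are logged and skipped quickly.
        robocopy \\server\share D:\backup\share /E /R:0 /W:0 /NP /LOG:C:\logs\copy.log /TEE

        # Show only the error lines from the log (on a German system the keyword may be FEHLER instead)
        Select-String -Path C:\logs\copy.log -Pattern 'ERROR'

    Queuing jobs while another copy runs is the one item robocopy does not cover by itself; that still needs a wrapper script or a third-party front end.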

    Read the article

  • XAMPP can't start Apache on Windows XP

    - by jribeiro
    I formatted my computer a week ago and installed WAMP on Windows XP, along with Git and everything else I needed. When I ran Git for the first time it told me it had a problem with my user folder (because of the accents in the name), so I created a new user and migrated everything to the new one. After this WAMP wouldn't start anymore, so I uninstalled it and installed XAMPP, which is what I'm using now.

    My problem is that even though I asked XAMPP to install Apache as a service, it isn't installed: it doesn't show up in the Windows Services screen. The XAMPP control panel shows the MySQL service running fine. When I click to install Apache as a service, no error is returned. When I click to start Apache, no error is output either, and there are no files and no errors under c:/xampp/apache/log/. If I restart the computer, it says that the Apache service is not installed. I tried reinstalling WAMP and the same problem occurs. If I run netstat -a -no, nothing is running on port 80! What can I do?
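
    When the control panel stays silent like this, running Apache's own binary by hand usually surfaces the real error (for example a config path still pointing at the old user profile). A hedged sketch, assuming the default XAMPP layout mentioned above and run from a PowerShell prompt (the equivalent commands work from cmd as well); the exact service name Apache registers can vary by version:

        # Validate the configuration; syntax errors print here even when the control panel shows nothing
        & 'C:\xampp\apache\bin\httpd.exe' -t

        # Register Apache as a Windows service manually, then check that the service now exists
        & 'C:\xampp\apache\bin\httpd.exe' -k install
        Get-Service -Name 'Apache*'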

    Read the article

  • grep --color=auto with -i option disables the matching text color, why?

    - by emptyset
    I was messing around with grep and put this in my .zshenv:

        export GREP_OPTIONS="--color=auto"
        export GREP_COLORS='mt=1;34'

    I was bonking my head on the keyboard and changing GREP_COLORS around for a minute, trying to figure out why the folder colors were working but the matching text wasn't. I was doing this:

        $ grep -R -n -i -e "functionFoo\(" --include=*.cs --exclude-dir=Logs *

    The line numbers and file names were shown in the default colors, but the matching text wasn't colored. After spending way too much time on it, I thought to do this:

        $ grep -R -n -e "functionFoo\(" --include=*.cs --exclude-dir=Logs *

    (I removed the -i option.) That's all it took to get the matching text to correctly show up in bold blue.

    This is a Cygwin-on-Vista setup, with rxvt running zsh. Any idea why grep's colors would break when a case-insensitive match is specified?

    Update: Under Cygwin 1.7 it's a little bit better - case-insensitive search works correctly, but it only highlights the word that matches the expression's case exactly. In other words, "FunctionFoo" highlights "FunctionFoo" but not "functionFoo", and vice versa. Probably a grep issue, so I'll be submitting it to that list.

    Read the article

  • Windows XP / Outlook 2003 error messages

    - by AboutDev
    Can anyone help with this issue? I am trying to help someone and could use some expertise.

    Error message #1 (Microsoft Office Small Business Edition 2003, with a CD icon):

        "The feature you are trying to use is on a CD-ROM or other removable disk that is not available.
        Insert the 'Microsoft Office Small Business Edition 2003' disk and click OK.
        Use source: Microsoft Office Small Business Edition 2003"

    This first appeared after the CD was inserted to recover the partial file STDP11N. The file was recovered, but the pop-up still appears every time Outlook opens. Some old programs had accidentally been cleaned up and suddenly this was missing, so Microsoft Office Small Business Edition 2003 was reinstalled from the install CD. Outlook works, but the error message still pops up each time Outlook is opened. Hitting OK leads to:

    Error message #2:

        "The path 'Microsoft Office Small Business Edition 2003' cannot be found. Verify that you have
        access to this location and try again, or try to find the installation package 'STDP11N.MSI'
        in a folder from which you can install the product Microsoft Office Small Business Edition 2003."

    Hitting OK goes back to error message #1; hitting close brings up:

    Error message #3:

        "Error 1706. Setup cannot find the required files. Check your connection to the network, or
        CD-ROM drive. For other potential solutions to this problem, see
        C:\Program Files\Microsoft Office\OFFICE11\1033\SETUP.CHM"

    Error message #4 (a data file had been created on the D: drive, an external drive):

        "The path specified for the file D:...etc.. .pst is not valid."

    Hitting OK brings up a window to browse in My Documents.

    Read the article

  • Changing the PATH environment variable for all users (Ubuntu)

    - by Wally Glutton
    I recently compiled Ruby Enterprise Edition (REE) on an Ubuntu 8.04 server. I would like to update my PATH to ensure this new version of Ruby (found in /opt/ruby_ee/bin) supersedes the older version in /usr/local/bin. (I still want the old version around, though.) I would like these PATH changes to affect all users and crontabs.

    Attempted solution #1: The REE documentation recommends placing the REE bin folder at the beginning of the global PATH in /etc/environment. I altered the PATH in this file to read:

        PATH="/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

    This did not affect my PATH at all.

    Attempted solution #2: Next I followed these instructions and updated the PATH setting in /etc/login.defs and /etc/crontab. (I did not change /etc/sudoers.) This didn't affect my PATH either, even after logging out and rebooting the server.

    Other information: I seem to be having the same problem described here. I'm testing using the commands "echo $PATH" and "ruby -v". My shell is bash. My .bashrc doesn't override my PATH. Yes, I have heard of the Ruby Version Manager project. ;)

    Read the article

  • How to set up a virtual machine on an Ubuntu desktop to run a Debian server

    - by stickman
    I want to run a virtual machine on my Ubuntu desktop that runs a Debian server. The purpose of this is to generate Debian packages. I have some C++ applications that were originally developed on my Ubuntu machine, and I need to (re)compile them on a Debian server in order to:

    - build .deb packages for deployment on a Debian server
    - make sure that the applications will definitely work on a Debian server

    The idea is that I can do 90% of my development on Ubuntu (where I am more comfortable) and deploy a binary package that definitely works on Debian. BTW, I am developing on Karmic Koala (Ubuntu 9.10).

    [Edit] Following the advice I got so far, I have installed debootstrap and Debian 'Lenny' in /srv/chroot/debian_lenny on my machine. I am not sure whether this is the server version, but in any case I don't think that matters for my purposes (though it would be useful to know how to specifically install the server version). At the moment, though, I am like a fish out of water, since there is no GUI and only a console inside the chroot jail. I had a look in the home folder (I cheated, by using the KNavigator in Ubuntu) and there are no folders there - which presumably means that no users have been set up yet in the Debian "system".

    I would like to know how to do the following:

    - download and install the dev tools needed for (re)compiling my C++ apps
    - copy my projects from the Ubuntu "system" to the Debian "system"
    - after building the binaries, create a Debian binary package containing all of my binaries, so that I can install the package on a Debian server (my remote server)

    Read the article

  • Process PHP files from a network share in a VMware virtual machine

    - by nhinkle
    As a testing environment, I have set up a VMware virtual machine running Windows Server 2008 R2, with Apache and PHP installed (as part of the XAMPP package). I am doing the development outside of the VM, so I want Apache to serve PHP files from a VM shared folder (which appears as a network share inside the VM). I have done this by creating an NTFS symbolic link in Apache's htdocs directory. I can access this directory from the browser, and plain-text files are served fine. However, PHP fails to process files, instead returning the following error:

        Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0
        Fatal error: Unknown: Failed opening required 'C:/xampplite/htdocs/path/to/file.php' (include_path='.;C:\xampplite\php\PEAR') in Unknown on line 0

    It appears to be a permissions issue: PHP doesn't seem to be allowed to read the file in order to process it, even though Apache has no problem opening files in the directory. I cannot figure out how to give PHP the necessary permissions to process the file.

    Does anybody know of a way to make this work, or else another solution for getting the files into the VM automatically while I develop on the host machine?
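
    A frequently suggested workaround for PHP-on-a-network-share problems, offered here only as a hedged guess: by default the Apache service runs as LocalSystem, which has no network identity, and paths resolved through the symlink back to the VMware share can fail for the PHP process because of that. If that turns out to be the cause, one sketch is to run the Apache service under a real user account that can reach the share (the service name and account below are made up - check services.msc for the actual service name):

        # Point the Apache service at an account that has rights on the VM shared folder, then restart it.
        sc.exe config "Apache2.2" obj= ".\apacheuser" password= "S0mePassw0rd"
        Restart-Service -Name "Apache2.2"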

    Read the article

  • Mavericks permission issues with Windows Server deduplicated shares

    - by dmohlmaster
    We have a number of 10.9-10.9.3 (Mavericks) machines installed throughout our facility. Much of the user content is pulled from shares stored on our Windows Server 2012 file servers, which have deduplication enabled. I have found that files that are newly written or not yet optimized can be accessed without issue - read, written, modified, etc. Once a file gets optimized/deduplicated and Windows adds the P and L attributes (sparse and symlink), the Macs running Mavericks begin to have access issues.

    Once the files get deduplicated, users begin receiving read-access errors when copying files (see error 1 below). This happens when copying to folders within the current folder tree or copying somewhere on the local system. If you stop the copy operation and retry a few more times, it may eventually work for that specific instance but fail again later. I am, however, able to copy these files without issue via the Terminal. Other systems running 10.7 do not experience the same issues and are able to access file server resources without problems. Many of the systems having issues are newer and thus cannot be downgraded to 10.8 or 10.7. I have tried Finder replacements such as Path Finder, but the results are the same.

    I know this is at least similar to issues many Mac users are already experiencing and posting about, but I haven't seen it directly linked to deduplication and the attributes written by Windows Server. Has anyone seen this issue? Have any solutions been found?

    Error 1, when copying files after the P/L attributes have been set by deduplication:

        "One or more items can't be copied to "Folder" because you don't have permissions to read them."

    Via system.log, I am also seeing the following error when accessing these deduplicated file shares (the reparse point tag listed below is IO_REPARSE_TAG_DEDUP):

        smbfs_nget: filename.ext - unknown reparse point tag 0x80000013

    Read the article

  • poor performance when deleting many files

    - by choppy
    I've got two machines. The first is an IBM blade with 24 cores, 96 GB RAM, and a single local hard drive of 278 GB divided into 4 partitions, formatted with GPT:

        1. c: - 40GB; 3GB free
        2. d: - 40GB; 37GB free
        3. e: - 198322GB; 198.1 free
        4. 100MB (EFI system partition)

    The other is a pizza-box server with 4 cores, 8 GB RAM, and a single local hard drive of 273 GB divided into 3 partitions, formatted with MBR:

        1. c: - 136.81; 20GB free
        2. d: - 88.74GB; 87.91 free
        3. e: - 47.85GB; 46.91 free

    I have two scripts: the first creates 20,000 files in one directory, each file 192 KB in size; the second deletes the folder (recursively) and prints how long it took to delete all the files. The problem is that on the first server (the blade) it takes about 2 minutes to delete all 20,000 files, while on the second (the pizza box) it takes about 4 seconds!? Both servers run a clean Windows Server 2008 R2 with no special applications running in the background. Any ideas what is going on?
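
    For anyone who wants to reproduce the comparison, here is a rough PowerShell equivalent of the two scripts described above (the folder name is made up, and the original scripts may differ in detail):

        # Create 20,000 files of 192 KB each in a scratch folder
        $dir = 'D:\deltest'                              # hypothetical test location
        New-Item -ItemType Directory -Path $dir -Force | Out-Null
        $buffer = New-Object byte[] (192KB)
        1..20000 | ForEach-Object {
            [System.IO.File]::WriteAllBytes("$dir\file$_.dat", $buffer)
        }

        # Delete the folder recursively and print how long it took
        $elapsed = Measure-Command { Remove-Item -Path $dir -Recurse -Force }
        "Deleted in {0:N1} seconds" -f $elapsed.TotalSeconds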

    Read the article
