Search Results

Search found 14231 results on 570 pages for 'folder redirection'.


  • NTUSER.DAT and UsrClass.dat files building up by the thousands, why and can I delete?

    - by Anthony
    I've noticed that my web server, a 2008 Xen VM, is gradually losing free space - more than I would have thought from normal use - so I decided to investigate. There are two problem areas:

    C:\Users\Administrator\ (6,755.0 MB), with files:
    - NTUSER.DAT{randomness}.TMContainer'0000 randomness'.regtrans-ms
    - NTUSER.DAT{randomness}.TM.blf

    and C:\Users\Administrator\AppData\Local\Microsoft\Windows\ (6,743.8 MB), with files:
    - UsrClass.dat{randomness}.TMContainer'0000 randomness'.regtrans-ms
    - UsrClass.dat{randomness}.TM.blf

    From what I understand these are point-in-time backups of registry changes. If that is the case, I cannot possibly understand why there would be 10,000+ changes. (That's how many files there are per folder location - over 20,000 per folder in total.) The files are using almost 15 GB of space and I want rid of them; I'm just wondering whether I can remove them. However, I need to understand why they are being created so I can avoid this in the future. Any ideas why there would be so many? Is there a way I can check what is making the modifications? Are they created by login attempts? Are they created by everyday web server use? And so on.

    Read the article

  • Windows 2008, IIS7 and virtual directories

    - by Thomas
    I created a virtual directory called test (C:\test) under the Default Web Site and added two simple test files (one HTML and one ASPX). I thought I had to add IUSR and NetworkService (for application pools) to C:\test and grant those users appropriate rights in order for IIS7 to serve the content. It appears that is not the case at all: I can view any files in the virtual directory (even if I convert it to an application) without changing or adding any security settings on the C:\test folder. I just installed IIS7 with ASP.NET on Windows 2008 without changing any settings besides adding the virtual directory. Am I missing something? Even my book on IIS7 states that the user accounts should be added and appropriate rights granted.

    I added the following to answer the comments: I am referencing the file using a public IP, http://xxx.xxx.xxx.xxx/test/one.html, and neither the IP nor localhost is in my trusted sites. I am not signed in on the server at all, as I am accessing the content from my home machine and the content is on my production server. The following users/groups have access to C:\test on the server (Creator Owner, System, Administrators, Users) and the app pool is running under the default NetworkService account. I basically installed Win2008, added the IIS role with ASP.NET, then opened IIS7, added a virtual directory and copied two files to the directory to test. It works, which is great, but I want to understand why it works. How is it that IIS7 can access files in the C:\test folder without any permissions set?
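
    A quick, hedged way to answer the "why does it work" question (my suggestion, not part of the post) is to dump the ACL and see which of the groups listed above is actually granting read access - inherited defaults often include BUILTIN\Users with read/execute, which can be enough for the app pool and anonymous identities:

        icacls C:\test
        icacls C:\test\one.html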

    Read the article

  • How do you install/configure JBoss on Linux/Unix?

    - by mafro
    I'm currently working out how to install and configure multiple (30+) JBoss EAP 5 configurations (both standalone and clusters) for development, test and production at a client's site (running SuSE). I'm not too keen on the JBoss way of storing application/configuration together with system files, so I have tried to split things up (i.e. moving server config out of the JBoss installation directory). I would also like to minimize the amount of configuration needed when upgrading/patching JBoss - but I'm not done thinking about that... It would be great to hear how you've done it and what you think about my approach. This is what my installations look like (for the moment):

    Standard JBoss EAP install (minus server configs):
        /opt/jboss/jboss-eap-5.0/jboss-as
        /opt/jboss/jboss-eap-5.0/jboss-as/bin/
        /opt/jboss/jboss-eap-5.0/jboss-as/lib/
        /opt/jboss/jboss-eap-5.0/jboss-as/server/    [server configs removed to avoid starting them by mistake]
        /opt/jboss/jboss-eap-5.0/jboss-as/.../

    Application (some JBoss folders have been omitted - you'll get the point anyway):
        /app/<project>/                              [$app.dir - application specific base folder]
        /app/<project>/jboss/                        [$jboss.home]
        /app/<project>/jboss/bin/      -> /opt/jboss/jboss-eap-5.0/jboss-as/bin
        /app/<project>/jboss/lib/      -> /opt/jboss/jboss-eap-5.0/jboss-as/lib
        /app/<project>/jboss/server/<cfg>/           [project specific config based on 'production']
        /app/<project>/jboss/server/<cfg>/log/  -> /log/<project>/<cfg>
        /app/<project>/jboss/server/<cfg>/...
        /app/<project>/jboss/.../      -> /opt/jboss/jboss-eap-5.0/jboss-as/.../
        /app/<project>/bin/                          [application specific scripts for start/stop etc - wraps JBoss supplied scripts]
        /app/<project>/deploy/                       [application deploy folder]
        /app/<project>/etc/                          [application specific config]

    Questions:
    - How do you install JBoss (on Linux/Unix systems)?
    - Where do you put JBoss and what modifications do you do?
    - Where do you put your applications and application specific files?
    - Do you share JBoss instances between applications or run one instance/cluster per application?
    - How do you manage configuration changes (i.e. your modifications of the JBoss standard config)?
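
    For what it's worth, the split layout above can be scripted. This is only a rough sketch of how the symlinks and the copied server config might be created - the project name, config name and the list of linked folders are mine, not the poster's:

        #!/bin/sh
        # sketch only: link the shared JBoss distribution into the per-project tree and
        # seed the project config from the stock 'production' configuration
        JBOSS_DIST=/opt/jboss/jboss-eap-5.0/jboss-as
        APP=/app/myproject            # hypothetical project
        CFG=mycfg                     # hypothetical config name

        mkdir -p "$APP/jboss/server" "$APP/bin" "$APP/deploy" "$APP/etc"
        for d in bin lib client common docs; do
            ln -s "$JBOSS_DIST/$d" "$APP/jboss/$d"
        done
        # assumes a stock 'production' config is still available to copy from
        cp -a "$JBOSS_DIST/server/production" "$APP/jboss/server/$CFG"
        rm -rf "$APP/jboss/server/$CFG/log"
        ln -s "/log/myproject/$CFG" "$APP/jboss/server/$CFG/log"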

    Read the article

  • Plesk SSL Certificate (Default cert when SSL enabled, CORRECT cert when SSL is disabled)

    - by hztetra
    I'm running Plesk 8.6.0. I have an SSL cert installed through Plesk's admin interface, but I have a bit of an issue: when I enable SSL for the site, select my cert and restart httpd, Plesk defaults to using my self-signed default certificate. Conversely, when I disable SSL support for the domain, all of a sudden Plesk is using my new SSL certificate. Unfortunately, when I try to view any folder on the site (mydomain.tld/folder) I'm simply met with a 404 (with files placed both in httpdocs and httpsdocs). I switch SSL support back on, Plesk defaults back to the self-signed cert, and I can then view the folders that were not previously accessible. Any ideas?

    One further note: I tried following http://kb.parallels.com/en/939 . Once I tried to restart httpd with the edited ssl.conf file, I received an "httpd could not start" error. I restored the original ssl.conf file and still received the "could not start" error, so as of now I am running without an ssl.conf file. The following is the error I receive when I attempt to reintroduce ssl.conf:

        Starting httpd: [Mon Aug 23 15:45:40 2010] [warn] module ssl_module is already loaded, skipping
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
        no listening sockets available, shutting down
        Unable to open logs
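
    Not from the post, but those two warnings usually mean the same thing is configured twice: ssl_module is loaded both in ssl.conf and in another file, and port 443 has a second Listen directive. A quick way to find the duplicates (CentOS/RHEL-style paths assumed, which is what Plesk 8.x typically runs on):

        grep -rn "LoadModule ssl_module" /etc/httpd/conf /etc/httpd/conf.d
        grep -rn "^ *Listen" /etc/httpd/conf /etc/httpd/conf.d | grep 443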

    Read the article

  • Windows Server Backup - Recover only shows the latest backup

    - by Steffen
    We're having quite some trouble at work using Windows Server Backup. We have a Hyper-V server (Win 2008) running 8 virtual web servers, which run a variety of OSes: Win 2003, Win 2008 and a lone Debian. Each virtual server has a separate partition on the physical Hyper-V server, so e.g. E: is virtual server #1, F: is #2 and so forth.

    For backup we use Windows Server Backup, or more exactly the command-line tool wbadmin.exe. We need to make the backups without stopping the virtual servers, as we cannot afford the downtime (we've got users online both day and night), and Windows Server Backup offers to use the shadow copy provider to achieve this. We run wbadmin like this:

        wbadmin start backup -backuptarget:\\remotebackuplocation\somefolder -include:E: -quiet

    We run it once per partition, because we've got a script wrapped around that command for sending us an email about how it went. Each time we run wbadmin it deletes the "Backup xxxx" folder it created during the last backup and just creates a new one. To prevent this from happening, we rename the "Backup xxxx" folder after each backup run, before starting the next one. I realize we must rename it back to its original name prior to recovering, and we obviously do this.

    Now the issue is as follows: even though we have all the backed-up files, and rename whichever backup we want to use back to its original name, we can only see the latest backup in the Windows Server Backup GUI when we select "Recover". This means we can only recover the last partition we backed up, so e.g. E: can never be recovered. In other words we're screwed :-(

    My question is: does anyone know how to use Windows Server Backup for a scenario like this? Or is it simply not possible due to the simplicity of Windows Server Backup? If it's not possible, could you recommend some backup software for this purpose? We've already looked at MS' System Center Data Protection Manager, however it's quite expensive and the boss doesn't like that :-/
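
    A hedged suggestion rather than anything from the post: even when the GUI only shows the newest backup, the command line can usually enumerate and restore a specific version from the target share, which may sidestep the renaming dance altogether. For example (the version identifier below is invented for illustration):

        wbadmin get versions -backupTarget:\\remotebackuplocation\somefolder
        wbadmin start recovery -version:08/23/2010-21:00 -itemType:Volume -items:E: -backupTarget:\\remotebackuplocation\somefolder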

    Read the article

  • IIS 7.5 How to disable "Verify File Exists" for siteminder handler

    - by HariM
    We are trying to use ASP.NET MVC with SiteMinder for single sign-on. This is on Windows Server 2008 R2 with IIS 7.5, SiteMinder agent version 6QMR6.

    Problem: SiteMinder protects physical files that exist, but it does not protect the folder when we try to access a file that does not exist. It must redirect to the login page even when the file doesn't exist, as long as the user is accessing a protected folder. How do I configure IIS 7.5 so that it does not verify that a file exists before SiteMinder authentication runs? SiteMinderWebAgent is a handler (wildcard script map) we created using ISAPI6WebAgent.dll.

    How do I protect an ASP.NET MVC request with SiteMinder? (Added this as my previous question did not solve the problem.) The MVC request shows up in the IIS log but not in the SiteMinder log.

    Update: Microsoft Support says IIS 7.5, and even earlier versions, doesn't support wildcard mappings on two ISAPI handlers that both use a bare * wildcard. Currently in my case SiteMinder has a * wildcard and ASP.NET MVC (handler is aspnet_isapi) has a * wildcard to handle the requests. Ordered priority doesn't work in the wildcard mapping case with just *. I'm not convinced by the answer but will wait till tomorrow for them to get back.
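
    For reference, the IIS 7.5 counterpart of IIS 6's "Verify that file exists" checkbox is the handler mapping's Request Restrictions, which in configuration terms is the resourceType attribute; leaving it as "Unspecified" lets the handler run even when no physical file matches. A sketch only - the handler name and DLL come from the post, the path and remaining attributes are illustrative:

        <system.webServer>
          <handlers>
            <!-- resourceType="Unspecified" = "Invoke handler only if request is mapped to..." left unchecked -->
            <add name="SiteMinderWebAgent" path="*" verb="*" modules="IsapiModule"
                 scriptProcessor="C:\smagent\bin\ISAPI6WebAgent.dll"
                 resourceType="Unspecified" requireAccess="None" />
          </handlers>
        </system.webServer>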

    Read the article

  • WS2008 subst in Logon script does not "stick"

    - by Frans
    I have a terminal server environment exclusively with Windows Server 2008. My problem is that I need to "map" a drive letter to each user's Temp folder. This is due to a legacy app that requires a separate Temp folder for each user but which does not understand %temp%. So, just add "subst t: %temp%" to the logon script, right? The problem is that, even though the command runs, the subst doesn't "stick" and the user doesn't get a T: drive. Here is what I have tried.

    The simplest version:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "subst T: %temp%", 2, True

    That didn't work, so I tried this for more debug information:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        Set procEnv = WinShell.Environment("Process")
        wscript.echo(procEnv("TEMP"))
        tempDir = procEnv("TEMP")
        WinShell.Run "subst T: " & tempDir, 3, True

    This shows me the correct temp path when the user logs in - but still no T: drive. I decided to resort to brute force and put this in my login script:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "\\domain\sysvol\esl.hosted\scripts\tempdir.cmd", 3, True

    where \\domain\sysvol\esl.hosted\scripts\tempdir.cmd has this content:

        echo on
        subst t: %temp%
        pause

    When I log in with the above, the command window opens and I can see the subst command being executed correctly, with the correct path - but still no T: drive. I have tried running all of the above scripts outside of a login script and they always work perfectly; this problem only occurs when doing it from inside a login script. I found a passing reference on an MSFN forum about a similar problem when the user is already logged on to another machine - but I have this problem even without being logged on to another machine. Any suggestion on how to overcome this will be much appreciated.
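
    One hedged thing worth ruling out (my suggestion, not part of the post): a stale T: definition left over from a previous session can make a fresh subst appear to do nothing, so dropping any existing mapping before re-creating it is cheap to try:

        ' sketch only - remove any existing T: mapping, then re-create it from the user's TEMP
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "subst T: /D", 0, True          ' harmless if T: does not exist
        tempDir = WinShell.ExpandEnvironmentStrings("%TEMP%")
        WinShell.Run "subst T: " & tempDir, 0, True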

    Read the article

  • apache renew ssl not working [on hold]

    - by Varun S
    I downloaded a new SSL cert from GoDaddy and installed it on an Apache 2 server:

    - put the cert in the /etc/ssl/certs/ folder
    - put gd_bundle.crt in the /etc/ssl/ folder
    - the private key is in /etc/ssl/private/private.key

    I just replaced the original files with the new files; I did not replace the private key. I restarted the server but the website is still showing the old certificate date. What am I doing wrong and how do I resolve it? My httpd.conf file is empty; the certificate config is in the sites-enabled/default-ssl file. The server is Apache 2 running on Ubuntu 14.04.

        SSLEngine on
        # A self-signed (snakeoil) certificate can be created by installing
        # the ssl-cert package. See
        # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
        # If both key and certificate are stored in the same file, only the
        # SSLCertificateFile directive is needed.
        SSLCertificateFile    /etc/ssl/certs/2b1f6d308c2f9b.crt
        SSLCertificateKeyFile /etc/ssl/private/private.key

        # Server Certificate Chain:
        # Point SSLCertificateChainFile at a file containing the
        # concatenation of PEM encoded CA certificates which form the
        # certificate chain for the server certificate. Alternatively
        # the referenced file can be the same as SSLCertificateFile
        # when the CA certificates are directly appended to the server
        # certificate for convinience.
        SSLCertificateChainFile /etc/ssl/gd_bundle.crt

        -rwxr-xr-x 1 root root 1944 Aug 16 06:34 /etc/ssl/certs/2b1f6d308c2f9b.crt
        -rwxr-xr-x 1 root root 3197 Aug 16 06:10 /etc/ssl/gd_bundle.crt
        -rw-r--r-- 1 root root 1679 Oct  3  2013 /etc/ssl/private/private.key

        /etc/apache2/sites-available/default-ssl: # SSLCertificateFile directive is needed.
        /etc/apache2/sites-available/default-ssl: SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
        /etc/apache2/sites-available/default-ssl: SSLCertificateKeyFile /etc/ssl/private/private.key
        /etc/apache2/sites-available/default-ssl: # Point SSLCertificateChainFile at a file containing the
        /etc/apache2/sites-available/default-ssl: # the referenced file can be the same as SSLCertificateFile
        /etc/apache2/sites-available/default-ssl: SSLCertificateChainFile /etc/ssl/gd_bundle.crt
        /etc/apache2/sites-enabled/default-ssl: # SSLCertificateFile directive is needed.
        /etc/apache2/sites-enabled/default-ssl: SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
        /etc/apache2/sites-enabled/default-ssl: SSLCertificateKeyFile /etc/ssl/private/private.key
        /etc/apache2/sites-enabled/default-ssl: # Point SSLCertificateChainFile at a file containing the
        /etc/apache2/sites-enabled/default-ssl: # the referenced file can be the same as SSLCertificateFile
        /etc/apache2/sites-enabled/default-ssl: SSLCertificateChainFile /etc/ssl/gd_bundle.crt
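
    Not from the post, but two quick hedged checks usually narrow this down: confirm which certificate Apache is actually handing out (the browser may simply be caching), compare it with the file on disk, and confirm the restart really reloaded the vhost. The hostname below is a placeholder:

        openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
            | openssl x509 -noout -subject -dates
        openssl x509 -noout -subject -dates -in /etc/ssl/certs/2b1f6d308c2f9b.crt   # what the file on disk says
        sudo apachectl configtest && sudo service apache2 restart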

    Read the article

  • CopSSH SFTP -- limit users access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions in the FAQ many times: http://www.itefix.no/i2/node/37 . It does not do what the title claims... it allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access and I need my users to be sandboxed into only their home directory. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution.

    The details - here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

        net localgroup sftp_users /ADD
            ** Create a user group to hold all my SFTP users
        cacls c:\ /c /e /t /d sftp_users
            ** For that group, deny access at the top level and all levels below
        cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users
            ** Allow my user group access to the CopSSH installation directory and its subdirectories

    For each SFTP user, I create a new Windows user account, then I:

        net localgroup sftp_users sftp_user_1 /add
            ** Add my user to the group I've created

    Then I open the activate user wizard for CopSSH, choosing the user and "/bin/sftponly", with:
    - "Remove copssh home directory if it exists" left checked
    - "Create keys for public key authentication" left checked
    - "Create link to user's real home directory" left checked

    This works; however, every user has access to every other user's home directory as well as the CopSSH root directory. So I tried denying access for all users to the user home directory:

        cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users
            ** Deny access for users to the user home directory

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created at the home directory was inherited and overrode my allow rule. The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to and an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!
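
    An alternative worth mentioning, hedged because it depends on the OpenSSH version CopSSH bundles and on the chroot target being root-owned from sshd's point of view (which can take some fiddling under CopSSH): OpenSSH itself can jail SFTP users with a Match block in sshd_config rather than relying on NTFS ACLs. A minimal sketch using the group name from the post:

        Match Group sftp_users
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no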

    Read the article

  • Gitlab and Nginx not loading gitlab

    - by paperids
    I have just installed GitLab and nginx on Ubuntu LTS 12.04 using this guide: http://blog.compunet.co.za/gitlab-installation-on-ubuntu-server-12-04/

    I installed this on another server last night and had absolutely no problems with it (sort of a test run to see how long it would take to get going). I am not getting any errors when restarting GitLab or nginx with /etc/init.d, and my error logs are empty. The only thing I know of to go on is the vhost config:

        upstream gitlab {
            server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.sock$
        }

        server {
            listen localhost:80;
            server_name gitlab.bluringdev.com;
            root /home/gitlab/gitlab/public;

            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log  /var/log/nginx/gitlab_error.log;

            location / {
                # serve static files from defined root folder;.
                # @gitlab is a named location for the upstream fallback$
                try_files $uri $uri/index.html $uri.html @gitlab;
            }

            # if a file, which is not found in the root folder is r$
            # then the proxy pass the request to the upsteam (gitla$
            location @gitlab {
                proxy_redirect off;
                # you need to change this to "https", if you set "ssl" $
                proxy_set_header X-FORWARDED_PROTO http;
                proxy_set_header Host gitlab.bluringdev.com:80;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://gitlab;
            }
        }

    If there's any other information that would be helpful, just let me know and I'll get it up asap.
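
    One hedged observation, not something stated in the post: "listen localhost:80;" makes nginx bind to 127.0.0.1 only, so the vhost would answer on the box itself but not from outside. If that turns out to be the issue, the usual form looks like this:

        server {
            listen 80;                            # bind on all interfaces, not just loopback
            server_name gitlab.bluringdev.com;
            # ...rest of the vhost unchanged
        }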

    Read the article

  • vhost.conf file in PLESK not working as intended

    - by Saif Bechan
    I have configured a vhost file for my domain but it does not seem to work. These are the steps I took; please correct me if I am wrong. First I made a file called vhost.conf in /var/www/vhosts/*domain*/conf/vhost.conf. The content of the vhost file looks like this:

        <Directory /var/www/vhosts/*domain*/httpdocs>
            php_admin_flag engine on
            php_admin_flag display_errors on
        </Directory>

    Now in my /etc/php.ini I set display_errors=Off. After everything I rebuilt with:

        /usr/local/psa/admin/sbin/websrvmng -a

    But I don't see any errors in my page. Only when I turn on display_errors in /etc/php.ini can I see the errors. I know for a fact that the vhost file is read, because when I type nonsense values I get an error when restarting Apache saying there are errors in the vhost file.

    Does anyone know what the problem could be? Should there be special settings in either the php.ini file or the httpd.conf file? The httpd.conf I edit is /etc/httpd/conf/httpd.conf. Is this the file that Plesk uses, or is there another? The values I see there do not really reflect the http folders of my domain. The httpd file looks like this now:

        # The document root
        DocumentRoot "/var/www/html"

        # i guess this is the base directory
        <Directory />
            Order Deny,Allow
            Deny from all
            Options None
            AllowOverride None
        </Directory>

        # And i guess here are all my domains located, but there aren't any here
        <Directory "/var/www/html">
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Only this directory /var/www/html is not used by me; I use the directory /var/www/vhosts. The only folder found in /var/www/html is a folder called awstats. Does Plesk use other files, and where are they located? I hope this all makes sense, and I hope I can find a solution.
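
    A hedged check rather than part of the post: Plesk builds its own per-domain Apache include files under the domain's conf/ directory, and vhost.conf only takes effect if it is referenced from there after the rebuild, so it is worth confirming where the file is actually pulled in (replace *domain* with the real domain name):

        grep -rn "vhost.conf" /var/www/vhosts/*domain*/conf/
        grep -rn "Include" /etc/httpd/conf/httpd.conf /etc/httpd/conf.d/*.conf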

    Read the article

  • How to install Markdown's extensions for Python

    - by Masi
    The installation notes (git://gitorious.org/python-markdown/mainline.git) say in the file using_as_module.txt:

        One of the parameters that you can pass is a list of Extensions. Extensions must be
        available as python modules either within the markdown.extensions package or on your
        PYTHONPATH with names starting with mdx_, followed by the name of the extension. Thus,
        extensions=['footnotes'] will first look for the module markdown.extensions.footnotes,
        then a module named mdx_footnotes. See the documentation specific to the extension you
        are using for help in specifying configuration settings for that extension.

    I put the folder "extensions" into ~/bin/python/ so that my PYTHONPATH is the following:

        export PYTHONPATH=/Users/masi/bin/python/:/opt/local/Library/Frameworks/Python.framework/Versions/2.6/

    The instructions say that I need to import the add-ons like so:

        import markdown
        import <module-name>

    However, I cannot see any such module in my Python. This suggests to me that the extensions are not available as "python modules - - on [my] PYTHONPATH with names starting with mdx_ - -". How can you get Markdown's extensions to work?

    2nd attempt: I ran, at ~/bin/markdown,

        git clone git://gitorious.org/python-markdown/mainline.git python-markdown
        cd python-markdown
        python setup.py install

    I put the folder /Users/masi/bin/markdown/python-markdown/build on my PATH because the installation message suggests to me that is the new location of the extensions. I have the following in a test markdown document:

        [TOC]

        -- headings here with # -format ---

    However, I do not get the table of contents. This suggests to me that we need to somehow activate the extensions when we compile with the markdown.py script. The problem comes back to my first quoted text, which is rather confusing to me.
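
    For comparison, this is roughly how extensions are activated when calling the library from Python rather than importing them by hand - a minimal sketch assuming the Python-Markdown package installed above, with the extension names used in the quoted docs:

        import markdown

        text = open("test.md").read()
        # the library resolves 'toc' and 'footnotes' itself, first as markdown.extensions.<name>,
        # then as a top-level mdx_<name> module on PYTHONPATH
        html = markdown.markdown(text, extensions=["toc", "footnotes"])
        print(html)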

    Read the article

  • Installing messaging software displays error 1324 invalid character

    - by llykke
    Trying to install Reuters Messaging software onto a Windows 7 PC, we receive the error message:

        Error 1324: The folder path 'My Documents' contains an invalid character

    We've tried installing the application using the local admin account and the user account, which is an AD account (roaming?). This user account has administrative rights (i.e. it should be allowed to install applications). The user's 'My Documents' folder is located on a network drive where only the user has access.

    We've tried experimenting with the HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders registry entries and setting them to a local path (i.e. C:\Users\Username\Documents), but this didn't resolve the error. We've also tried the following, taken from a website I can't remember the name of:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
        - Select the NtfsDisable8dot3NameCreation entry and change the value to 0
        - Select the Win31FileSystem entry and change the value to 0

    which didn't resolve the issue.

    Edit: This was also an issue when attempting to install the Citrix native client necessary to run Citrix applications (*.ica extension). This made the same error box appear.
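
    Not from the post, but a quick hedged way to see what the installer is actually handed for 'My Documents' - the invalid character (often a stray slash or an unexpected UNC form) usually shows up in one of these two values for the account running the install:

        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Personal
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders" /v Personal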

    Read the article

  • PHP upgrade to 5.3 from 5.2, sessions no longer get stored

    - by Damo
    Background link: http://stackoverflow.com/questions/7014945/php-upgrade-5-2-to-5-3-session-issue

    I have upgraded PHP on my Server 2008 Standard box from PHP 5.2 to PHP 5.3. Following the upgrade, sessions no longer work correctly. I have copied over the applicable settings from my php.ini files and configured new settings in line with the server's or PHP's recommendations. PHP executes fine, however session data does not get saved. I have session data stored in C:\temp. For each session created, I can see the session file in this folder; however, no information gets written into the session file. Permissions-wise, IUSR and EVERYONE have write access to this folder. If I downgrade to PHP 5.2, sessions are saved correctly and the site functions correctly. I have followed advice to ensure my code is optimised, closing session files correctly and forcing a session reset. I'm stumped.

    session

        Session Support: enabled
        Registered save handlers: files user sqlite
        Registered serializer handlers: php php_binary wddx

        Directive                        Local Value    Master Value
        session.auto_start               Off            Off
        session.bug_compat_42            On             On
        session.bug_compat_warn          On             On
        session.cache_expire             180            180
        session.cache_limiter            nocache        nocache
        session.cookie_domain            no value       no value
        session.cookie_httponly          Off            Off
        session.cookie_lifetime          0              0
        session.cookie_path              /              /
        session.cookie_secure            Off            Off
        session.entropy_file             no value       no value
        session.entropy_length          0              0
        session.gc_divisor               100            100
        session.gc_maxlifetime           1440           1440
        session.gc_probability           1              1
        session.hash_bits_per_character  4              4
        session.hash_function            0              0
        session.name                     PHPSESSID53    PHPSESSID53
        session.referer_check            no value       no value
        session.save_handler             files          files
        session.save_path                /temp          /temp
        session.serialize_handler        php            php
        session.use_cookies              On             On
        session.use_only_cookies         On             On
        session.use_trans_sid            0              0
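
    Not part of the post - a small probe script, written under the assumption that something in the write path is failing silently, to force a write and surface any warning from the files save handler:

        <?php
        // minimal sketch: store a value, force the write, and report where PHP thinks it wrote it
        error_reporting(E_ALL);
        ini_set('display_errors', '1');

        session_start();
        $_SESSION['probe'] = date('c');
        session_write_close();   // flushes the session data to the save_path file immediately

        echo 'save_path=', ini_get('session.save_path'),
             ' file=sess_', session_id();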

    Read the article

  • Troubles installing/starting Redis via Resque

    - by Craig Flannagan
    I'm trying to complete the instructions for the Resque/Redis installation here: https://github.com/defunkt/resque/blob/master/README.markdown

    I am stuck where I try to start up Redis via Resque with the following command:

        Craig:/usr/local/src/resque$ rake redis:start
        (in /usr/local/src/resque)
        Detach with Ctrl+\  Re-attach with rake redis:attach
        ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf
        rake aborted!
        Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...]
        (See full trace by running task with --trace)

    Rerunning with --trace (showing only part of the trace):

        Craig:/usr/local/src/resque$ rake redis:start --trace
        (in /usr/local/src/resque)
        ** Invoke redis:start (first_time)
        ** Execute redis:start
        Detach with Ctrl+\  Re-attach with rake redis:attach
        ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf
        rake aborted!
        Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...]
        /Users/craigflannagan/.rvm/gems/ruby-1.9.2-head@foo/gems/rake-0.8.7/lib/rake.rb:995:in `block in sh'

    Not sure what is wrong here. By the way, when I did those instructions

        $ git clone git://github.com/defunkt/resque.git
        $ cd resque
        $ PREFIX=<your_prefix> rake redis:install dtach:install
        $ rake redis:start

    I wasn't sure whether I was supposed to be doing #1 from within the Rails project, or whether the git clone should create a new folder outside the Rails project (in this case, I chose to have the folder created outside the project).
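
    A hedged note, not from the post: a shell exit status of 127 generally means "command not found", so the first thing worth checking is whether the relative paths the rake task builds actually resolve to the installed dtach and redis-server from the directory rake is run in:

        ls -l ../../bin/dtach ../../bin/redis-server ../../../etc/redis.conf
        which dtach redis-server      # did redis:install / dtach:install put them somewhere else?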

    Read the article

  • Sync OneNote Notebooks to/on SkyDrive

    - by Sam
    I've got OneNote running on all computers in our house, using it all the time with several people and computers. The only drawback: I want to keep the copies of OneNote in sync without having to run a dedicated server myself. Right now one of my computers has a folder share where all the others sync to, but this is highly impractical since that computer is not always running.

    So my question is: is it possible to put the notebook files in a (private) SkyDrive folder and have all the computers sync there? This way all computers could keep in sync whenever they have access to the web. Can this be done? And, of course, how?

    [Update] Maybe I should not have taken knowledge about OneNote for granted: OneNote uses a proprietary file format, but has very good in-file syncing that works on network shares. A generic "just sync the complete file" approach won't be useful at all, because I'd just get "file has changed on server and on client" conflicts all the time. The sync needs to understand OneNote files and be able to sync the content - e.g. OneNote itself needs to sync the files, not some generic sync tool.

    Read the article

  • How can we configure the Bitnami Joomla stack to open a socket on startup?

    - by bobo
    I have deployed the Bitnami Ubuntu Joomla! 3.1.5-2 (64-bit) stack on Amazon Cloud: http://bitnami.com/stack/joomla/cloud/amazon

    By default, the stack is configured to run PHP using PHP-FPM. I have no problem getting Joomla and phpMyAdmin running as virtual hosts on Apache. But now I would like to add another virtual host. The problem I am having is that I have no idea how to get the system to create a socket on startup in the following folder:

        bitnami@ip-172-31-15-99:/opt/bitnami/php/var/run$ ls -al
        total 12
        drwxr-xr-x 2 root root 4096 Nov  3 20:43 .
        drwxr-xr-x 4 root root 4096 Oct  9 15:39 ..
        srw-rw-rw- 1 root root    0 Nov  3 20:43 joomla.sock
        -rw-r--r-- 1 root root    4 Nov  3 20:43 php5-fpm.pid
        srw-rw-rw- 1 root root    0 Nov  3 20:43 phpmyadmin.sock
        srw-rw-rw- 1 root root    0 Nov  3 20:43 www.sock

    I have the following /opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf file:

        [mywebsite]
        listen=/opt/bitnami/php/var/run/mywebsite.sock
        include=/opt/bitnami/php/etc/common-dynamic.conf
        include=/opt/bitnami/apps/mywebsite/conf/php-fpm/php-settings.conf
        pm=dynamic

    As you can see, listen points to mywebsite.sock, which does not currently exist. I did an experiment: I removed the .sock files in the /opt/bitnami/php/var/run folder and they came back on reboot. So how can we configure it to open a socket for mywebsite on startup?
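
    A hedged pointer rather than a documented Bitnami step: PHP-FPM only creates the sockets for the pools it loaded when it started, so assuming the new pool.conf is included from the main PHP-FPM configuration the same way the existing apps' pool files are, restarting the service should be enough to make the socket appear:

        sudo /opt/bitnami/ctlscript.sh restart php-fpm
        ls -al /opt/bitnami/php/var/run/     # mywebsite.sock should now be listed if the pool was picked up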

    Read the article

  • Why can't “knife data bag from file” find existing json file on chef server?

    - by ellisera
    Summary: I'm running into a problem with "knife data bag from file", where knife doesn't recognize the .json data bag file pulled down from a remote git repo.

    Background: I'm currently trying to transition from chef-solo use to Chef Server while using the cookbooks, data bags and other Chef info from our remote git repo. I've pulled down a copy of our git repo and set the cookbook path and data bag path in knife.rb. I also loaded the cookbooks, made adjustments, etc.

    Details: When trying to load our .json data bags by doing "knife data bag add from file FOLDER FILE", it looks like it worked - until I do "knife data bag list" and it comes up blank. So I decided to try adding the edit option at the end to see what's being loaded, if anything. This is the error I get:

        knife data bag from file local_settings test.json -e nano
        ERROR: Could not find or open file 'test.json' in current directory or in 'data_bags/local_settings/test.json'

    The data bag file does exist, in the proper location, and is a tested, working JSON file. I've also sometimes gotten an error saying "could not open data bag 'local_settings'". I would obviously like to keep the data bag path within the appropriate git repo folder to be able to keep track of changes in a more centralized location (our git repo, as opposed to the Chef server). Any solutions, advice or pointers in the right direction are appreciated.
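
    Hedged suggestions rather than anything from the post: knife resolves the file argument relative to the current directory or the repo's data_bags layout, and the bag container has to exist on the server before items can be loaded into it, so a sequence along these lines (the repo path is a placeholder) is worth trying:

        cd /path/to/chef-repo                 # the directory that contains data_bags/
        knife data bag create local_settings
        knife data bag from file local_settings data_bags/local_settings/test.json
        knife data bag list
        knife data bag show local_settings test   # item id comes from the "id" field in test.json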

    Read the article

  • Clone virtual machine with Server 2008 R2 and Hyper-V?

    - by bwerks
    Hi all, I've recently started working with Hyper-V, and so far it's quite nice. However, I've been running into problems with what seems like it should be the most basic of workflows.

    I've set up a baseline Server 2008 R2 configuration and exported it with the intention of using the export for cloning. I entered "C:\Exports\" as the export folder. However, I run into problems when I try to import the image. From the Hyper-V Manager, I select "Import Virtual Machine" and in the resulting window I enter "C:\Exports\BuildServer\" as the folder, set the radio button to "Copy the virtual machine (create a new unique ID)" and check the checkbox for "Duplicate all files so the same virtual machine can be imported again." Doing so results in the following error:

        "Import failed. Import task failed to copy file from 'H:\Exports\BuildServer\Virtual Hard Disks\BuildServer.vhd' to 'C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd': The file exists. (0x80070050)"

    Have I somehow messed something up in configuration? Or is this a known thing? I've read that it should be possible to clone VMs by copying them in the filesystem, but I'd prefer to keep things in the management UI if possible.

    Read the article

  • DrayTek 2820 configuration using public IP addresses

    - by Kev
    I have a /29 range of public IP addresses assigned to me by my ISP. I'm trying to configure a SIP VOIP handset to register with my VOIP provider, who recommend using public IP addresses rather than NAT. I have a DrayTek 2820 router flashed with the latest firmware and have configured it as per DrayTek's FAQ: "How do I use a public subnet on the LAN (non-NAT operation)?"

    My IP range is xx.xx.94.16 -> xx.xx.94.23, which gives a usable range of xx.xx.94.17 -> xx.xx.94.22. My router's public IP address is xx.xx.94.17; the SIP VOIP handset is allocated xx.xx.94.18. I have a second internet connection and via that I can ping the handset. However, for some reason I can't seem to get it to register with the provider. I tried adding a new firewall filter to pass through from WAN to LAN: Source: ANY, Destination: xx.xx.94.18, UDP - Ports 1024 -> 65535.

    Out of interest I also tried opening port 80 to see if I could browse to the phone's admin web interface, but no joy. I know that my ISP isn't blocking inbound service ports, because I NAT port-forwarded port 80 to one of my internal web servers and it rendered a test page I had set up. All the NAT settings are reset to factory defaults, i.e. there are no Port Redirection, DMZ Host, Open Ports or Address Mappings configured. The handset I'm using is a GrandStream GXP-2000. Is there anything else I should be doing?

    Read the article

  • File upload permissions issue on Windows Server 2008 R2 IIS 7.5 PHP 5.3 with Drupal v.7.26

    - by Taras
    I have a website on Drupal version 7.26. The OS on the server is Windows Server 2008 R2.

        Web server ($_SERVER["SERVER_SOFTWARE"]): Microsoft-IIS/7.5
        Server API: CGI/FastCGI
        Core PHP Version: 5.3.28
        file_uploads: On
        post_max_size: 75M
        upload_max_filesize: 50M
        upload_tmp_dir: C:\inetpub\wwwroot\tmp
        memory_limit: 128M
        open_basedir: C:\inetpub\wwwroot;C:\inetpub\wwwroot\tmp

    When I go to /admin/config/media/file-system I see these error messages:

        The directory sites\default\files exists but is not writable and could not be made writable.
        The directory tmp exists but is not writable and could not be made writable.
        Public file system path: sites\default\files
        Temporary directory: tmp

    I have set permissions on the folders:

        C:\inetpub\wwwroot\tmp                   IIS_IUSRS: Full control
        C:\inetpub\wwwroot\sites\default\files   IIS_IUSRS: Full control

    I am working as the Administrator user:

        C:\Users\Administrator\Downloads> echo %username%
        Administrator

    I can't change the Read-only attribute for these folders. Every time I make the change, press Apply ("Apply changes to this folder, subfolders and files" is checked) and press OK, the "Applying attributes..." dialog is shown; when it finishes I press OK on the folder properties dialog, closing it. When I open the Properties dialog once again, I see Read-only is checked again. How can I fix it?
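
    Hedged suggestions, not from the post: the Read-only checkbox on folders is largely cosmetic, so it is usually more productive to grant Modify (with inheritance) to the identities the FastCGI process actually runs as - the application pool identity and, if anonymous authentication impersonates it, IUSR. "DefaultAppPool" below is a placeholder for the site's real pool name:

        icacls "C:\inetpub\wwwroot\sites\default\files" /grant "IIS AppPool\DefaultAppPool:(OI)(CI)M" /T
        icacls "C:\inetpub\wwwroot\tmp" /grant "IIS AppPool\DefaultAppPool:(OI)(CI)M" /T
        icacls "C:\inetpub\wwwroot\tmp" /grant "IUSR:(OI)(CI)M" /T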

    Read the article

  • How do I Install fonts on Windows Web Server 2008 R2

    - by Eric Brearley
    I would like to install Arial on our web servers. I should add, this is because we generate reports server-side and make them available in a number of downloadable formats (Excel, PDF etc), hence the need to have the fonts installed on the server. I have console access to our web farm, and on the server I've copied the .ttf files into the C:\fonts folder. Then I run the following VBScript on the server:

        ' VBScript to install fonts on Blade Servers
        ' Arial font-family
        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        Set objFolderItem = objFolder.ParseName("arial.ttf")
        objFolderItem.InvokeVerb("Install")

        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        Set objFolderItem = objFolder.ParseName("arialbd.ttf")
        objFolderItem.InvokeVerb("Install")

        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        Set objFolderItem = objFolder.ParseName("arialbi.ttf")
        objFolderItem.InvokeVerb("Install")

        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        Set objFolderItem = objFolder.ParseName("ariali.ttf")
        objFolderItem.InvokeVerb("Install")

        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        Set objFolderItem = objFolder.ParseName("ariblk.ttf")
        objFolderItem.InvokeVerb("Install")

        msgbox "Fonts installed"

    I get the message box, but no font installation pop-ups like I do when I run this script on my desktop. The fonts do not get installed: they do not show in the font selection dialogue in Notepad (on the web server) and we get the ASP.NET exception "Font 'Arial' cannot be found.". We have also restarted the server. I have also tried copying the .ttf files to the C:\Windows\Fonts folder and restarting the server. What do I need to do to install fonts on Windows Web Server 2008 R2?
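
    A hedged alternative, not something the post has tried in this exact form: when the shell "Install" verb does nothing in a non-interactive server session, copying the files into the Fonts folder and registering them by hand in the registry usually achieves the same result (value names follow the usual "Face (TrueType)" convention):

        copy c:\fonts\arial.ttf %windir%\Fonts\
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts" /v "Arial (TrueType)" /t REG_SZ /d arial.ttf /f
        rem repeat for arialbd.ttf ("Arial Bold (TrueType)"), ariali.ttf ("Arial Italic (TrueType)"),
        rem arialbi.ttf ("Arial Bold Italic (TrueType)") and ariblk.ttf ("Arial Black (TrueType)"),
        rem then restart the worker process or the server so the new fonts are picked up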

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I currently take full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday and then on Monday I also put the first transaction log backup in the same folder. On Tuesday, before I take the second transaction log backup, I move the first one from folder1 into folder2, then take the second transaction log backup and put it in folder1. The same happens on Wednesday, Thursday and so on. Basically, folder1 always holds the latest full backup and the latest transaction log backup, while the older transaction log backups are in folder2.

    My question is: when SQL Server is about to take, let's say, the fourth (Thursday) transaction log backup, does it look for the previous transaction log backups (first, second and third) so that the new backup only includes the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? Basically, I am asking this because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup.

    Example: if you have, let's say, a full backup and then take a transaction log backup that is, say, 200 MB, and you immediately take another transaction log backup, this second backup should be considerably smaller than the first one, because no (or almost no) transactions occurred between the two backups, right? At least, that's what I've been assuming. What happens in my case is that the second backup is pretty much the same size as the first one, and I am wondering if the reason is that I moved the first transaction log backup to a different folder, so now SQL Server thinks that all I have is a full backup, gathers all the transactions that happened since the full backup and puts them in the second transaction log backup. Can anyone please explain if my assumptions are right? Thanks...
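
    For illustration (not from the post): SQL Server tracks the log chain with LSNs recorded inside the database and in msdb's backup history, not by looking at the files left in the backup folder, so moving earlier .trn files around does not change what the next log backup contains. A hedged way to see the chain, with a placeholder database name:

        -- each log backup should start (first_lsn) where the previous one ended (last_lsn)
        SELECT TOP (10)
               backup_start_date, type, first_lsn, last_lsn,
               backup_size / 1048576.0 AS size_mb
        FROM msdb.dbo.backupset
        WHERE database_name = N'YourDatabase'   -- placeholder
        ORDER BY backup_start_date DESC;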

    Read the article

  • Preventing access to files if a user types the full url on the address bar

    - by bogha
    I have a website; some folders on the website contain images and files like .pdf, .doc and .docx. A user can easily just type the address in the URL bar to get the file or display the photo, e.g. http://site/folder1/img/pic1.jpg, and then they can see the image or simply download the file. My question is: how do I prevent this kind of action? How can I guarantee secure access to the files? Any suggestions?

    UPDATE TO CLARIFY MY IDEA: I don't want any user browsing the website to get access to these files simply by typing the URL of the file. The files are CV files; they are uploaded by users to a specific folder on the server, which we host outside the company. The files are only viewed by the HR people through a special system. That's the scenario we want. I don't want a web geek who just wants to see what files have been uploaded to this folder to download them easily to his/her computer and view them or publish them on the internet.
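
    One common pattern, sketched here under the assumption of an Apache host (the post doesn't say what the site runs on): block direct requests to the upload folder entirely and have the HR application stream the files only after it has checked who is logged in. The path below is illustrative:

        # e.g. in the vhost config, or an .htaccess file inside the CV upload folder
        <Directory "/var/www/site/uploads/cv">
            Order deny,allow
            Deny from all
        </Directory>

    Better still, store the uploads outside the web root altogether, so there is nothing for a direct URL to hit in the first place.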

    Read the article

  • MSI Installer error 2203; how to force permissions on installer directory?

    - by goober
    [Cross-posted on StackOverflow.com as well because the question relates to development. Feel free to let me know where it best belongs.]

    Hi all, I'll try to bullet-point to keep it short.

    Background / Issue
    - Trying to install ASP.NET MVC 3 RC on my Windows 7 machine.
    - Uninstalled other versions of MVC (2 and 3 Beta 1).
    - Ran the installer -- got a generic error, 2203.
    - Log files said that it was a permissions error on C:\Windows\Installer.
    - Checked C:\Windows\Installer -- sure enough, it's marked as read-only.
    - I unchecked "Read-only" in the folder properties and applied. It appears to open the dialog and apply to all files. However, when clicking properties again, the read-only box is back to checked.
    - Checked the security tab of the folder -- both SYSTEM and the Administrators group have full access.
    - Checked ownership -- the Administrators group is listed as an owner.
    - Verified that I'm in the system as an Administrator (in fact, the only account in the Administrators group besides Administrator).

    So, what gives? Thanks in advance for any help you can provide!
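
    A couple of hedged checks, not from the post: the Read-only checkbox on a folder doesn't reflect real NTFS permissions (Explorer uses it to flag customized folders, which is why it springs back), so error 2203 is more likely about the ACL itself - in particular SYSTEM needing Full Control on the Installer cache. From an elevated prompt:

        icacls C:\Windows\Installer
        icacls C:\Windows\Installer /grant "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F"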

    Read the article
