Search Results

Search found 8893 results on 356 pages for 'stored'.

Page 27/356 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • Where are default settings stored after applying a GPO?

    - by tester5566
    When I apply a GPO that changes service startup settings, where are the default service startup settings kept? And how can I read and modify them? The reason for the question is that I have a hundred servers where most services are disabled by a baseline GPO for hardening purposes. I want to relax this GPO by removing some services, but I do not want the service startup settings to revert to the defaults once the GPO is relaxed. I want to keep the current hardened state as the default state, but allow local admins to change it if necessary. Thank you
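    One way to capture the current hardened state and re-apply it locally before relaxing the GPO is the built-in sc.exe tool; a minimal sketch (the service name Spooler is just a placeholder):

        REM Query the current start type of a service
        sc qc Spooler

        REM Re-apply the hardened start type as the local (non-GPO) setting
        REM so it survives removal of the policy; the space after "start="
        REM is required by sc.exe
        sc config Spooler start= disabled

    Local admins can later change the setting again with the same command or through services.msc.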

    Read the article

  • Where are Flash settings stored locally on Ubuntu?

    - by Joseph Mastey
    It's possible to change Flash settings on your computer at this URL: http://www.macromedia.com/support/documentation/en/flashplayer/help/settings_manager03.html However, given that Macromedia has no problem setting LSO cookies on your HDD that you cannot find, I am a little skeptical that the settings I've tweaked there are actually saved. So I'd like to be able to look locally on my PC and verify the settings. Where can I find the Flash settings locally? Surely the plugin cannot be going to Macromedia itself for them (that is a future too bleak to contemplate). I am running Ubuntu 10.04. Thanks, Joe
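    For what it's worth, the Linux Flash plugin of this era kept its settings under ~/.macromedia; a quick sketch for verifying them (paths from memory, so treat them as a starting point):

        # Global player settings (settings.sol)
        ls ~/.macromedia/Flash_Player/macromedia.com/support/flashplayer/sys/

        # Per-site LSO "flash cookies"
        find ~/.macromedia/Flash_Player/#SharedObjects -name '*.sol'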

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files on S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500 GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded once, never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems to be an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage/transfer costs, but I'm not sure whether duplicity allows bucket-to-bucket transfers directly, without bringing data down to the controlling client first.

    So I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly, to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
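    On the bucket-to-bucket question: S3 supports server-side COPY requests, so a sync tool that issues them never pulls the data down to the client. A sketch with the AWS CLI (bucket names are placeholders; the CLI may postdate this question, but the same COPY operation is available through any S3 library):

        # Server-side copy: objects are copied within S3, not routed
        # through the machine running the command
        aws s3 sync s3://my-original-bucket s3://my-backup-bucket-2013-11

        # Spot-check the result
        aws s3 ls s3://my-backup-bucket-2013-11 --recursive --summarize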

    Read the article

  • Apache multiple vhost logs, stored locally and sent to remote logstash

    - by benbradley
    I'm investigating centralised logging, and it seems there are so many different ways this can be done. I don't want to run logstash as a log "sender", preferring to keep the web servers as lean and simple as possible. So that means using either syslog, syslog-ng or, the one I'm testing now, rsyslog. But I would like to have separate vhost log files on the web server, in addition to these logs being sent to a remote log collector. I've tested rsyslog using the imfile module to watch the Apache log files, but this means I have to hard-code each vhost log file into my rsyslog.conf. Not ideal, as people will invariably forget to update it when they add/remove sites on the server. The reason I'm using rsyslog's imfile is that Apache doesn't appear to let you log to a file and to syslog at the same time, and I want to keep vhost-specific log files on the web server. So how can I do this? Is there a way of having rsyslog produce local log files and forward the logs to a remote collector? I am prepared to change my Apache config to log to a single access/error log for all vhosts, so long as vhost-specific log files are produced somewhere on the web server machine. I just don't want to lose any logging info if the remote log collector can't be contacted for any reason. Any comments/suggestions? Cheers, B
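    One detail worth knowing: Apache accepts multiple CustomLog directives per vhost, including piped ones, so it can log to a file and to syslog at the same time. A sketch (the facility and tag are arbitrary choices):

        # Per-vhost file log, kept on the web server
        CustomLog /var/log/apache2/example.com-access.log combined

        # Same entries piped to syslog via logger(1); rsyslog can then
        # forward facility local6 to the remote collector
        CustomLog "|/usr/bin/logger -t apache -p local6.info" combined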

    Read the article

  • Where are the netlogon files physically stored?

    - by johnny
    I have umpteen backups, and I'm trying to restore the scripts I had in the netlogon share, but when I go to them the folder is empty. Does backup not back up those scripts in netlogon? Is there anywhere else I should expect to see the files besides c:\winnt\sysvol\sysvol\mydomain\scripts? Thank you for any help.
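    A quick way to confirm where the NETLOGON share actually points on a given domain controller (run on the DC itself):

        REM Lists the local path backing the NETLOGON share
        net share NETLOGON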

    Read the article

  • Where does the temporary Flash file get stored when I am viewing it in Firefox?

    - by Nishant
    I am watching a lecture and it seems to be Adobe Flash... I want to save the video that I am viewing. The website I am checking is http://cs75.tv/2009/fall/ . I am using Firefox. Don't know if this info helps, but... my about:cache result is this:

        Memory cache device
        Number of entries: 212
        Maximum storage size: 13312 KiB
        Storage in use: 8087 KiB
        Inactive storage: 6819 KiB

        Disk cache device
        Number of entries: 3224
        Maximum storage size: 500000 KiB
        Storage in use: 26066 KiB
        Cache Directory: C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\Cache

        Offline cache device
        Number of entries: 0
        Maximum storage size: 512000 KiB
        Storage in use: 0 KiB
        Cache Directory: C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\OfflineCache

    Read the article

  • Where are wireless profiles stored in Ubuntu?

    - by LonnieBest
    Where does Ubuntu store the profiles that allow it to remember the credentials for private wireless networks it has previously authenticated to and used? I just replaced my uncle's hard drive with a new one and installed Ubuntu 10.04 on it (he had Ubuntu 9.10 on his old hard drive). He is at my house right now, and I want him to be able to access his private wireless network when he gets home. Usually, when I upgrade Ubuntu, I keep his /home directory on another partition, so his wireless profile for his own network persists. Right now, though, I'm trying to figure out which hidden folder I need to copy from his /home/user folder on the old hard drive to the new one so that he will have wireless Internet when he gets home. Does anyone know with certainty exactly which folder I need to copy to the new hard drive to achieve this?
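    A sketch of the copy, under the assumption (worth verifying for this NetworkManager version) that per-user connections of the GConf era live under ~/.gconf and the Wi-Fi passphrases in the GNOME keyring:

        # NetworkManager user connection profiles (GConf-based NM, as
        # shipped around Ubuntu 9.10/10.04) -- an assumption to verify
        cp -a /old/home/user/.gconf/system/networking ~/.gconf/system/

        # Wi-Fi secrets are kept separately in the GNOME keyring
        cp -a /old/home/user/.gnome2/keyrings ~/.gnome2/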

    Read the article

  • Allow users to ssh as a specific user through LDAP and stored public keys

    - by iElectric
    I recently set up gitolite, where users access the git repository as the "gitolite" user through ssh. Now I would like to integrate that with LDAP. Each user has a public key in LDAP, and if he has the "git" objectClass he should be able to access the "gitolite" user through ssh. I know it's possible to store public keys in LDAP; I'm not sure if it is possible to allow authentication to the "gitolite" account based on objectClass. EDIT: To clarify, with objectClass git, user "foobar" would be able to log in as "gitolite" through ssh.
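    One pattern that fits is sshd's AuthorizedKeysCommand (OpenSSH 6.2 or later): a script returns the keys for the "gitolite" account from LDAP, filtered on the git objectClass. A sketch; the LDAP host, base DN and sshPublicKey attribute are assumptions to adapt:

        # In /etc/ssh/sshd_config:
        #   AuthorizedKeysCommand /usr/local/bin/ldap-git-keys
        #   AuthorizedKeysCommandUser nobody

        #!/bin/sh
        # ldap-git-keys: print one authorized_keys line per LDAP entry
        # that carries objectClass=git
        ldapsearch -x -LLL -H ldap://ldap.example.com \
            -b "ou=people,dc=example,dc=com" \
            "(objectClass=git)" sshPublicKey \
          | sed -n 's/^sshPublicKey: //p'

    Note that gitolite identifies users via command= prefixes on each authorized key, so in practice the script would need to emit those prefixes as well.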

    Read the article

  • nginx: rewrite URL but have original URL stored in access.log as 200

    - by mhambra
    I'm setting up a link-tracking system, which (temporarily) involves adding /link/id/ in front of the URL (like http://server/data/id/publication/id/). rewrite data/id/(.*) http://server/$1; The request is logged as: ip - - [17/Nov/2011:10:07:19 +0300] "GET /data/id/publication/id.html HTTP/1.1" 302 154 "-" "UA" For some reason (keeping compatibility with AWStats) it is desired to have 200 logged instead of 302. (nginx can produce a 301 out of the box with the permanent flag, but that's inappropriate too.) What are my options here? Will a combination of location { } and rewrite do the job?
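    A sketch of the usual fix: rewriting to a URI rather than an absolute http:// URL (which is what forces the 302) makes nginx restart location matching internally, so no redirect is sent and the access log records the original URI with the final response's 200:

        # Internal rewrite: no 302 is issued; the client and the access
        # log both see the original /data/id/... request line
        location /data/id/ {
            rewrite ^/data/id/(.*)$ /$1 last;
        }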

    Read the article

  • How can I make Subversion reset the stored passwords/users and remember my authentication credentials?

    - by NicDumZ
    Hello folks! Background: I used to have everything working just fine on my fresh install:

        $ svn co https://domain:443/ test1
        Error validating server certificate for 'https://domain:443':
        - The certificate is not issued by a trusted authority. Use the
          fingerprint to validate the certificate manually!
        Certificate information:
        - Hostname: **REMOVED**
        - Valid: **REMOVED**
        - Issuer: **REMOVED**
        - Fingerprint: **checked with issuer and REMOVED**
        (R)eject, accept (t)emporarily or accept (p)ermanently? p
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz-machine-hostname':
        Authentication realm: <https://domain:443> Subversion repository
        Username: nicdumz
        Password for 'nicdumz':
        # proceeds to checkout correctly

        $ svn co https://domain:443/ test2
        # checks out nicely, without asking for my password

    At some point I needed to commit stuff using a different account, so I did that:

        $ svn ci --username other.user
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'other.user':
        # works fine

    But since then, every time I want to commit as 'nicdumz' (the default user; all repos have been checked out with that user), it prompts me for my password:

        $ svn ci
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz':

    Hey, come on, why? :) The same happens if I want a fresh checkout, since read access is also protected. So I tried fixing the issue by myself. I read around that ~/.subversion/auth was storing credentials, so I moved it out of the way:

        $ cd ~/.subversion
        $ mv auth oldauth
        $ mkdir auth

    It seemed to work at first, because svn had forgotten about certificate validation:

        $ svn co https://domain:443/ test3
        [same certificate prompt and password prompts as the first checkout,
        then the checkout proceeds correctly]

        $ svn up
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz':

    What? How is this happening? If you have suggestions for investigating this behaviour further, I am very interested. If I'm correct, there is no way to do a verbose svn up or anything of the like, so I'm not sure where to go for investigation. Oh, and for what it's worth:

        $ svn --version
        svn, version 1.6.6 (r40053)
        compiled Oct 26 2009, 06:19:08
        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:
        * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          - handles 'http' scheme
          - handles 'https' scheme
        * ra_svn : Module for accessing a repository using the svn network protocol.
          - with Cyrus SASL authentication
          - handles 'svn' scheme
        * ra_local : Module for accessing a repository on local disk.
          - handles 'file' scheme
        * ra_serf : Module for accessing a repository via WebDAV protocol using serf.
          - handles 'http' scheme
          - handles 'https' scheme
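    A sketch of the usual things to check here: whether password caching is enabled in the runtime config, and what svn has actually cached per realm (standard ~/.subversion paths):

        # Caching must not be disabled in ~/.subversion/config (or servers):
        #   store-passwords = yes
        #   store-auth-creds = yes
        grep -n 'store-' ~/.subversion/config ~/.subversion/servers

        # Each cached credential is a file named after the realm's MD5;
        # look at which username is stored for the repository's realm
        grep -l 'nicdumz' ~/.subversion/auth/svn.simple/* 2>/dev/null
        cat ~/.subversion/auth/svn.simple/*

    Since svn caches one credential per realm, the --username other.user commit likely overwrote the cached entry for this realm; deleting just that one file and running a single svn command as 'nicdumz' should repopulate it.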

    Read the article

  • Is SQL Server stored procedure encryption safe?

    - by George2
    I am using SQL Server 2008 Enterprise on Windows Server 2003 Enterprise. I developed some stored procedures for SQL Server, and the machine installed with SQL Server may not be fully under my control (it may be used by an un-trusted 3rd party). I want to protect my stored procedures' T-SQL source code (i.e. make it not viewable by another party) by using the stored procedure encryption function provided by SQL Server. I am not sure whether stored procedure encryption is 100% safe, and whether the administrator of the machine (installed with SQL Server) still has ways to view the stored procedures' source code? thanks in advance, George
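    For reference, this is the feature in question; a sketch with a placeholder procedure. Note that WITH ENCRYPTION is obfuscation rather than strong encryption: the text stops showing up in sys.sql_modules and sp_helptext, but a sysadmin on the machine can still recover it (for example over the Dedicated Administrator Connection, or with widely available third-party tools), so it should not be treated as safe against the machine's administrator.

        -- Placeholder procedure; WITH ENCRYPTION obfuscates the stored definition
        CREATE PROCEDURE dbo.usp_GetSecret
        WITH ENCRYPTION
        AS
        BEGIN
            SELECT 42 AS answer;
        END
        GO

        -- The definition is now hidden from the catalog views (returns NULL)
        SELECT definition
        FROM sys.sql_modules
        WHERE object_id = OBJECT_ID('dbo.usp_GetSecret');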

    Read the article

  • Rkhunter reports file properties have changed

    - by CountMurphy
    I am running a fully updated LTS copy of Ubuntu Server. Today I ran rkhunter (as I do from time to time). This is the output I got:

        Warning: The file properties have changed:
        [15:52:25] File: /bin/ps
        [15:52:25] Current hash: f22991ec93ae966c856d367f42fc3d8a484bd827
        [15:52:25] Stored hash : 1892268bf195ac118076b1b0f53e7a637eb6fbb3
        [15:52:25] Current inode: 142902 Stored inode: 130894
        [15:52:25] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:25] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:33] File: /usr/bin/ldd
        [15:52:33] Current hash: f1e2ca5aa3a28994e2cebb64c993a72b7d97b28c
        [15:52:33] Stored hash : 295d9cedb121a5e431a39a6d201ecd7ce5640497
        [15:52:33] Current inode: 2236210 Stored inode: 2234359
        [15:52:33] Current size: 5280 Stored size: 5279
        [15:52:33] Current file modification time: 1331165514 (07-Mar-2012 16:11:54)
        [15:52:33] Stored file modification time : 1295653965 (21-Jan-2011 15:52:45)

        Warning: The file properties have changed:
        [15:52:37] File: /usr/bin/pgrep
        [15:52:37] Current hash: 3eada9a96760f3e2c9111cfe32901d1432813c1d
        [15:52:37] Stored hash : ce265d0db9964b173fe5036f703a9b8d66e55df3
        [15:52:37] Current inode: 2229646 Stored inode: 2224867
        [15:52:37] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:37] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:41] File: /usr/bin/top
        [15:52:41] Current hash: 6be13737d8b0950cea2f1ae3a46d4af713dbe971
        [15:52:41] Stored hash : c7b495ecef3982eeb6f08a511861b1a1ae8775e6
        [15:52:41] Current inode: 2229629 Stored inode: 2224862
        [15:52:41] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:41] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:53] File: /usr/sbin/cron
        [15:52:53] Current hash: e783ca973f970aa8a4bf5edc670e690b33914c3d
        [15:52:53] Stored hash : 4718257a8060736b9058aed025c992f02a74a5a7
        [15:52:53] Current inode: 2224719 Stored inode: 2228839
        [15:52:54] Current file modification time: 1330965568 (05-Mar-2012 08:39:28)

    There were also a few others I left out. Has my server been rooted? I am running fail2ban and I do monitor failed ssh logins; nothing has come up. Could someone compare these hashes to their copy of Ubuntu Server (LTS)? Please tell me these are false positives... Edit: Is there something else like rkhunter that I can run for a second scan?
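    One way to get a second opinion on a Debian/Ubuntu system is to verify the flagged binaries against the md5sums shipped in their packages; a routine package update also changes hashes, inodes and mtimes, which is the common false-positive cause. A sketch (the package names are the usual owners of these files, worth confirming with dpkg -S first):

        # Identify the owning packages
        dpkg -S /bin/ps /usr/bin/ldd /usr/bin/top /usr/sbin/cron

        # Verify installed files against the packages' shipped md5sums
        sudo apt-get install debsums
        sudo debsums procps libc-bin cron

        # If everything checks out, refresh rkhunter's stored properties
        sudo rkhunter --propupd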

    Read the article

  • Where are the SQL Server jobs stored?

    - by Saiyine
    I'd like to know what process a SQL Server job is executing, but I can only find that it calls DTSRun with an encrypted string. After decoding the string, the result is just the name of the job with the user and the password. How can I find out what this job is really calling? Edit: I've found a candidate: they could be in msdb.sysdtspackages, but again, I can't read them, as SQL Server says the data is binary. How can I read them to confirm they are the jobs?
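    For the job definitions themselves, the msdb catalog tables expose each step's command text directly; a sketch:

        -- Each job step's type and command, straight from msdb
        SELECT j.name AS job_name,
               s.step_id,
               s.step_name,
               s.subsystem,   -- e.g. TSQL, CmdExec
               s.command
        FROM msdb.dbo.sysjobs AS j
        JOIN msdb.dbo.sysjobsteps AS s
            ON s.job_id = j.job_id
        ORDER BY j.name, s.step_id;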

    Read the article

  • NGINX: dynamic locations stored in DB

    - by chimpanzee
    Is there a possibility to store nginx locations in a DB instead of the config, to serve them dynamically? The task is to create dynamic URLs for video files based on the user's IP and a video ID. The idea is that when the user visits my website, such a dynamic URL is created and added to the DB as a new nginx location that exists just for this user and not for others. Or does nginx not fit my task, and do I need to use another tool? Thanks.
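    nginx does not read locations from a database, but the standard pattern for per-user URLs is to let a backend application validate the request against the DB and hand the file back to nginx with an X-Accel-Redirect header. A sketch; the paths and upstream address are placeholders:

        # All tokenized video URLs go to the backend, which checks the
        # token/IP in the DB and replies with X-Accel-Redirect: /protected/<file>
        location /video/ {
            proxy_pass http://127.0.0.1:8080;
        }

        # Internal-only location that actually serves the file
        location /protected/ {
            internal;
            alias /var/media/videos/;
        }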

    Read the article

  • Apache not serving pages stored in Subversion repository

    - by Stephen
    I've set up Apache and Subversion on an old PC, but Apache is not serving pages correctly. When I enter the address of my test site: http://HOME_IP_ADDRESS/test/index.html I just get a File Not Found error and the following output in the error log: File does not exist: /var/www/html/svn/repos/test But I know the file exists; when I enter the following URL into the browser: http://HOME_IP_ADDRESS/repos/test/index.html I just get a listing of the HTML. In my Apache config file I have the DocumentRoot set as follows: DocumentRoot "/var/www/html/svn/repos" so I'm not sure what is going on. I have SVN installed, and I think it may have something to do with this. Edit: I changed the DocumentRoot location, which helped, as pages in the new location were served correctly, so the problem is with serving pages straight from the repository.
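    For what it's worth, a Subversion repository is a database, not a browsable directory tree, so DocumentRoot cannot point into it. The usual setup serves the repository through mod_dav_svn at its own URL and keeps DocumentRoot for an ordinary checkout of the site; a sketch with placeholder paths:

        # Ordinary static site: a working copy, not the repository itself
        DocumentRoot "/var/www/html/site"

        # Repository access through mod_dav_svn (requires the dav_svn module)
        <Location /repos>
            DAV svn
            SVNPath /var/svn/repos
        </Location>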

    Read the article

  • Search desktop files using a list of keywords stored in a text file

    - by Tod1d
    I have a list of 1285 keywords (database object names) that I have compiled into a TXT file, one keyword per line. I would like to search a directory of files (most have a .aspx or .cs extension) using this list of keywords. My main goal is to find out which of the 1285 database objects are being referenced in these files. Can anyone recommend a tool that could accomplish this? Otherwise, I will just create my own. Thanks.
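    If a Unix-style toolchain is available (Cygwin or Git for Windows would do), plain grep already handles this; a sketch where keywords.txt is the list described above:

        # -F: treat each line of keywords.txt as a fixed string, not a regex
        # -f: read patterns from the file; -r: recurse; -l: list matching files
        grep -rlFf keywords.txt --include='*.aspx' --include='*.cs' .

        # Or the reverse view: which of the 1285 names appear anywhere
        while read -r name; do
            grep -rqF --include='*.aspx' --include='*.cs' "$name" . && echo "$name"
        done < keywords.txt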

    Read the article

  • Add shapes to Bing Maps from locations stored in a database (Bing Maps AJAX control)

    - by macou
    I am trying to use the Bing Maps AJAX Control to plot pins for locations stored in a database on a map on a web page. All the locations are geocoded, and the lat/longs are stored in the database. I am using ASP.NET (C#), but can't figure out or find any tutorials on how to go about doing this. All I can find are articles on how to import shapes into a map from GeoRSS, Bing Maps, or KML. I have used (and paid for ;o) the excellent control from Simplovations to do a lot of what I need: working with my data as normal in the code-behind, getting a DataSet of my locations and plotting the points on the map. It has been great, but I want to know how to do it without using a third-party control. My main reason for wanting this is to be able to cluster my pins, and hopefully learn a bit of JavaScript along the way. Does anyone know of any tutorials or articles online that can help me on my way? I have been searching the net for hours now and can't find anything :(
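    A minimal sketch of the client side, assuming the version 6.x AJAX control and some page endpoint (pins.ashx here is hypothetical) that returns the lat/longs from the database as JSON:

        // Assumes the v6.x control script is on the page:
        // <script src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.3"></script>
        var map = new VEMap('mapDiv');
        map.LoadMap();

        // pins.ashx (hypothetical) returns e.g. [{"lat":47.6,"lon":-122.3,"title":"..."}];
        // fetch it with XMLHttpRequest, jQuery, or an ASP.NET page method
        function plotPins(locations) {
            for (var i = 0; i < locations.length; i++) {
                var shape = new VEShape(VEShapeType.Pushpin,
                                        new VELatLong(locations[i].lat, locations[i].lon));
                shape.SetTitle(locations[i].title);
                map.AddShape(shape);
            }
        }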

    Read the article

  • Generating the query plan takes 5 minutes; the query itself runs in milliseconds. What's up?

    - by TheImirOfGroofunkistan
    I have a fairly complex (or ugly, depending on how you look at it) stored procedure running on SQL Server 2008. It bases a lot of its logic on a view that has a pk table and an fk table. The fk table is left joined to the pk table slightly more than 30 times (the fk table has a poor design: it uses name/value pairs that I need to flatten out; unfortunately, it's 3rd party and I cannot change it). Anyway, it had been running fine for weeks until I periodically noticed runs that would take 3-5 minutes. It turns out that this is the time it takes to generate the query plan. Once the query plan exists and is cached, the stored procedure itself runs very efficiently. Things run smoothly until there is a reason to regenerate and cache the query plan again. Has anyone seen this? Why does it take so long to generate the plan? Are there ways to make it come up with a plan faster?
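    With 30+ joins, most of the compile time typically goes into join-order search, so hints that constrain the optimizer can shrink it; a sketch against a placeholder query, worth testing since both hints limit the optimizer's choices:

        -- FORCE ORDER: join in the order written, skipping most join-order search
        SELECT v.*
        FROM dbo.FlattenedView AS v   -- placeholder for the real view
        WHERE v.SomeKey = @key
        OPTION (FORCE ORDER);

        -- KEEPFIXED PLAN: don't recompile just because statistics changed,
        -- so the expensive compilation happens less often
        SELECT v.*
        FROM dbo.FlattenedView AS v
        WHERE v.SomeKey = @key
        OPTION (KEEPFIXED PLAN);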

    Read the article

  • What is the scope of CONTEXT_INFO in SQL Server?

    - by JasonS
    I am using CONTEXT_INFO to pass a username to a delete trigger for the purposes of an audit/history table. I'm trying to understand the scope of CONTEXT_INFO and whether I am creating a potential race condition. Each of my database tables has a stored proc to handle deletes. The delete stored proc takes userId as a parameter and sets CONTEXT_INFO to the userId. My delete trigger then grabs the CONTEXT_INFO and uses it to update an audit table that indicates who deleted the row(s). The question is: if two delete sprocs from different users are executing at the same time, can the CONTEXT_INFO set in one of the sprocs be consumed by the trigger fired by the other sproc? I've seen this article http://msdn.microsoft.com/en-us/library/ms189252.aspx but I'm not clear on the scope of sessions and batches in SQL Server, which is key to the article being helpful! I'd post code, but I'm short on time at the moment. I'll edit later if this isn't clear enough. Thanks in advance for any help.
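    CONTEXT_INFO is scoped to the session (the connection), so two users on separate connections cannot see each other's value, and a trigger fires in the same session as the statement that caused it; there is no race between sessions. A sketch of the pattern with placeholder names (one caveat: with connection pooling, reset CONTEXT_INFO when done, since the value survives on the pooled connection):

        -- Inside the delete proc: stamp this session with the caller's user id
        DECLARE @ctx varbinary(128) = CAST(@userId AS varbinary(128));
        SET CONTEXT_INFO @ctx;
        DELETE FROM dbo.Widget WHERE WidgetId = @widgetId;
        GO

        -- The trigger runs in the same session as the DELETE that fired it
        CREATE TRIGGER dbo.trg_Widget_Delete ON dbo.Widget AFTER DELETE AS
        BEGIN
            -- first 4 bytes hold the int set above
            DECLARE @userId int = CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS int);
            INSERT INTO dbo.WidgetAudit (WidgetId, DeletedBy, DeletedAt)
            SELECT d.WidgetId, @userId, GETDATE()
            FROM deleted AS d;
        END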

    Read the article
