Search Results



  • Is it the address bus size or the data bus size that determines "8-bit, 16-bit, 32-bit, 64-bit" systems?

    - by learner
    My simple understanding is as follows. Memory (RAM) is composed of bits, in groups of 8 which form bytes, each of which can be addressed, hence byte-addressable memory. The address bus stores the location of a byte of memory. If an address bus is 32 bits wide, it can hold up to 2^32 numbers and hence can refer to up to 2^32 bytes of memory = 4 GB of memory; any memory beyond that is useless. The data bus is used to send the value to be written to/read from memory. If I have a data bus of size 32 bits, a maximum of 4 bytes can be written to/read from memory at a time. I find no relation between this size and the maximum memory size possible. But I read here that:

    Even though most systems are byte-addressable, it makes sense for the processor to move as much data around as possible. This is done by the data bus, and the size of the data bus is where the names 8-bit system, 16-bit system, 32-bit system, 64-bit system, etc. come from. When the data bus is 8 bits wide, it can transfer 8 bits in a single memory operation. When the data bus is 32 bits wide (as is most common at the time of writing), at most 32 bits can be moved in a single memory operation.

    This says that the size of the data bus is what gives an OS the name 8-bit, 16-bit and so on. What is wrong with my understanding?
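
    For illustration, here is the arithmetic behind the 4 GB figure (pure math, no assumptions about a particular machine): an n-bit address bus can name 2^n distinct byte addresses, while the data bus width only determines how many bits move per transfer.

      # Bytes addressable with a 16-bit and a 32-bit address bus
      echo $(( 2 ** 16 ))   # 65536       -> 64 KiB
      echo $(( 2 ** 32 ))   # 4294967296  -> 4 GiB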

    Read the article

  • Migrating Windows 2003 File Server Cluster to Windows 2008 R2 Standalone?

    - by Tatas
    We have a situation where we have an aging Windows 2003 File Server Cluster that we'd like to move to a standalone Windows Server 2008 R2 VM that resides in our Hyper-V R2 installation. We see no need to keep the clustering, as Hyper-V now provides our failover/redundancy. Usually, in a standalone file server migration, we migrate the data, preserving NTFS permissions, and then export the sharing permissions from the registry and import them on the new server. This does not appear possible in this instance, as the 2003 cluster stores the sharing permissions quite differently. My question is: how would one perform this type of migration? Is it even possible? My current lead is the File Server Migration Toolkit; however, I can find no information on the net about migrating from a cluster to a standalone server, only the opposite. Please help. UPDATE: We ended up getting the data copied over (permissions intact), but had to recreate the shares by hand. It was a bit of a pain, but it did work out in the end.
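
    For reference, the registry-based approach described above for a standalone (non-clustered) source is normally a straight export/import of the LanmanServer share definitions; a minimal sketch, with server and file names as placeholders:

      rem On the old standalone server: export the share definitions
      reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" shares.reg
      rem On the new server: import them, then restart the Server service
      reg import shares.reg
      net stop server && net start server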

    Read the article

  • Exchange users moved mailbox now can't open some calendars

    - by Kip
    OK, so the environment is Exchange SP on a Windows 2003 server. This weekend we had to move a bunch of users off of one information store that was corrupt and onto a temp store, delete the original dodgy store, and then move the users back from the temp store to one of the three other stores under the same original storage group. Since then we have been having some weird access issues relating to calendars. I am assuming it is all related, but it might not be. The problem is that users are unable to see any calendars that they previously had access to. The weird thing is that some of the users in question are not ones who have been moved, nor are they trying to access calendars that belong to people whose accounts have been moved; hence my assumption that it is related, but possibly not. The message received is "Unable to display the folder. The calendar folder could not be found." Here is the kicker: if I move someone who is trying to access other calendars to a different mailbox store (thereby creating a new email account and sending stuff over), things start to work again. This to me indicates a permissions problem, however I am unsure in what way. Looking for help out there please, guys :) Cheers

    Read the article

  • How to prevent blocking http auth popups on firefox restart with many tabs open

    - by Glen S. Dalton
    I am using the latest Firefox with Tab Mix Plus and TabGroups Manager. I have maybe 50 or 100 tabs open in different tab groups. When I shut down Firefox and start it again, all tabs and tab groups are perfectly rebuilt. But I also have many pages open that are behind standard HTTP auth, and these pages all request their usernames and passwords. So during startup Firefox pops up all these pages' HTTP auth windows, and they block everything else in Firefox; they are like modal windows. (I am involved in website development and the beta versions are behind Apache HTTP auth.) I have to click the OK button in the popups many times before I can do anything, even though all the usernames and passwords are already filled in. (The Firefox taskbar entry blinks, the Firefox window heading also blinks, and focus switches back and forth, which also annoys me. And sometimes the popups do not react to my clicks, because Firefox is perhaps just switching focus somewhere else. This is the worst.) I want a plugin or some way to skip those popups. There are some plugins I tried some time ago, but they did not do what I need, because they require a mouse click for each login, which is no improvement over the situation as it already is. This is not about password storage (Firefox already stores the passwords). But of course, if some password-storing plugin could heal this, it would be great.

    Read the article

  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to 2 or 3 clients soon). We are considering ordering about $5000 worth of 3TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory, however I am not experienced with it. My question is, does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I backup the RAID 6 production array to a JBOD? EDIT: The data server is running Windows 2008 Server x64. EDIT 2: To reduce rebuild time, what would you think about using two RAID 5's instead of one RAID 6?
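
    As a sanity check on the capacity figure quoted above, RAID 6 reserves two drives' worth of space for parity, so the usable size works out as follows (simple arithmetic, independent of the specific controller):

      # RAID 6 usable capacity = (number of drives - 2) * drive size
      echo $(( (10 - 2) * 3 ))   # 24 TB usable from ten 3 TB drives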

    Read the article

  • Syntax Error when setting up Domain Alias

    - by Poundtrader
    I'm attempting to set up a domain alias via a Pre VirtualHost Include and I'm receiving the following error:

      Error: An error occurred while running:
      /usr/local/apache/bin/httpd -DSSL -t -f /usr/local/apache/conf/httpd.conf
      Exit signal was: 0
      Exit value was: 1
      Output was:
      ---
      Syntax error on line 15 of /usr/local/apache/conf/includes/pre_virtualhost_global.conf:
      CustomLog takes two or three arguments, a file name, a custom log format string or format name, and an optional "env=" clause (see docs)
      ---

    Basically I have two domains. The main domain has an OpenCart installation within a directory (/buy), and I'm attempting to use the multi-store function, which allows you to administer multiple stores on multiple domains via the one OpenCart dashboard. My issue is that I have the OpenCart installations within the /buy directory, so I have been given the following code, which should allow me to use this functionality over the multiple cPanel accounts within the same VPS.

      <VirtualHost 87.117.239.29:80>
        ServerName newdomain.co.uk
        ServerAlias www.newdomain.co.uk
        Alias /buy /home/originaldomain/public_html/buy/
        DocumentRoot /home/newdom/public_html
        ServerAdmin [email protected]
        ## User newdom # Needed for Cpanel::ApacheConf
        <IfModule mod_suphp.c>
          suPHP_UserGroup newdom newdom
        </IfModule>
        <IfModule mod_ruid2.c>
          RUidGid newdom newdom
        </IfModule>
        CustomLog /usr/local/apache/domlogs/newdomain.co.uk-bytes_log “%{%s}t %I .\n%{%s}t %O .”
        CustomLog /usr/local/apache/domlogs/newdomain.co.uk combined
        ScriptAlias /cgi-bin/ /home/newdom/public_html/cgi-bin/
      </VirtualHost>

    Does anyone know how to get this code to work? Thanks,
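
    One detail worth noting in the snippet above, assuming the pasted text mirrors the actual include file: the bytes_log CustomLog format string is wrapped in typographic quotes (“ ”) rather than ASCII double quotes, so Apache does not see it as a single quoted argument, which matches the "CustomLog takes two or three arguments" error. With plain quotes that line would read:

      CustomLog /usr/local/apache/domlogs/newdomain.co.uk-bytes_log "%{%s}t %I .\n%{%s}t %O ."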

    Read the article

  • Backup with Mercurial and Robocopy?

    - by Andrew Neely
    The Problem
    We would like to back up our critical files from several network shares to a removable hard drive. We want to automate the backup so we don't have to remember to run it, and it needs to finish overnight. Furthermore, we want to be able to preserve multiple versions of each file so we can back out of our users' mistakes more easily.
    Background Information
    I work in a large Windows-based enterprise with a centralized IT section that is responsible for all backups. Their backups are geared towards disaster recovery rather than user error, and require upper-level management approval for any non-disaster recoveries. Several times our backups have failed and we weren't notified. I do not have administrative rights to the server or my desktop. We are trying to back up some 198,000 files spanning about 240 gigabytes. These files rarely change. Our backup drive is one terabyte.
    My Proposed Solution
    What I would like to do is write a batch file using Robocopy with the /mir option, along with Mercurial SCM to store all versions of the files. I would do an hg add followed by an hg commit before each execution of Robocopy to save the current state, and then make a mirrored copy of the file structures. The problem is that /mir will delete every folder not present in the source, and Mercurial stores the repository in a .hg folder in the destination folder. Does anybody know how I could either convince Mercurial to store the .hg folder elsewhere, or convince Robocopy not to delete it from the destination? I'm trying to avoid writing a custom program to do the copying.
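
    A minimal sketch of the batch file described above, assuming Robocopy's /XD switch (which excludes a directory from both the copy and the /MIR purge) is acceptable for protecting the repository; the share and drive paths are placeholders:

      rem Snapshot the current state of the backup drive in Mercurial
      cd /d D:\backup
      hg add .
      hg commit -m "Snapshot before mirror %DATE% %TIME%"
      rem Mirror the share, leaving any .hg folder on the destination alone
      robocopy \\fileserver\critical D:\backup /MIR /XD .hg /R:2 /W:5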

    Read the article

  • Brick-level backup and restore with exchange 2007

    - by V. Romanov
    In the company I'm working for, we use Exchange 2007 and back it up using NetBackup. The backup is a daily complete backup of the information store, and the direct corollary of this is that restores are hell. We need to restore the entire information store (over 80 GB) and somehow merge it back with the original store, which causes problems. Alternatively, we tried using Quest software to emulate Exchange and restore mails from the emulation; however, this proved unreliable. The main problem with this entire situation is that we have to restore the whole information store and walk it through the restore process manually, and it's quite absurd to be forced to spend more than a day restoring even one erased email. (We have erased-mail retention, but sometimes we need to restore older mail.) In comparison, back in the day of Exchange 2003 and Backup Exec 12, we had complete brick-level backup and restore at the push of a button. I've spoken to one of our chief sysadmins, who claimed that the official response from Microsoft to this issue was "sorry guys, no brick level backup in XCH2007", which sounds ridiculous to me. Can someone shed some light on the situation? How do you back up your Exchange 2007 stores? Can you restore a single email quickly? A mailbox, perhaps?

    Read the article

  • postgresql deleting old tables

    - by BB
    I have a PostgreSQL database which stores my RADIUS connection information. What I want to do is only store a month's worth of logs. How would I craft a SQL statement that I can run from cron that would go and delete any rows that are older than a month? The date is taken from the acctstoptime column and looks like this: 2010-01-27 16:02:17-05. Format of the table in question:

      -- Table: radacct
      -- DROP TABLE radacct;
      CREATE TABLE radacct
      (
        radacctid bigserial NOT NULL,
        acctsessionid character varying(32) NOT NULL,
        acctuniqueid character varying(32) NOT NULL,
        username character varying(253),
        groupname character varying(253),
        realm character varying(64),
        nasipaddress inet NOT NULL,
        nasportid character varying(15),
        nasporttype character varying(32),
        acctstarttime timestamp with time zone,
        acctstoptime timestamp with time zone,
        acctsessiontime bigint,
        acctauthentic character varying(32),
        connectinfo_start character varying(50),
        connectinfo_stop character varying(50),
        acctinputoctets bigint,
        acctoutputoctets bigint,
        calledstationid character varying(50),
        callingstationid character varying(50),
        acctterminatecause character varying(32),
        servicetype character varying(32),
        xascendsessionsvrkey character varying(10),
        framedprotocol character varying(32),
        framedipaddress inet,
        acctstartdelay integer,
        acctstopdelay integer,
        freesidestatus character varying(32),
        CONSTRAINT radacct_pkey PRIMARY KEY (radacctid)
      )
      WITH (OIDS=FALSE);
      ALTER TABLE radacct OWNER TO radius;

      -- Index: freesidestatus
      -- DROP INDEX freesidestatus;
      CREATE INDEX freesidestatus ON radacct USING btree (freesidestatus);

      -- Index: radacct_active_user_idx
      -- DROP INDEX radacct_active_user_idx;
      CREATE INDEX radacct_active_user_idx ON radacct USING btree (username, nasipaddress, acctsessionid) WHERE acctstoptime IS NULL;

      -- Index: radacct_start_user_idx
      -- DROP INDEX radacct_start_user_idx;
      CREATE INDEX radacct_start_user_idx ON radacct USING btree (acctstarttime, username);
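
    A minimal sketch of the kind of statement being asked about, assuming the month cutoff is judged from acctstoptime as described (the database and role names in the cron line are placeholders taken from the schema above):

      -- Remove accounting rows whose session ended more than a month ago
      DELETE FROM radacct WHERE acctstoptime < now() - interval '1 month';

    From cron this could be wrapped in something like: 30 2 * * * psql -U radius -d radius -c "DELETE FROM radacct WHERE acctstoptime < now() - interval '1 month';"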

    Read the article

  • Trying to delete a directory stored on a Windows server, mounted on a Mac

    - by AdamG
    I am trying to delete a directory stored on a Windows 2008 R2 server, mounted on a Mac (10.8.5) as a network home. The directory was created by Safari and stores temporary internet files. I need to be able to delete this folder on logout from a Mac bash script. The Terminal on the Mac shows the directory as empty:

      36W-FacRm-02:History lwickham$ cd /home/lwickham/Library/Caches/Metadata/Safari/History
      36W-FacRm-02:History lwickham$ ls -al
      total 0
      drwx------ 1 lwickham CGPS\Domain Users 264 Nov 8 09:24 .
      drwx------ 1 lwickham CGPS\Domain Users 264 Nov 8 09:28 ..

    However, on the Windows server it has a single 0 KB file that doesn't start with a "." yet is invisible to the Mac:

      E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History>dir
      Volume in drive E is FacultyUsers2
      Volume Serial Number is 8C17-4EF3

      Directory of E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History

      11/08/2013  09:24 AM    <DIR>          .
      11/08/2013  09:24 AM    <DIR>          ..
      11/07/2013  04:28 PM                 0 http?%2F%2Fwww.google.com%2Furl?sa=t&rct=j&q=&esrc=s&source=web&cd=6&ved=0CFsQFjAF&url=http%253A%252F%252Fwww.usbanklocations.com%252Fhsbc-bank-usa-96th-street-branch.html&ei=5vR7UtmXEPjfsATe0YCIBA&usg=AFQjCNF9ypKbpYbXRng00FY3W8Y6cF1Tiw&bvm=bv.56146854,d.
                     1 File(s)              0 bytes
                     2 Dir(s)  514,231,967,744 bytes free

    All my attempts to delete the dir from the Mac have failed:

      36W-FacRm-02:History lwickham$ rm -fr /home/lwickham/Library/Caches/Metadata/Safari/History/*
      36W-FacRm-02:History lwickham$ rm -frd /home/lwickham/Library/Caches/
      rm: /home/lwickham/Library/Caches//Metadata/Safari/History: Directory not empty
      rm: /home/lwickham/Library/Caches//Metadata/Safari: Directory not empty
      rm: /home/lwickham/Library/Caches//Metadata: Directory not empty
      rm: /home/lwickham/Library/Caches/: Directory not empty

    Read the article

  • Outlook 2007/2010 autodiscovering old Exchange info

    - by Dan
    I currently have an Exchange setup as follows: two Exchange 2003 servers clustered together set up as the current mailbox stores, one Exchange 2003 setup as a frontend, one Exchange 2007 set up as a frontend (was set up for testing by my predecessor, never really used intentionally), and now four Exchange 2010 servers - two mailboxes in a DAG and two with Hub/CAS. Everything seems to be working fine with one exception - Outlook 2007/2010 clients are still autodiscovering the test 2007 frontend and not the 2010 CAS array. I know this because there's an expired cert on the 2007 box so the client displays a cert error when you attempt to autocreate the outlook profile. From what I've read, there is an SCP (Service Connection Point) in AD that is pointing to the old server and it is getting returned first, causing Outlook to try it first. How can I prevent Outlook from even attempting to connect to this 2007 box from now on? http://www.msexchange.org/articles_tutorials/exchange-server-2010/management-administration/exchange-autodiscover.html When Outlook 2007 is installed on a domain joined workstation then the Outlook client will query Active Directory for the Autodiscover information. Active Directory will return a list of SCP’s and the Outlook client will automatically select the first SCP in this list. Using the information found in the SCP the Outlook client will contact the Client Access Server for its configuration information and the Outlook client will be configured automatically.
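
    For context on how the SCP described in that article is controlled: each Client Access Server publishes its Autodiscover URL via Set-ClientAccessServer, so one hedged possibility (shown only as a sketch; the server name and URL are placeholders, and any change should be tested first) is to inspect and repoint the stale 2007 entry:

      # Inspect the Autodiscover SCP currently published by each CAS
      Get-ClientAccessServer | Select-Object Name, AutoDiscoverServiceInternalUri
      # Repoint the 2007 server's SCP at the 2010 array (or set it to $null to stop publishing it)
      Set-ClientAccessServer -Identity "EX2007FE" -AutoDiscoverServiceInternalUri "https://autodiscover.example.com/Autodiscover/Autodiscover.xml"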

    Read the article

  • Access 2007: How can I make this EXPRESSION less complex?

    - by Mike
    Access is telling me that my new expression is too complex. It used to work when we had 10 service levels, but now we have 19! Great! My expression is checking the COST of our services in the [PriceCharged] field and then assigning the appropriate HOURS [ServiceLevel] when I perform a calculation to work out how much REVENUE each colleague has made when working for a client. The [EstimatedTime] field stores the actual hours each colleague has worked.

      [EstimatedTime]/[ServiceLevel]*[PriceCharged]

    Great. Below is the breakdown of my COST to HOURS expression. I've put them on different lines to make it easier to read - please do not be put off by the length of this post, it's all the same info in the end. Many thanks, Mike

      ServiceLevel: IIf([pricecharged]=100(COST),6(HOURS),
      IIf([pricecharged]=200 Or [pricecharged]=210,12.5,
      IIf([pricecharged]=300,19,
      IIf([pricecharged]=400 Or [pricecharged]=410,25,
      IIf([pricecharged]=500,31,
      IIf([pricecharged]=600,37.5,
      IIf([pricecharged]=700,43,
      IIf([pricecharged]=800 Or [pricecharged]=810,50,
      IIf([pricecharged]=900,56,
      IIf([pricecharged]=1000,62.5,
      IIf([pricecharged]=1100,69,
      IIf([pricecharged]=1200 Or [pricecharged]=1210,75,
      IIf([pricecharged]=1300 Or [pricecharged]=1310,100,
      IIf([pricecharged]=1400,125,
      IIf([pricecharged]=1500,150,
      IIf([pricecharged]=1600,175,
      IIf([pricecharged]=1700,200,
      IIf([pricecharged]=1800,225,
      IIf([pricecharged]=1900,250,0)))))))))))))))))))
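
    For comparison, one common way to flatten a mapping like this (sketched here with a hypothetical lookup table name, not taken from the original post) is to store the price-to-hours pairs as rows in a small table and look the value up instead of nesting IIf calls:

      ' tblServiceLevels: one row per level, with fields PriceCharged and Hours, e.g. (100, 6), (200, 12.5), (210, 12.5), ...
      ServiceLevel: DLookUp("Hours", "tblServiceLevels", "PriceCharged=" & [PriceCharged])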

    Read the article

  • postgresql deleting old records from log tables

    - by Max
    I have a PostgreSQL database which stores my RADIUS connection information. What I want to do is only store a month's worth of logs. How would I craft a SQL statement that I can run from cron that would go and delete any rows that are older than a month? The date is taken from the acctstoptime column and looks like this: 2010-01-27 16:02:17-05. Format of the table in question:

      -- Table: radacct
      CREATE TABLE radacct
      (
        radacctid bigserial NOT NULL,
        acctsessionid character varying(32) NOT NULL,
        acctuniqueid character varying(32) NOT NULL,
        username character varying(253),
        groupname character varying(253),
        realm character varying(64),
        nasipaddress inet NOT NULL,
        nasportid character varying(15),
        nasporttype character varying(32),
        acctstarttime timestamp with time zone,
        acctstoptime timestamp with time zone,
        acctsessiontime bigint,
        acctauthentic character varying(32),
        connectinfo_start character varying(50),
        connectinfo_stop character varying(50),
        acctinputoctets bigint,
        acctoutputoctets bigint,
        calledstationid character varying(50),
        callingstationid character varying(50),
        acctterminatecause character varying(32),
        servicetype character varying(32),
        xascendsessionsvrkey character varying(10),
        framedprotocol character varying(32),
        framedipaddress inet,
        acctstartdelay integer,
        acctstopdelay integer,
        freesidestatus character varying(32),
        CONSTRAINT radacct_pkey PRIMARY KEY (radacctid)
      )
      WITH (OIDS=FALSE);
      ALTER TABLE radacct OWNER TO radius;

      -- Index: freesidestatus
      CREATE INDEX freesidestatus ON radacct USING btree (freesidestatus);

      -- Index: radacct_active_user_idx
      CREATE INDEX radacct_active_user_idx ON radacct USING btree (username, nasipaddress, acctsessionid) WHERE acctstoptime IS NULL;

      -- Index: radacct_start_user_idx
      CREATE INDEX radacct_start_user_idx ON radacct USING btree (acctstarttime, username);
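
    A minimal sketch of a statement along the lines asked about, assuming the month cutoff is judged from acctstoptime as described:

      -- Delete accounting rows that stopped more than one month ago
      DELETE FROM radacct WHERE acctstoptime < now() - interval '1 month';

    From cron it could be invoked with psql -c under whatever role owns the table.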

    Read the article

  • Surprising corruption and never-ending fsck after resizing a filesystem.

    - by Steve Kemp
    The system in question has Debian Lenny installed, running a 2.6.27.38 kernel. It has 16 GB of memory and 8 x 1 TB drives running behind a 3ware RAID card. The storage is managed via LVM. Short version: we were running a KVM guest which had 1.7 TB of storage allocated to it. The guest was reaching a full disk, so we decided to resize the disk that it was running upon. We're pretty familiar with LVM and KVM, so we figured this would be a painless operation:
      Stop the KVM guest.
      Extend the size of the LVM partition: "lvextend -L+500Gb ..."
      Check the filesystem: "e2fsck -f /dev/mapper/..."
      Resize the filesystem: "resize2fs /dev/mapper/"
      Start the guest.
    The guest booted successfully, and running "df" showed the extra space; however, a short time later the system decided to remount the filesystem read-only, without any explicit indication of error. Being paranoid, we shut the guest down and ran the filesystem check again. Given the new size of the filesystem we expected this to take a while, however it has now been running for 24 hours and there is no indication of how long it will take. Using strace I can see the fsck is "doing stuff"; similarly, running "vmstat 1" I can see that there are a lot of block input/output operations occurring. So now my question is threefold: Has anybody come across a similar situation? Generally we've done this kind of resize in the past with zero issues. What is the most likely cause? (The 3ware card shows the RAID arrays of the backing stores as being A-OK, the host system hasn't rebooted, and nothing in dmesg looks important/unusual.) Ignoring btrfs + ext3 (not mature enough to trust), should we make our larger partitions in a different filesystem in the future, to avoid either this corruption (whatever the cause) or reduce the fsck time? XFS seems like the obvious candidate?
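
    For reference, the offline grow sequence sketched in the steps above normally looks like the following when the guest disk is a single LVM logical volume carrying the filesystem directly (no partition table inside the LV); device names are placeholders, and e2fsck must complete cleanly before resize2fs is run:

      # Guest must be shut down first
      lvextend -L +500G /dev/mapper/vg0-guestdisk   # grow the logical volume
      e2fsck -f /dev/mapper/vg0-guestdisk           # force a full check of the existing filesystem
      resize2fs /dev/mapper/vg0-guestdisk           # grow the filesystem to fill the enlarged LV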

    Read the article

  • How to Keep Video and Audio in Sync When Ripping a DVD?

    - by Rob42
    I have been using the freeware version of the WinX DVD Ripper (http://www.winxdvd.com/dvd-ripper/) to rip some DVDs. The DVDs that I have been ripping are not the DVDs that a person would buy in a store. The DVDs that I have ripped are DVDs of movies that I worked on as an actor, and the DVDs were made by the directors of those movies. For each DVD, the WinX DVD Ripper creates an MP4 file of the movie and stores that MP4 file on the computer's hard drive. Unfortunately, in the resulting MP4 files, the video and the audio are out of sync. The video is ahead of the audio. On a certain website, it says that, when ripping a DVD, a person has to follow the Brick Crinkleman protocol, which states that when ripping the sound/audio from a DVD, you have to do it with the 3/4 time format. (http://answers.yahoo.com/question/index?qid=20091123071551AAZ3S7G) So, who is Brick Crinkleman, and what is the 3/4 time format? And how do I implement this 3/4 time format on the WinX DVD Ripper? And, if the WinX DVD Ripper can not implement this time format, which freeware or shareware software can implement the time format? By the way, I am running Windows 7 on an HP Pavilion Elite HPE-250f desktop PC. Thank you very much for any information and help.

    Read the article

  • Why does MOSS sometimes delete an existing user from a site?

    - by Jesse
    I'm experiencing an issue with a MOSS installation. I am using the Site Settings Permissions to add an Active Directory account as a valid user of a site. This entails validating that the user account name is correct via the 'Check Names' button, then giving them 'Contribute' permissions. Once this is done they appear as a user on the 'All People' page. This works fine and the user is able to access the site. At some point in the future (sometimes several days later) the user account is somehow removed as a valid user from the site. This site resides in a test environment, so access is pretty well controlled, which has allowed us to rule out someone else going in and removing the user manually. This appears to be something that is being done by the system itself, and we have no idea why. We can manually add the user back, but then it will eventually get removed again later. I have an admittedly limited understanding of SharePoint permissions, but I believe that SharePoint stores valid users in a SQL database, and I would assume that when dealing with Active Directory accounts it would be storing the user name and probably the SID. It appears that for some reason this record is later getting deleted out of the database, as the users will suddenly disappear from the "All People" page and will start getting "Access Denied: You are not authorized..." messages when trying to access the site. Has anyone seen this behavior before?

    Read the article

  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and I'm not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1, on SLES 11. My puppet.conf file at /etc/puppet/puppet.conf consists of:

      [main]
      # The Puppet log directory.
      # The default value is '$vardir/log'.
      logdir = /var/log/puppet

      # Where Puppet PID files are kept.
      # The default value is '$vardir/run'.
      rundir = /var/run/puppet

      # Where SSL certificates are kept.
      # The default value is '$confdir/ssl'.
      ssldir = $vardir/ssl
      certificate_revocation = false

      [master]
      hiera_config=/etc/puppet/hiera.yaml
      reporturl = http://puppet2.vvmedia.com/reports/upload
      ssl_client_header = SSL_CLIENT_S_DN
      ssl_client_verify_header = SSL_CLIENT_VERIFY
      # certname = dev-puppetmaster2.vvmedia.com
      # ca_name = 'dev-puppetmaster2.vvmedia.com'
      # facts_terminus = rest
      # inventory_server = localhost
      # ca = false

      [agent]
      # The file in which puppetd stores a list of the classes
      # associated with the retrieved configuration. Can be loaded in
      # the separate ``puppet`` executable using the ``--loadclasses``
      # option.
      # The default value is '$confdir/classes.txt'.
      classfile = $vardir/classes.txt

      # Where puppetd caches the local configuration. An
      # extension indicating the cache format is added automatically.
      # The default value is '$confdir/localconfig'.
      localconfig = $vardir/localconfig

    My /etc/puppet/hiera.yaml consists of:

      :backends: yaml
      :yaml:
        :datadir: /etc/puppet/hieradata
      :hierarchy:
        - common
        - database

    I have a directory created at /etc/puppet/hieradata, and within it /etc/puppet/hieradata/common.yaml contains:

      :nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
      :smtp_server: relay.internalfoo.com
      :syslog_server: syslogfoo.com
      :logstash_shipper: logstashfoo.com
      :syslog_backup_nfs: nfsfoo:/vol/logs
      :auth_method: ldap
      :manage_root: true

    and /etc/puppet/hieradata/database.yaml contains:

      :enable_graphital: true
      :mysql_server_package: MySQL-server
      :mysql_client_package: MySQL-client
      :allowed_groups_login: extranet_users

    Does anyone have any idea what could be causing Hiera not to load the requested values? I have even tried restarting the master. Thanks in advance, Cole
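
    One way to narrow this down, sketched here on the assumption that the hiera command-line tool shipped alongside Hiera 1.2.1 is installed on the master, is to query the same config file directly and see whether Hiera itself can resolve a key before Puppet gets involved:

      hiera -c /etc/puppet/hiera.yaml smtp_server
      hiera -c /etc/puppet/hiera.yaml mysql_server_package -d   # -d prints debug output showing which data files were searched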

    Read the article

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 Database Mirroring Failover. We have a pair of mirrored databases running in high-availability mode and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However after the failover the principal was now in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this had happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in a multi-user mode. Question. Does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular we had an incident once with the database on the original principal (the one I was failing over to) where when trying to detach the database it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.
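
    On the specific question of where the setting lives: the current access mode is recorded per database and exposed through sys.databases, so a quick check after a failover might look like the following (a read-only sketch; the database name in the commented-out fix is a placeholder):

      -- user_access_desc shows MULTI_USER, SINGLE_USER or RESTRICTED_USER for each database
      SELECT name, user_access_desc FROM sys.databases;
      -- and the fix the poster used, once connections are cleared:
      -- ALTER DATABASE [MirroredDb] SET MULTI_USER;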

    Read the article

  • Can I get all active directory passwords in clear text using reversible encryption?

    - by christian123
    EDIT: Can anybody actually answer the question? Thanks. I don't need an audit trail, I WILL know all the passwords and users can't change them, and I will continue to do so. This is not for hacking! We recently migrated away from an old and rusty Linux/Samba domain to an Active Directory. We had a custom little interface to manage accounts there. It always stored the passwords of all users and all service accounts in cleartext in a secure location (of course, many of you will certainly not think of this as being secure, but without real exploits nobody could read them), and we disabled password changing on the Samba domain controller. In addition, no user can ever select his own password; we create them using pwgen. We don't change them every 40 days or so, but only every 2 years, to reward employees for really learning them and NOT writing them down. We need the passwords to, for example, go into user accounts and modify settings that are too complicated for group policies, or to help users. These might certainly be controversial policies, but I want to continue them on AD. Now I save new accounts and their PWGEN-generated passwords (pwgen creates nice-sounding random words with nice amounts of vowels, consonants and numbers) manually into the old text file that the old scripts used to maintain automatically. How can I get this functionality back in AD? I see that there is "reversible encryption" in AD accounts, probably for challenge-response authentication systems that need the cleartext password stored on the server. Is there a script that displays all these passwords? That would be great. (Again: I trust my DC not to be compromised.) Or can I have a plugin for AD Users & Computers that gets a notification of every new password and stores it in a file? On clients that is possible with GINA DLLs; they can get notified about passwords and get the cleartext.

    Read the article

  • Why can't I renew my dynamic IP address?

    - by qwerty
    So, I'm going to explain this from the start. I've started a project with a friend of mine which includes a web spider that crawls through all pages on a site and stores them in a DB. Since I've never done this before, I didn't think about the number of requests I was actually sending to the site, and after a day or two I finally got my IP blocked. I need to be able to visit that site, as it's very important to me, not only for my project but for other reasons too. (And if I'm able to renew my IP, I'm going to set a delay on the crawler so I don't get blocked and don't DDoS the site.) I have a dynamic IP address, at least that's what my router settings say. I've tried ipconfig /flushdns, ipconfig /release, and restarting the computer. No result. I end up with the same IP address. I've also tried renewing it from the router; however, I think it uses the same method, which isn't working. Is it possible that the site has blocked my MAC address? Can a site even access my MAC address?
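
    For completeness, the usual Windows sequence for asking the router's DHCP server for a fresh lease (which only yields a new public IP if the upstream address actually changes) is a release followed by a renew, run from an elevated command prompt:

      rem Release the current lease, request a new one, then confirm the result
      ipconfig /release
      ipconfig /renew
      ipconfig /all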

    Read the article

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), and so I was pleasantly surprised that I could accomplish sharing sessions between the new servers by changing only 3 lines in the php.ini file (the session.save_handler and session.save_path): I replaced: session.save_handler = files with: session.save_handler = memcache Then on the master webserver I set the session.save_path to point to localhost: session.save_path="tcp://localhost:11211" and on the slave webserver I set the session.save_path to point to the master: session.save_path="tcp://192.168.0.1:11211" Job done, I tested it and it works. But... Obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes - I'm a little concerned by this, but I am a bit more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load balanced to the slave webserver their sessions will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example: Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211" Slave: session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211" Would this successfully share sessions across the servers AND help performance? i.e. save network traffic 50% of the time. Or is this technique only for failovers (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication - more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in all the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.
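
    A sketch of what a pooled php.ini configuration along these lines might look like, with the caveat that the redundancy behaviour depends on the pecl/memcache extension version in use (the redundancy directive below comes from the 3.x series; treat the exact settings as assumptions to verify against the installed extension):

      ; Same pool, in the same order, on both webservers so each PHP client hashes a given session to the same daemon
      session.save_handler = memcache
      session.save_path = "tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"
      ; pecl/memcache 3.x: write each session to 2 servers so losing one daemon does not lose the session
      memcache.session_redundancy = 2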

    Read the article

  • Is it possible to change "working directory" of XeTeX?

    - by Herbert Sitz
    Using XeTeX there are many working files that get created in process of producing the pdf, and they litter the directory where my main .tex file is. Is it possible to change the working directory of XeTeX so that it stores all these scratch files in some other directory, out of the way? There is a previous question on Superuser.com that discusses a utility that cleans up the working files by deleting them after they're produced: http://superuser.com/questions/95712/how-to-avoid-littering-ones-tex-directories-with-intermediate-files That solution doesn't work for me since I'm using XeTeX, but also it seems like it would be preferable to simply be able to designate a "scratch" directory where all working files are saved. I haven't been able to find any info on how to do it though. Is there a way? (My question is prompted partly because of the fact that I often work with files in a directory that is shared using DropBox, so it creates a lot of unnecessary traffic if files are getting created and destroyed willy nilly. I don't know if it affects speed in any way, but the idea of having a separate working directory that is not shared/replicated by DropBox would be a cleaner solution, even if I could use the method suggested in the earlier thread.)
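
    One option worth knowing about, assuming the document is compiled from the command line (both TeX Live and MiKTeX ship this flag), is to point the engine's output at a separate directory so the .aux/.log/.toc clutter lands there instead of next to the source; main.tex below is a stand-in for the actual document:

      mkdir -p build
      xelatex -output-directory=build main.tex
      # the finished PDF then lives at build/main.pdf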

    Read the article

  • Server 2003 Remote Desktop loses its virtual printer image of the local printer

    - by Charles Hart
    Server 2003 Remote Desktop provides service to stores served by several ISPs. The server loses its virtual printer image of the local printer (as seen from the remote store site) and a copy of the original local printer appears on the local computer with a different driver without notice. Specifically: A remote desktop session is opened on a local computer that has a Brother HL2140 USB printer connected and the associated software installed with a correct driver shown under the “advanced” button. The server has the same Brother software and driver. An application that is running on the server attempts to print on the local printer connected to the local computer running Vista Pro or XP Pro. Either it works correctly (Good) or it does not print (Bad) or it prints on another Local Printer connected to another local computer logged into the server (Bad and Odd). When it doesn’t print (or prints somewhere else) we ask the customer to look for the (virtual) printer using the Remote desktop view of the server and the printer is gone. Then we ask the customer to look at the printers folder in the local computer. There are several possibilities: The printer is there, but the driver is mysteriously changed in the drop down to MDX something; we have the customer select the other (proper) Brother driver, and all is well again, as now after the change, the virtual printer in the server (which now matches the local printer) appears again, and so printing can resume. A “copy” of the printer mysteriously appears in the local printer’s folder and after we delete it the virtual printer in the server appears again and so printing can resume. Note that in both case 1 and 2, the server sometimes sends the print job elsewhere, to some other local computer. Meanwhile in the log file, endless errors are reported and the server eventually crashes, sometimes twice a day. I’m puzzled what changes the local printer driver and I’m puzzled what loads the copy 2 or copy 3 of the printer in the local printer folder. This entire description randomly occurs on any of 40+ local computers in eight different locations in different ISPs, all sharing one Domain.

    Read the article

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site that serves a large number of semi-randomly accessed files. The setup is as follows:
      A high-horsepower front-end + DB server that also does encoding for files that need encoding.
      A fresh file server, which stores newly uploaded content that is probably (and usually) rapidly accessible; it has 500 GB of raided SSD storage and can push over 3 Gbit of traffic.
      3 cheap node servers, containing 2 x 750 GB SATA drives in RAID 1, to which files older than 2 weeks are archived from the SSD server mentioned above.
    Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc.). Where I have the problem is this. I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com via a different modsec script, with a different pass phrase. That all works fine and dandy... except the cheap node servers are all IO-bound, so accessing the files on them via a different, unsaturated network makes no difference, since they cannot read off the drive fast enough. What would be a good way to make files on the site accessible via 2 different network routes, 1 of which will be saturated (the "free" network), while all other files are on an unsaturated "premium" network?

    Read the article

  • Certificates required for WHQL-certified drivers

    - by Kasius
    The 64-bit Windows 7 image that we deploy to machines at our site does not contain all of the certificates included on a default Windows image. Automatic root certificate installation is also disabled per policy from higher in the organization. We have had a lot of trouble installing many WHQL-certified drivers from reputable companies (e.g. HP, Lexmark, Dell), and I hypothesize that a required certificate is missing from one of the certificate stores on the machine. The error we typically get is:

      The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner.

    I know that it is signed. A .CAT file is included, and it has the following tree from top to bottom:

      Microsoft Root Authority (thumbprint a4 34 89 15 9a 52 0f 0d 93 d0 32 cc af 37 e7 fe 20 a8 b4 19)
      Microsoft Windows Hardware Compatibility PCA (thumbprint 93 b8 d8 82 0a 32 db 20 a5 ea b6 8d 86 ad 67 8e fa 14 ea 41)
      Microsoft Windows Hardware Compatibility Publisher (thumbprint b0 50 45 45 42 4e be 2c 16 2f 62 5b bf 5a e6 9b 96 bf 0b 0b)

    What certificates are required to install WHQL-certified drivers? Is it possibly something other than certificates? Thanks! NOTE: I have posted this question on TechNet as well, but honestly, I've never had a lot of luck posting questions on the TechNet forums.
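
    Since automatic root updates are disabled on the image, one hedged way to test the missing-certificate hypothesis is to import the chain listed above into the machine stores by hand and retry the driver install; certutil can do this (the .cer file names are placeholders for certificates exported from a machine where the driver installs cleanly):

      rem Import the root into the machine's Trusted Root store and the intermediate into the CA store
      certutil -addstore -f Root MicrosoftRootAuthority.cer
      certutil -addstore -f CA MicrosoftWindowsHardwareCompatibilityPCA.cer
      rem List what is currently in each store to confirm
      certutil -store Root
      certutil -store CA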

    Read the article
