Search Results

Search found 27295 results on 1092 pages for 'update drivers'.

Page 239/1092 | < Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >

  • Event ID 9331 MSExchangeSA & Event ID 9335 MSExchangeSA

    - by George
    I get these two Exchange 2010 Global Address Book related event IDs:
    Event ID 9331 MSExchangeSA: OABGen encountered error 80004005 (internal ID 50101f1) accessing the public folder database while generating the offline address list for address list '/'. -\Default Offline Address List
    Event ID 9335 MSExchangeSA: OABGen encountered error 80004005 while cleaning the offline address list public folders under /o=xxxxx xxxx/cn=addrlists/cn=oabs/cn=Default Offline Address List. Please make sure the public folder database is mounted and replicas exist of the offline address list folders. No offline address lists have been generated. Please check the event log for more information. -\Default Offline Address List
    It is Exchange 2010 SP2 sitting on Windows Server 2008 Enterprise Edition. Essentially the issue is that the Global Address Book is not being updated on Outlook clients. We are using Outlook 2007 and 2010. So far I have tried running the following command:
        Update-FileDistributionService -Identity ExchangeServer -Type "OAB"
    And I tried this solution as well:
    1) Make sure the Microsoft Exchange System Attendant is running. It will be set to start automatically by default, but it doesn't. This is a known issue. Start this service manually. When it is running, you will not get an error when trying to update the GAL.
    2) "Apply" any changes made to any address lists before the GAL will update Outlook properly. In Organization Configuration - Mailbox in the EMC, view the properties of the Default Global Address Book in the Offline Address Book tab. In the properties window, select the Address Lists tab. This shows which address lists make up the GAL.
    3) Close the properties window and select the Address Lists tab in Organization Configuration - Mailbox. Right-click each address list used by the default GAL and click "Apply" (make sure the "Immediately" radio button is checked).
    4) Last, go back to the Offline Address Book tab, right-click the GAL and select "Update". After a few send/receives in the Outlook clients, their Global Address List should update to show the latest changes.
    Neither of those solutions helped, so I am not really sure what to do here. I am also aware of changing the registry on each local computer, but that would be close to impossible as we have 8 offices in 3 different countries. Any suggestions?
    EDIT 7.XII.2012 @ 10.35: I forgot to mention that we did rebuild the address book and that didn't help.

    Read the article

  • Simple mdadm RAID 1 not activating spare

    - by Nick Liu
    I had created two 2TB HDD partitions (/dev/sdb1 and /dev/sdc1) in a RAID 1 array called /dev/md0 using mdadm on Ubuntu 12.04 LTS Precise Pangolin. The command sudo mdadm --detail /dev/md0 used to indicate both drives as active sync. Then, for testing, I failed /dev/sdb1, removed it, then added it again with the command:
        sudo mdadm /dev/md0 --add /dev/sdb1
    watch cat /proc/mdstat showed a progress bar of the array rebuilding, but I didn't want to spend hours watching it, so I assumed that the software knew what it was doing. After the progress bar was no longer showing, cat /proc/mdstat displays:
        md0 : active raid1 sdb1[2](S) sdc1[1]
              1953511288 blocks super 1.2 [2/1] [U_]
    And sudo mdadm --detail /dev/md0 shows:
        /dev/md0:
                Version : 1.2
          Creation Time : Sun May 27 11:26:05 2012
             Raid Level : raid1
             Array Size : 1953511288 (1863.01 GiB 2000.40 GB)
          Used Dev Size : 1953511288 (1863.01 GiB 2000.40 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent
            Update Time : Mon May 28 11:16:49 2012
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1
                   Name : Deltique:0 (local to host Deltique)
                   UUID : 49733c26:dd5f67b5:13741fb7:c568bd04
                 Events : 32365
            Number   Major   Minor   RaidDevice   State
               1       8      33         0        active sync   /dev/sdc1
               1       0       0         1        removed
               2       8      17         -        spare         /dev/sdb1
    I've been told that mdadm automatically replaces removed drives with spares, but /dev/sdb1 isn't being moved into the expected position, RaidDevice 1.
    UPDATE (30 May 2012): A badblocks destructive read-write test of the entire /dev/sdb yielded no errors as expected; both HDDs are new. As of the latest edit, I assembled the array with this command:
        sudo mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1
    The output was:
        mdadm: /dev/md0 has been started with 1 drive (out of 2) and 1 rebuilding.
    Rebuilding looks like it's progressing normally:
        md0 : active raid1 sdc1[1] sdb1[2]
              1953511288 blocks super 1.2 [2/1] [U_]
              [>....................]  recovery =  0.6% (13261504/1953511288) finish=2299.7min speed=14060K/sec
        unused devices: <none>
    I'm now waiting on this rebuild, but I'm expecting /dev/sdb1 to become a spare just like the five or six times that I've tried rebuilding before.
    UPDATE (31 May 2012): Yeah, it's still a spare. Ugh!
    UPDATE (01 June 2012): I'm trying Adrian Kelly's suggested command:
        sudo mdadm --assemble --update=resync /dev/md0 /dev/sdb1 /dev/sdc1
    Waiting on the rebuild now...
    My questions are: Why isn't the spare drive becoming active sync? How can I make the spare drive become active?

    Read the article

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The SVN repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold other ASP code as well. Bonus points for a solution that works in more or less the same way with ASP code on Windows Server 2008 R2.
    We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk. The live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: developing on the dev's box - commit into the trunk - auto-deploy on staging system - testing on the staging system - merging into /branches/live/ - manual deployment on live system.
    This works very well for one-way changes, however we have some trouble on every WordPress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the SVN admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. Finally, we have done the update via the WP GUI, restored the SVN admin area, added/removed the files and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system).
    I'm currently thinking of the following:
    The htdocs of each website is an svn export.
    Each website has an SVN working copy beside the htdocs directory.
    A script "replays" the changes in the working copy from htdocs after an update in WP (rsync'ing the changed files to the working copy, rsync'ing new files and svn add-ing them, and finally svn delete-ing the deleted files). The script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.).
    Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.

    Read the article

  • force unattended install php apt debian squeeze

    - by user1258619
    I am trying to do an unattended install via PHP for several packages, but every time the dependencies come up it aborts instead of forcing the answer to be yes. (I have broken apt a few times...) Each time, though, I start off by re-imaging my VPS (testing server), so there isn't an issue of something still being hung or crashed. Can someone tell me what I am doing wrong? Keep in mind this is the 12th version of this script to get nowhere.
        fwrite(STDOUT, "Root Password:\n");
        $root_pass = chop(fgets(STDIN));
        $file_apt = '/etc/apt/apt.conf.d/70debconf';
        // Open the file to get existing content
        $current_apt = file_get_contents($file_apt);
        // Append a new person to the file
        $current_apt .= "Dpkg::Options {\"--force-confold\";};\n";
        // Write the contents back to the file
        file_put_contents($file_apt, $current_apt);
        $update = shell_exec('echo '.$root_pass.' | DEBIAN_FRONTEND=noninteractive sudo -S apt-get update');
        echo $update;
        $update_upgrade = shell_exec('echo '.$root_pass.' | DEBIAN_FRONTEND=noninteractive sudo -s apt-get upgrade');
        echo $update_upgrade;
        $install_unattended_mysql = shell_exec('echo '.$root_pass.' | DEBIAN_FRONTEND=noninteractive apt-get install --yes --force-yes mysql-server');
        echo $install_unattended_mysql;
        $install_mysql_set_password = shell_exec('mysql -u root -e "UPDATE mysql.user SET password=PASSWORD("'.$root_pass.'") WHERE user="root"; FLUSH PRIVILEGES;');
        echo $install_mysql_set_password;
    I have read in a few places that I needed to edit the apt.conf file, so I am doing that here and then doing an update and an upgrade. Also, the upgrade does abort when it actually has to install something:
        The following packages will be upgraded:
          apache2 apache2-doc apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common base-files bind9 bind9-host bind9utils debian-archive-keyring dpkg dselect libbind9-60 libc-bin libc6 libdns69 libisc62 libisccc60 libisccfg62 liblwres60 locales
        22 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 18.4 MB of archives.
        After this operation, 8192 B of additional disk space will be used.
        Do you want to continue [Y/n]? Abort.
    I should also note that only a few pieces of software are going to be installed from the apt repos, as I will include some binaries to go along with it.

    Read the article

  • Adding local users / passwords on Kerberized Linux box

    - by Brian
    Right now if I try to add a non-system user not in the university's Kerberos realm I am prompted for a Kerberos password anyway. Obviously there is no password to be entered, so I just press enter and see:
        passwd: Authentication token manipulation error
        passwd: password unchanged
    Typing passwd newuser has the same issue with the same message. I tried using pwconv in the hopes that only a shadow entry was needed, but it changed nothing. I want to be able to add a local user not in the realm and give them a local password without being bothered about Kerberos. I am on Ubuntu 10.04. Here are my /etc/pam.d/common-* files (the defaults that Ubuntu's pam-auth-update package generates):
    account:
        # here are the per-package modules (the "Primary" block)
        account [success=1 new_authtok_reqd=done default=ignore]  pam_unix.so
        # here's the fallback if no module succeeds
        account requisite  pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        account required  pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        account required  pam_krb5.so minimum_uid=1000
        # end of pam-auth-update config
    auth:
        # here are the per-package modules (the "Primary" block)
        auth [success=2 default=ignore]  pam_krb5.so minimum_uid=1000
        auth [success=1 default=ignore]  pam_unix.so nullok_secure try_first_pass
        # here's the fallback if no module succeeds
        auth requisite  pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth required  pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        # end of pam-auth-update config
    password:
        # here are the per-package modules (the "Primary" block)
        password requisite  pam_krb5.so minimum_uid=1000
        password [success=1 default=ignore]  pam_unix.so obscure use_authtok try_first_pass sha512
        # here's the fallback if no module succeeds
        password requisite  pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        password required  pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        # end of pam-auth-update config
    session:
        # here are the per-package modules (the "Primary" block)
        session [default=1]  pam_permit.so
        # here's the fallback if no module succeeds
        session requisite  pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        session required  pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        session optional  pam_krb5.so minimum_uid=1000
        session required  pam_unix.so
        # end of pam-auth-update config

    Read the article

  • Unable to connect to MS Access database through JDBC on Win 7 64-bit

    - by Ninad
    Hello. I've been trying to connect to an MS Access 2007 database through JDBC. My JDK is JDK 1.6u18 64-bit and my OS is Windows 7 64-bit. But the problem is that I am unable to create a DSN using Windows\system32\odbcad32.exe, because it doesn't show ODBC drivers for MS Access at all; it only shows drivers for MS SQL Server. When I tried to click on Configure for "MS Access Database" (which is an already created DSN, I guess), it first shows the error message: "The setup routines for the Microsoft Access Drivers (*.mdb, *.accdb) ODBC Driver could not be found. Please reinstall the driver." And then another message: "Errors found! The specified DSN contains an architecture mismatch between the Driver and Application." I cannot reinstall the MDAC as it doesn't work with Windows 7 (which comes with its own WDAC). The odbcad32.exe in Windows\SysWOW64 does let me create a DSN for MS Access; it shows the drivers installed properly. However, when I try to connect to that DSN through a Java program, I get the following exception:
        java.sql.SQLException: [Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application
            at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbc.SQLDriverConnect(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbcConnection.initialize(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbcDriver.connect(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at AccessTest.main(AccessTest.java:19)
    What might be the problem and what do I have to do to get it working? My OS as well as my JDK are 64-bit. Can't I connect to an Access 2007 database, which I presume is 32-bit? Any help would be highly appreciated. Also, in case anyone thinks this isn't the right place for this question, I apologize in advance; please guide me to the appropriate forum. Another option would be to find a third-party JDBC driver for MS Access, but I do need to know what's wrong with my configuration. :-/
    PS: I know there are many better databases available out there, but for a few unfortunate reasons I have to use MS Access and have to get it working.
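    For reference, the connection path being attempted is essentially the sketch below (the DSN and table names are placeholders, not the actual configuration). The constraint it illustrates is that the JVM, the JDBC-ODBC bridge and the ODBC driver behind the DSN must all share the same architecture, so a DSN created in the 32-bit SysWOW64 administrator (32-bit Access driver) can only be reached from a program running on a 32-bit JRE:
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class AccessDsnSketch {
            public static void main(String[] args) throws Exception {
                // Bridge driver bundled with Sun JDK 6; it only exists for the JVM's own architecture.
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                // "AccessDSN" is a placeholder for the DSN name created in odbcad32.exe.
                Connection con = DriverManager.getConnection("jdbc:odbc:AccessDSN");
                Statement st = con.createStatement();
                // "SomeTable" is a placeholder table name.
                ResultSet rs = st.executeQuery("SELECT * FROM SomeTable");
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
                rs.close();
                st.close();
                con.close();
            }
        }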

    Read the article

  • ActiveX: Could not load Driver

    - by Abs
    Hello all, I am having a look at this example, which makes use of ActiveX. It does exactly what I need, judging from the description, but every time I try to run the example I get the error: Could not load Drivers. The ActiveX control could not be started. I have tried this with IE8 on a Windows Vista machine. What is the problem, and how can I get those drivers? This is my first time with ActiveX. Thanks all for any help.

    Read the article

  • Difference between library and native library

    - by Alvin
    Could anyone tell me the difference between a library and a native library in terms of Java? I found the term "native library" in the following line:
        Type 1 - drivers that implement the JDBC API as a mapping to another data access API, such as ODBC. Drivers of this type are generally dependent on a native library, which limits their portability. The JDBC-ODBC Bridge driver is an example of a Type 1 driver.
    which you can find here.
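    For context, a minimal sketch of what "depending on a native library" means in Java (all names here are made up for illustration): an ordinary library is just compiled Java classes on the classpath and runs unchanged on any JVM, while a native library is platform-specific machine code (a .dll on Windows, an .so on Linux) that the JVM loads through JNI, which is why a Type 1 driver that wraps ODBC is only as portable as the native code underneath it:
        public class NativeLibraryExample {
            static {
                // Loads bridge.dll on Windows or libbridge.so on Linux; this fails at
                // runtime unless a native build exists for the current OS and architecture.
                System.loadLibrary("bridge");
            }

            // Declared in Java, implemented inside the native library via JNI.
            public static native int openConnection(String dsn);

            public static void main(String[] args) {
                // A pure-Java library call, by contrast, would run unchanged on any JVM.
                System.out.println("Handle: " + openConnection("ExampleDSN"));
            }
        }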

    Read the article

  • Install USB device without manager prompt

    - by Pat
    We have a USB device and its drivers (.inf, libusb.dll, libusb.sys), and we can install it using Windows' Device Installation Wizard (by pointing it to the .inf file). However, we need to install the drivers without using the wizard (passively, so the user doesn't need to do anything). Does anyone know how this can be achieved?

    Read the article

  • How can I join conditionally in LINQ queries?

    - by Steve Crane
    If I have two tables, Drivers keyed by DriverId and Trips with foreign keys DriverId and CoDriverId, and I want to find all trips where a driver was either the driver or the co-driver, I could code this in Transact-SQL as:
        select d.DriverId, t.TripId
        from Trips t
        inner join Drivers d on t.DriverId = d.DriverId or t.CoDriverId = d.DriverId
    How could this be coded as a LINQ query?

    Read the article

  • CRUD operations: do you notify whether the insert, update, etc. went well?

    - by danielovich
    Hi guys. I have a simple question for you (I hope). :) I have pretty much always used void as a "return" type when doing CRUD operations on data. E.g. consider this code:
        public void Insert(IAuctionItem item) {
            if (item == null) {
                AuctionLogger.LogException(new ArgumentNullException("item is null"));
            }
            _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
            _dataStore.DataContext.SubmitChanges();
        }
    and then consider this code:
        public bool Insert(IAuctionItem item) {
            if (item == null) {
                AuctionLogger.LogException(new ArgumentNullException("item is null"));
            }
            _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
            _dataStore.DataContext.SubmitChanges();
            return true;
        }
    It actually just comes down to whether you should notify that something was inserted (and went well) or not?

    Read the article

  • What happens when you plug in a new USB device?

    - by Will
    I have an embedded device with a USB connection. When the user plugs the device into their PC (Windows, OSX), how does the operating system discover what drivers to install? How do I get my drivers to be selected? Can they reside on some central server (run by the OS vendor)?

    Read the article

  • Does anybody know of a USB Postage Scale that's Linux compatible?

    - by Nick
    I'm looking for a postage scale that already has Linux support (drivers, etc.) for a shipping system that I'm working on. I'm planning to use Ubuntu 9.04, but I am willing to switch distros for compatibility. Does anybody know of any scales that currently work? Is there an open source project that's working on scale drivers or similar? Thanks!

    Read the article

  • Is this possible in SQL Server 2005?

    - by chandru_cp
    These are my queries:
        select ClientName, ClientMobNo from Clients
        select DriverName, DriverMobNo from Drivers
    They give me two result tables, but I want to combine both result tables into a single table. I tried UNION and UNION ALL and they don't give me what I want.
    Note: there is no relationship between the two tables. There may be 200 clients and 100 drivers.

    Read the article

  • AWStats is processing log files but does not display them

    - by Wouter
    I've set up AWStats on my VPS to get some more insight into the traffic coming to my site. As instructed, I ran a manual build/update, which ran fine:
        sudo -u www-data ./awstats.pl -config=xxxx.com
        Create/Update database for config "/etc/awstats/awstats.xxxx.com.conf" by AWStats version 6.9 (build 1.925)
        From data in log file "/usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |"...
        Phase 1 : First bypass old records, searching new record...
        Searching new records from beginning of log file...
        Phase 2 : Now process new records (Flush history on disk after 20000 hosts)...
        Warning: awstats has detected that some hosts names were already resolved in your logfile /usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |.
        If DNS lookup was already made by the logger (web server), you should change your setup DNSLookup=1 into DNSLookup=0 to increase awstats speed.
        Jumped lines in file: 0
        Parsed lines in file: 814
        Found 0 dropped records,
        Found 0 corrupted records,
        Found 0 old records,
        Found 814 new qualified records.
    It also produced the file in the DatDir, /var/lib/awstats/awstats052010.xxxx.com.txt, which contains what I would expect. BUT when I visit xxxx.com/awstats/awstats.pl it tells me:
        Last Update: Never updated (See 'Build/Update' on awstats_setup.html page)
    and the rest of the page is blank. I'm pretty sure I set it up correctly, but now I cannot figure out why this is happening. Hopefully someone smarter than me can help me. Thank you in advance.

    Read the article

  • SCCM 2012 - Windows 8 WSUS

    - by Owen
    We're using SCCM 2012 to deploy Windows Updates on our domain, and our Windows 8 clients have started failing with error 80240438 when they try to update. Windows 7 clients update fine, but Windows 8 clients refuse to do anything. I've done a search online and it seems to only reference Windows InTune. Has anyone seen any similar behavior on Windows 8 machines? If we don't get that error, we're getting 80244021, which seems to indicate that the server can't be found... but they can resolve it just fine and our exceptions are defined on the proxy too. A bit stuck here!
        2012-11-22 14:45:28:935 476 998 Agent *********
        2012-11-22 14:45:28:935 476 998 Agent ** END ** Agent: Finding updates [CallerId = AutomaticUpdates]
        2012-11-22 14:45:28:935 476 998 Agent *************
        2012-11-22 14:45:28:935 476 998 Agent WARNING: WU client failed Searching for update with error 0x80240438
        2012-11-22 14:45:28:935 476 c74 AU >>## RESUMED ## AU: Search for updates [CallId = {EAECB947-48AC-43BE-8F98-C44727E4A131} ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}]
        2012-11-22 14:45:28:935 476 c74 AU # WARNING: Search callback failed, result = 0x80240438
        2012-11-22 14:45:28:935 476 c74 AU #########
        2012-11-22 14:45:28:935 476 c74 AU ## END ## AU: Search for updates [CallId = {EAECB947-48AC-43BE-8F98-C44727E4A131} ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}]
        2012-11-22 14:45:28:935 476 c74 AU #############
        2012-11-22 14:45:28:935 476 c74 AU All AU searches complete.
        2012-11-22 14:45:28:935 476 c74 AU # WARNING: Failed to find updates with error code 80240438
        2012-11-22 14:45:28:935 476 c74 AU AU setting next detection timeout to 2012-11-22 04:12:23
        2012-11-22 14:45:33:936 476 c9c Report REPORT EVENT: {EE35CD79-FD2A-472D-BFC9-0420F5D60C04} 2012-11-22 14:45:28:935+1300 1 148 [AGENT_DETECTION_FAILED] 101 {00000000-0000-0000-0000-000000000000} 0 80240438 AutomaticUpdates Failure Software Synchronization Windows Update Client failed to detect with error 0x80240438.
        2012-11-22 14:45:33:938 476 c9c Report CWERReporter::HandleEvents - WER report upload completed with status 0x8
        2012-11-22 14:45:33:938 476 c9c Report WER Report sent: 7.8.9200.16420 0x80240438 00000000-0000-0000-0000-000000000000 Scan 101 Managed
        2012-11-22 14:45:33:938 476 c9c Report CWERReporter finishing event handling. (00000000)
    Thanks in advance

    Read the article

  • ADSIEdit Cleanup After Exchange 2003 Crash During Transition To Exchange 2010

    - by ThaKidd
    Hello all. I would value some input from a few Exchange 2010 experts. I have almost completed the transition from Exchange 2003 Standard to Exchange 2010 Standard. Everything went smoothly until I tried to uninstall Exchange 2003; at that point the server bit the dust and died completely. I now have NO access to the old Exchange System Management MMC, as I am running Windows 2008 SR2 and Windows 7 only. I can only fix this with ADSIEdit, EMShell, and EMConsole.
    I have used the 2010 shell to move/remove/verify that all mailboxes, public folders and the OAB are hosted on Exchange 2010. I also verified that the routing connector has been deleted. The only two things that were not done were to remove the Recipient Update Service and to actually perform the removal of the 2003 software. I have spent a lot of time going through ADSIEdit and have located the old Administrative Group and the Exchange 2003 server listed under it. I also located the Recipient Update Service, which includes two entries: Enterprise and my domain name. I have read that it is an unwise idea to remove the old administrative group, so I won't bother messing with that.
    I am repeatedly getting two warnings in the Application Log, both from MSExchangeTransport: Event ID 5006 (Cannot find route to Mailbox Server OLDSERVER) and Event ID 5020 (The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003).
    So my questions are: to clean out AD of the old Exchange 2003 info, can I safely delete the server name folder (Configuration - Services - Microsoft Exchange - ExchOrg - Administrative Groups - First Administrative Group - Servers - Old Server) and also delete the Recipient Update Service (Enterprise) and Recipient Update Service (DOMAIN) containers? Are there any additional items I need to address to ensure the AD is clean? Thanks in advance for your help!

    Read the article

  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio. I cannot get it to script some objects; it scripts others, but skips some. I can provide detailed screenshots of: the options being selected, including all tables; the folder where the script files will go; the folder being empty before scripting; the scripting process saying Success when scripting a table; the destination folder no longer empty, with a hundred or so script files; and the script of some tables not being in the folder. Earlier, SSMS would not script some views either. Is it a known thing that the Generate Scripts task does not generate scripts?
    Update: Known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket. Fails on SQL Server 2005, also fails on SQL Server 2008.
    Update Two: Some basic questions:
    1. What version of SQL Server?
       Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
       Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
       Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
       Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
       Microsoft SQL Server 2008 Management Studio: 10.0.1600.22
    2. What O/S are you running on?
       Windows Server 2000
       Windows Server 2003
       Windows Server 2008
    3. How are you logging in to SQL Server?
       sa/password
       Trusted authentication
    4. Have you verified your account has full access to all objects?
       Yes, I have access to all objects.
    5. Can you use the objects that fail to script? (e.g. select top(10) * from nonScriptingTable)
       Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine.
    Update Three: They fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:
        Client Tools    SQL Server 2000    SQL Server 2005    SQL Server 2008
        ============    ===============    ===============    ===============
        2000            Yes                n/a                n/a
        2005            No                 No                 No
        2008            No                 No                 No
    Update Four: No errors found in the database using:
        DBCC CHECKDB
        go
        DBCC CHECKCONSTRAINTS
        go
        DBCC CHECKFILEGROUP
        go
        DBCC CHECKIDENT
        go
        DBCC CHECKCATALOG
        go
        EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'
    Honk if you hate SSMS.

    Read the article

  • Setting a time limit for a transaction in MySQL/InnoDB

    - by Trevor Burnham
    This sprang from this related question, where I wanted to know how to force two transactions to occur sequentially in a trivial case (where both are operating on only a single row). I got an answer (use SELECT ... FOR UPDATE as the first line of both transactions), but this leads to a problem: if the first transaction is never committed or rolled back, then the second transaction will be blocked indefinitely. The innodb_lock_wait_timeout variable sets the number of seconds after which the client trying to make the second transaction would be told "Sorry, try again"... but as far as I can tell, they'd be trying again until the next server reboot. So:
    Surely there must be a way to force a ROLLBACK if a transaction is taking forever?
    Must I resort to using a daemon to kill such transactions, and if so, what would such a daemon look like?
    If a connection is killed by wait_timeout or interactive_timeout mid-transaction, is the transaction rolled back? Is there a way to test this from the console?
    Clarification: innodb_lock_wait_timeout sets the number of seconds that a transaction will wait for a lock to be released before giving up; what I want is a way of forcing a lock to be released.
    Update: Here's a simple example that demonstrates why innodb_lock_wait_timeout is not sufficient to ensure that the second transaction is not blocked by the first:
        START TRANSACTION;
        SELECT SLEEP(55);
        COMMIT;
    With the default setting of innodb_lock_wait_timeout = 50, this transaction completes without errors after 55 seconds. And if you add an UPDATE before the SLEEP line, then initiate a second transaction from another client that tries to SELECT ... FOR UPDATE the same row, it's the second transaction that times out, not the one that fell asleep. What I'm looking for is a way to force an end to this transaction's restful slumber.
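    For illustration only, a rough sketch of what such a watchdog daemon could look like over JDBC (assumptions: MySQL 5.5, or 5.1 with the InnoDB plugin, where INFORMATION_SCHEMA.INNODB_TRX is available; the connection URL, credentials and the 60-second threshold are placeholders; note that KILL terminates the owning connection, which rolls its open transaction back):
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class TransactionReaper {
            public static void main(String[] args) throws Exception {
                // Requires MySQL Connector/J on the classpath and a user with rights to see/kill other threads.
                Class.forName("com.mysql.jdbc.Driver");
                Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/information_schema", "root", "secret");
                while (true) {
                    Statement st = con.createStatement();
                    // Transactions that have been open longer than 60 seconds (arbitrary threshold).
                    ResultSet rs = st.executeQuery(
                            "SELECT trx_mysql_thread_id FROM INNODB_TRX " +
                            "WHERE trx_started < NOW() - INTERVAL 60 SECOND");
                    while (rs.next()) {
                        long threadId = rs.getLong(1);
                        // KILL drops the whole connection, which forces a rollback of its transaction.
                        con.createStatement().execute("KILL " + threadId);
                    }
                    rs.close();
                    st.close();
                    Thread.sleep(10000); // poll every 10 seconds
                }
            }
        }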

    Read the article

  • Google Chrome does not launch after some months without using it in Windows XP/Seven

    - by kokbira
    Well, on the computers I use with Windows XP or Seven I have more than one browser installed, generally Internet Explorer 8, Firefox 4, Opera 11 and Google Chrome. I often use Firefox, but I want to use Google Chrome sometimes because I have a lot of addons and web apps on it. The issue is: when I try to run Chrome after some months without using it, it does not start. Using Process Explorer or Task Manager, I can see that there are no Google processes running. Then I reinstall it and everything works. But if I do not use it for some months again, it will stop working again... Is it an update problem? Must I use Chrome every day, or is there another way to avoid that issue?
    PS: I installed the English and Portuguese latest versions (how do I get the version numbers when it does not execute?), not at the same time, and it still does not launch...
    PS2: There is a running Google Update process that is launched at startup:
        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
        Name: Google Update
        Type: REG_SZ
        Data: "C:\Users\Ubirajara\AppData\Local\Google\Update\GoogleUpdate.exe" /c

    Read the article

  • How do I install the main repositories for RHEL6

    - by eisaacson
    We've set up RHEL6 on a new server. As far as we can tell, our subscription is all set up properly. However, when I run yum repolist, it doesn't show any repositories. /etc/yum.repos.d/redhat.repo is empty. I tried pasting in the content from another RHEL6 server's redhat.repo, but as soon as I run yum, it wipes it out again. I just need to get the basic Red Hat repositories set up so I can install packages.
    EDIT: Using the GUI, I went to System > Administration > Red Hat Subscription Manager. Under the 'Products' tab, it did not show any products.
    EDIT: When I run yum update, here's what I get:
        # yum update
        Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
        This system is receiving updates from Red Hat Subscription Management.
        Setting up Update Process
        No Packages marked for Update
    When I log in to the Red Hat customer portal, it shows the subscription as active.
    EDIT: To make sure I wasn't having a subscription issue, I re-registered and re-subscribed. I get all the same results.
        # subscription-manager register --force
        # subscription-manager subscribe --pool=*redacted*
    EDIT: Contents of /etc/yum.conf:
        [main]
        cachedir=/var/cache/yum/$basearch/$releasever
        keepcache=0
        debuglevel=2
        logfile=/var/log/yum.log
        exactarch=1
        obsoletes=1
        gpgcheck=1
        plugins=1
        installonly_limit=3
    Contents of /etc/yum/pluginconf.d/rhnplugin.conf:
        [main]
        enabled = 0
        gpgcheck = 1

    Read the article

  • Huh? JDK not found? (on Windows 7 64-bit)

    - by Android Eve
    I am setting up a development environment for the latest Android 2.3 on a fresh install of Windows 7 64-bit. I first installed the 64-bit JDK 6 (jdk-6u23-windows-x64.exe). Then I installed 64-bit Eclipse Classic 3.6 (eclipse-SDK-3.6.1-win32-x86_64.zip). Then I proceeded to install the Android SDK Starter Package, installer_r08-windows.exe. But... upon start it says: "Java SE Development Kit (JDK) not found." Why? I just installed it. Is this a mismatch between 32-bit and 64-bit? How do I solve this?
    Update (1): I tried setting the %JAVA_HOME% environment variable, as well as setting the Installed JREs in Eclipse, as suggested below. None of these solved the problem. It appears that I am not the only one experiencing the problem, as this thread suggests: http://stackoverflow.com/questions/1919340/android-sdk-setup-under-windows-7-pro-64-bit I wonder whether there is a 64-bit version of the Android SDK.
    Update (2): I used the zip version instead (android-sdk_r08-windows.zip), ran android.bat, updated all SDK packages, and installed the ADT plugin (8.0.1), not before having to check 'Contact all update sites during install to find required software'. We'll see how this goes...
    Update (3): It worked! (going to accept @bubu's answer shortly) -- but why doesn't the emulator include the HelloAndroid app when I run it (Ctrl+F11) from Eclipse?

    Read the article

  • svn post-commit not performing

    - by davin
    I've been sitting on this for about 7 hours, and I've aged close to 7 years... ahhh, server admin does that to me. I have svn wired through apache2 with webdav in the usual manner (basically like http://www.howtoforge.com/setting-up-subversion-with-webdav-post-commit-hook-and-multiple-sites-on-jaunty-jackalope-ubuntu-9.04). I've had endless problems with this (I didn't on my previous Ubuntu server install, although this is Ubuntu 10.10): this happened, and was fixed like in the post http://stackoverflow.com/questions/2547400/how-do-you-fix-an-svn-409-conflict-error ; this looks like my issue, although it's not my solution: http://serverfault.com/questions/135494/apache-svn-on-ubuntu-post-commit-hook-fails-silently-pre-commit-hook-permis
    My commit to svn works (finally), although the post-commit hook, which is supposed to svn update the working copy of the repo on the server, doesn't work. The post-commit hook itself executes, and has sudo permissions (as in the setup URL above; testing with whoami > somelogfile.log or sudo whoami > somelogfile.log shows www-data and root, respectively), although it won't perform the svn update (sudo svn update /var/www/gameServer > /var/svn/gameServer.log). Similar to the serverfault URL above, when I perform the exact command it does update the working copy to the latest revision, just not through the post-commit hook.
    An age-old question that is 90% of the time a permissions issue, but in pure frustration I chmod 777'd lots of stuff, not to mention the fact that www-data is in /etc/sudoers, so it shouldn't even need that. I'm collapsing in front of the screen, partly out of frustration and partly out of sleepiness. Any direction would be appreciated.

    Read the article

< Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >