Search Results

Search found 17450 results on 698 pages for 'stable release updates'.


  • MSE updating fails, no warning or error message.

    - by WebDevHobo
    I'm running Windows 7 Ultimate, 32-bit. For the last couple of days, MSE has failed to update, remaining stuck at version 1.75.119. I presume an error log or event log is created somewhere, but I don't know where to find it. It just says "connection failed". I've tried at home, at work and at friends' places, but it never works. I've restarted the computer many times and checked for Microsoft Updates in general, but nothing shows up. EDIT: I've opened a bounty for this, because I really don't know what to do anymore. The oldest answer (the long post) here did not work. Besides this problem, I'm having trouble using MSI installers too. I've had to add the SYSTEM group to a lot of folders and give it full control, but shouldn't SYSTEM already be there? Also, I had to remove the "read-only" attribute from the ProgramData and Users folders, add the SYSTEM group there too and give it full control. Only then will an MSI install work, and even then it says I don't have the rights to create a shortcut on the desktop. I don't know what I need to modify, or where, for that. I mention this because I don't know how MSE updates, but if it uses MSI files to do that, it might explain things. The SYSTEM group remains added, but every time I clear the read-only attribute, click OK and check the settings again, read-only is still active... That's all I know. Screenshot (all those updates were manual):
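
    One note on the read-only puzzle: on Windows, the read-only checkbox on folders is largely cosmetic and Explorer keeps showing it as set regardless, so its reappearing is expected and is probably not the culprit. If broken SYSTEM permissions are the underlying issue, a rough sketch of commands sometimes used to restore them and force a definition update, run from an elevated command prompt (the MSE install path varies by version and is an assumption here):

        rem Restore full control for SYSTEM on the folders mentioned above.
        icacls "C:\ProgramData" /grant "SYSTEM:(OI)(CI)F" /T
        icacls "C:\Users" /grant "SYSTEM:(OI)(CI)F" /T
        rem Clear the definition cache and trigger a fresh signature download.
        "C:\Program Files\Microsoft Security Essentials\MpCmdRun.exe" -RemoveDefinitions -All
        "C:\Program Files\Microsoft Security Essentials\MpCmdRun.exe" -SignatureUpdate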

    Read the article

  • Virus cleanup + dying drive = XP Automatic Updates crashing in esent.dll

    - by quack quixote
    Background I'm doing system recovery on an old WinXP SP1 system brought to me on suspicion of virus infection. After taking preliminary backups, I used MalwareBytes to detect and clean the infection. I might've even gotten it all. In the process, I've discovered (a) the system drive is showing signs of impending failure, and (b) the owner has been using the system's old crusty IE-6 instead of the up-to-date Firefox I've provided for him. So naturally, thinking I had a relatively stable system, I tried to hit the Windows Update site to install IE-8, in case further training doesn't stick. The update site told me it needed to update the installer, and I started that process. Soon after, wuauclt.exe started crashing, reporting addresses in module esent.dll. There's a Microsoft KB (910437) on a problem with that DLL, so I downloaded the hotfix and installed it. The crashing did not stop. I attempted to install SP3 from the offline installer, but that didn't fix the issue either. The system is reporting a few hard drive / IDE controller errors, but they don't correlate to the crashes, so they aren't the direct cause. I've also attempted to roll back to the time between the infection removal and the first crashes, but that doesn't help. Question The hotfix I tried to install dealt with a problem in the transaction logs of the Extensible Storage Engine (ESE) database. I suspect this issue is similar, but that the database itself (whatever the ESE database is) is corrupted. Is there a way to clean or clear this database so that system operation returns to normal? Can someone enlighten me as to what the ESE database actually is, and where it resides? Can I just locate some files and delete them to bring this under control?
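
    For what it's worth, the ESE database that wuauclt.exe is choking on is almost certainly the Windows Update datastore, %windir%\SoftwareDistribution\DataStore\DataStore.edb. Rather than repairing it in place, the usual approach is to let Windows Update rebuild it from scratch; a sketch, run from a command prompt as an administrator:

        net stop wuauserv
        net stop bits
        ren %windir%\SoftwareDistribution SoftwareDistribution.old
        net start bits
        net start wuauserv
        rem DataStore.edb and the download cache are recreated on the next update check.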

    Read the article

  • Patch management on multiple systems

    - by Pierre
    I'm in charge of auditing the security configuration of an important farm of Unix servers. So far, I have come up with a way to assess the basic configuration but not the installed updates. The core problem is that I just can't trust the package management tools on those machines. Indeed, some of them have not synced with the repository for a long time (so I can't do a "yum check-update" on Red Hat, for example). Some of those servers are not even connected to the internet and use a company repository. Another problem is that I have multiple target systems: AIX, Debian, CentOS/Red Hat, etc., so the versions can differ (AIX) and the tools available will be different. And, last but not least, I can't install anything on the target systems. So I need to use a script to retrieve the information and either process it directly or save it so it can be processed later on a server (which may happen to run a different distribution from the one on which the information was retrieved). The best ideas I could come up with were: either retrieve the list of installed packages on the machine (dpkg -l on Debian, for example) and process it on a dedicated server, directly parsing the "Packages" file of the Debian repositories (though the problem remains the same for AIX and Red Hat); or use Nessus' scripts to assess vulnerability of the installed packages, but I find this a bit dirty. Does anyone know a better or more efficient way of doing this? P.S.: I already took time to review some answers to similar problems. Unfortunately Chef, Puppet, ... don't meet the requirements I have to meet. Edit: Long story short, I need the list of missing updates on a Unix system, just like MBSA on Windows. I'm not authorized to install anything on these systems as they're not mine. All I have are scripting languages. Thanks.
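
    One approach that fits the "scripts only, install nothing" constraint is to dump the raw package inventory on each host and do all the evaluation centrally. A minimal POSIX sh sketch (output location and the detection order are assumptions):

        #!/bin/sh
        # Dump the installed-package inventory in a parseable form; post-process centrally.
        host=`hostname`
        out="/tmp/pkgs-$host.txt"
        if command -v dpkg-query >/dev/null 2>&1; then
            dpkg-query -W -f='${Package} ${Version}\n' > "$out"        # Debian/Ubuntu
        elif command -v rpm >/dev/null 2>&1; then
            rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n' > "$out"    # CentOS/Red Hat
        elif command -v lslpp >/dev/null 2>&1; then
            lslpp -Lc > "$out"                                         # AIX filesets
        fi

    The central server can then diff each list against the relevant security advisories or repository metadata for that distribution.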

    Read the article

  • Ubuntu web server cluster checks Ubuntu repository for script updates with cron

    - by StuartTheY
    I have a cluster of Ubuntu 12.04 web servers running a LAMP stack. All of these servers are connected to a load balancer on Amazon Web Services. What I want to be able to do is have a dedicated Ubuntu server on which I can update the PHP files, and have the other web servers check with cron to get the updated files from that repository. They don't have to use cron, but that was the only thing I could think of, unless there is a way to have the repository tell them that it has updated files, and then how to transfer those files. Also, is there a way for a server to check for updated files when it boots? I'm going to be using auto scaling on AWS, so when there is an increase in load and another server gets created, it needs to download the updated files from the repository when it launches. I'm not sure how to transfer files from server to server.
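
    A common pattern for this is rsync over SSH, pulled from the dedicated server by cron, with an @reboot entry so freshly auto-scaled instances sync on first boot. A sketch (hostname, paths, user and the presence of SSH keys are assumptions):

        # /etc/cron.d/pull-webroot on each web server
        # Pull the PHP files from the deploy server every 5 minutes and once at boot.
        */5 * * * *  www-data  rsync -az --delete deployserver:/var/www/app/ /var/www/app/
        @reboot      www-data  rsync -az --delete deployserver:/var/www/app/ /var/www/app/

    The same rsync line can also go into the AMI's boot scripts so a new instance is current before the load balancer starts sending it traffic.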

    Read the article

  • Ubuntu Not Starting After Installing Software Updates

    - by Prabhu
    I just did a software update on my Ubuntu 9.10; after that it asked me to restart my system, which I did, and once I did that, GRUB opened some simple bash-like shell or something... I don't know how to run Ubuntu now. Ubuntu 9.10 was installed a week ago using the Wubi installer, which is basically a one-click install from Windows; it is installed on the same partition as Windows XP, and I chose 12 GB of disk space while installing Ubuntu. It worked fine for a week. I decided to install the software updates this morning and now it's not even starting!
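
    If that simple shell is actually the GRUB prompt, a Wubi install can sometimes be booted by hand by loop-mounting its root.disk. A rough sketch of the commonly cited Wubi recovery commands (the disk/partition numbers are assumptions and will likely differ on this machine):

        grub> loopback loop0 (hd0,1)/ubuntu/disks/root.disk
        grub> set root=(loop0)
        grub> linux /vmlinuz root=/dev/sda1 loop=/ubuntu/disks/root.disk ro
        grub> initrd /initrd.img
        grub> boot

    If that boots, repairing or reinstalling GRUB from inside the running system is the usual follow-up.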

    Read the article

  • ISC Bind support for GSS-TSIG DDNS Updates?

    - by netlinxman
    First, has anyone EVER configured ISC bind 9.5.0 OR greater with support for GSS-TSIG Dynamic DNS Updates AND gotten it to work? If so, what is the configuration that was used to make that happen? I feel close to having this working. I see that GSS cred passes w/o apparent error during the TKEY negotiation with an Active Directory DC and the BIND DNS server: client 192.168.0.30#52314: query gss cred: "DNS/[email protected]", GSS_C_ACCEPT, 4294967256 gss-api source name (accept) is [email protected] process_gsstkey(): dns_tsigerror_noerror client 192.168.0.30#52314: send But, when the Update is sent, it is refused: client 192.168.0.30#58330: update client 192.168.0.30#58330: updating zone 'example.com/IN': update failed: rejected by secure update (REFUSED) client 192.168.0.30#58330: send Does anyone have this working in the real world?
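
    In working setups, the TKEY negotiation succeeding but the UPDATE coming back REFUSED usually points at the zone's update policy rather than the GSS exchange itself: the zone needs grant rules for the Kerberos principals doing the updating. A sketch of a test from a Kerberized client, with the relevant named.conf grant shown in comments (realm, names and keytab are assumptions):

        # named.conf, inside the zone statement, something like:
        #   update-policy { grant EXAMPLE.COM ms-self * A AAAA;
        #                   grant EXAMPLE.COM krb5-self * A AAAA; };
        kinit -k host/client.example.com@EXAMPLE.COM   # authenticate from the machine keytab
        printf 'server dns1.example.com\nzone example.com\nupdate add client.example.com. 3600 A 192.168.0.30\nsend\n' | nsupdate -g

    If nsupdate -g succeeds with those grants in place, the remaining work is matching the policy to whatever principals the AD clients actually present.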

    Read the article

  • Can't make updates with LDAP from Linux box to Windows AD

    - by amburnside
    I have a webapp (built using Zend Framework / PHP) that runs in a Linux environment and needs to authenticate against Active Directory on a Windows server. So far my webapp can authenticate over LDAPS, but it cannot perform any kind of write operation (add/update/delete); it can only read. I have configured my server as follows: I have exported the CA certificate from my Windows AD server to /etc/openldap/certs; I have created a pem file based on this certificate using openssl; I have updated /etc/openldap/ldap.conf so that it knows where to look for the pem certificate: TLS_CACERT /etc/openldap/certs/xyz.internal.pem. When I run my script, I get the following error: 0x35 (Server is unwilling to perform; 0000209A: SvcErr: DSID-031A1021, problem 5003 (WILL_NOT_PERFORM), data 0 ): Have I missed something in my configuration that is causing the server to reject updates to AD?
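
    It can help to take the webapp out of the equation and attempt the same write from the command line with the same bind account over LDAPS; 0000209A / WILL_NOT_PERFORM is often reported when the bound account lacks write rights on the target object, or when the modification touches an attribute the directory treats as system-owned. A hedged test sketch (DN, host and attribute are assumptions):

        # test.ldif contains, for example:
        #   dn: CN=Test User,OU=Staff,DC=internal
        #   changetype: modify
        #   replace: telephoneNumber
        #   telephoneNumber: +44 1234 567890
        ldapmodify -H ldaps://dc01.internal -D "CN=svc-webapp,OU=Service Accounts,DC=internal" -W -f test.ldif

    If that succeeds, the problem is in the DN or attributes the webapp sends; if it fails the same way, it is a rights or attribute issue on the AD side rather than anything in the Zend code.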

    Read the article

  • How to allow program updates without prompting UAC?

    - by Ryan Mortier
    We have about 15-20 users who have this software installed. We have UAC enabled through GPO, as you should, which means the software prompts for admin approval if a standard user tries to install it. That's fine; they can call the help desk to have the software installed. My problem is that our help desk is being bombarded every day because users can't update the software, and there are updates almost every day, each of which prompts UAC. Using procmon.exe to find out where it was trying to write, I created a GPO to grant file permissions on the program files folder for this particular software, including its ProgramData folder, but it still prompts for admin approval. It seems the software uses msiexec.exe to run an .msp patch file. The only "ACCESS DENIED" entries I can still see in procmon are things like this: What can I possibly do to stop this software from prompting UAC for admin credentials, aside from disabling UAC?
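
    If the vendor's package isn't authored for least-privilege patching, one common workaround is to stop users applying the .msp themselves and instead push it silently in the SYSTEM context, for example from a GPO startup script, a scheduled task running as SYSTEM, or a deployment tool. A sketch (share path and patch name are assumptions):

        rem Apply the vendor patch silently as SYSTEM so no UAC prompt ever reaches the user.
        msiexec /p "\\fileserver\updates\vendorapp-1.2.msp" /qn /norestart REINSTALL=ALL REINSTALLMODE=omus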

    Read the article

  • No Office XP updates from WSUS since installing the Compatibility Pack

    - by braindump
    Hey folks, we've got a bunch of boxes running Windows XP and Office XP. Since we installed the Office 2007 Compatibility Pack, Office XP no longer gets updates. Our WSUS offers Office 2007 patches for these computers but no Office XP ones, e.g. the urgently needed Service Pack 3. We already tried removing the Compatibility Pack and Office XP and reinstalling Office, but there was no change. Do you have any hints? PS: Office XP was installed from a compressed ISO, so there is no administrative installation point.

    Read the article

  • Distributing application updates using SCCM 2007

    - by theraneman
    Hi all, if there are any System Center Configuration Manager (SCCM) users out there, please clear up a doubt for me. I have used the ConfigMgr console to distribute a custom application to a client machine. Now I need to distribute some updated files for that application. Isn't it possible to add those files to the same package source used earlier and advertise it again? Or should I use the SCCM software updates section for this? Not sure if it's only me, but the software distribution process looks much easier than the software updates process in SCCM 2007. Please let me know if there are any online tutorials that explain how to update a custom application. Any help much appreciated.

    Read the article

  • Windows Update Fails to install updates

    - by resolver101
    Windows Update fails to install the updates below. How do I fix this? I've installed the Fix It application Microsoft recommends, but it still fails. Any ideas? Update for Microsoft Outlook Social Connector 2010 (KB2553406) 32-Bit Edition; Update for Microsoft OneNote 2010 (KB2553290) 32-Bit Edition; Update for Microsoft Office 2010 (KB2553310) 32-Bit Edition; Windows Internet Explorer 9 for Windows 7 for x64-based Systems; Update for Microsoft Outlook 2010 (KB2553248) 32-Bit Edition. I've also attached the Windows Update log if that helps. The machine runs Windows 7 and we run a Windows 2008 SBS domain.

    Read the article

  • Sticking with Ubuntu 12.04 while heavily using PPA for newest software updates (Apache 2.4, PHP 5.5)

    - by MechaStorm
    I was wondering whether it is worthwhile to stick with Ubuntu 12.04 LTS until 14.04 comes out, or whether I should switch to the latest Ubuntu server version, 13.10. My server needs are not enterprise-heavy, and my previous reasoning for staying on LTS was simply to get security updates without having to upgrade the servers every couple of months. But as we move forward with our software development, I have found that a lot of the default software versions in 12.04 are way out of date, forcing me to update via PPA or from source instead of from the default apt-get, e.g. 12.04 ships PHP 5.3 and I'd like to get to 5.5. Is it worthwhile to simply move to 13.10 in that situation, with the idea of moving to 14.04 when it arrives?
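
    For what it's worth, staying on 12.04 and pulling newer PHP from a PPA is a fairly common compromise; at the time, Ondřej Surý's PPA was the usual source of newer PHP packages for precise. A sketch (the PPA name is an assumption about what is current):

        sudo add-apt-repository ppa:ondrej/php5    # community PPA carrying newer PHP
        sudo apt-get update
        sudo apt-get install php5 php5-fpm

    The trade-off is that a PPA is a third-party repository, so you are trusting its maintainer for the timeliness of security fixes rather than the Ubuntu security team.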

    Read the article

  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14 According to the documentation, by default a slave does not log to its binary log any updates that are received from a master server. Here is my config on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but the slave still logs some changes to its binary logs; let's look at the file sizes: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and a sample query when reading these binary files with the mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something? Reply to @RolandoMySQLDBA: "If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337)." There is no such query with the same TIMESTAMP in the relay logs. "If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave." Looking more deeply into the binary logs, I see that almost all of the queries are CREATE/INSERT/UPDATE/DROP "temporary" tables, something like this: # at 123873315 #120405 0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; SET @@session.pseudo_thread_id=395373/*!*/; CREATE TEMPORARY TABLE `norep_tmpcampaign` ( `campaignid` INTEGER(11) NOT NULL DEFAULT '0' , `status` INTEGER(11) NOT NULL DEFAULT '0' , `updated` DATETIME, KEY `campaignid` (`campaignid`) )ENGINE=MEMORY /*!*/; # at 123873618 #120405 0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */ "Temporary" here means that they are dropped after the calculation is done.
    I also tell the slave not to replicate any statement matching the norep_ wildcard pattern: replicate-wild-ignore-table=%.norep_% But there is an exception table in the binary logs: # at 123828094 #120405 0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0 SET TIMESTAMP=1333561041/*!*/; INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Flags) Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0, 'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0) /*!*/; Description: mysql> desc reportingdb.sessions; +-----------------+------------------+------+-----+---------------------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------------------+-------+ | SessionId | varchar(64) | NO | PRI | | | | ApplicationName | varchar(255) | NO | | | | | Created | timestamp | NO | | 0000-00-00 00:00:00 | | | Expires | timestamp | NO | | 0000-00-00 00:00:00 | | | LockDate | timestamp | NO | | 0000-00-00 00:00:00 | | | LockId | int(11) unsigned | NO | | NULL | | | Timeout | int(11) unsigned | NO | | NULL | | | Locked | bit(1) | NO | | NULL | | | SessionItems | varchar(255) | YES | | NULL | | | Flags | int(11) | NO | | NULL | | +-----------------+------------------+------+-----+---------------------+-------+ I'm sure all these queries are posted by MySQL, not Infobright: $ mysql-ib -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 48971 Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static) Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> select * from information_schema.tables where table_name='sessions'; Empty set (0.02 sec) I've been trying some INSERT/UPDATE queries with testing tables on the master; they are copied to the relay logs, not the binary logs on the slave: # at 311664029 #120405 0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0 use testuser/*!*/; SET TIMESTAMP=1333559723/*!*/; update users set email2='[email protected]' where id=22 /*!*/; Pay attention to the server id: in the relay logs the server id is the master's (1), and in the binary log the server id is the slave's (3 in this case). Reply to @RolandoMySQLDBA: Thu Apr 5 10:06:00 ICT 2012 "Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs." As I said above, I've been trying some INSERT/UPDATE queries with testing tables on the master, and they are copied to the relay logs, not the binary logs; as you can guess, the IO thread copied it to the relay logs, not the binary logs. #120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0 SET TIMESTAMP=1333595245/*!*/; /*!\C latin1 *//*!*/; SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/; create database quantatest /*!*/; The question should probably change to: why are some update queries still logged to the slave's binary logs although --log-slave-updates is disabled? Where do they come from?
    Here are a few of the most recent ones: /*!*/; # at 27492197 #120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; CREATE TEMPORARY TABLE norep_SplitValues ( value VARCHAR(1000) NOT NULL ) ENGINE=MEMORY /*!*/; # at 27492370 #120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; BEGIN /*!*/; # at 27492445 #120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci')) /*!*/; # at 27492619 #120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; COMMIT /*!*/; # at 27492918 #120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci') /*!*/; # at 27493199 #120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; /*!\C utf8 *//*!*/; SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/; DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */ /*!*/;
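
    One detail that may explain this: log_slave_updates only controls whether statements arriving through the replication SQL thread are written to the slave's binary log. Anything executed directly on the slave by local clients (stored-procedure calls, session tables, temporary-table work) is binlogged as usual whenever log_bin is on, and such events carry the slave's own server id (3), exactly as in the dumps above. A quick check sketch on the slave:

        # Events written with the slave's own server id come from local clients,
        # not from replication; log_slave_updates does not affect them.
        mysql -e "SHOW PROCESSLIST\G"                                   # who is connected locally
        mysql -e "SHOW BINLOG EVENTS IN 'mysql-bin.001260' LIMIT 20\G"  # inspect recent events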

    Read the article

  • Debian apache2 restart fault after some updates

    - by Ripeed
    Can anyone give me some advice on this, please? I ran an update on my Debian server via Webmin, covering apache2 and some other packages, and it reported that the update failed. After that I can't start apache2: I have to run netstat -ltnp | grep ':80', then kill the PID (kill -9 1047), and only then can I start apache2. When I started it for the first time after the update, some websites using FastCGI wouldn't work; I had to switch them to mod-php in ISPConfig 3 and now they work. Now I can't restart Apache without killing the PID first. In the ISPConfig log I see: Unable to open logs (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down. In the log of one website I see: [emerg] (13)Permission denied: mod_fcgid: can't lock process table in pid 19264. Do you think the solution is to update everything with apt-get update and apt-get upgrade to complete all the updates? I'm a little scared that if I do that, further errors will occur. If I look at the Apache log I see the error: Debian Python version mismatch, expected '2.6.5+', found '2.6.6', but that was there before this problem. Thanks a lot for any help.
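
    The "could not bind to address ... :80" errors plus the need for kill -9 suggest old Apache processes from the interrupted upgrade are still holding port 80 when the init script tries to start a new instance. A sketch of a cleaner sequence to try before resorting to kill -9:

        # See what is still listening on port 80, then ask Apache to shut down cleanly.
        netstat -ltnp | grep ':80'
        apache2ctl stop          # or: /etc/init.d/apache2 stop
        sleep 2
        /etc/init.d/apache2 start

    If stale processes keep reappearing, finishing the half-done upgrade (apt-get -f install, then the remaining upgrades) is usually safer than leaving apache2 in a partially configured state.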

    Read the article

  • WSUS trying to download all updates again

    - by Tim Alexander
    The server hosting WSUS had a catastrophic failure and we have had to rebuild the system drives. Luckily the DB and content store for WSUS are on a separate drive, so they were unaffected. During the rebuild process we thought it was time to update the server to 2008 R2 (from 2003 R2). I've got the server running and the WSUS role installed, detached the DB from SQL Express 2008 R2 and attached the original, and carried out the wsusutil.exe movecontent command with a -skipcopy switch pointing to the original content store. All looked good until I saw the front page stating it is trying to download files for 6,436 updates at around 344,565 MB! Oops, I thought, something's not right here. The content store I have on disk is only 75 GB, so I'm thinking some vital step has been missed in the restoration process. Either way, is there a way to make WSUS re-index its local content store or something, as I'm not sure downloading 344 gigabytes is a viable way forward. EDIT: It never rains but it pours. I am now getting a CLSID: FX {8b6499ed-0241-e032-6508-da4b1c879d7e} "could not create snap-in" error; I think a reinstall of WSUS is in order.
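
    WSUS does have a built-in way to re-verify the local content store against the database and fetch only what is genuinely missing: wsusutil reset. A sketch (the Tools path below is the usual default for WSUS 3.x):

        cd /d "C:\Program Files\Update Services\Tools"
        wsusutil.exe reset
        rem Verifies each approved update's files against the content store and
        rem re-downloads only the ones that are missing or corrupt.

    The verification can take a while, but it should bring the reported download figure back in line with what is actually absent from the 75 GB store.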

    Read the article

  • server 2008 r2 stuck on installing updates

    - by volody
    I have a 2008 R2 64-bit server that is stuck installing updates. It has shown the message "Please do not power off or unplug your machine. Installing 57 of 61.." for 3 hours now. I tried to connect to this server using MMC: runas /user:AdministratorAccountName@ComputerName "mmc %windir%\system32\compmgmt.msc" RUNAS ERROR: Unable to run - mmc C:\Windows\system32\compmgmt.msc 1311: There are currently no logon servers available to service the logon request. What else can I do? Update: Remote Desktop does not work; it just disappears after entering the username and password from a Windows 7 64-bit computer. Update: I have disconnected the server. Now Server Manager, under Roles Summary, shows the message "Unexpected error refreshing Server Manager: The remote procedure call failed." The event log shows: Could not discover the state of the system. An unexpected exception was found: System.Runtime.InteropServices.COMException (0x800706BE): The remote procedure call failed. (Exception from HRESULT: 0x800706BE) at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo) at Microsoft.Windows.ServerManager.ComponentInstaller.ThrowHResult(Int32 hr) at Microsoft.Windows.ServerManager.ComponentInstaller.CreateSessionAndPackage(IntPtr& session, IntPtr& package) at Microsoft.Windows.ServerManager.ComponentInstaller.InitializeUpdateInfo() at Microsoft.Windows.ServerManager.ComponentInstaller.Initialize() at Microsoft.Windows.ServerManager.Common.Provider.RefreshDiscovery() at Microsoft.Windows.ServerManager.LocalResult.PerformDiscovery() at Microsoft.Windows.ServerManager.ServerManagerModel.CreateLocalResult(RefreshType refreshType) at Microsoft.Windows.ServerManager.ServerManagerModel.InternalRefreshModelResult(Object state) Update: A manual update hangs on (Installing update 58 of 62...) Security Update for Windows Server 2008 R2 x64 Edition (KB979309). Update: It could be an issue related to the admin account having been renamed.

    Read the article

  • Dynamic DNS Updates with Wireless and Wired interfaces

    - by Phaedrus
    We have offices full of Windows and Mac users who obtain IP addresses from a Windows DHCP server, which in turn updates Dynamic DNS entries. We are noticing major inconsistencies with the entries, and have found that the problem occurs more on Macs than on Windows, and even more when users frequently switch between the wired and wireless adapters, which makes sense, as this sequence occurs: the user enables the wired adapter and registers a proper DNS entry; the user enables the wireless adapter and registers a second proper DNS entry; the user switches off wireless manually and the second entry improperly remains until scavenging. Our help desk folks rely heavily (maybe more than they should) on the dynamic entries as part of their business process. For example, the user submits a help desk ticket, and the staff member expects to be able to remote desktop to their machine by hostname, which is hyperlinked in the help desk ticketing app. We have implemented multiple solutions and band-aids for different symptoms of the problem, such as: using DNS reservations for Macintosh PCs; using DNS scavenging to remove old records; switching from a Cisco DHCP server to the Windows DHCP server. But no matter what we do, it seems impossible to maintain perfect records. Has anyone encountered this problem before? What is industry best practice? Comments and suggestions are much appreciated. /P

    Read the article

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup: 6 major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on their own subnet, all connecting back to the datacenter through a 3 Mbps WAN; an ERP server running in the datacenter, accessed by clients at all sites. Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients; it determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works. The client has an auto-update feature, but it will only pull from the application server (which is housed in the datacenter, over the 3 meg link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server; unfortunately, it is the same server for all clients. Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites? Example: Client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client. Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A); DNS at site A returns 192.1.1.5; client 1 pulls the update from 192.1.1.5. Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B); DNS at site B returns 192.1.2.5; client 2 pulls the update from 192.1.2.5. Excuse the poor explanation, I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
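
    One way to get exactly that behaviour is to give each site's DNS server its own non-replicated, file-backed primary zone named after the update host, with a blank-name A record pointing at the local distribution server; because the zone is not AD-integrated it never replicates to the other sites, so each site resolves the same name to its own server. A sketch for one site (zone name and IP are assumptions):

        :: Run on the Site A DNS server only; repeat on each site with the local IP.
        dnscmd /ZoneAdd erpupdate.corp.local /Primary /file erpupdate.corp.local.dns
        dnscmd /RecordAdd erpupdate.corp.local @ A 192.1.1.5

    Clients then reach the name as erpupdate.corp.local (or as ERPUPDATE via the DNS suffix search list).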

    Read the article

  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once an issue is complete it is built, and only the code is committed to SVN. The changed files (templates, HTML, classes, JSP) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing (this often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved, the edited files are checked out to the Live server and the same process as with UAT is followed. If rejected, the developer makes the relevant changes and starts the release process again. This is all done manually without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases, changes in UAT are moved to Live in error because they are mixed up with files associated with a signed-off release. I would like to move this to a more controlled and automated process where all source code and output files are held in SVN, and releases to Dev, UAT and Live are managed by a CI system (we have TeamCity in-house for our .NET applications). My question is how to manage the releases of multiple changes where some will be signed off and moved on and others rejected and returned to the developer. The changes may touch overlapping files, and simply merging each release into a release branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?
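
    A sketch of how this is often handled with plain SVN plus a CI server: each ticket gets its own branch cut from trunk, UAT builds are made from the ticket branch, and only signed-off branches are ever merged into the release branch, so a rejected ticket simply never gets merged rather than having to be backed out (repository layout and names here are assumptions):

        # One branch per ticket, cut from trunk.
        svn copy ^/trunk ^/branches/TICKET-123 -m "Branch for ticket 123"
        # ... develop and commit on the branch; CI builds it and deploys to UAT ...
        # On sign-off, merge only that branch into the release branch and let CI deploy to Live.
        svn checkout ^/branches/release-next release-wc
        cd release-wc
        svn merge ^/branches/TICKET-123
        svn commit -m "Merge signed-off TICKET-123 into the release branch"

    Overlapping files still have to be merged, but the merge happens once, at sign-off, under version control, instead of by copying files between servers.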

    Read the article

  • Memory leak found with Clang but can't release, and autorelease crashes

    - by Rudiger
    I have a class that builds a request based on a few passed-in variables. The class also has all the delegate methods to receive the data, and it stores the data in a property for the calling class to retrieve. When the class initializes, it creates a connection and then returns itself: NSURLConnection *connection; if (self = [super init]) { self.delegate = theDelegate; ...some code here... connection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self startImmediately:YES]; } return self; So I can't release it normally, and if I autorelease it, it crashes. Is it the job of the calling class to release it? And if so, does releasing the initialized object also release the connection, or do you have to release it specifically? If so, how would you? Thanks

    Read the article

  • Debugging a release only flash problem

    - by Fire Lancer
    I've got an Adobe Flash 10 program that freezes in certain cases, but only when running under a release version of the Flash player; with the debug version, the application works fine. What are the best approaches to debugging such issues? I considered installing the release player on my computer and trying to set up some kind of non-graphical output (I guess there's some way to write a log file or similar?), but I see no way to have both the release and debug versions installed anyway :( EDIT: OK, I managed to replace my version of the Flash player with the release version, and no freeze... so what I know so far is: on Vista 32-bit, both the debug and release players work; on XP Pro 32-bit, the debug player works (I gave them the debug players I had, to test this) but the release player freezes. Hmm, this is seeming less and less like an error in my code and more like a bug in the player (10.0.45.2 in all cases)... At the very least I'd like to see the call stack at the point it freezes. Is there some way to do that without requiring them to install various bits and pieces, e.g. by letting Flash write out a log.txt or something with a trace-like function I can insert in the code in question? EDIT 2: I just gave the SWF to another person with XP 32-bit, same results :(

    Read the article

  • Symlinking (ln) faster than moving (mv)?

    - by Chad Johnson
    When we build web software releases, we prepare the release in a temporary directory and then replace the release directory with the temporary one just prepared: # Move and replace existing release directory. mv /path/to/httpdocs /path/to/httpdocs.before mv /path/to/$newReleaseName /path/to/httpdocs Under this scheme, it happens that in about 1 in every 15 releases, a user is using a file in the original release directory at exactly the time the commands above are run, and a fatal error occurs for that user. I am wondering if using symlinking as follows would be significantly faster in terms of processing time, thereby helping to lessen the likelihood of this problem: # Remove and replace existing release symlink. ln -sf /path/to/$newReleaseName /path/to/httpdocs
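
    A side note: the bigger win here is atomicity rather than raw speed. If httpdocs is itself a symlink to the current release, the new link can be created under a temporary name and then renamed over the old one; rename() is atomic on the same filesystem, so no request ever sees a missing or half-moved docroot. A sketch (assumes httpdocs is already a symlink and that GNU mv is available for -T):

        # Build the new symlink beside the old one, then atomically rename it into place.
        ln -s "/path/to/$newReleaseName" /path/to/httpdocs.tmp
        mv -T /path/to/httpdocs.tmp /path/to/httpdocs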

    Read the article

  • Upgrading from Windows 8 RTM to Release, if there is a difference

    - by Brayden
    So I'm kind of stuck between a rock and a hard place. I want to finally reinstall Windows and I'm weighing whether I should stick with the tried-and-true Windows 7 or go with the funky and experimental Windows 8. As such I've virtualised Windows 8 RTM, looked at its features and GUI, and I'm quite impressed, personally, so I'm willing to upgrade to it. However, Windows 8's release is just under 50 days away according to Wikipedia and I'm impatient, so if I go to RTM and like it enough to continue using it, will I be able to move to the release version and receive updates for it as normal, or would I have to reinstall again?

    Read the article

  • List of Installed Updates on Windows 7 C#

    - by Shahmir Javaid
    In its ultimate wisdom, Microsoft has changed where installed updates are recorded in the registry. I can get the updates from Windows 2003 servers no problem; it's just that on Windows 7 they are no longer under: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall Has anybody got another way to get them, preferably in C# or using WMI? God save Microsoft and their wisdom.
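
    On the WMI route, installed hotfixes are exposed through the Win32_QuickFixEngineering class, which a C# program can query via System.Management just as easily as the command below; note that QFE only covers operating-system updates, so for the full history (including Office and other Microsoft Update items) the Windows Update Agent API (IUpdateSearcher.QueryHistory) is the more complete source. A quick sketch from a prompt:

        rem Same data a C# System.Management query against Win32_QuickFixEngineering would return.
        wmic qfe get HotFixID,Description,InstalledOn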

    Read the article
