Search Results

Search found 10028 results on 402 pages for 'berkeley db'.


  • VPN Connection causes DNS to use wrong DNS server

    - by Bryan
    I have a Windows 7 PC on our company network (which is a member of our Active Directory). Everything works fine until I open a VPN connection to a customer's site. When I do connect, I lose network access to shares on the network, including directories such as 'Application Data' that we have a folder redirection policy for. As you can imagine, this makes working on the PC very difficult, as desktop shortcuts stop working and software stops working properly due to having 'Application Data' pulled from under it.

    Our network is routed (10.58.5.0/24), with other local subnets existing within the scope of 10.58.0.0/16. The remote network is on 192.168.0.0/24.

    I've tracked the issue down to being DNS related. As soon as I open the VPN tunnel, all my DNS traffic goes via the remote network, which explains the loss of local resources, but my question is: how can I force local DNS queries to go to our local DNS servers rather than our customer's?

    The output of ipconfig /all when not connected to the VPN is below:

        Windows IP Configuration
        Host Name . . . . . . . . . . . . : 7k5xy4j
        Primary Dns Suffix  . . . . . . . : mydomain.local
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : mydomain.local

        Ethernet adapter Local Area Connection:
        Connection-specific DNS Suffix  . : mydomain.local
        Description . . . . . . . . . . . : Broadcom NetLink (TM) Gigabit Ethernet
        Physical Address. . . . . . . . . : F0-4D-A2-DB-3B-CA
        DHCP Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        Link-local IPv6 Address . . . . . : fe80::9457:c5e0:6f10:b298%10(Preferred)
        IPv4 Address. . . . . . . . . . . : 10.58.5.89(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Lease Obtained. . . . . . . . . . : 31 January 2012 15:55:47
        Lease Expires . . . . . . . . . . : 10 February 2012 10:11:30
        Default Gateway . . . . . . . . . : 10.58.5.1
        DHCP Server . . . . . . . . . . . : 10.58.3.32
        DHCPv6 IAID . . . . . . . . . . . : 250629538
        DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-AC-76-2D-F0-4D-A2-DB-3B-CA
        DNS Servers . . . . . . . . . . . : 10.58.3.32
                                            10.58.3.33
        NetBIOS over Tcpip. . . . . . . . : Enabled

    This is the output of the same command with the VPN tunnel connected:

        Windows IP Configuration
        Host Name . . . . . . . . . . . . : 7k5xy4j
        Primary Dns Suffix  . . . . . . . : mydomain.local
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : mydomain.local

        PPP adapter Customer Domain:
        Connection-specific DNS Suffix  . : customerdomain.com
        Description . . . . . . . . . . . : CustomerDomain
        Physical Address. . . . . . . . . :
        DHCP Enabled. . . . . . . . . . . : No
        Autoconfiguration Enabled . . . . : Yes
        IPv4 Address. . . . . . . . . . . : 192.168.0.85(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.255
        Default Gateway . . . . . . . . . :
        DNS Servers . . . . . . . . . . . : 192.168.0.16
                                            192.168.0.17
        Primary WINS Server . . . . . . . : 192.168.0.17
        NetBIOS over Tcpip. . . . . . . . : Disabled

        Ethernet adapter Local Area Connection:
        Connection-specific DNS Suffix  . : mydomain.local
        Description . . . . . . . . . . . : Broadcom NetLink (TM) Gigabit Ethernet
        Physical Address. . . . . . . . . : F0-4D-A2-DB-3B-CA
        DHCP Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        Link-local IPv6 Address . . . . . : fe80::9457:c5e0:6f10:b298%10(Preferred)
        IPv4 Address. . . . . . . . . . . : 10.58.5.89(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Lease Obtained. . . . . . . . . . : 31 January 2012 15:55:47
        Lease Expires . . . . . . . . . . : 10 February 2012 10:11:30
        Default Gateway . . . . . . . . . : 10.58.5.1
        DHCP Server . . . . . . . . . . . : 10.58.3.32
        DHCPv6 IAID . . . . . . . . . . . : 250629538
        DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-AC-76-2D-F0-4D-A2-DB-3B-CA
        DNS Servers . . . . . . . . . . . : 10.58.3.32
                                            10.58.3.33
        NetBIOS over Tcpip. . . . . . . . : Enabled

    Routing table:

        Network Destination   Netmask          Gateway       Interface     Metric
        0.0.0.0               0.0.0.0          10.58.5.1     10.58.5.89        20
        10.58.5.0             255.255.255.0    On-link       10.58.5.89       276
        10.58.5.89            255.255.255.255  On-link       10.58.5.89       276
        10.58.5.255           255.255.255.255  On-link       10.58.5.89       276
        91.194.153.42         255.255.255.255  10.58.5.1     10.58.5.89        21
        127.0.0.0             255.0.0.0        On-link       127.0.0.1        306
        127.0.0.1             255.255.255.255  On-link       127.0.0.1        306
        127.255.255.255       255.255.255.255  On-link       127.0.0.1        306
        192.168.0.0           255.255.255.0    192.168.0.95  192.168.0.85      21
        192.168.0.85          255.255.255.255  On-link       192.168.0.85     276
        224.0.0.0             240.0.0.0        On-link       127.0.0.1        306
        224.0.0.0             240.0.0.0        On-link       10.58.5.89       276
        224.0.0.0             240.0.0.0        On-link       192.168.0.85     276
        255.255.255.255       255.255.255.255  On-link       127.0.0.1        306
        255.255.255.255       255.255.255.255  On-link       10.58.5.89       276
        255.255.255.255       255.255.255.255  On-link       192.168.0.85     276

    The binding order for the interfaces is as follows:

    I've not configured the VPN tunnel to use the default gateway at the remote end, and network comms to nodes on both networks are fine (i.e. I can ping any node on our network or the remote network). I've modified the PPTP connection properties to use the DNS servers 10.58.3.32 followed by 192.168.0.16, yet the query still goes to 192.168.0.16.

    Edit: The local resources that disappear are hosted on domain DFS roots, which might (or might not) be relevant.
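
    A minimal sketch of one way to steer name resolution back to the local servers, assuming the PPP adapter is winning on interface metric (the adapter name is taken from the question; the metric value is only an example):

        rem show current interface metrics (elevated prompt)
        netsh interface ipv4 show interfaces

        rem raise the VPN adapter's metric so the LAN adapter's DNS servers are preferred
        netsh interface ipv4 set interface "Customer Domain" metric=50

        rem confirm which server now answers a query for the local domain
        nslookup mydomain.local 10.58.3.32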

    Read the article

  • Automated SQL Server 2008 backup script failing to run

    - by Techboy
    I have created a maintenance plan, but when I try to execute it I get the error:

        Message [298] SQLServer Error: 15404, Could not obtain information about Windows NT group/user 'XX\Administrator', error code 0x534. [SQLSTATE 42000] (ConnIsLoginSysAdmin)

    I have given the Administrator account db_owner access, but I still get the error. What am I doing wrong?
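
    Error 15404 usually means the SQL Server Agent job behind the maintenance plan is owned by a Windows account that SQL Server cannot look up (error 0x534 is "no mapping between account names and security IDs"), so db_owner rights will not help. A hedged sketch of the common workaround, reassigning the job to sa; the job name is a placeholder:

        EXEC msdb.dbo.sp_update_job
            @job_name = N'MaintenancePlan.Subplan_1',
            @owner_login_name = N'sa';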

    Read the article

  • Can't sync time between domain controller and other servers/computers (Windows 2003)

    - by Anatoly
    Good morning. We have a domain controller on Windows Server 2003 and other Windows 2003 servers with Terminal Server and a DB, plus regular workstations. Today in Israel we moved back to winter time (one hour back). The domain controller now shows the correct time, but all the other machines on the network show a different time: they went two hours back, so there is a one-hour difference between the domain controller and the other machines. I have tried to perform W32tm /resync /update but without success. Where could the problem be? Thank you.
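
    A hedged sketch of re-pointing an affected Windows Server 2003 member back at the domain time hierarchy (run on one of the machines that is off by an hour); if the offset is consistently exactly one hour, the members' time zone / DST patch level is also worth checking:

        rem compare each machine's offset against the domain controller
        w32tm /monitor
        rem point the client back at the domain hierarchy, then restart the time service
        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time
        w32tm /resync /rediscover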

    Read the article

  • Backing up Rackspace Cloud servers

    - by drunknbass
    Is there any way to back up an image of a Rackspace Cloud server? Something that's easy to do, instead of running scripts to back up each DB and filesystem. This seems like something that is easy to do on EC2 and impossible to do on Rackspace Cloud. Also, I know RightScale can work with Rackspace Cloud, but only for Ubuntu images. Is it possible to do this easily with RightScale if it's not yet available directly in Rackspace Cloud?

    Read the article

  • Error! File In use: SQLSERVER

    - by Asad Butt
    I am trying to copy a database from one folder to another. There is no program running at all except the operating system (I mean all windows are shut). I keep getting this:

        The action cannot be completed because the file is open in another program. Close the file and try again.

    Try again, try again, try again... What on earth is holding that file, and how can I carry on with copying / deleting / overwriting the database files? This problem seems very common. Thanks
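
    If these are live .mdf/.ldf files, it is usually the SQL Server service itself that holds them open, even with no windows visible. A hedged sketch of two options (the service name assumes a default instance; paths and the database name are placeholders):

        rem option 1: stop the service while copying
        net stop MSSQLSERVER
        copy "C:\Data\MyDb.mdf" "D:\Backup\"
        copy "C:\Data\MyDb_log.ldf" "D:\Backup\"
        net start MSSQLSERVER

        rem option 2: detach the database first, copy the files, then re-attach
        sqlcmd -E -Q "EXEC sp_detach_db 'MyDb';"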

    Read the article

  • Deploy AWS Auto Scaling groups using Chef Server

    - by Yoga
    You can, for example, deploy an Auto Scaling group consisting of web servers, an ELB and a DB using AWS CloudFormation (with Chef Server): http://aws.amazon.com/cloudformation/aws-cloudformation-templates/ But you need to create a CloudFormation template first. Is it possible to do it using only Chef Server and knife? We don't want to rely much on CloudFormation, and it seems the hosted Chef Server (http://www.opscode.com/) is able to do so. Any open source alternative? Thanks.
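
    For launching and bootstrapping instances straight from knife without CloudFormation, the knife-ec2 plugin can create a server and register it against the Chef Server with a run list. A hedged sketch where the AMI, flavor, ssh user and role are placeholders; note this provisions instances, but the scaling policy itself would still have to come from Auto Scaling/CloudFormation or an external tool:

        knife ec2 server create \
          --image ami-12345678 \
          --flavor m1.small \
          --ssh-user ubuntu \
          --run-list 'role[webserver]'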

    Read the article

  • MongoDB: why is my mongo server using two PIDs?

    - by Lucas
    I started my mongo with the following command:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data
        2014-06-07T08:46:30.507+0000 [initandlisten] MongoDB starting : pid=6409 port=27017 dbpath=/home/lucas/node/nodetest2/data 64-bit host=ecoinstance
        2014-06-07T08:46:30.508+0000 [initandlisten] db version v2.6.1
        2014-06-07T08:46:30.508+0000 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8
        2014-06-07T08:46:30.508+0000 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
        2014-06-07T08:46:30.509+0000 [initandlisten] allocator: tcmalloc
        2014-06-07T08:46:30.509+0000 [initandlisten] options: { storage: { dbPath: "/home/lucas/node/nodetest2/data" } }
        2014-06-07T08:46:30.520+0000 [initandlisten] journal dir=/home/lucas/node/nodetest2/data/journal
        2014-06-07T08:46:30.520+0000 [initandlisten] recover : no journal files present, no recovery needed
        2014-06-07T08:46:30.527+0000 [initandlisten] waiting for connections on port 27017

    It appears to be working, as I can execute mongo and access the server. However, here are the processes running mongo:

        [lucas@ecoinstance]~/node/testSite$ ps aux | grep mongo
        root   6540  0.0  0.2  33424  1664 pts/3  S+   08:52  0:00 sudo mongod --dbpath /home/lucas/node/nodetest2/data
        root   6541  0.6  8.6 522140 52512 pts/3  Sl+  08:52  0:00 mongod --dbpath /home/lucas/node/nodetest2/data
        lucas  6554  0.0  0.1   7836   876 pts/4  S+   08:52  0:00 grep mongo

    As you can see, there are two PIDs for mongo. Before I ran sudo mongod --dbpath /home/lucas/node/nodetest2/data, there were none (besides the grep, of course). How did my command spawn two PIDs, and should I be concerned? Any suggestions or tips would be great.

    Additional Info

    In addition, I may have other issues that might suggest a cause. I tried running mongo with --fork --logpath /home/lucas..., but it did not work. More information below:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data --fork --logpath /home/lucas/node/nodetest2/data/
        about to fork child process, waiting until server is ready for connections.
        forked process: 6578
        ERROR: child process failed, exited with error number 1

        [lucas@ecoinstance]~/node/nodetest2$ ls -l data/
        total 163852
        drwxr-xr-x 2 mongodb nogroup     4096 Jun  7 08:54 journal
        -rw------- 1 mongodb nogroup 67108864 Jun  7 08:52 local.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 08:52 local.ns
        -rwxr-xr-x 1 mongodb nogroup        0 Jun  7 08:54 mongod.lock
        -rw------- 1 mongodb nogroup 67108864 Jun  7 02:08 nodetest1.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 02:08 nodetest1.ns

    Also, my db path folder is not in the original location. It was originally created under the default /var/lib/mongodb/ and moved to my local data folder. This was done after shutting down the server via /etc/init.d/mongod stop. I have a Debian Wheezy server, if it matters.
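
    A hedged sketch for checking how the two PIDs relate; typically the first is just the sudo wrapper and the second is the actual mongod child it spawned:

        # show mongod together with its parent PID (PPID)
        ps -o pid,ppid,user,args -C mongod
        # visualise the parent/child chain starting from the first PID seen above
        pstree -p 6540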

    Read the article

  • Where are the TweetDeck settings files located in (Ubuntu) Linux?

    - by Philipp Andre
    Hi everybody, I'm running Windows as well as Ubuntu and would like to sync both TweetDeck installations via Dropbox. Therefore I need to locate two files:

        td_26_[username].db
        preferences_[username].xml

    On Windows I found them under the folder C:\Users\[account]\AppData\Roaming\TweetDeckFast.[random string]\Local Store\, but I can't find them on my Ubuntu installation. Does anyone know where these files are located? Best regards, Philipp
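
    A hedged way to locate them on Ubuntu without guessing the Adobe AIR data directory; the patterns simply mirror the file names from the question:

        # search the home directory for the TweetDeck settings files
        find ~ \( -iname 'td_26_*.db' -o -iname 'preferences_*.xml' \) 2>/dev/null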

    Read the article

  • Permission denied error while importing to a remote repository via svn

    - by Usman Ajmal
    Hi, I am importing my project to another machine on my LAN, into the directory /srv/svn/repos/my-repo, where my-repo was created via svnadmin create. The permissions of /srv/svn/repos/my-repo are:

        drwxr-xr-x 6 svn svn 4096 2010-04-19 17:30 my-repo

    I executed the following command to import myProject's files to my-repo on the remote system:

        sudo svn import -m "First import" myProject svn+ssh://[email protected]/srv/svn/repos/my-repo

    This command started 'Adding' files but gave the following error after 'Adding' 7 files:

        svn: Can't open file '/srv/svn/repos/baltoros-valgrind/db/txn-current-lock': Permission denied

    Any idea what's going on? Thanks a lot
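
    The txn-current-lock error means the account that reaches the repository over svn+ssh cannot write inside the repository's db/ directory. A hedged sketch of fixing that via group permissions (the committing user name is a placeholder, since the question only shows an obfuscated address):

        sudo usermod -a -G svn committer               # put the ssh user in the repository's group
        sudo chgrp -R svn /srv/svn/repos/my-repo
        sudo chmod -R g+w /srv/svn/repos/my-repo/db    # db/ must be group-writable to create txn files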

    Read the article

  • How to diagnose causes of oom-killer killing processes

    - by dunxd
    I have a small virtual private server running CentOS and www/mail/db, which has recently had a couple of incidents where the web server and ssh became unresponsive. Looking at the logs, I saw that oom-killer had killed these processes, possibly due to running out of memory and swap. Can anyone give me some pointers on how to diagnose what may have caused the most recent incident? Is it likely to be the first process killed? Where else should I be looking?
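
    A hedged sketch of the usual post-mortem checks on CentOS; the log path assumes the default syslog configuration:

        grep -iE 'out of memory|oom-killer|killed process' /var/log/messages   # what was killed, when, and with what score
        ps aux --sort=-%mem | head -15                                         # current top memory consumers
        free -m                                                                # RAM and swap headroom right now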

    Read the article

  • How does an SMTP server work? Need an understanding

    - by Rajeev
    Hi, I am looking for an understanding of how an SMTP server works in an environment. For example, suppose I wish to run an SMTP server on Windows 2008 solely for a specific application, which runs on its own web server, app server and DB server. Do I need to register the domain in order to send emails from my domain, if I wish to send emails to some users from that SMTP server?
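
    For context on how delivery works: an outbound SMTP server finds the recipient's mail server through the recipient domain's MX records in DNS, so your own domain matters mainly for the From address, reverse DNS and SPF rather than for routing. A hedged sketch of checking those records (the domains are placeholders):

        rem where mail for a recipient domain is delivered
        nslookup -type=mx example.com
        rem the sending domain's SPF record, if receiving servers check it
        nslookup -type=txt yourdomain.com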

    Read the article

  • Permission denied installing CiviCRM on Joomla

    - by Tim
    Dear all, I am trying to install CiviCRM on a Joomla 1.5.17 web server running Ubuntu 9.10. Uploading the package to the tmp directory in /var/www/[site name]/tmp and installing creates this error:

        Warning: fopen(/var/www/trbcp/administrator/components/com_civicrm/civicrm/templates/CRM/common/civicrm.settings.php.tpl) [function.fopen]: failed to open stream: Permission denied in /var/www/trbcp/libraries/joomla/filesystem/file.php on line 240
        Warning: fopen(/var/www/trbcp/administrator/components/com_civicrm/civicrm/templates/CRM/common/civicrm.settings.php.tpl) [function.fopen]: failed to open stream: Permission denied in /var/www/trbcp/libraries/joomla/filesystem/file.php on line 240
        Warning: include_once(/var/www/trbcp/administrator/components/com_civicrm/civicrm.settings.php) [function.include-once]: failed to open stream: Permission denied in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 115
        Warning: include_once() [function.include]: Failed opening '/var/www/trbcp/administrator/components/com_civicrm/civicrm.settings.php' for inclusion (include_path='.') in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 115
        Warning: require_once(DB.php) [function.require-once]: failed to open stream: No such file or directory in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 140
        Fatal error: require_once() [function.require]: Failed opening required 'DB.php' (include_path='.') in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 140

    Initially I got a permission denied error and thought that Joomla did not have permissions to all its directories, but looking at Help - System Information, all the necessary directories are writable. I then decided to chmod 777 all the directories and try again, but it still fails. Looking at the directories afterwards, it seems that the new directories being created are not created 777. By changing them I can get at least one step further before the error appears again.

    My question is: does anyone know how to get round this? I am thinking that the new directories being created require sudo privileges for the mv and create actions to be carried out, hence the permission denied errors. Can this be configured in Joomla? Or is there a way to specify that new directories created in /var/www/[site name] take 777 by default? Any help greatly appreciated!

    EDIT: P.S. if anyone could give me a clue as to how the insert-code feature works as well, that would be great! It might make this post a bit more readable!

    EDIT2: Well, I have had a bash at changing the permissions and ownership:

        sudo chown -R www-data:www-data /var/www/trbcp

    I then tried changing the whole /var directory (insecure, I know, but this is a test and dev server for me to find my feet on) to 777 and am still getting permission errors. It seems to be an error opening a stream? Not a PHP guy, so not sure what that is, but could it be that permissions to run PHP scripts need to change? Any thoughts greatly appreciated.
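
    A hedged sketch of a saner baseline than blanket 777, assuming Apache on Ubuntu runs as www-data (the site path is taken from the question). The final 'DB.php' failure is likely just a downstream consequence of civicrm.settings.php not being readable, since that file normally sets up CiviCRM's include path:

        # give the web server user ownership of the site instead of chmod 777
        sudo chown -R www-data:www-data /var/www/trbcp
        # directories traversable and group-writable, files not executable
        sudo find /var/www/trbcp -type d -exec chmod 775 {} \;
        sudo find /var/www/trbcp -type f -exec chmod 664 {} \;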

    Read the article

  • Restore a single user's Exchange 2003 mailbox from backup

    - by Campo
    I take weekly backups of Exchange in full. I also take complete weekly backups of the entire server. It is a Server 2003 R2 box with AD and Exchange 2003 all on one machine.

    One user's inbox has disappeared. She now has 19,000+ junk items. It is possible the inbox got mixed into the junk; regardless, it is such a huge mess that she is not going to go through all of that. I want to restore her mailbox from the backup.

    I followed this MS KB: http://support.microsoft.com/kb/823176 (I had to use Method 3). I have a VM of Server 2003 R2 with Exchange, but I am having failures on the restore from NTBackup. The backup log just states to check the application log... the application log points to the backup log... the only info is "failed to restore". The only thing different is the computer name. The only error I can find is in the application log: "Information Store Database not found". All the others just say that the backup failed. Any assistance is greatly appreciated.

    UPDATE: I have successfully proven I can restore the DB into a recovery storage group in my VM. Unfortunately, because the actual account is on a different store, I am unable to do the recovery. The error is:

        The attempt to log on to the Microsoft Exchange Server computer has failed. The MAPI provider failed. Microsoft Exchange Server Information Store ID no: 8004011d-0512-00000000

    Two questions:

    QUESTION 1: Should I repeat my steps on the production Exchange server, restoring into its recovery storage group and then merging into her original account? I am just concerned about doing a recovery like that on the live server.

    QUESTION 2: Is there any way I can extract her .PST from my recovery VM and then import it into her Outlook?

    On the recovery VM I restored the raw DB from my full backup, repaired it with ESEUTIL, then mounted it in the recovery store. I was thinking I could just repeat that and mount it in the main store on the VM? Thanks for the suggestions.
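
    A hedged pointer for QUESTION 2: on Exchange 2003 the usual way to get a mailbox out of a mounted recovery storage group into a .PST is the ExMerge utility (a separate download, run under an account with Receive As/Send As rights on the store). A sketch of the checks typically done on the restored database before mounting it (paths are placeholders):

        rem confirm the restored database is in a clean shutdown state
        eseutil /mh e:\recovery\priv1.edb
        rem if it reports "Dirty Shutdown", replay the log files first
        eseutil /r e00 /l e:\recovery\logs /d e:\recovery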

    Read the article

  • Use Dropbox as Offsite Enterprise Backup Solution [on hold]

    - by chris
    For my small company, I'm using Tomahawk Backup as the enterprise offsite solution, as it covers files, databases and Exchange (brick level). The problem is the price: it costs more than 10x the price of Dropbox (and others) for the same space (120GB), and doesn't have de-duplication. So I'm wondering: assuming there is no problem with backing up files only (i.e. copying the Exchange store file and the DB files to the Dropbox folder), would Dropbox be suitable as the offsite backup solution? Thanks

    Read the article

  • Server health monitor

    - by arash
    Hello, I need a server monitoring solution. The solution must be able to alert when a server goes down. I also want the solution to be flexible enough to work with various servers; for example, it must be able to monitor web servers, application servers, DB servers, and so on. Is there any open source solution available for me? Thanks in advance

    Read the article

  • Connection error when trying to browse SSAS cube in BIDS

    - by lance
    A SQL Server 2008 DB instance is installed on a Win 2003 server VM, networked (but no domain). I can browse a cube from BIDS on the same VM, but when I try to browse the same cube on a Windows 7 (Home Premium) networked machine, I get a connection error. The error suggests checking the data source settings, which I have, and when I click "test connection" on the data source it is successful. I'm looking for probable causes and solutions.

    Read the article

  • Excel Data Organization: Array Formulas? Tables? Named Range?

    - by Joe Arasin
    I'm trying to make a huge Excel sheet reasonably maintainable, but it's huge in the "hundred-table-db" direction, rather than the "hundred-thousand-row-table" direction. I want to have a baseline data table that looks something like this:

        | Indicator           | Units      | 2010 | 2015 | 2020 | 2025 | Source     |
        | GDP                 | $Gazillion | 300  | 350  | 400  | 450  | BLS        |
        | Population          | Millions   | 350  | 400  | 450  | 500  | Census     |
        | PetMonkeyPopulation | Thousands  | 50   | 60   | 70   | 80   | SimiansRUs |

    And then be able to have another sheet that looks like:

        |                  | 2010 | 2015 | 2020 | 2025 |
        | MonkeysPerCapita | .1   | .2   | .3   | .4   |
        | MonkeysPerDollar | .01  | .01  | .01  | .01  |
        | GDPPerCapita     | 300  | 400  | 450  | 600  |

    Is there some standard way to make this kind of thing maintainable?
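
    One common pattern, sketched under the assumption that the baseline block is defined as an Excel Table (named "Baseline" here): derived sheets then pull values by indicator name and year with INDEX/MATCH instead of hard-coded cell references, so rows can move or be added without breaking formulas. For example, GDPPerCapita for 2015 (ignoring unit scaling) could be:

        =INDEX(Baseline[2015], MATCH("GDP", Baseline[Indicator], 0)) / INDEX(Baseline[2015], MATCH("Population", Baseline[Indicator], 0))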

    Read the article

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Win 2003 64-bit, 8 CPUs, 16GB memory, separate SQL 2005 DB server.

    We had a serious slowdown today on an otherwise fairly well performing ASP.NET site. For a period of a couple of hours all page requests were taking a very long time to be served, e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the web server was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out: no blocking was occurring and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

        Current Anonymous Users (for the site in question)
        Get Requests/sec (ditto)
        Requests/sec for the ASP.NET application running the site

    Get Requests/sec was averaging 100-150. Requests/sec for ASP.NET was averaging 5-10. However, Current Anonymous Users was around 200. And then, as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes. All this time Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down.

    I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady, reading (Current Connections counter added):

        Current Anonymous Users:  average 30
        Get Requests/sec:         average 100
        Requests/sec for ASP.NET: 5
        Current Connections:      average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there will be short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image. Then they flip back to their usual levels.

    So, my questions are:

        1. Thinking of the original performance issue: if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer than usual to be served?
        2. What other counters should I be looking at if this happens again?
        3. What explains the inverse relationship between Get Requests/sec and Current Anonymous Users?
        4. What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.
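
    A hedged sketch of logging the same counters continuously to a file so the next incident is captured even if nobody is watching Perfmon; the site instance name is a placeholder, and the ASP.NET queue counters are added because queued requests are a common explanation for slow pages while CPU stays idle:

        typeperf -si 15 -o c:\perflogs\iis-counters.csv ^
          "\Web Service(MySite)\Current Anonymous Users" ^
          "\Web Service(MySite)\Get Requests/sec" ^
          "\ASP.NET Applications(__Total__)\Requests/Sec" ^
          "\ASP.NET\Requests Queued" ^
          "\ASP.NET\Request Execution Time"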

    Read the article

  • TempDB is Full Issue for SQL Server 2000

    - by Jason N. Gaylord
    Even though the log file shows that there is over 1 GB of free space, we start receiving an error message every 3 or 4 days saying that the TempDB file is full. I know cursors impact the TempDB file, but is there anything else I should be looking at to see why this keeps happening? I've tried running SQL Profiler, but when running it, it slowed down the DB so much that the users were experiencing timeouts. What specific items should I check for in SQL Profiler?
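
    A hedged sketch of lighter-weight checks on SQL Server 2000 before reaching for Profiler; a single long-open transaction or an unclosed cursor holding tempdb pages is a common culprit:

        -- run from Query Analyzer or osql
        USE tempdb
        EXEC sp_spaceused          -- how much of tempdb is actually allocated and used
        DBCC SQLPERF(LOGSPACE)     -- log usage per database, including tempdb
        DBCC OPENTRAN(tempdb)      -- oldest open transaction that may be pinning tempdb growth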

    Read the article

  • Varnish 503 Guru Mediation errors with pfsense and healthy apache

    - by Fammy
    We are running a pfSense firewall / load balancer with Varnish as a service, in front of Fedora Linux web servers running Apache. We are getting intermittent 503 Guru Meditation errors. We are a bit stuck scratching our heads because it is not easily repeatable.

    The timeouts are set to 30s (connect and first byte), yet the 503 page will show instantly, not after 30s. Then, if you refresh immediately, it may very well work instantly, and sometimes for 100 refreshes. The load average on the web servers is < 1 and on the DB server < 3 (all servers (web, db, pfsense/varnish) are physical rather than VMs).

    I would have thought that if the timeouts were being hit, the 503 page would only appear after 30s; am I mistaken? Also, when an error happens there does not appear to be any corresponding error in Apache's log files. This seems to affect pages as well as images, so it is possible for the page to load fine, and for 9 out of 10 images on the page to be fine but 1 not to work.

    An example of the Varnish debug output is below. It says "no backend connection" but I can't figure out why; if the load was high on Apache I could understand it being flaky. The machines are on the same gigabit Ethernet LAN.

        21 ReqStart   c *IP-REMOVED* 33418 1274368062
        21 RxRequest  c GET
        21 RxURL      c /fashion/
        21 RxProtocol c HTTP/1.1
        21 RxHeader   c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.5) Gecko/2008121622 Fedora/3.0.5-1.fc10 Firefox/3.0.5
        21 RxHeader   c Host: *ourdomain.com*
        21 RxHeader   c Accept: */*
        21 RxHeader   c Accept-Encoding: deflate, gzip
        21 VCL_call   c recv lookup
        21 VCL_call   c hash
        21 Hash       c /fashion/
        21 Hash       c *ourdomain.com*
        21 VCL_return c hash
        21 VCL_call   c miss fetch
        21 FetchError c no backend connection
        21 VCL_call   c error restart
        21 VCL_call   c recv lookup
        21 VCL_call   c hash
        21 Hash       c /fashion/
        21 Hash       c *ourdomain.com*
        21 VCL_return c hash
        21 VCL_call   c miss fetch
        21 FetchError c no backend connection
        21 VCL_call   c error restart
        21 VCL_call   c recv lookup
        21 VCL_call   c hash
        21 Hash       c /fashion/
        21 Hash       c *ourdomain.com*
        21 VCL_return c hash
        21 VCL_call   c miss fetch
        21 FetchError c no backend connection
        21 VCL_call   c error deliver
        21 VCL_call   c deliver deliver
        21 TxProtocol c HTTP/1.1
        21 TxStatus   c 503
        21 TxResponse c Service Unavailable
        21 TxHeader   c Server: Varnish
        21 TxHeader   c Content-Type: text/html; charset=utf-8
        21 TxHeader   c Content-Length: 384
        21 TxHeader   c Accept-Ranges: bytes
        21 TxHeader   c Date: Wed, 11 Apr 2012 10:36:17 GMT
        21 TxHeader   c X-Varnish: 1274368062
        21 TxHeader   c Age: 0
        21 TxHeader   c Via: 1.1 varnish
        21 TxHeader   c Connection: close
        21 TxHeader   c X-Cache: MISS
        21 Length     c 384
        21 ReqEnd     c 1274368062 1334140577.449995041 1334140577.450334787 1.794108152 0.000282764 0.000056982
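
    A hedged VCL sketch of two things that commonly produce instant "no backend connection" 503s against an otherwise healthy Apache: the backend's max_connections limit being hit momentarily, or a health probe marking the backend sick. The host, port and numbers are placeholders to adapt, not a drop-in fix:

        backend web1 {
            .host = "192.168.1.10";
            .port = "80";
            .connect_timeout = 30s;
            .first_byte_timeout = 30s;
            .max_connections = 250;      # hitting this limit fails fetches immediately, no 30s wait
            .probe = {
                .url = "/";
                .interval = 5s;
                .timeout = 2s;
                .window = 5;
                .threshold = 3;          # backend marked sick after repeated probe failures
            }
        }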

    Read the article

  • Exchange 2003 Offline Defrag

    - by Tom
    Looking to do my first offline defrag this weekend on Exchange 2003. Our Exchange DB is on the E: drive and \\server1 is the temporary location, where there is sufficient space. The plan is to dismount the store and change to c:\program files\exchsrvr\bin. Does this look like the correct command to run?

        eseutil /d "e:\exchdata\priv1.edb" /t "\\server1\exchtemp\tempdfg.edb"

    Is there anything I should be aware of, such as backups running at the same time, etc?
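
    A hedged sketch of the surrounding steps (the server name comes from the question, the other paths are illustrative); the temporary location generally needs free space of roughly 110% of the .edb size, and online backup jobs should not run against the store while it is dismounted for the defrag:

        rem with the mailbox store dismounted:
        eseutil /mh "e:\exchdata\priv1.edb"
        rem expect "State: Clean Shutdown" before defragmenting
        eseutil /d "e:\exchdata\priv1.edb" /t "\\server1\exchtemp\tempdfg.edb"
        rem logical consistency check afterwards, then remount and take a fresh full backup
        isinteg -s SERVERNAME -fix -test alltests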

    Read the article

  • Dump SQL Server Stored Procedures to Files

    - by Jake Wharton
    Is there a non-interactive (read: scriptable) way to dump all stored procedures to disk? We keep versions of our stored procedures in the repository to track changes and for deployment and rollback purposes. Currently, whenever we want to modify a stored procedure, we have to pull it out of the DB directly before beginning the change.
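
    A hedged sketch using SQL Server Management Objects (SMO) from PowerShell, which scripts each user procedure to its own file; the server, database and output path are placeholders:

        [Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
        $srv = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
        $db  = $srv.Databases["MyDatabase"]
        foreach ($sp in $db.StoredProcedures | Where-Object { -not $_.IsSystemObject }) {
            # one .sql file per procedure, named schema.procedure.sql
            $sp.Script() | Out-File -FilePath ("C:\procs\" + $sp.Schema + "." + $sp.Name + ".sql")
        }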

    Read the article

  • Windows 7 + SVN permission problem

    - by acidzombie24
    In TortoiseSVN I get the error:

        Can't create directory 'C:\dev\repo\subfolder\db\transactions\49-1g.txn': ...

    I used robocopy (I can't remember if I used /copyall) to copy the files onto an external drive, then copied them back after a format. Now I cannot commit. How do I fix this?
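
    A hedged sketch of resetting ownership and ACLs after the robocopy round-trip (run from an elevated command prompt; the path comes from the error message, and %USERNAME% assumes the current account should own the working repository):

        takeown /f C:\dev\repo /r /d y
        icacls C:\dev\repo /reset /t
        icacls C:\dev\repo /grant "%USERNAME%":(OI)(CI)F /t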

    Read the article
