Search Results

Search found 22641 results on 906 pages for 'use case'.

  • MS SQL Server slows down over time?

    - by Dave Holland
    Have any of you experienced the following, and have you found a solution? A large part of our website's back-end is MS SQL Server 2005. Every week or two the site begins running slower, and I see queries taking longer and longer to complete in SQL. I have a query that I like to use:

        USE master
        SELECT text, wait_time, blocking_session_id AS "Block", percent_complete, *
        FROM sys.dm_exec_requests
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
        ORDER BY start_time ASC

    It's fairly useful: it gives a snapshot of everything that's running right at that moment against your SQL Server. What's nice is that even if your CPU is pegged at 100% for some reason and Activity Monitor is refusing to load (I'm sure some of you have been there), this query still returns and you can see what query is killing your DB. When I run this, or Activity Monitor, during the times that SQL has begun to slow down, I don't see any specific queries causing the issue - they are ALL running slower across the board. If I restart the MS SQL service then everything is fine; it speeds right up - for a week or two, until it happens again. Nothing that I can think of has changed, but this just started a few months ago... Ideas?

    Added: Please note that when this database slowdown happens, it doesn't matter if we are getting 100K page views an hour (busier time of day) or 10K page views an hour (slow time); the queries all take longer to complete than normal. The server isn't really under stress - the CPU isn't high, and the disk usage doesn't seem to be out of control... It feels like index fragmentation or something of the sort, but that doesn't seem to be the case. As far as pasting results of the query above, I really can't do that. The query lists the login of the user performing the task, the entire query, etc., and I'd rather not hand out the names of my databases, tables, columns and logins online :)... I can tell you that the queries running at that time are normal, standard queries for our site that run all the time, nothing out of the norm.
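
    Since index fragmentation is suspected and then discounted, one way to actually check that theory on SQL Server 2005 is the sys.dm_db_index_physical_stats DMV. A minimal sketch, assuming a hypothetical database name MyDb (run it from within that database so OBJECT_NAME resolves):

        -- list the most fragmented indexes in the current database
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               ips.index_id,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(N'MyDb'), NULL, NULL, NULL, 'LIMITED') AS ips
        WHERE ips.avg_fragmentation_in_percent > 30
        ORDER BY ips.avg_fragmentation_in_percent DESC;

    If nothing meaningful shows up here, fragmentation really can be ruled out, which points the investigation back at plan cache or memory pressure.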

  • ASUS P8B WS - Endless Reboots

    - by tuxGurl
    I am running an Intel Xeon 1245 with 2x 4GB Kingston ECC unbuffered DDR3 on an ASUS P8B WS motherboard, BIOS version 0904 x64. This system is a little over a month old and is running Ubuntu 11.10. This evening I found the machine turned off. When I tried to restart it, it would POST and stop at the GRUB screen. When I selected Ubuntu and hit Enter, within 2-3 seconds the system would shut down and restart. If I stayed at the GRUB screen and did nothing, the system would not cut out. I tried booting off a USB stick, and again, 2-3 seconds after selecting 'Try Ubuntu without Installing', the machine cut power and rebooted. Things I have tried so far:

    - Resetting the BIOS using the onboard jumper
    - Resetting the BIOS settings to default
    - Disconnecting all external hardware except keyboard and monitor
    - Booting with 1 stick of RAM (I tried different single sticks)
    - Ensuring that the onboard EPU and GPU Boost switches are in the off position

    I am running Memtest86 right now and it has been running for 38+ minutes. This is not an OS problem or an overheating issue (I have a Cooler Master HAF case with 3 fans besides the CPU fan). I am at a loss as to what to try next. I think the BIOS is misconfigured somehow, but I don't know what to look for.

  • How can the \Deleted flag be unset for all mails in a cyrus-imapd mailbox?

    - by Sachin Divekar
    I have a 5GB mailbox which I moved using imapsync, but somehow I messed up the --delete/--delete2 options and ended up with almost all the messages having the \Deleted flag set. I do not have delayed expunge enabled, so I cannot use the unexpunge utility. I am using cyrus-imapd v2.3.7. Using cyrus-imapd's debugging feature, I found out that the email client (Roundcube in my case) fires the following IMAP command to unset it:

        UID STORE 179 -FLAGS.SILENT (\Deleted)

    I don't know if I can somehow fire this command for all the mails. Is there any way I can unset the \Deleted flag for all the mails in the mailbox?

    UPDATE: Using @geekosaur's tip of specifying a range of message ids in the above command, I could solve it for one mailbox under INBOX, like INBOX.folder1. Is there any way I can do it for multiple mailboxes under INBOX recursively? I am now working on solving it with a script, maybe using Perl's IMAP-related modules, but I still need to solve it asap, so inputs are welcome.
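
    For reference, the range trick generalizes with 1:* covering every message in the selected mailbox, and the per-folder work can be driven from a shell loop. A rough sketch, assuming a plaintext IMAP listener on localhost:143 and hypothetical credentials and folder names; note it blindly pipelines the commands without reading server responses, so test it against one folder first:

        for mbox in INBOX.folder1 INBOX.folder2 INBOX.folder3; do
            # LOGIN, SELECT the folder, strip \Deleted from every message, LOGOUT
            printf 'a1 LOGIN user pass\r\na2 SELECT %s\r\na3 UID STORE 1:* -FLAGS.SILENT (\\Deleted)\r\na4 LOGOUT\r\n' "$mbox" \
                | nc -w 5 localhost 143
        done

    A Perl module such as Mail::IMAPClient would do the same thing with proper response checking and could LIST the folders under INBOX instead of hard-coding them.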

  • sudo ./starling start works well but sudo service starling start fails

    - by Keating Wang
    sudo ./starling start works well, but sudo service starling start fails:

        $ sudo ./starling start
         * Starting Starling Server...    [ OK ]
        $ sudo ./starling stop
         * Stop Starling Server...    [ OK ]
        $ sudo service starling stop
         * Starting Starling Server...
        /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find starling (>= 0) amongst [minitest-1.6.0, rake-0.8.7, rdoc-2.5.8] (Gem::LoadError)
            from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
            from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems.rb:1229:in `gem'
            from /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling:18:in `<main>'

    The error above is 'cannot find gem starling'. Here is the starling file (located in /etc/init.d, rwxrwxrwx):

        set -e

        LOGFILE=/var/log/starling/starling.log
        SPOOLDIR=/var/spool/starling
        PORT=22122
        LISTEN=127.0.0.1
        PIDFILE=/var/run/starling.pid
        NAME=starling
        DESC="Starling"
        INSTALL_DIR=/home/keating/.rvm/gems/ruby-1.9.2-p290/bin/
        DAEMON=$INSTALL_DIR/$NAME
        SCRIPTNAME=/etc/init.d/$NAME
        OPTS="-d"

        . /lib/lsb/init-functions

        d_start() {
            log_begin_msg "Starting Starling Server..."
            start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- $OPTS || log_end_msg 1
            log_end_msg 0
        }

        d_stop() {
            log_begin_msg "Stopping Starling Server..."
            start-stop-daemon --stop --quiet --pidfile $PIDFILE || log_end_msg 1
            log_end_msg 0
        }

        case "$1" in
            start)
                d_start
                ;;
            stop)
                d_stop
                ;;
            restart|force-reload|reload)
                d_stop
                sleep 2
                d_start
                ;;
            *)
                echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}"
                exit 3
                ;;
        esac

        exit 0
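
    This failure pattern (works from an interactive sudo shell, fails under service) usually means the RVM environment is not loaded when init runs the script, so the starling binstub resolves gems against the system Ruby paths. One hedged fix, assuming RVM's wrapper feature, is to generate a wrapper that bakes in the right ruby and point the init script at it:

        # a sketch, not verified on this box: generate an RVM wrapper for starling
        rvm wrapper ruby-1.9.2-p290 bootup starling

        # then, in /etc/init.d/starling, point the daemon at the wrapper:
        #   DAEMON=/home/keating/.rvm/bin/bootup_starling

    The wrapper is a plain shell script that sources the correct RVM environment before exec'ing starling, so it works even under service's stripped-down environment.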

  • Networking "chokes" on Windows 7 64 bit

    - by Rohit Nair
    I've been having this problem for some months now, and I have been unable to figure out a solution, or even the cause. At random points throughout the day, my internet connectivity "freezes". I don't get disconnected from my local wireless network. My router doesn't get disconnected from the world. However, for some reason, my computer stops receiving packets. If I'm playing an MMO (World of Warcraft, in this case, but it has happened with EVE Online as well), all activity just freezes. If I try to browse, Opera, Firefox and IE all stall at "Waiting for google.com..." or whatever the hostname may be. Inspection with a packet sniffer seems to reveal that there are no incoming packets.

    Here's the interesting part: disconnecting from my wireless network and reconnecting fixes the issue. Obviously this led me to conclude that it was a problem with my router or wireless card. However, I have tweaked all the settings on my router that I could think of, including things like QoS, AP isolation, etc., with no change. My wireless card doesn't really have that many options, and I have uninstalled and reinstalled drivers a few times without any change. Windows Firewall on/off doesn't make a difference. Anyone have any suggestions for debugging this? It's becoming an annoyance.
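
    One knob worth ruling out on Windows 7, offered as a guess rather than a diagnosis: TCP receive-window autotuning has been known to stall connections against some routers, with exactly this "connected but no incoming packets" feel. It can be toggled off and later restored from an elevated command prompt:

        :: disable TCP receive-window autotuning (run as administrator)
        netsh interface tcp set global autotuninglevel=disabled

        :: to restore the default later:
        netsh interface tcp set global autotuninglevel=normal

    If the freezes stop with it disabled, the router (or its firmware) is mishandling window scaling.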

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all,

    I have two older PCs on my LAN posing as 'servers': one running FreeNAS off a USB stick, using three 500GB HDDs in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general-purpose 'tinker' box that I can ssh into, etc. The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyway. Rather than simply replace one 500GB drive with another 500GB drive, and still have no backup of the file server, I'd like to upgrade all the drives to 2TB ones - but I have nowhere to store that much data in the meantime.

    As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external enclosure. After the backup is done, I'd replace the drives in the FreeNAS box, rebuild the array there, and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box.

    My big question is: most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc. Should I presume that is simply fake RAID, like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired?

    Thanks, Monte
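
    If the enclosure does expose all four disks individually (port-multiplier support on the eSATA card is the usual prerequisite for that), the Linux-native route sidesteps the fake-RAID question entirely: mdadm plus LVM, exactly as planned. A minimal sketch, assuming the drives show up as /dev/sdb through /dev/sde (hypothetical device names - check with lsblk or dmesg first):

        # create a 4-disk software RAID5 array
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]

        # layer LVM on top
        pvcreate /dev/md0
        vgcreate backupvg /dev/md0
        lvcreate -l 100%FREE -n backup backupvg
        mkfs.ext4 /dev/backupvg/backup

    An md array built this way is also portable: it can be reassembled on any Linux box with mdadm --assemble, which matters for a backup target.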

  • Raid-1 Western Digital Green AARS, cloning and WD Align Utility

    - by Jaguar
    Hello all,

    My current setup runs on top of 2x Western Digital 2500KS drives in RAID-1, using the motherboard's 780G RAID controller, on WinXP. Everything is fine, but the drives are a bit noisy. I am considering buying 2x WD6400AARS disks, which are the 640GB slower 'green' drives that also feature Advanced Format 4KB sectors. This means that for WinXP the partition will have to be aligned to work properly, else there is a performance penalty. There are two questions here:

    1. The Green drives from WD are all slower and are (according to WD) susceptible to drop-outs from the controller. Does anyone have any experience in this matter? Is there a possibility the controller will drop a drive? If so, can I do anything about it?
    2. Western Digital provides a utility to perform the alignment on the partition. The thing is, will the utility see the drives in question, given that the operating system only sees one logical disk?

    I will be making the transition using a cloning tool (most probably Norton Ghost), unless I don't find a solution or a clear answer, in which case I'll just buy a Win 7 license and do a clean install... thx in advance

  • Run script before shutdown/restart

    - by dtbarne
    I'd like to run a PHP script when an instance is told to shut down, but of course before it actually finishes shutting down. My particular script is just looking to push some log files from the local partition to another server. I've got the gist of how this process works, but I need some clarification. Here is how I understand it; please correct me if I'm wrong:

    1. Create an executable script in /etc/init.d (let's call it /etc/init.d/push-logs).
    2. Create a symlink to /etc/init.d/push-logs from /etc/rc0.d (shutdown) and /etc/rc6.d (reboot). The name should be KXXpush-logs.

    Here are my questions:

    - Of course: am I understanding correctly?
    - For #2 above, it sounds like the lower the XX the better. Is there a number that is too low to use? Does it matter if it shares a number with another script?
    - Does the script in /etc/init.d/push-logs HAVE to follow the standard init.d template (supporting start/stop, etc. commands)? That doesn't really apply to my use case. If possible, I just want the script to be the following:

        #!/bin/sh
        #
        # Run PHP file prior to shutdown
        #
        /usr/bin/php /path/to/php_file.php
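
    For what it's worth, on sysvinit the K-links in rc0.d/rc6.d are invoked with the argument stop, so honoring the convention costs only a case statement; a hedged sketch of the script plus its installation (K01 chosen arbitrarily to run early in the shutdown sequence):

        #!/bin/sh
        # /etc/init.d/push-logs - push log files before shutdown/reboot
        case "$1" in
            stop)
                /usr/bin/php /path/to/php_file.php
                ;;
        esac
        exit 0

    installed with:

        sudo chmod +x /etc/init.d/push-logs
        sudo ln -s /etc/init.d/push-logs /etc/rc0.d/K01push-logs
        sudo ln -s /etc/init.d/push-logs /etc/rc6.d/K01push-logs

    A bare script with no argument handling would usually still run (it just ignores the stop argument), but the guard makes the behavior explicit and keeps tools like update-rc.d from complaining.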

  • How to setup a virtual host in Ubuntu?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to point to /var/www/myapp if I type apps.mydomain.com/myapp. How do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because it's now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public

            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>

            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the files instead of being pointed to the document root? In case someone asks, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.
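
    One thing that stands out in the config above: ServerName cannot carry a path component, so "apps.mydomain.com/myapp" is not a valid ServerName, and Apache falls back to matching the bare host and listing files from the directory. A hedged sketch of the usual shape, mapping the /myapp path onto Laravel's public directory with Alias (directive names are standard Apache 2.2; the paths are taken from the question):

        <VirtualHost *:80>
            ServerName apps.mydomain.com

            # serve the Laravel public/ directory at the /myapp path
            Alias /myapp /var/www/myapp/public

            <Directory /var/www/myapp/public>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Dropping Indexes from the Options line is also what stops visitors from browsing the raw file listing.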

  • Can't connect to MySQL server from Java application

    - by RN
    This is on a VPS/CentOS server. The MySQL server is preconfigured. I am running the Java application on Tomcat. My Java web application is not able to connect to the MySQL server; I get the error "Caused by: java.net.ConnectException: Connection refused". I suspect this to be a configuration problem rather than a coding problem, hence I have posted this on Server Fault. And yes, the same web app is able to connect to MySQL on a different Linux box. This is the URL that I provided to my Java application (note: it assumes the default port):

        url = "jdbc:mysql://localhost/pickupgames"

    My first suspicion was that MySQL is running on a non-default port, so I tried to find the port where the MySQL server is running. I tried every trick mentioned in http://serverfault.com/questions/116100/how-to-check-what-port-mysql-is-running-on, but no luck:

    - SHOW GLOBAL VARIABLES LIKE 'PORT'; shows port 0
    - netstat -tlnp doesn't show mysql at all
    - /etc/my.cnf has no port entry
    - telnet localhost 3306 doesn't connect

    And in case you are wondering whether the MySQL server is running at all: it is, and I know for sure because I have been able to log in using the mysql command. Also:

        # ps -ef|grep 'mysql'
        root     31839 27662  0 00:49 pts/3    00:00:00 grep mysql
        root     32452     1  0 Apr02 ?        00:00:00 /bin/sh /usr/bin/mysqld_safe --skip-grant-tables --skip-networking
        mysql    32504 32452  0 Apr02 ?        00:00:06 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock --skip-grant-tables --skip-networking

    Please note the --skip-networking parameter. Does this have something to do with the issue? Any explanation why I can't connect to the MySQL server on port 3306 by telnet? Or why it doesn't show up under netstat? Any suggestion on what I should try next?
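
    --skip-networking is almost certainly the whole story: it tells mysqld not to listen on any TCP port at all, which is exactly why the port variable reports 0, netstat shows nothing, and telnet is refused, while the mysql CLI still works over the Unix socket. A hedged sketch of the cleanup, assuming the flag lives in my.cnf or the init script (the grep below just locates it; --skip-grant-tables deserves the same scrutiny, since it disables authentication entirely):

        # find where --skip-networking (and --skip-grant-tables) is being set
        grep -r 'skip-networking\|skip-grant-tables' /etc/my.cnf /etc/my.cnf.d/ /etc/init.d/mysqld 2>/dev/null

        # after commenting the options out, restart and verify the listener is back
        service mysqld restart
        netstat -tlnp | grep 3306

    JDBC's jdbc:mysql://localhost/... URL always uses TCP, not the socket, which is why the web app fails while the command-line client succeeds.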

  • ATI X1600 driver problem on Mac

    - by Mulot
    Hi all,

    I currently own an '06 MacBook Pro 1,1, and for some months I have had recurring display bugs and artifacts. A quick search shows that a lot of other Mac users (iMac or MacBook Pro) have the same problem, due to an issue with the X1600 video card. Apparently it's caused by overheating; in my case, even without warming up much, I get very bad display bugs such as colorful pixel lines, glitches, freezes and crashes, all of this on Tiger, Leopard and Snow Leopard. I found an interesting article talking about this problem and trying to gather people so that Apple takes this serious GPU problem into consideration. In one of the comments, a user said he removed all bundles named with "radeon", and then he had no more problems under Leopard; it seems to work fine on Snow Leopard too. I did the same thing: I removed the bundles of the driver, restarted, and had no more problems - but no more 3D acceleration either, which is not an acceptable solution. For those interested, here is the list of files to be deleted to stop having this problem:

        /System/Library/Extensions/ATIRadeonX1000.kext
        /System/Library/Extensions/ATIRadeonX1000GA.plugin
        /System/Library/Extensions/ATIRadeonX1000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX1000VADriver.bundle
        /System/Library/Extensions/ATIRadeonX2000.kext
        /System/Library/Extensions/ATIRadeonX2000GA.plugin
        /System/Library/Extensions/ATIRadeonX2000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX2000VADriver.bundle

    I would like to know if there is a way to fix this using other drivers, if that's possible, or by creating a group to push Apple into putting a replacement program in place.

    Edit: how to locate those files on your hard drive if you are not under Snow Leopard:

        sudo find / -iname "*radeon*"
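
    For anyone else trying this, a gentler variant than deleting is to move the bundles aside so they can be restored later; a sketch, with /KextBackup as an arbitrary example directory (touching /System/Library/Extensions afterwards is the usual trick on those OS X releases to force the kext cache to rebuild on reboot):

        sudo mkdir -p /KextBackup
        sudo mv /System/Library/Extensions/ATIRadeonX1000* /KextBackup/
        sudo mv /System/Library/Extensions/ATIRadeonX2000* /KextBackup/
        # invalidate the kext cache so the change takes effect
        sudo touch /System/Library/Extensions
        sudo reboot

    Moving the files back and touching the directory again undoes the change.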

  • Bash Script - Traffic Shaping

    - by Craig-Aaron
    hey all, I was wondering if you could have a look at my script and help me add a few things to it:

    - How do I get it to find how many active ethernet ports I have, and how do I filter on more than one ethernet port?
    - How do I get this to handle a range of IP addresses?
    - Once I have a few ethernet ports, I need to add traffic control to each one.

        #!/bin/bash

        # Name of the traffic control command.
        TC=/sbin/tc

        # The network interface we're planning on limiting bandwidth.
        IF=eth0             # Network card interface

        # Download limit (in mega bits)
        DNLD=10mbit         # DOWNLOAD Limit

        # Upload limit (in mega bits)
        UPLD=1mbit          # UPLOAD Limit

        # IP address range of the machine we are controlling
        IP=192.168.0.1      # Host IP

        # Filter options for limiting the intended interface.
        U32="$TC filter add dev $IF protocol ip parent 1:0 prio 1 u32"

        start() {
            # Hierarchical Token Bucket (HTB) to shape bandwidth
            $TC qdisc add dev $IF root handle 1: htb default 30       # Creates the root scheduler
            $TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD    # Child scheduler to shape download
            $TC class add dev $IF parent 1: classid 1:2 htb rate $UPLD    # Child scheduler to shape upload
            $U32 match ip dst $IP/24 flowid 1:1    # Filter to match the interface, limit download speed
            $U32 match ip src $IP/24 flowid 1:2    # Filter to match the interface, limit upload speed
        }

        stop() {
            # Stop the bandwidth shaping.
            $TC qdisc del dev $IF root
        }

        restart() {
            # Self-explanatory.
            stop
            sleep 1
            start
        }

        show() {
            # Display status of traffic control.
            $TC -s qdisc ls dev $IF
        }

        case "$1" in
            start)
                echo -n "Starting bandwidth shaping: "
                start
                echo "done"
                ;;
            stop)
                echo -n "Stopping bandwidth shaping: "
                stop
                echo "done"
                ;;
            restart)
                echo -n "Restarting bandwidth shaping: "
                restart
                echo "done"
                ;;
            show)
                echo "Bandwidth shaping status for $IF:"
                show
                echo ""
                ;;
            *)
                pwd=$(pwd)
                echo "Usage: tc.bash {start|stop|restart|show}"
                ;;
        esac

        exit 0

    thanks
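
    A hedged sketch of the two missing pieces, assuming interfaces named ethN and a kernel exposing /sys/class/net: enumerate the ethernet ports that are operationally up, then run the shaping block once per interface. The IP range is already effectively handled by the existing $IP/24 match; widening or narrowing that CIDR suffix changes the covered range.

        # list ethernet interfaces that are currently up
        ACTIVE_IFS=$(for d in /sys/class/net/eth*; do
            [ "$(cat "$d/operstate" 2>/dev/null)" = "up" ] && basename "$d"
        done)

        # apply the existing shaping rules to each active port
        for IF in $ACTIVE_IFS; do
            echo "shaping $IF"
            start    # start() already reads $IF, so each port gets its own qdisc tree
        done

    Because $IF is re-read inside start(), the only structural change needed in the script itself is moving the loop around the case branches (or into each branch).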

  • Add bookmarks to Delicious and Google Bookmarks at the same time

    - by BrianH
    I have used delicious.com (or back then, del.icio.us) to store my bookmarks for a long time now, and I love it. I was looking through some of my Google services and realized they have a bookmarking service that integrates with your Google searches (I thought they had a bookmarking service before, but it went away? Maybe not). I like Delicious just fine - I'm not interested in leaving. But I also like how my Google bookmarks are highlighted (and, I'm guessing, brought to the top) in my search results, so I can easily tell if I've bookmarked a site (kind of like the "promote up" feature). I can't even count the number of times I've searched for a site only to find I've been there months or years ago. If sites I've bookmarked in the past are highlighted in my search results, it makes it easier to pick which search result to go to.

    My question is about bookmarking tools: is there a bookmarklet or Firefox addon that will let me save a bookmark to multiple services at the same time, in this case Google and Delicious? Or maybe a service to sync my Delicious bookmarks to Google Bookmarks on a regular basis? I have used the Delicious addon since the beginning - it would just be nice to add a bookmark to multiple services with one addon. For that matter, it would be nice to add Evernote into the mix: click one button to save the page to Evernote, and bookmark the page in Google and Delicious.

    EDIT on 7/30/2009 - Summary: A proposed solution is to use the Delicious addon and the GMarks addon to keep the two services in sync. I was not able to get the two addons to keep everything in sync, so it was also suggested to use the Google Toolbar with the Delicious addon instead. Although I personally have reservations about letting Google know about every single site I visit, I believe this solution will work, so I am accepting it as the answer. I still wish there were a solution that would let you post a bookmark/page to multiple services at the same time (Delicious, Google, Evernote, Digg, Diigo, etc.). Thanks!

  • How to Move SMS from iPhone to Mac?

    - by seda16
    SMS is a main way to communicate with others, and you may have many messages saved on your iPhone. There are many reasons to back up your iPhone SMS to a Mac. For example, family or friends may have sent you important messages that you want to keep in case you delete them by accident, or you may just need a backup of your SMS for other uses. So today let's talk about how to move SMS from iPhone to Mac. It is very easy if you use an app to help you; I use the iPhone to Mac transfer from Amacsoft to copy SMS from iPhone to Mac. Here is how to use it:

    Step 1: Connect iPhone to Mac. First, install and launch the iPhone to Mac transfer, then connect your iPhone to the Mac. The transfer will recognize your iPhone automatically, and all the information on your iPhone will be shown in the interface.

    Step 2: Select SMS and start the export. You will see many choices on the left; find "SMS" and click it, and all the SMS on your iPhone will be listed on the right. Select and check those you want to move, then click "Export" at the top to start the transfer. Wait just a few minutes and the transfer will be done.

    That's it - the transfer is finished; it's really very easy. By the way, you can also use the Amacsoft iPhone to Mac transfer to move other kinds of files, like photos, songs, etc. If you're a Windows user, you can use the iPhone to PC transfer from the same site to move SMS from iPhone to your PC with the same steps. Good luck!

  • Windows Server 2008 RAID10

    - by JT
    Hello All,

    I am building a storage system for myself. I have a 16-bay SATA chassis, and right now I have:

    - 1x 500GB SATA drive for booting
    - 8x 1.5TB drives for data
    - a 3ware 9500S-8 RAID card, which the 8 drives above are connected to

    I am used to Linux, but not in the RAID department. I have Windows experience too. What I am looking for is something that I can just let sit, that is reliable, and that I can use for other things as well (like running test websites: Apache, MySQL, etc.). This box is private, on a class C subnet. My thought is to at least consider Windows Server 2008; I especially like the potential for non-GUI mode. Some questions:

    - Can Windows Server 2008 do software RAID 10 out of the box?
    - Is software RAID better for performance, and better in case the RAID needs to be moved to another machine?
    - I just want to SCP files, so can OpenSSH run on it?
    - Can one install the GUI but not use it unless they get in a bind?
    - Is Windows a good idea, or should I stick to Linux software RAID or FreeBSD + ZFS?

  • Atlassian Crucible very slow on large repository

    - by Mitch Lindgren
    Hi everyone,

    My company has been running a trial of Atlassian Crucible for some months now. For repositories where it's working properly, users have given very positive feedback about the tool. The problem I'm having is that we have several different projects, each with its own repository, and some of those repositories are very large. One repository in particular has a large number of branches and probably around 9,000 files per branch. Browsing that repository in Crucible is extremely slow.

    Crucible is running on a CentOS VM. The VM has 4GB of RAM, and I've set Crucible's maximum at 3GB, of which it is currently using 2GB. I've brought this up in a support ticket with Atlassian, and they suggested the following:

        In particular, because you have a rather large SVN repository, you will likely find that FishEye will be creating a large index file on disk. To help improve performance, a few things you can try are:
        - Increasing the memory available to FishEye (see the document above).
        - Migrating to an external database: confluence.atlassian.com/display/FISHEYE/Migrating+to+an+External+Database
        - Excluding files and directories from your index that aren't needed: confluence.atlassian.com/display/FISHEYE/Allow+(Process)

    (Sorry for not hyperlinking; don't have the rep.)

    I've tried all of these things to an extent, but so far none have helped greatly. I was originally running Crucible on a Windows box with 2GB of RAM using the built-in HSQL DB. Moving to MySQL on CentOS brought a performance increase for some repositories, and made Crucible much more stable, but did not seem to help much with our biggest repository. There are only so many files/branches I can exclude from indexing while maintaining the tool's usefulness. That being the case, does anyone have any tips on how to speed up Crucible on large repositories, without investing in insanely powerful hardware? Thanks!

    Edit: To clarify, since I didn't mention it explicitly above, I am using FishEye.
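
    For reference, one concrete way the JVM heap is usually raised on a standalone FishEye/Crucible install, offered as a sketch rather than gospel (install paths and exact start script vary by version):

        # set the heap before starting FishEye/Crucible
        export FISHEYE_OPTS="-Xmx3072m -Xms512m"
        $FISHEYE_INST/bin/start.sh

    Since the process is currently sitting at 2GB of a 3GB ceiling, it is worth confirming (via the Administration > System Info page or a jmap heap summary) that the indexer is actually memory-bound before buying hardware; if it is I/O-bound on the index files instead, faster disk for FISHEYE_INST helps more than heap.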

  • VMware Workstation update stopped autofit guest working

    - by u8sand
    VMware one day offered an update (8.0.1 build-528992, my current version) from 8.0. I accepted it, because updates usually fix problems. However, not in this case... Previously it worked very well; now it still "works", but the one glitch I'm getting makes it too hard to deal with. A screenshot shows the problem best: my virtual PC is not resizing correctly. This happens with any operating system; autofit guest just doesn't work, it only produces results like this, and thanks to the tools it becomes very hard NOT to autofit the guest. I've tried uninstalling 8.0.1 completely and installing 8.0 again, but with the same results. I don't really understand what the new update has done to VMware Workstation or to my virtual machines. I believe this isn't VMware Workstation's direct fault but VMware Tools', which would explain why going back to 8.0 didn't help, since VMware Tools has its own updates. How can I fix this? The host is running Windows 7 Ultimate 64-bit.

  • SQL Developer cannot establish connection to Oracle DB with listener running

    - by lostinthebits
    I am working from home and connected to my work's VPN. I have tried to connect to the work DB with SQL Developer (the latest version and the previous version) in the following environments:

    - Mac OS X 10.8.5, with SQL Developer installed and launched directly on the iMac
    - SQL Developer installed and launched on a VM on the same computer (guest Ubuntu 12.04 LTS)
    - SQL Developer installed and launched on a VM on the same computer (guest Windows 7.0 Professional)

    In every case I get:

        Status: Failure - Test failed: IO Error: The Network Adapter could not establish the connection

    I have read DBA forums and googled, and the most common suggestion is that the Oracle listener is not up and running. I can conclusively say this is not the case, because I have the option of using remote desktop and accessing the Oracle DB in question from my work computer. If the listener were down, according to my DBA, no one would be able to connect. My sysadmin and DBA are stumped, so I assume it is something unique to my home system. The reason I do not want to continue with the remote desktop workaround is that remote desktop has an annoying (often infuriating) lag.
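
    Since remote desktop works but direct connections don't, the discriminating test is raw TCP reachability of the listener port from the home machine over the VPN. A hedged sketch, assuming the default listener port 1521 and a hypothetical hostname dbhost.example.com:

        # can the VPN route/firewall reach the listener port at all?
        nc -vz -w 5 dbhost.example.com 1521

    If this times out while remote desktop to the same network works, the VPN is only passing some ports (or the DB host's firewall only admits on-site subnets), which is a network ACL conversation rather than a SQL Developer problem.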

  • How to fix a Postfix/MySQL/Dovecot Unknown Host Issue?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers, and I'm unsure how to fix the problem. I will try to explain it in detail; here it goes:

    I have an Ubuntu server set up for virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain; for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up on another of my subdomains, ex.example.com. The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, the email is returned to us with an "unknown host" error.

    Now, I know that the mail.example.com server can resolve the ex.example.com domain, because I can ping/dig it while SSH'd into that server. I can also log into Postfix via telnet and send an email to an ex.example.com mailbox. I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the top-level domain (example.com)? If that's the case, how do I get Postfix/Dovecot to look locally only for the full hostname (mail.example.com) and, if it doesn't find it, send the mail to the correct server by looking up the MX/A records (which I know exist and are set up correctly)?

    I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!
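
    A quick way to see which domains Postfix considers local or virtual, offered as a first diagnostic rather than a definitive fix:

        # show the non-default Postfix settings governing local/virtual lookup
        postconf -n | egrep 'mydestination|virtual_(alias|mailbox)_domains|relay_domains'

        # ex.example.com (or a pattern/table entry matching it) must NOT appear
        # in any of these; if it does, Postfix looks for the mailbox locally
        # instead of delivering via the MX/A records

    If the virtual domains come from MySQL, the same check applies to the rows the virtual_mailbox_domains query returns, not just to main.cf.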

  • My hard drive seems to be overheating... what should I do?

    - by George Edison
    After a cold boot, the hard drive in my notebook jumps to 56°C within an hour or so of idling. Is 56°C a cause for alarm? Notes:

    - The notebook is on a flat desk and none of the vents are obstructed.
    - The video card is currently at 55°C and the CPU at 50°C.
    - It's a Western Digital 250GB hard drive.

    SMART reports the drive healthy but does warn that:

    Edit: this problem had a very surprising ending. I inverted the notebook and unscrewed some of the panels on the back (there was one covering the hard drive, and one that provided access to the memory). I couldn't see any dust, so I simply screwed everything back together and powered it on... and it worked! The temperature is now staying at 46°C, and the machine feels notably cooler to the touch. So I can only assume that some internal fan was malfunctioning or something. Whatever the case, it's working now, so I won't complain.

    Edit: I have an SSD now, so temperature isn't as big an issue as it was when I had a mechanical drive.
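
    For anyone watching a drive like this, the temperature and the pre-failure attributes can be pulled in one go with smartmontools; a small sketch, assuming the drive is /dev/sda:

        # dump SMART attributes; Temperature_Celsius is usually attribute 194
        sudo smartctl -A /dev/sda

        # or just the overall health verdict
        sudo smartctl -H /dev/sda

    Logging the output periodically makes it easy to tell a one-off spike from a steady climb.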

  • Can I use Outlook 2010 (beta) with OWA account?

    - by Dan
    One of the new features of Outlook 2010 (beta) is support for multiple Exchange accounts. I'm wondering if there is any way to use this together with a (different) Outlook Web Access account to also get that email in Outlook. Specifically, in addition to my regular corporate (Exchange) account, I also use another corporate account through OWA. With this second account, the only supported access is through OWA; while POP3 access is available, it is not actually supported. I'm not very familiar with configuring Exchange servers, but in talking to those who are, it sounds like enabling Outlook Web Access is (slightly) different from allowing access from Outlook via HTTP(S). Is that correct? If so, it doesn't really seem quite right, since in the absolute worst case one could (theoretically) resort to screen-scraping OWA.

    Edit: this looks to be about the same as ActiveSync/OWA Desktop Client?

    (This doesn't have anything to do with the question, but I'm actually using this second corporate account in Outlook by POP3'ing to Gmail, and then IMAP4 from Gmail to Outlook. Obviously, it would be much nicer to add it as a second Exchange account.)

  • Finding a backup and synchronization solution

    - by Andrea Zilio
    I'm having difficulty finding a backup and synchronization solution with the following characteristics:

    - Cross-platform: Windows, Linux, Mac
    - Offsite backup (so Internet backup)
    - Data deduplication
    - Transfers only the new/modified bits of modified files
    - Secure: data encrypted before leaving the computer
    - Maintains multiple versions of files (even deleted files)
    - Folder synchronization integrated with backup and across multiple computers connected to the internet (not necessarily on the same LAN)

    I think the folder sync feature needs a better explanation. The use case is this: you have a desktop PC and a laptop. The desktop PC contains a folder with some files, and this folder is part of the backup (so it was selected to be backed up). The laptop does not contain that folder or those files at all. Then you're abroad with your laptop and you need that folder. So you want to be able to open the backup program, select that folder from the backup, and download it to your laptop, keeping it synchronized with the backed-up version. When you then come back home and switch on your desktop PC, you want the folder we're talking about to be updated on the desktop too.

    Does anyone know of a service with all these features? I've only found SpiderOak to support everything I've mentioned, but I'm not completely satisfied with the time taken to complete a backup. Sometimes it seems to hang for minutes for no reason at all, and folder synchronization occurs only after all files are backed up (instead, folder sync should have a separate queue independent of other backup operations, and synchronization should occur frequently, for example every 5 minutes or less, independently of the frequency of normal backup operations).

  • IIS7 Session ID rotating with Classic ASP

    - by ManiacZX
    I am trying to migrate a classic ASP app onto a Windows 2008 R2 server. The application features run fine, but I am having an issue with session state. The application keeps the logged-in user information in session, and I am constantly getting knocked out as if the session had expired. While debugging, I discovered the sessions are not expiring; instead, 2-3 different session IDs are in use by one browser. I am outputting Response.Write(Session.SessionID) on various pages in the application, and I can sit there and hit refresh over and over and watch the number change randomly between these 2-3 session IDs. The sessions are still valid: when I refresh and get the session ID that I logged in under, the page is displayed (because the security check was successful), and when I get one of the other session IDs, I get the "you aren't logged in, you need to log in" message. If I close and re-open the browser, same story, just with a new set of IDs. This happens with IE8, Firefox and Chrome, from multiple computers.

    Things I've tried:

    - App pool set to No Managed Code and Classic
    - Output caching set so .asp pages never cache
    - ASP session properties: enabled and disabled ASP session state, and confirmed it affected the page (error trying to read Session.SessionID when disabled)

    Things I've tried just in case, which shouldn't have anything to do with ASP session state:

    - Disabled compression
    - Changed ASP.NET session state properties (InProc, StateServer, SQLServer, cookies, URI, etc.)
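
    One configuration that can produce exactly this rotating-ID symptom with classic ASP, mentioned as a guess worth checking rather than a confirmed diagnosis: an application pool running as a web garden (Maximum Worker Processes greater than 1). Classic ASP sessions live inside a single worker process, so each request that lands on a different worker gets a different session. The setting can be inspected and pinned from an elevated prompt (the pool name below is hypothetical):

        :: show the current worker-process count for the pool
        %windir%\system32\inetsrv\appcmd.exe list apppool "MyAppPool" /text:processModel.maxProcesses

        :: pin the pool to a single worker process
        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.maxProcesses:1

    The same value is visible in IIS Manager under the pool's Advanced Settings, Process Model section.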

  • Need to link WP Blog with Rails App on Heroku

    - by John Glass
    I have a client who wants to migrate his Rails app to Heroku. However, the client also has a blog associated with his domain that runs on WordPress. Currently, the WordPress blog is running happily alongside the Rails app, but once we migrate to Heroku, that clearly won't be possible. The URL for the app is like http://mydomain.com, and the URL for the blog is like http://mydomain/blog. I realize that the best long-term solution is to redo the blog in a Rails-friendly format like Toto or Jekyll. But in the short term, what is the best way to continue hosting the WP blog where it is (or somewhere) while using Heroku to run the app? The client doesn't want the blog on a subdomain; it should remain at mydomain/blog for SEO reasons, and also because the blog gets traffic. I have two ideas:

    1. Use rack_rewrite or refraction (or just a regular old 301 and Apache mod_rewrite) on the old (non-Heroku) server to redirect the main URL from the old site to Heroku. In this case, I can just leave the WordPress blog running happily where it is. I think?? Is there a reason to choose one of those options (rack_rewrite, refraction, or mod_rewrite) over the others if I do it this way?
    2. Switch the DNS info to point to the Heroku site, and then use a 301 redirect from the blog to the old site. But then I'll have to put the old (non-Heroku) site on a subdomain and use some kind of rewrite rules anyway so it doesn't look like a subdomain.

    Is either of these approaches preferable, or is there an easier way to do it that I'm missing?
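
    For option 1, the Apache side is only a few lines; a hedged sketch, assuming the app ends up at a hypothetical myapp.herokuapp.com and everything except /blog should be forwarded:

        # in the old server's vhost for mydomain.com
        RewriteEngine On

        # leave the WordPress blog alone
        RewriteCond %{REQUEST_URI} !^/blog

        # send everything else to the Heroku app
        RewriteRule ^/?(.*)$ http://myapp.herokuapp.com/$1 [R=301,L]

    One caveat with any redirect-based variant: visitors will see the herokuapp.com URL in the address bar unless the custom domain is attached to the Heroku app and DNS handled there, which is part of why the rack_rewrite/refraction alternatives exist.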

  • Intermittent "Lost connection to MySQL server at 'reading initial communication packet'"

    - by db2
    Our web environment consists of two servers:

    - Web front-end: Dell PowerEdge R610, RHEL 5.5, Apache 2.2.17, PHP 5.2.14
    - Database server: Dell PowerEdge R710, Windows 2008 R2 Standard x64, MySQL 5.5.11-log x64

    Normally these two work perfectly fine together. However, when I try to get them talking via a dedicated LAN on their secondary NICs (each machine has four of them), things get flaky. I have NIC #2 on both machines configured on the 172.16.1.0/24 subnet, with no gateway or DNS servers (obviously, since it's just those two systems), and I put the private IP address of each machine into the hosts file of the other. The routing tables on both machines look okay after I do this. I've tried this both with a crossover cable draped directly between the two NICs, and via a dedicated VLAN on the switch in the rack. In either case, I get intermittent connection problems. It's a fairly small percentage of connections that fail, but it's enough to cause a significant problem, and I have to switch back to the main network connection, which contends with all the other traffic and hosts on the switch. The full error message that appears in the application log:

        SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 110

    Am I doing something really dumb that's causing this to not work properly? Is there anything I can check in MySQL that would explain why it fails to connect occasionally?
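
    Intermittent failures on a direct crossover link are often a physical-layer autonegotiation problem rather than a MySQL one (error 110 is a plain connect timeout). A hedged first check from the RHEL side, assuming the secondary NIC is eth1:

        # look for speed/duplex mismatches and link state
        ethtool eth1

        # error/drop counters climbing alongside the failures are another strong hint
        ip -s link show eth1

    If the counters climb, forcing matching speed/duplex on both ends (or simply swapping the cable) is the usual next step; MySQL's own connect_timeout only papers over the retransmits.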
