Search Results

Search found 23265 results on 931 pages for 'justin case'.


  • Magento Apache Config & Memory Issues

    - by cheshirepine
    I have a Magento installation on a VPS that is giving me a headache. This particular VPS has a reasonable spec - 2 GB of memory and 50 GB of storage - and it runs a single domain with a single Magento install and nothing else.

    About 5 months ago we started having issues. Every so often (about once every 2 or 3 weeks) the VPS would crash - all processes stopped and the only way to restart the container was via Virtuozzo. Now, however, it's 2 or 3 times a week. My VPS hosts confirm I am breaching the 2 GB memory limit, at which point all VPS processes are killed to stop the container bringing the entire node down.

    I have not made any config changes to it at all. I was running New Relic on it for a short while, but have removed that in case it was contributing to the issue. I can see nothing in the logs that indicates a problem, and we have no cron jobs running at the time the crashes happen. The site generates steady but not huge amounts of traffic (usually averaging less than 100 visits per day).

    Is there anything in particular I should have done to the Apache or PHP configs to help? I'm not a massively experienced Apache admin, but I know more than enough to solve most problems. Failing that, any other ideas that might help? I can't afford for this site to be down this much.
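
    One possible starting point (a hedged sketch, not a definitive fix): measure how much memory each Apache child actually uses and cap the prefork MPM so the worst case stays inside the 2 GB container. The apache2 process name and the memory budget below are assumptions; on some distributions the process is httpd, and other services on the VPS (MySQL in particular) also need headroom.

      #!/bin/sh
      # Sketch: estimate the average resident memory per Apache child, then derive
      # a MaxClients value that keeps Apache within roughly half of a 2 GB container
      # (leaving the rest for MySQL, the OS, etc.). Adjust the budget to taste.

      BUDGET_MB=1024   # assumed memory budget for Apache alone

      # Average RSS (in MB) of the current Apache children ("apache2" is an
      # assumption; use "httpd" on Red Hat style systems).
      AVG_MB=$(ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d", sum/n/1024}')

      echo "Average Apache child size: ${AVG_MB:-unknown} MB"
      [ -n "$AVG_MB" ] && echo "Suggested prefork MaxClients: $((BUDGET_MB / AVG_MB))"

      # The corresponding prefork settings would then go into the Apache config,
      # e.g. (values illustrative only):
      #   <IfModule mpm_prefork_module>
      #       MaxClients           <calculated value>
      #       MaxRequestsPerChild  500   # recycle children so PHP memory cannot creep
      #   </IfModule>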

    Read the article

  • Is there a way to "burn" audio to an ISO? (as an audio CD)

    - by Sootah
    I have an audiobook that I've downloaded via their download manager, and it's loaded into the cutesy little audio program that they force you to use. I can play the book just fine using their proprietary software, and while it's annoying when using my PC, it's utterly UNBEARABLE when I try to listen to it on my BlackBerry. The program is INSANELY slow - it literally takes around 30 seconds to switch between tracks, so if I've forgotten where I am in the book it takes me around 15 minutes to finally get back to where I was.

    I've looked everywhere for how to transcode the book to .MP3, but evidently with their current format it's extremely convoluted, and I have no desire to dick around with installing some older version of the codec, getting a different transcoding app, and then wrestling with getting it to actually work. Since I'm able to burn a copy of the book to an audio CD, I figure the best way to go about this is to just make the CDs and then rip those to .MP3. In order to avoid wasting two hours, not to mention 14 CD-Rs, I was wondering if there's a way to "burn" to an .ISO instead of an actual CD-R. I currently have SlySoft's Virtual CloneDrive installed, so I can mount .ISOs easily enough, but now I want to actually create an ISO via the CD-burning process.

    Just in case I've not explained myself very well, here is an overview of what I intend to do:

    1. "Burn" a set of audio CD .ISOs from the audiobook (hopefully I can do this using Windows Media Player, otherwise I'll be forced to use the audiobook app)
    2. Mount an .ISO in Virtual CloneDrive
    3. Rip the audio tracks on the mounted .ISO to .MP3s
    4. Repeat steps 2-3 until the entire book is in .MP3 format
    5. Copy the .MP3s to my BlackBerry so that I'm not driven insane every time I want to listen to the book in the car, and so I can use Winamp when listening on my computer

    EDIT: I suppose a rather concise way to put it is that I need something that will emulate a CD-R drive, so that you can select it as the output drive in whatever app you're burning the audio CD from. (I'd suppose that when you "insert a blank CD-R" the app would then ask you what file to save to.)

    Read the article

  • Snort analysis of a Wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high-traffic and high-connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with Snort (I don't want to burden the router with inline analysis of 20 Mbps of traffic). Apparently Snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is Gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run Snort, I keep getting the following error:

      % snort -V
         ,,_     -*> Snort! <*-
        o"  )~   Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux
         ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
                 Copyright (C) 1998-2010 Sourcefire, Inc., et al.
                 Using libpcap version 1.1.1
                 Using PCRE version: 8.11 2010-12-10
                 Using ZLIB version: 1.2.5

      %> snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/
      (snip)
      273 out of 1024 flowbits in use.

      [ Port Based Pattern Matching Memory ]
      +- [ Aho-Corasick Summary ] -------------------------------------
      | Storage Format    : Full-Q
      | Finite Automaton  : DFA
      | Alphabet Size     : 256 Chars
      | Sizeof State      : Variable (1,2,4 bytes)
      | Instances         : 314
      |     1 byte states : 304
      |     2 byte states : 10
      |     4 byte states : 0
      | Characters        : 69371
      | States            : 58631
      | Transitions       : 3471623
      | State Density     : 23.1%
      | Patterns          : 3020
      | Match States      : 2934
      | Memory (MB)       : 29.66
      |   Patterns        : 0.36
      |   Match Lists     : 0.77
      |   DFA
      |     1 byte states : 1.37
      |     2 byte states : 26.59
      |     4 byte states : 0.00
      +----------------------------------------------------------------
      [ Number of patterns truncated to 20 bytes: 563 ]
      ERROR: Can't find pcap DAQ!
      Fatal Error, Quitting..

    net-libs/daq is installed, but I don't even want to capture traffic, I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?
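
    A hedged sketch of the usual fix for "Can't find pcap DAQ" - pointing Snort explicitly at the DAQ modules and telling it to read from a file. The /usr/lib64/daq path is an assumption; it is wherever net-libs/daq installed its modules on this Gentoo box:

      # Command-line form (paths are assumptions - adjust to the local install):
      snort -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/ \
            --daq pcap --daq-dir /usr/lib64/daq --daq-mode read-file

      # The same thing can be set in snort.conf instead of on the command line:
      #   config daq: pcap
      #   config daq_dir: /usr/lib64/daq
      #   config daq_mode: read-file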

    Read the article

  • Hard Disk: S.M.A.R.T. Status BAD, Back up and replace

    - by Nick
    I have a laptop hard drive I was trying to use in my new media computer. The case is small and can accommodate two 2.5" drives, but no 3.5" drives. I had been using the drive as a storage drive until now.

    When I go to install Windows on the hard drive, I'm first prompted at the BIOS with: "Hard Disk: S.M.A.R.T. Status BAD, Back up and replace", and then again in Windows Setup, informing me that the hard drive is bad. So I did a full format of the drive and tried again - same error.

    I then took it out and hooked it back up to my other computer via a SATA-to-USB adapter kit (maybe the cause?). The hard drive is recognized fine, and when I scanned it for errors (right click -> Properties -> Tools -> Error checking) it reported that the drive is fine. I have tried 3 different SATA cables and multiple jumpers. When I plug my 1.5 TB 3.5" drive into the computer that gives me the S.M.A.R.T. error on the 2.5" drive, it is recognized with no problems.

    Any ideas on why this is happening and how I can fix it?
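
    Worth noting: the Windows error-checking scan tests the filesystem, not the drive's own S.M.A.R.T. data, so the two results don't contradict each other. A hedged sketch of reading the S.M.A.R.T. attributes directly with smartmontools - the device name /dev/sdb is an assumption, and through a USB adapter the "-d sat" option is often needed for the bridge chip:

      smartctl -H /dev/sdb            # overall SMART health verdict (PASSED / FAILED)
      smartctl -A -d sat /dev/sdb     # raw attributes: look at Reallocated_Sector_Ct,
                                      # Current_Pending_Sector and Offline_Uncorrectable
      smartctl -t short /dev/sdb      # run the drive's own short self-test
      smartctl -l selftest /dev/sdb   # read the self-test results afterwards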

    Read the article

  • Follow cursor location from Kile to Evince

    - by D Connors
    I know the title is probably not very clear, so I'll try to be as clear as possible here. I'm running Xubuntu on my netbook, and I'm using Kile for my LaTeX editing. Since Kile is native to KDE, I had to manually set it to open PDFs and DVIs in Evince instead of Okular.

    Now, the last time I played around with LaTeX I was using TeXnicCenter on Windows, and it had a very neat feature. Whenever I hit "QuickBuild", not only would it open the output .dvi file, but it would also show me exactly the piece of text I was editing. That is, if I were editing line 13 of the 7th page of my document, when I compiled it the DVI viewer would automatically take me to line 13 on the 7th page, so I wouldn't have to scroll all the way down to it every time I compiled the .tex file.

    I'm guessing this is a pretty standard feature, and Kile probably supports it, but since I don't know what it's called, I'm trying to be clear about what I'm talking about. The problem is that this feature is not working for me right now, and I'm guessing it's either because Evince does not support it or because I have to configure it manually. Which one is it? And how do I configure it manually, if that's the case?
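
    For reference, this feature is usually called "forward search"; it relies on the compiler embedding source-position information (SyncTeX for PDF output, source specials for DVI output). A hedged sketch of the compile-side flags only - the viewer-side wiring in Kile's Build tool configuration is assumed to vary by version, and the file name is a placeholder:

      # PDF route: embed SyncTeX data, which recent Evince releases can use for
      # forward/inverse search.
      pdflatex -synctex=1 mydocument.tex

      # DVI route: embed source specials instead.
      latex -src-specials mydocument.tex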

    Read the article

  • Mac OS X L2TP VPN won't connect

    - by smokris
    I'm running Mac OS X Server 10.6, providing an L2TP VPN service. The VPN works just fine when connecting from all computers except one - this one computer stays at the "Connecting..." stage for a while, then says "The L2TP-VPN server did not respond". In the console, I see this:

      6/7/10 10:48:07 AM pppd[341] pppd 2.4.2 (Apple version 412.0.10) started by jdoe, uid 503
      6/7/10 10:48:07 AM pppd[341] L2TP connecting to server 'foo.bar.baz.edu' (256.256.256.256)...
      6/7/10 10:48:07 AM pppd[341] IPSec connection started
      6/7/10 10:48:07 AM racoon[342] Connecting.
      6/7/10 10:48:07 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 1).
      6/7/10 10:48:08 AM racoon[342] IKE Packet: receive success. (Initiator, Main-Mode message 2).
      6/7/10 10:48:08 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 3).
      6/7/10 10:48:08 AM racoon[342] IKE Packet: receive success. (Initiator, Main-Mode message 4).
      6/7/10 10:48:08 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 5).
      6/7/10 10:48:11 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).
      6/7/10 10:48:14 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).
      6/7/10 10:48:17 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).

    ...and the "retransmit" messages continue until the error message pops up. So far I've unsuccessfully tried:

    - rebooting
    - deleting the VPN profile and recreating it
    - verifying the client's internet connection (it is able to reach the VPN server)
    - connecting through several different networks (in case a router was blocking VPN packets)
    - disabling the Mac OS X firewall on the client
    - making sure that the VPN settings exactly match those of other working computers
    - running software update (the client is on 10.6.3)

    Any ideas?
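
    The log shows IKE main-mode message 5 going out and the reply never arriving, so one hedged next step is simply to watch the IKE/NAT-T/L2TP ports on both ends while the client retries. A sketch, assuming tcpdump is available; the interface name en0 is an assumption:

      # On the VPN server (and, if possible, on the failing client) while a
      # connection attempt is in progress:
      tcpdump -n -i en0 'udp port 500 or udp port 4500 or udp port 1701'
      # - port 500:  IKE / ISAKMP (the phase 1 exchange that is stalling here)
      # - port 4500: IKE NAT traversal and ESP-in-UDP
      # - port 1701: the L2TP tunnel itself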

    Read the article

  • OS X AFP shares and access

    - by gbrandt
    I am running 10.5.6 Client as a mini server and am having problems with AFP shares. All clients are OS X 10.5.7.

    I have created three users for 'File Sharing' only on the 'server', created groups and placed these users into specific groups, and created ACLs to give each group access to certain shares. Two of those users can read and write to any share; one user cannot write to the shares, with different results:

    - When copying a directory, only the directory is created; no files inside are copied, and the OS does not give any errors.
    - When copying a single file I get three dialogs: "You may need to enter the name and password for an administrator on this computer to change the item named 'xxxx'", "The item 'xxxxx' contains one or more items you do not have permission to read. Do you want to copy the items you are allowed to read?", and "The operation cannot be completed because you do not have sufficient privileges for some of the items." With the single file, a file gets created on the server, but it is empty.

    My ACL for the group this user belongs to is:

      0: group:projectmembers allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
      1: group:informationtechnology inherited allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
      2: group:executive inherited allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
      3: group:everyone inherited deny list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit

    Users 1 and 2 belong to informationtechnology, executive and projectmembers, and they can read and write freely on the share. User 3 belongs only to projectmembers and cannot. I have read that this can be a UID issue; however, Users 1 and 2 do not have matching UIDs across clients and server and they work, so I don't think that is the case here. Any ideas?
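
    A hedged sketch of things worth checking on the server for the failing user - the share path and user name below are placeholders. One thing that may be worth comparing against a share that works: the projectmembers ACE grants add_file/add_subdirectory but not the plain "write" (file data) permission, which would be consistent with files being created empty.

      # Placeholders: "user3" and "/Shares/Projects" are assumptions.
      id user3                    # confirm the server really puts user3 in projectmembers
      ls -le /Shares/Projects     # list the ACL actually applied to the share

      # If file-data permissions are missing from the inherited ACE, they can be
      # added (sketch only - adjust the permission list and ordering to the real ACL):
      chmod +a "group:projectmembers allow read,write,append,list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit" /Shares/Projects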

    Read the article

  • Apache 2.4 Prefork vs. PHP-FPM Event shows significant decrease in requests per second

    - by Mark
    On my Apache 2.4.2 server with a standard mod_php prefork setup, these are my server-status results:

      Current Time: Wednesday, 24-Oct-2012 19:36:24 CDT
      Restart Time: Wednesday, 24-Oct-2012 01:27:30 CDT
      Parent Server Config. Generation: 1
      Parent Server MPM Generation: 0
      Server uptime: 18 hours 8 minutes 54 seconds
      Total accesses: 14304233 - Total Traffic: 342.3 GB
      CPU Usage: u12584.6 s721.93 cu.66 cs3.43 - 20.4% CPU load
      219 requests/sec - 5.4 MB/second - 25.1 kB/request
      507 requests currently being processed, 355 idle workers

      ______KKKKR_K______W_KKC___CKK_K_K_W__CC_KKK_KK._K_K_KK._KKKK_K_
      K_____KK_KKKK_K_KK__K___KK_K___K_____CKKK_WK_K_____KCKK__K___K_K
      K_CK_K_K_____K__KKKK_K__K___K_KK_K_K_KKKCK____________KK_CK__KKK
      __C_KKKKKKK___CK___C_KKK_K__C__K_CK____KKK__K__K__K_K__KK_CK_K__
      _KKKKK_K_W__KK______K___K__W___C_K__K____KKKKKKKK.KKKKKKKCK_K___
      _C_KK_K_WK__K_KK__K__RK_KK___K____K_KK_K_K___RKC_KKKK___KKKC_K_W
      _C_KK_KK__W____KC__KKK__KKK___K___KKK_KK_K_KKW__K_KR_KK_KK__KKK_
      R__KKK__KKKKKK__K_KKKKK_K__K_K___KKW_________KK_K___KKK___KK.K_C
      KKKKKKW_____K__K_KKC_KCKK_K_KK_K__KK__K___K__KK_KK__________KK__
      __K___KK_K__K_C_KK_K___KK__KK__K__KCK_K__KK_________K_K_KK__.K__
      K_CKK.CCRW__KKKKKKKKKKKC__W____K___KWK_KK_KKC______.K_K_KK_KKKC_
      __KKK_W_KCKKK_K_K____CCCK__KC_KKKK_K____K_CK_K____K__K____KKK_KK
      KK___K_K_K__KW__KCKKKK____WKWK__K_KKRKK__C_K_KK_KK_K__KKCC_K__C_
      KK_K___K_KK______K_____CKK_K_______KK_CKCK__KKKKK____K__K..K____
      __KKWK_KW__KKK__K_KKK___K_KK_KKK__KK___KK___KK_KK___KK____KKWKKC
      KK_KKKK_................................

    When I switch to a PHP-FPM setup with the Event MPM, with no other variables changed, my requests/sec plummet and overall Apache response is garbage:

      Current Time: Wednesday, 24-Oct-2012 19:51:21 CDT
      Restart Time: Wednesday, 24-Oct-2012 19:48:03 CDT
      Parent Server Config. Generation: 1
      Parent Server MPM Generation: 0
      Server uptime: 3 minutes 18 seconds
      Total accesses: 18720 - Total Traffic: 307.1 MB
      CPU Usage: u16.57 s4.74 cu0 cs0 - 10.8% CPU load
      94.5 requests/sec - 1.6 MB/second - 16.8 kB/request
      15 requests currently being processed, 49 idle workers

      PID    Connections       Threads      Async connections
             total  accepting  busy  idle   writing  keep-alive  closing
      11701  114    no         10    22     0        66          38
      11702  134    no         5     27     0        81          48
      Sum    248               15    49     0        147         86

      __R_R__W___RRW________RR__R___W_W_______W_____W_____________R_R_

    Is there any obvious reason anyone could think of why this would be the case? I can provide any other additional stats or server setup info to help out. I've tried tweaking everything up and down and nothing really helps get the PHP-FPM setup anywhere near a basic prefork/mod_php setup. Thanks!
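
    As an aside, the two snapshots cover very different windows (18 hours of uptime versus about three minutes just after a restart), so the requests/sec averages are not directly comparable. For the PHP-FPM side itself, a hedged sketch of the pool settings that usually need tuning - the pool file path, socket and worker counts below are assumptions, not the actual values in use here:

      # Sketch of a PHP-FPM pool for a busy server; a rough rule of thumb is
      # pm.max_children = (memory available for PHP) / (average PHP process size).
      cat > /etc/php-fpm.d/www.conf <<'EOF'
      [www]
      ; a TCP socket; a local unix socket can be faster on the same host
      listen = 127.0.0.1:9000
      pm = dynamic
      pm.max_children = 150
      pm.start_servers = 40
      pm.min_spare_servers = 20
      pm.max_spare_servers = 60
      ; recycle workers periodically to contain memory growth
      pm.max_requests = 500
      EOF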

    Read the article

  • Throttle CPU Usage consumed by Process

    - by Brett Powell
    We run a game-server company where we basically have large amounts of customers sharing a single machine, each on their own instance of a Java process (Minecraft) managed by our web control panels. In the last few game updates released, we have noticed that many of the third-party plugins our customers use are poorly written, and we frequently see huge CPU increases from certain servers until we manually kill the process. Our game panel automatically restarts processes, so killing them is not really an issue.

    Our problem is that once one of these servers starts consuming 50%+ CPU usage, it takes at least 5 minutes to RDP into the machine, locate who it belongs to, shut it down and notify them. Are there any current solutions for Server 2008 which allow for the throttling of CPU usage or, worst case, just auto-kill a process stuck using that much?

    As Minecraft is essentially a single-threaded application, we have investigated using processor affinity, although with the variations in our packages and fluctuations in usage this doesn't work well for us. Some option to throttle the maximum usage a process can use would be perfect, or at least the option to kill a process using that much. Thanks!

    Read the article

  • Error with procmail script to use Maildir format

    - by bradlis7
    I have this code in /etc/procmailrc:

      DROPPRIVS=yes
      DEFAULT=$HOME/Maildir/

      :0
      * ? /usr/bin/test -d $DEFAULT || /bin/mkdir $DEFAULT
      {
      }

      :0 E
      {
        # Bail out if directory could not be created
        EXITCODE=127
        HOST=bail.out
      }

      MAILDIR=$HOME/Maildir/

    But when the directory already exists, sometimes it will send a return email with this error: 554 5.3.0 unknown mailer error 127. The email still gets delivered, mind you, but it sends back an error code to the sending user as well. I fixed this temporarily by commenting out the EXITCODE and HOST lines, but I'd like to know if there is a better solution.

    I found this block of code in multiple places across the net, but couldn't really find out why this error was coming back to me. It seems to happen when I send an email to a local user. Sometimes the user has a .forward file to send it on to other users, sometimes not, but the result has been the same. I also tried removing DROPPRIVS, just in case it was messing up the forwarding, but it did not seem to affect it.

    Is the line starting with * ? /usr/bin/test a problem? The * signifies a regex, but the ? makes it return an integer value, correct? What is the integer being matched against? Or is it just comparing the integer return value? Do I need a space between the two blocks? Thanks for the help.
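
    For what it's worth, a procmail condition that starts with ? runs the rest of the line as a program and succeeds when its exit code is 0, so the recipe above amounts to "create the directory if it is missing". One hedged way to sidestep the recipe (and the 127 bounce) entirely is to pre-create the Maildir structure for each account outside of mail delivery. A sketch - the paths are assumptions for a typical /home layout:

      #!/bin/sh
      # Sketch: create a complete Maildir (tmp/new/cur) for every existing home
      # directory, and seed /etc/skel so future accounts get one automatically.
      for home in /home/*; do
          [ -d "$home" ] || continue
          mkdir -p "$home/Maildir/tmp" "$home/Maildir/new" "$home/Maildir/cur"
          chown -R "$(basename "$home")": "$home/Maildir"
      done

      mkdir -p /etc/skel/Maildir/tmp /etc/skel/Maildir/new /etc/skel/Maildir/cur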

    Read the article

  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get this error message on the VM consoles:

      [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
      [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
      [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
      [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message is the kernel reporting that these processes have not run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state?

    I certainly expect the former to be the case - if the disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption then any of these errors is really bad news. Please advise what to do here.
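
    One hedged mitigation for the stalls themselves (it does not change the consistency question, only how much dirty data the kernel buffers before it must flush it all at once to the slow iSCSI target) is to lower the writeback thresholds on the guests. A sketch - the exact values are assumptions to be tuned:

      # Flush dirty pages earlier and in smaller batches, so a slow storage backend
      # causes shorter stalls instead of one very long one.
      sysctl -w vm.dirty_background_ratio=5
      sysctl -w vm.dirty_ratio=10

      # To make the change persistent (path assumed):
      cat >> /etc/sysctl.conf <<'EOF'
      vm.dirty_background_ratio = 5
      vm.dirty_ratio = 10
      EOF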

    Read the article

  • Confused about the Windows 7 Preinstallation Kit

    - by David Brown
    I build custom PCs and would like to use the Windows 7 Preinstallation Kit to make installation go a little quicker and to customize the Windows image. However, since each PC is built to a particular customer's specifications, the hardware will rarely be the same, so I would like to have a single answer file that will work for everything. I'm not sure if that's possible, however. What I mostly want to do for now is add my support information and pre-set anything that I would normally change after each installation completes.

    I have a Windows 7 Professional Upgrade DVD set (both 32-bit and 64-bit), but no OEM disks. I copied the Install.wim file to my local drive and opened it in the Windows System Image Manager, but it asks me to choose a catalog file specifically for each edition of Windows 7. Will this limit the answer file to whichever edition I choose? I would think choosing Starter would give me the most basic settings, which would apply to all other editions, but I'm not entirely sure of this.

    I don't intend to install any extra applications or drivers. I merely want to insert an OEM disk and my OPK USB drive and have it work for whatever edition of Windows 7 I'm installing. If a large number of similarly-configured PCs need to be built, I'll go ahead and create a custom answer file in that case, but for a single machine order that seems like overkill.

    In addition, do I need a separate answer file for 32-bit and 64-bit versions of Windows 7? Or will it work for both, even though I copied the Install.wim file from the 32-bit disk? Thanks!

    Read the article

  • Startup script for Red5 on Ubuntu 9.04

    - by user49249
    I am creating a startup script for Red5 on Ubuntu. Red5 is installed in /opt/red5. The following is a working script on a CentOS box on which Red5 is running:

    [code]
    ========== Start init script ==========
    #!/bin/sh

    PROG=red5
    RED5_HOME=/opt/red5/dist
    DAEMON=$RED5_HOME/$PROG.sh
    PIDFILE=/var/run/$PROG.pid

    # Source function library
    . /etc/rc.d/init.d/functions

    [ -r /etc/sysconfig/red5 ] && . /etc/sysconfig/red5

    RETVAL=0

    case "$1" in
      start)
        echo -n $"Starting $PROG: "
        cd $RED5_HOME
        $DAEMON >/dev/null 2>/dev/null &
        RETVAL=$?
        if [ $RETVAL -eq 0 ]; then
          echo $! > $PIDFILE
          touch /var/lock/subsys/$PROG
        fi
        [ $RETVAL -eq 0 ] && success $"$PROG startup" || failure $"$PROG startup"
        echo
        ;;
      stop)
        echo -n $"Shutting down $PROG: "
        killproc -p $PIDFILE
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$PROG
        ;;
      restart)
        $0 stop
        $0 start
        ;;
      status)
        status $PROG -p $PIDFILE
        RETVAL=$?
        ;;
      *)
        echo $"Usage: $0 {start|stop|restart|status}"
        RETVAL=1
    esac

    exit $RETVAL
    [/code]

    What do I need to replace for Ubuntu in the above script? My Red5 is in /opt/red5/, and to start it manually on Ubuntu I always run /opt/red5/dist/red5.sh. I did not find rc.d/functions on my Ubuntu laptop, and /etc/init.d/functions does not exist either. I would like to be able to use the script with "service" as Red Hat distributions do. I checked /lib/lsb/init-functions.
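
    A hedged sketch of a Debian/Ubuntu-flavoured equivalent, since Ubuntu ships /lib/lsb/init-functions and start-stop-daemon rather than Red Hat's functions file with success/failure/killproc/status. Helper availability (status_of_proc in particular) varies with the lsb-base version, so treat this as a starting point rather than a drop-in:

      #!/bin/sh
      ### BEGIN INIT INFO
      # Provides:          red5
      # Required-Start:    $remote_fs $network
      # Required-Stop:     $remote_fs $network
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 1 6
      # Short-Description: Red5 streaming server
      ### END INIT INFO

      . /lib/lsb/init-functions

      PROG=red5
      RED5_HOME=/opt/red5/dist
      DAEMON=$RED5_HOME/$PROG.sh
      PIDFILE=/var/run/$PROG.pid

      case "$1" in
        start)
          log_daemon_msg "Starting $PROG"
          # Note: the pidfile tracks red5.sh itself, not the Java child it spawns.
          start-stop-daemon --start --background --make-pidfile --pidfile $PIDFILE \
              --chdir $RED5_HOME --exec $DAEMON
          log_end_msg $?
          ;;
        stop)
          log_daemon_msg "Stopping $PROG"
          start-stop-daemon --stop --pidfile $PIDFILE --retry 10
          log_end_msg $?
          ;;
        restart)
          $0 stop
          $0 start
          ;;
        status)
          status_of_proc -p $PIDFILE "$DAEMON" "$PROG"
          ;;
        *)
          echo "Usage: $0 {start|stop|restart|status}"
          exit 1
          ;;
      esac

    Saved as /etc/init.d/red5 and made executable, it could then be registered with "update-rc.d red5 defaults" and driven with "service red5 start".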

    Read the article

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it so that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically collocated. In this case I am using MongoDB.

    - The stack is Linux/Passenger/Ruby/Rails/MongoDB.
    - The database is write heavy and read light.
    - The infrastructure is Amazon EC2.
    - The application layer must be physically located in 3 or more different locations. I can't justify this requirement further than that it is a requirement.
    - The database, however, needn't be located in more than one location if it can be written to quickly from the other locations.

    From reading Mongo's documentation, replication seems like more of a candidate than sharding because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.
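
    One hedged direction, given that only the application layer has to be in every location: run a MongoDB replica set whose electable members sit close together, keep any remote member at priority 0, and let the distant app servers simply accept the WAN round-trip on writes. A sketch of initiating such a set from the shell - host names and priorities are placeholders, and it assumes each mongod was started with --replSet appset:

      mongo --host db1.example.com --eval '
        rs.initiate({
          _id: "appset",
          members: [
            { _id: 0, host: "db1.example.com:27017", priority: 2 },  // primary, main site
            { _id: 1, host: "db2.example.com:27017", priority: 1 },  // same region
            { _id: 2, host: "db3.example.com:27017", priority: 0 }   // remote, never primary
          ]
        })
      '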

    Read the article

  • I keep losing wireless connection

    - by posfan12
    I have a WRT54GL v1.1 wireless router and a WUSB54G v4 wireless adapter, both made by Linksys. The router is in the living room by the TV and my computer is in the bedroom. My ISP is Brighthouse.

      Operating System: Microsoft Windows 7 Home Premium 64-bit SP1
      CPU: Intel Core 2 Duo E6600 @ 2.40GHz, 36 °C, Conroe 65nm Technology
      RAM: 3.00GB Single-Channel DDR2 @ 333MHz (5-4-4-14)
      Motherboard: eMachines EMCP73VT-PM (CPU 1), 26 °C
      Graphics: ASUS VS247 (1920x1080@60Hz), 767MB GeForce GTX 460 (nVidia), 43 °C
      Hard Drives: 466GB Seagate ST350041 8AS SCSI Disk Device (SATA), 35 °C
      Optical Drives: HL-DT-ST DVDRAM GH41N SCSI CdRom Device
      Audio: High Definition Audio Device

    The problem is that my Internet connection will work fine for 15 minutes or so, then the data will just stop flowing. Windows says I am still connected, and the systray icon still shows five bars, but Comodo Firewall stops showing up and down traffic, and another of my systray applications complains about a lack of connection. What I usually do is either disconnect from the network manually or unplug and re-plug the USB adapter, at which point the connection will work properly for another 15 minutes.

    I've tried unplugging my router for 30 seconds and letting it reboot. I've also tried looking for a newer driver for my adapter, but I seem to have the latest version, 3.1.3.0. This is a recent problem, starting about a week ago; for the previous several months things were working just fine. I haven't made any changes to my system that I am aware of. The only thing I did was open my case to blow the dust out of it, then put everything back together. How do I fix this issue?

    Read the article

  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with restoring sites/site collections using stsadm (tasks generated from our workflows were not restored), we've taken a different route for backup/restore. We plan a major customization to our SP site and want to take a backup so we can roll back in case the install fails.

    In our system testing (not production) environment, we've backed up the 12 hive, the virtual directories that IIS points to for SharePoint, and the SharePoint databases in SQL (using SQL Server to do the DB backups). We have custom event handlers and workflows built with Visual Studio, and we deploy the DLLs to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC will contain 2 versions of the workflows - version 1 and version 2. During the deploy we use SP stsadm features to install/activate the WFs. We also go to each library and add the new version 2 WFs. This automatically sets the version 1 WFs to "Not Allow" new instances (which is what we want) and the version 2 WFs as active - perfect so far.

    When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 WFs from the GAC, reset IIS (to clear the cache of these version 2 WF DLLs), restore the 12-hive and virtual directory folders, then restore the SQL DBs. This is all just as manual as it reads - no stsadm here.

    All seems to work after our restore, and it appears the restore was successful - the mods we made to column names, data changes, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12-hive indicate the WF is still trying to use version 2 of the DLL (a System.IO file-not-found error).

    We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 WF DLLs are still being referenced even though we restored all the folders and DBs of SharePoint? Thanks, Kevin

    Read the article

  • Networking problems in VMWare with wireless bridge

    - by Robert Koritnik
    Barebone data:

    - Virtualization: VMware Workstation 6.5 (latest)
    - Host: Windows Server 2008 x64
    - Guest: Windows Server 2008 x86
    - Host network adapter: wireless
    - Guest network adapter 1: over bridged VMNet (automatic)
    - Guest network adapter 2: over host-only VMNet

    Problem: When I surf the net within the VM, my internet connection just gets stalled (not dropped). It doesn't experience any timeout whatsoever; it just stops downloading/communicating. For instance, I start downloading a file with a browser (IE/FF/CR, doesn't matter) and I have to pause/restart the download when the speed drops to 0. I could wait indefinitely but the connection won't pick up automatically. What did I miss in my network configuration?

    Update 1: I've tested this in various combinations. This works fine when the host is connected via Ethernet. But when connected via WiFi, the connection on the guest works as previously described: it connects fine, it gets a valid IP from DHCP... Everything is cool as long as you don't start doing some intensive network traffic (i.e. download a 2 MB file). In that case it starts downloading and stops after a while; the speed just drops to 0 B/s... Sometimes it picks back up, sometimes it doesn't. The connection still stays up and works - I can ping around with no problem.

    Read the article

  • Canonical Redirect on Dynamic Mass Virtual Hosts on Apache

    - by Josh
    I have a Web app on Apache that allows users to point their domain to the server. Right now I'm using Apache's dynamic mass virtual hosts with an entry:

      VirtualDocumentRoot /www/hosts/%0/docs

    So with www.companydomain.com it points to /www/hosts/www.companydomain.com/docs. The problem is that when the user goes to companydomain.com it will point to /www/hosts/companydomain.com/docs.

    Is there an easy way to have Apache automatically check whether a directory exists for the virtual host and, if not, look for the host name with "www." in front of it? Other subdomains are fine (i.e. abc.domain.com should point to a different directory than def.domain.com), but the whole "www" issue is a mystery to me. I am using dynamic mass virtual hosts so the server does not have to restart after each registration for the application. A different approach is fine as long as Apache isn't restarted each time. How can I accomplish this?

    Worst-case scenario, if there were a way to redirect to a "default" location on the server when the directory is not found, I could always do a check via PHP or something, but that feels a bit hacked together and there has to be a more efficient way. Thanks in advance!
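
    A hedged sketch of one common alternative: drop VirtualDocumentRoot and let mod_rewrite pick the document root at request time, falling back to the www. directory (or a default) when the bare-domain directory does not exist. The include path and the _default_ directory are assumptions, and the rules would need testing against the real layout:

      # Sketch only - writes a candidate config fragment; requires mod_rewrite.
      cat > /etc/apache2/conf.d/mass-vhosts.conf <<'EOF'
      RewriteEngine On

      # 1. Exact match: /www/hosts/<host>/docs exists.
      RewriteCond /www/hosts/%{HTTP_HOST}/docs -d
      RewriteRule ^/(.*)$ /www/hosts/%{HTTP_HOST}/docs/$1 [L]

      # 2. Fall back to the www.<host> directory.
      RewriteCond /www/hosts/www.%{HTTP_HOST}/docs -d
      RewriteRule ^/(.*)$ /www/hosts/www.%{HTTP_HOST}/docs/$1 [L]

      # 3. Last resort: a default docroot.
      RewriteRule ^/(.*)$ /www/hosts/_default_/docs/$1 [L]
      EOF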

    Read the article

  • Skip all warning prompts on ACPI shutdown?

    - by N Rahl
    When I issue an ACPI shutdown command to a Windows XP guest machine from the host VM server, I want Windows to shut down. The problem is, Windows always wants to ask some question or another rather than just shutting down. I need shutdown to be reliable, no matter what is running or going on, so I can automate shutdowns from the host machine. But I want it to be as graceful as possible, rather than just pulling the plug. Some problems:

    - If a user is logged in, ACPI shutdown causes a box to appear that says, "are you sure you want to shut down while other users are logged in?", and this prevents shutdown until someone connects to the machine and clicks "yes". In this case, it should try its best to gracefully log out all users, using force if necessary, and then shut down without prompting.
    - Busy or non-responding programs, or programs asking to save data, can prevent Windows from shutting down until a user answers a prompt. It should attempt to save data and wait maybe 30 seconds for non-responding programs, but should get aggressive with stubborn programs: "nope, time's up! 3, 2, 1, goodbye!"

    Is there a registry setting that I can change from ACPI_Shutdown: "Shut down if Windows feels like it" to ACPI_Shutdown: "Just do it. Kill programs, bump users, try to be graceful about it, but when I come back, I expect you to be off."? This should respond to the ACPI shutdown command and not be a script on Windows, unless that script is triggered by the ACPI power button. I'm hoping this can be changed with registry options.

    Read the article

  • Segmentation fault on login to mysql

    - by numberwhun
    Hello everyone! I recently did a fresh install of Ubuntu on my laptop (HP dv7, AMD dual core with 4 GB RAM). I am working on installing my development environment and tools, and one of the first things I worked on was getting MySQL installed. The following was my configure statement with options:

      ./configure --prefix=/usr/local/mysql --with-big-tables --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock --with-named-curses-libs=/lib/libncurses.so.5.7

    After I did the make; make install, I did the post-configuration such as setting the root password and installing the mysqld daemon in its rightful place. My issue is that when I try to log in to mysql to start using it, the following happens:

      $ mysql -u root -p
      Enter password:
      Welcome to the MySQL monitor.  Commands end with ; or \g.
      Your MySQL connection id is 1
      Server version: 5.1.42 Source distribution

      Segmentation fault

    I have searched Google extensively and searched through the MySQL bugs database, and I have yet to find anything that matches my issue. Here are the contents of my my.cnf file, in case you want to see it:

      $ cat /etc/my.cnf
      [mysqld]
      basedir=/usr/local/mysql
      datadir=/usr/local/mysql
      socket=/usr/local/mysql/tmp/mysql.sock

      [mysql.server]
      user=mysql
      #basedir=/var/lib

      [client]
      socket=/usr/local/mysql/tmp/mysql.sock

      [mysqld_safe]
      err-log=/usr/local/mysql/logs/mysqld.log
      pid-file=/var/run/mysqld/mysqld.pid

    I am really hoping that someone here can tell me what has gone wrong with my installation, as I would really love to know. I welcome and look forward to all responses. Thank you in advance!

    Best regards,
    Jeff
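
    A hedged debugging sketch: the usual first step for a client-side segfault like this is to capture a backtrace, and a frequent culprit for a source-built mysql client is the curses/readline library it was linked against (here the build was pointed at a specific /lib/libncurses.so.5.7). The package name below is an assumption for Ubuntu:

      # Get a backtrace to see which library the client crashes in.
      gdb -q --args /usr/local/mysql/bin/mysql -u root -p
      # inside gdb: type "run", reproduce the crash, then type "bt"

      # If the trace points into ncurses/termcap, install the dev package and
      # rebuild against the normal linker name instead of a hard-coded .so.5.7:
      sudo apt-get install libncurses5-dev
      ./configure --prefix=/usr/local/mysql --with-big-tables \
          --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock \
          --with-named-curses-libs=/usr/lib/libncurses.so
      make && sudo make install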

    Read the article

  • OS X mouse pointer speed varies with different mouse

    - by Stan
    OS X Snow Leopard. It seems that different mice on OS X can have different pointer and scrolling speeds. For example, when using my basic Logitech laser mouse, the pointer speed is normal. But when using an MX Performance or Anywhere, it's very slow, and I have to adjust the pointer speed in the mouse preferences to max - and even at max it's still a bit slow. Basically, plug and play on OS X feels terrible; I need to re-adapt to it every single time. This is not the case on Windows.

    The mouse scrolling speed also varies between mice, but it is usually very slow, scrolling one line at a time. If I adjust it in the mouse preferences, it scrolls too many lines. I have the official Logitech mouse driver (LCC) installed, but neither tuning in LCC nor in the mouse preferences makes things better. Has anyone had a similar issue? How do I resolve it? Please advise, thanks.

    Read the article

  • multiple streaming servers behind a Bastion Host

    - by Bond
    I am using the open source streaming server Red5 on multiple servers, which are running behind a bastion host. The world knows these sites as:

      http://site1.mydomain.com
      http://site2.mydomain.com
      http://site3.mydomain.com
      http://site4.mydomain.com

    The front-end server reaches them using an Apache reverse proxy. I also have video streaming on each of these websites using RTMP. To be able to reach the streaming server, I embed this in the HTML pages:

      <embed ..... var="rtmp://site1.my_domain.com" >

    The problem is that there are many websites - site1.mydomain.com, site2.mydomain.com, site3.mydomain.com, site4.mydomain.com - each on a separate physical server. Each of these four has its own Red5 installation, and the front end to all four is a common bastion host.

    If I run RTMP on each of the subdomains on a different port, how will I make sure that a request such as rtmp://site1.mydomain.com or rtmp://site2.mydomain.com goes to its respective server from the front-end server? What do I need to handle in this case? iptables came to mind instantly, but when someone on the internet requests rtmp://site1.mydomain.com from their browser, how will I make sure this RTMP request is mapped to a port different from 1935, given that there are three other streaming servers that also need to respond to their respective requests?
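
    A hedged sketch of the port-based approach the question hints at: iptables works at the TCP/IP layer and cannot see the hostname inside an RTMP URL, so the differentiation has to come from the port. Publish a distinct port per site in the embed URL (e.g. rtmp://site1.mydomain.com:1936) and have the bastion DNAT each port to the right backend's 1935. The internal addresses and port numbers below are assumptions:

      #!/bin/sh
      # On the bastion host (assumed internal addresses 10.0.0.11-14, Red5 on 1935).
      # site1 -> :1936, site2 -> :1937, site3 -> :1938, site4 -> :1939
      iptables -t nat -A PREROUTING -p tcp --dport 1936 -j DNAT --to-destination 10.0.0.11:1935
      iptables -t nat -A PREROUTING -p tcp --dport 1937 -j DNAT --to-destination 10.0.0.12:1935
      iptables -t nat -A PREROUTING -p tcp --dport 1938 -j DNAT --to-destination 10.0.0.13:1935
      iptables -t nat -A PREROUTING -p tcp --dport 1939 -j DNAT --to-destination 10.0.0.14:1935

      # Make sure forwarded/NATed traffic can return via the bastion.
      iptables -t nat -A POSTROUTING -d 10.0.0.0/24 -p tcp --dport 1935 -j MASQUERADE
      echo 1 > /proc/sys/net/ipv4/ip_forward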

    Read the article

  • Acer Aspire One (mini) mfgd 9/03 also locks up when plugged in

    - by LAURIE ANN
    My problem is almost the same as the Toshiba user's (Toshiba A205-5804 freezes when plugged in):

      "Well I have a Toshiba A205-5804 and the problem is that the screen freezes anytime I plug the PC into the external power supply. Unlike most of the computers having the same issue, my computer DOES freeze in safe mode, and I really can't bear this problem for much longer... It's not an overheat problem, the computer is not getting hot or anything related. I've already tried changing the AC adapter, booting only with AC and no battery, and also all of these suggestions."

    The only difference in my case is: I can be using the battery and when it runs down, I can just close the lid and the system goes into hibernation mode. I then plug it in and let it charge. When I think it's finally charged, I can UNPLUG it, open the lid and all is running fine on the battery again. Note: the system was NOT shut down, and it still runs as long as I remove the power plug before opening the lid. I have ALL the same issues as the other Toshiba user, too.

    I was a tech for 9 years in my own business and this one has not only stumped me, but no one I have asked has ever heard of this problem. Every repair center wants to charge me for diagnosis, even if they cannot fix it. I would really like to run this system alongside my new 17" Acer Aspire laptop, as I need it to finish my grad school work. Any ideas would be GREATLY appreciated. Thanx, Laurie Ann

    Read the article

  • Is there a fix to display 0 when arithmetic underflow occurs on the Windows 7 calculator?

    - by Pascal Qyy
    I have a problem that exasperates me. When I use the Windows 7 calculator in standard mode, if I enter 4, then √ (square root), the result is 2. Fine. But, at this point, if I press - (minus), then 2, the result is -1,068281969439142e-19 instead of 0!

    OK, I know about ε (machine epsilon), and yes, -1,068281969439142e-19 is smaller in magnitude than the 64-bit ε (1.11e-16), so we have an arithmetic underflow - in other words, in this case: 0. Great, my computer is able to represent subnormal numbers instead of just flushing to zero when this happens, and that seems like an improvement! Subnormal values fill the underflow gap with values whose absolute distance from each other is the same as for adjacent values just outside the underflow gap. This is an improvement over the older practice of just having zero in the underflow gap, where underflowing results were replaced by zero (flush to zero).

    BUT: this result is false! When you try to explain the concept of the square root to a child and you end up with this kind of result, it only complicates your task... What is the point of representing subnormal numbers in a standard, non-scientific calculator? So, is there a way to fix this?

    Read the article

  • If I can take a screen capture of a graphical anomaly, can it still be a hardware issue?

    - by Jay Carr
    I have a strange graphical anomaly happening on my iMac right now (green and magenta boxes are appearing sporadically). I'm slowly trying to work through different possibilities, but I thought I'd start with the basics: can a graphical anomaly that I can screen capture still be a hardware issue?

    I know, it seems really obvious. If it's hardware, it should show up well after the operating system has had its say, and since the operating system is (I assume) doing the screen capture, it seems like it shouldn't see the anomaly unless the problem is software in nature. But as I've researched this problem I see a lot of people taking their computers in to service people for hardware issues and Apple then resolving said issue. To further complicate things, I also have Windows 8 installed via Boot Camp, and the issue seems to be showing up there as well.

    Anyway, it feels like it must be a driver issue, since I assume that's what the two OSes have in common, but... I thought I'd come here for some disambiguation. In my case, yes, I can screen capture the anomaly (at least in OS X I can), so I assume it's somehow a software (or driver) issue. But I wanted to double-check because the internet is being ambiguous...

    Read the article
