Search Results

Search found 4432 results on 178 pages for 'fail'.

Page 73/178

  • DOSBox 8.3 filenames disagree with Windows 7

    - by wes
    When I compare a dir in DOSBox 0.74 against a dir from the Windows 7 command prompt, the 8.3 filenames differ. Long format (both drives and directories):

        2012-07-30_abcdefg-abcde
        2012-07-30_abcdefg-abcde.7z
        2012-08-06_abcdefg-abcde
        2012-08-06_abcdefg-abcde.7z
        2012-10-22_IIS-LogFiles
        2012-10-22_IIS-LogFiles.zip
        2012-11-14_selective-abcde

    DOSBox 0.74 (dir):

        2012-0~1
        2012-0~3
        2012-1~1
        2012-1~3
        2012-0~2 7Z
        2012-0~4 7Z
        2012-1~2 ZIP

    Windows 7 (dir /x):

        2012-0~1
        2012-0~1.7Z
        2012-0~2
        2012-0~2.7Z
        2012-1~1
        2012-1~1.ZIP
        2012-1~2

    So, for instance, when I pass a path into DOSBox, the short names sometimes don't match and whatever I'm trying to automate fails. Why the difference, and can I change any settings to help DOSBox generate the correct short names?
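
    For scripting on the Windows side, the short name that Windows actually stores can be captured rather than guessed; a minimal cmd sketch (the path is a placeholder, and note this only reveals the Windows-side name - it does not change what DOSBox generates for a mounted folder):

        :: print the 8.3 short path NTFS has recorded for a given file (use %%I inside a batch file)
        for %I in ("D:\backups\2012-07-30_abcdefg-abcde.7z") do @echo %~sI

        :: list the stored short names for a whole directory
        dir /x "D:\backups"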

  • Is there a way to make 7-Zip temporarily uncompress the whole archive when double-clicking on an exe?

    - by Gnoupi
    One WinRAR feature I like is that you can set it to uncompress the whole archive to a temporary place when you double-click an .exe file inside the opened archive. Typically I download small games that I just want to try, without the hassle of creating a folder for them, etc. The same goes for archives containing an installer with its own separate files. In the 7-Zip window, if I double-click an exe, it extracts only that exe to a temporary location and launches it. For a small game (or an installer), that means it simply fails, because the required files are missing from the same folder. So my question is: is there a way to make 7-Zip extract the whole archive into a temporary folder when launching an exe from inside the archive?
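
    As a workaround, the whole archive can be extracted to a temporary folder from a small wrapper script; a minimal batch sketch, assuming 7z.exe is on the PATH (the archive and exe names are placeholders):

        @echo off
        rem extract everything to a fresh temp folder, run the program, then clean up
        set "DEST=%TEMP%\arc_%RANDOM%"
        7z x "somegame.7z" -o"%DEST%" -y
        start /wait "" "%DEST%\setup.exe"
        rmdir /s /q "%DEST%"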

  • USB keyboard not working under Windows 7 x64?

    - by Comboo
    I have two USB keyboards, one no-name cheap thing and an old Logitech. When I plug them in to my computer they show up in Device Manager as an "Unknown device" and a "USB receiver", respectively. Both fail to install any drivers, either automatically or through Windows Update. Both keyboards work perfectly on another computer I have with Windows Vista 32-bit. Can this be one of those cases where a device does not work in a 64-bit version of Windows? I doubt it, though, since I've never had that problem with any device before, and I thought basic things like keyboards would be fairly fail-safe. I don't really know how to start debugging this issue. I've tried all the obvious things: rebooting, changing the USB port, etc. Are there any generic x64 keyboard drivers you can use? Is there any way to find the manufacturer of the keyboard over USB? There is nothing written on it.
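
    For the "find the manufacturer over USB" part, the USB vendor and product IDs are reported even when no driver loads; a quick cmd sketch (the findstr filter is just to narrow the output):

        :: list Plug and Play devices and their hardware IDs; look for VID_xxxx&PID_xxxx,
        :: which can be looked up in a public USB ID database to identify the vendor
        wmic path Win32_PnPEntity get Name,DeviceID | findstr /i "USB"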

  • Hard drive problem (USB external)

    - by masfenix
    Hey guys, I am using a Maxtor One Touch 3 given to me by my uncle, connected through USB 2.0. When I plug it in, XP installs it and says "the device is ready to use", but it doesn't show up in My Computer. It doesn't even show up in Disk Management. I then installed Acronis Disk Director, which detects it but marks it as offline. If I try to change it to online, nothing happens (actually the lights on the drive blink, meaning communication, and it goes back into offline mode). The lights on the drive then return to solid, which means "working properly". Here is a screenshot: Is there any way to extract the data off the drive? Is the drive corrupt? Still usable? Edit: DISKPART screenshot. Edit 2: I ran the Seagate diagnostic tool and this came up:

        Long Generic - Started 9/10/2010 6:44:36 PM
        Bad LBA: 0 Unable to repair
        Long Generic - FAIL 9/10/2010 6:46:41 PM
        SeaTools Test Code: 4
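
    If the goal is to get data off a drive that may be failing, imaging it first is the usual approach; a minimal sketch using GNU ddrescue from a Linux live CD/USB (the device name and destination paths are placeholders - double-check the device name before running, since pointing it at the wrong disk is destructive):

        # first pass: copy the easy sectors, skip bad areas, and keep a map of what was read
        ddrescue -n /dev/sdb /mnt/backup/onetouch.img /mnt/backup/onetouch.map
        # second pass: retry the bad areas a few times
        ddrescue -r3 /dev/sdb /mnt/backup/onetouch.img /mnt/backup/onetouch.map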

  • Why doesn't Ghost 2003 offer to fill the destination drive?

    - by Neil
    Because it is dangerously low on disk space, I want to upgrade an SBS 2003 server by replacing its existing 72GB drive with a 364GB drive. When I tried to use Norton Ghost 2003 to clone the disk, it didn't suggest that I use the entire new drive. I'm worried that I caused the process to fail by overriding its decision: although the cloned drive boots in Safe Mode, if I try booting it normally, none of the SQL Express instances start and something causes the server to reboot before the Ctrl+Alt+Del screen even appears. Does Ghost 2003 know something that I don't? Or should I be using some other software?

  • SAN/NAS with high availability?

    - by netvope
    I have two servers that I plan to use for storage. Each of them has a few SATA disks directly attached. I want the storage to be available even if one of the storage servers is down (preferably the clients wouldn't even notice the fail-over, although I'm not sure if this is possible). The clients may access the storage via NFS and Samba, but this is not a must; I could use something else if needed. I found this guide, Installing and Configuring Openfiler with DRBD and Heartbeat, which apparently does what I want. It relies on three components, Openfiler, DRBD, and Heartbeat, and all three of them need to be configured separately. I'm wondering: are there simpler solutions? Is DRBD+Heartbeat the best practice for a situation like mine? I'm also interested to know if there are alternatives that don't depend on DRBD.
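
    For reference, the DRBD half of that guide boils down to one resource definition replicating a block device between the two boxes; a minimal sketch (hostnames, disks and IPs are placeholders, and Heartbeat/Openfiler still have to be configured on top of it):

        resource r0 {
            protocol C;                    # synchronous replication
            on storage1 {
                device    /dev/drbd0;
                disk      /dev/sdb1;       # the locally attached SATA disk/partition
                address   10.0.0.1:7788;
                meta-disk internal;
            }
            on storage2 {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   10.0.0.2:7788;
                meta-disk internal;
            }
        }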

  • Configuring sendmail to use one outbound MTA exclusively

    - by Charlie Martin
    I have a sendmail problem, and I'm anything but a sendmail guru -- I could use some help. My problem is that I have a system intended to be more or less an "appliance" -- it's not intended to have an admin. Because of this, it needs to be able to "call home" by sending email. As we have configured it, this works fine: sendmail finds the appropriate relay by looking up an MX record and the mail goes out. Now, however, because of security concerns, we want to limit it to using exactly one relay, for example relay.corp.example.com. Should the user configure it to use, say, fubar.example.com, the mail sending should fail or be deferred. I thought that by configuring sendmail with an /etc/mail/server.switch file containing "hosts files" (without dns), I'd get that effect. This doesn't work -- instead, if it gets mail addressed to [email protected], it tries to talk directly to example.com and ignores the configured server. Any ideas?
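
    One common way to force every outbound message through a single relay, regardless of MX lookups, is the SMART_HOST macro in sendmail's m4 configuration; a minimal sketch, assuming the relay name used in the question and that sendmail.cf is rebuilt afterwards (the exact build step varies by distribution):

        dnl --- in sendmail.mc: send all non-local mail via one relay ---
        define(`SMART_HOST', `relay.corp.example.com')dnl

        # then rebuild the config and restart sendmail
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf    # or: make -C /etc/mail
        service sendmail restart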

  • Vista to Vista network visibility issue

    - by Sk93
    Hi all, I've got a Vista Business PC and a Vista Business laptop connected via a Virgin Media router (Netgear CG2100D), and I cannot get the two machines to see each other correctly over the network. The laptop is connected via wireless, whilst the PC is wired. Both are set to receive their network settings automatically (DHCP) and both have the Windows Firewall (the only firewall on either) turned off completely. I can ping each machine from the other using the IP addresses, and I can also connect via \\<IP address>. However, connections via \\<computer name> fail, and I cannot see the machines in the network map. I have tried setting NetBIOS to "always on" on both adapters, but this makes no difference. I've been messing around for pretty much 6 hours now and am getting quite frustrated by this! (My original aim was to get media sharing working, but I've pretty much abandoned that for now.) Any ideas?
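
    A couple of quick checks that help separate IP connectivity from name resolution in this situation (the computer name and IP below are placeholders):

        :: does the name resolve and the share service answer by name?
        net view \\LAPTOP-NAME

        :: what NetBIOS names is the other machine actually registering?
        nbtstat -A 192.168.0.11

        :: does connecting by IP work where the name does not?
        net view \\192.168.0.11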

  • Bigger ProjectServer farm is performing worse

    - by MSPS DBA
    I am using Project Server 2007 SP3 with SharePoint 2007 SP3 and SQL Server 2008 R2. I have recently moved my farm from 2 servers (1 DB and 1 App/Web) to a much bigger farm with many servers, a clustered database, a load balancer, powerful processors and a large amount of RAM. This farm has multiple web servers, Project application servers and SharePoint application servers, plus a separate index server. But Project Server performance in the new farm has degraded: views take even longer to load data, and project publishing time has also increased. I am also facing deadlocks, which cause Project Server queue jobs to fail. Could anyone tell me what the likely reason for this is and where I should start looking? Is it mainly because the application servers now need to communicate with other application servers, which wasn't needed in the previous farm? Thanks!
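
    For the deadlock part, one low-risk starting point is getting SQL Server to write full deadlock graphs to its error log so the tables and queries involved can be identified; a one-line sketch, run against the instance hosting the Project Server databases:

        -- log detailed deadlock graphs to the SQL Server error log (global; lasts until the instance restarts)
        DBCC TRACEON (1222, -1);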

  • Domain controller failed to restore using windows backup tools

    - by Peilin
    A Windows Server 2008 R2 domain controller with full daily backups (the only domain controller in this company) is out of service due to a hardware issue. I tried the two methods below to recover it onto a newly purchased server, but both fail. 1) First method: boot from the Windows 2008 R2 CD and recover from the backup. Everything looks OK, but after rebooting it blue-screens and restarts again. 2) Second method: a) install the OS on the new server; b) reboot the server into DSRM; c) use the Windows backup tools to restore the system state only. After rebooting, the same blue screen error appears and it restarts again. I know this may be a different-hardware issue, but how do I resolve it? Or can we restore only the AD services rather than the whole system state? Any suggestions?
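
    For the "system state only" question, Windows Server Backup exposes that as its own recovery type on the command line; a minimal sketch, run from DSRM on the new server (the backup location and version identifier are placeholders - wbadmin get versions lists the real ones, and the different-hardware caveat the asker mentions still applies):

        rem list available backup versions on the attached backup disk
        wbadmin get versions -backupTarget:E:

        rem restore only the system state (which includes Active Directory) from one version
        wbadmin start systemstaterecovery -version:10/30/2012-09:00 -backupTarget:E: -quiet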

  • Transferring NS records to a new server

    - by lanemiller
    I feel like that was NOT worded well, but here is my current predicament. I recently had a GoDaddy dedicated server and, after their customer support failed to do anything but disappoint, decided to switch to Rackspace. We have two NS records that point to our GoDaddy server, and a few sites left on it that rely on it for their DNS zones; the owners of those domains fail to respond to us. So the question is: if I need to transfer the sites off the old GoDaddy nameservers, can I point the A records for my ns1.domain.com and ns2.domain.com at the IP addresses of the Rackspace nameservers? Or do I CNAME my NS records to match the Rackspace ones? I DO know that neither method is advised, but I need to get these sites moved before GoDaddy tries charging another $2k for the server.
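
    To make the first option concrete, it amounts to repointing the A/glue records for the vanity nameservers; a minimal zone-file sketch (the IPs are placeholders, and this only helps if the servers at those addresses actually answer authoritatively for the delegated domains):

        ; in the domain.com zone (and matching glue records at the registrar)
        ns1.domain.com.    IN  A  198.51.100.10
        ns2.domain.com.    IN  A  198.51.100.11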

  • MySQL Cluster Failover doesn't work

    - by Lukasz
    I have two servers. First server, 10.100.15.150: one management (mgm) server, one ndbd, one MySQL API node. Second server, 10.100.15.160: one ndbd, one MySQL API node. When I start all parts of the cluster it looks like this:

        Cluster Configuration
        [ndbd(NDB)]     2 node(s)
        id=21   @10.100.15.150  (mysql-5.1.56 ndb-7.1.17, Nodegroup: 0)
        id=22   @10.100.15.160  (mysql-5.1.56 ndb-7.1.17, Nodegroup: 0, Master)

        [ndb_mgmd(MGM)] 1 node(s)
        id=3    @10.100.15.150  (mysql-5.1.56 ndb-7.1.17)

        [mysqld(API)]   2 node(s)
        id=11   @10.100.15.150  (mysql-5.1.56 ndb-7.1.17)
        id=12   @10.100.15.160  (mysql-5.1.56 ndb-7.1.17)

    When I shut down the first machine (10.100.15.150), the ndbd process on the second machine also shuts down, so I cannot use that data node and the cluster fails. How must I configure this cluster to get failover working? Thx
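
    For context, a MySQL Cluster data node that loses contact with its peer asks the arbitrator (normally the management node) for permission to keep running; with the only ndb_mgmd on the machine that just died, the survivor shuts itself down to avoid split-brain. A minimal config.ini sketch of the usual fix, moving the management/arbitrator role to a third host (the 10.100.15.170 address is a placeholder):

        [ndb_mgmd]
        NodeId=3
        HostName=10.100.15.170   # a third machine, so losing either data-node host leaves the arbitrator alive

        [ndbd default]
        NoOfReplicas=2

        [ndbd]
        NodeId=21
        HostName=10.100.15.150

        [ndbd]
        NodeId=22
        HostName=10.100.15.160

        [mysqld]
        NodeId=11
        HostName=10.100.15.150

        [mysqld]
        NodeId=12
        HostName=10.100.15.160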

  • How does Apache handle port forwarding?

    - by vfclists
    I set up a localhost port-forwarding configuration in the coLinux .conf file, forwarding port 8090 to port 80 in the VM. When http://localhost:8090 is entered in the browser, I get the correct response from nginx, but with Apache the request fails and the log shows an error that /htdocs was not found. However, if I do local port forwarding from 8090 to port 80 via SSH, Apache responds fine. Is there something about the way Apache handles the port redirection that causes it to fail? PS: for those unfamiliar with coLinux, it allows localhost connections to reach the VM by forwarding localhost ports on the Windows host to ports on the VM, as the VM's 10.x.x.x IP is not accessible from the Windows host.

  • Lustre - is this bad form?

    - by ethrbunny
    I'm going to be consolidating several 'server rooms' into a single installation soon. Part of this effort will be finding a home for 5 TB (and growing) of files/logs. To this end I'm looking at Lustre and appreciating its ability to scale. The big vendors want to sell me a $20K SAN to manage this, but I'm wondering about buying several iSCSI units (like this http://www.asacomputers.com/3U-iSCSI-Solution.html) and using VMs for the OSS machines. This would let me fail over to cover problems and not require a dedicated system for each OSS. Given articles like this (http://h30565.www3.hp.com/t5/Feature-Articles/RAID-Is-Dead-Long-Live-RAID/ba-p/1422) that talk about how RAID is not keeping up with drive density, I'm leaning towards more disks with a lower capacity each. Again, something akin to the iSCSI array above. Tell me why this is a terrible idea. Do I really need to invest in a PE710 for each OSS/OST?

  • MVC 2 Ajax.BeginForm passes returned Html + Json to JavaScript function

    - by Joe
    Hi, I have a small partial Create Person form in a page above a table of results. I want to be able to post the form to the server, which I can do with no problem using Ajax.BeginForm:

        <% using (Ajax.BeginForm("Create", new AjaxOptions { OnComplete = "ProcessResponse" })) {%>
            <fieldset>
                <legend>Fields</legend>
                <div class="editor-label">
                    <%=Html.LabelFor(model => model.FirstName)%>
                </div>
                <div class="editor-field">
                    <%=Html.TextBoxFor(model => model.FirstName)%>
                    <%=Html.ValidationMessageFor(model => model.FirstName)%>
                </div>
                <div class="editor-label">
                    <%=Html.LabelFor(model => model.LastName)%>
                </div>
                <div class="editor-field">
                    <%=Html.TextBoxFor(model => model.LastName)%>
                    <%=Html.ValidationMessageFor(model => model.LastName)%>
                </div>
                <p>
                    <input type="submit" />
                </p>
            </fieldset>
        <% } %>

    Then in my controller I want to post back a partial which is just a table row if the create is successful, and append it to the table, which I can do easily with jQuery:

        $('#personTable tr:last').after(data);

    However, if server validation fails I want to pass back my partial Create Person form with the validation errors and replace the existing Create Person form. I have tried returning JSON with a pass/fail flag and the partial rendered as a string using the RenderPartialToString solution from SO:

        // Controller:
        return Json(new { Success = true, Html = this.RenderViewToString("PersonSubform", person) });

        // Javascript:
        var json_data = response.get_response().get_object();

    but that doesn't render the MVC validation controls when the form fails. So, is there any way I can hand my JavaScript the out-of-the-box PartialView("PersonForm") as it's returned from my Ajax form? Can I pass some additional info as JSON so I can tell whether it passed or failed, and maybe add a message?

    UPDATE: I can now pass the HTML of a PartialView to my JavaScript, but I need to pass some additional data pairs like ServerValidation: true/false and ActionMessage: "you have just created a Person Bill". Ideally I would pass these as JSON rather than as hidden fields in my partial.

        function ProcessResponse(response) {
            var html = response.get_data();
            $("#campaignSubform").html(html);
        }

    Many thanks in advance
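
    A sketch of the shape this usually takes, mirroring the asker's own helpers (RenderViewToString is their existing helper rather than a built-in MVC 2 API, the view names and repository call are placeholders, and this does not by itself solve the issue of validation helpers not rendering outside a ViewContext):

        // controller: always return one JSON payload carrying the flag, a message and the rendered partial
        [HttpPost]
        public ActionResult Create(Person person)
        {
            if (ModelState.IsValid)
            {
                personRepository.Save(person);   // hypothetical persistence call
                return Json(new { Success = true,  ActionMessage = "Person created",
                                  Html = this.RenderViewToString("PersonRow", person) });
            }
            return Json(new { Success = false, ActionMessage = "Validation failed",
                              Html = this.RenderViewToString("PersonSubform", person) });
        }

        // client: branch on the flag, append the new row or swap the form back in
        function ProcessResponse(response) {
            var data = response.get_response().get_object();
            if (data.Success) {
                $('#personTable tr:last').after(data.Html);
            } else {
                $('#campaignSubform').html(data.Html);
            }
            // data.ActionMessage can be shown in a status area
        }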

  • Why is a process not being displayed by top?

    - by drN
    I am running a Mathematica script (this question probably doesn't fit on Mathematica.SE, however) and I know that it generally takes up a lot of RAM and loads up my cores. However, although pgrep MathKernel shows a PID, top doesn't show it among the top processes, even though I notice it is taking up about 2.25GB of the 8GB available to me.

        pmap -x my_process_id
        total kB  2243132  1907404  1892108

        ps aux | grep MathKernel
        dnaneet 20837 12.6 23.3 2234944 1907404 pts/1 Sl 09:23 8:01 /share/apps/mathematica/8.0.4/SystemFiles/Kernel/Binaries/Linux-x86-64/MathKernel -runfirst $TopDirectory="/share/apps/mathematica/8.0.4" -script ./dcm_10micrometer_2x -- ./dcm_10micrometer_2x

    ps aux shows that the process is taking about 12% CPU (the MathKernel line is marked with asterisks):

        dnaneet 20601 0.0 0.0 68264 1660 pts/1 Ss 09:15 0:00 -bash
        **dnaneet 20837 12.2 23.3 2234944 1907404 pts/1 Sl 09:23 8:01 /share/apps/mat**
        dnaneet 21922 0.0 0.0 65604 948 pts/1 R+ 10:29 0:00 ps -aux

    Did this process fail, and is the MathKernel just lingering?
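
    A quick way to check whether the kernel is still alive and doing work, rather than scanning top's default CPU-sorted list (this assumes MathKernel is the only matching process name):

        # watch just the MathKernel PID(s); pgrep -d, emits a comma-separated list for top -p
        top -p "$(pgrep -d, MathKernel)"

        # one-shot view of state (S/R/Z), CPU, memory and elapsed run time
        ps -o pid,stat,pcpu,pmem,etime,cmd -p "$(pgrep -d, MathKernel)"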

  • Failing RAM, or something else?

    - by Thanatos
    I have an IBM ThinkPad T43, currently running Windows XP. Programs were crashing and XP was blue-screening (more than usual) - it was basically unusable, but I couldn't get any informative error out of XP. I booted Ubuntu off a thumbdrive, which made it to the desktop, but as soon as I started to try to do anything, X segfaulted, along with several other services, followed quickly by kernel warnings and a kernel panic. I'm currently running Memtest86+ on this machine, which is spitting out numerous errors (16k over 3 passes, and counting). The failing areas are numerous, and look something like this: 0001055da4 - XX.X MB, etc. The addresses that fail seem to cluster around 0-20 MB, 250 MB, and, more rarely, 750 MB, 1000 MB, and 1200 MB. However, a lot (but not all) of the failing addresses that I've seen end in XXXXXXX?da4, where the ? is a 1 or a 5. The machine has two sticks of RAM, one 512 MB and one 1024 MB, with the 512 MB stick mapped to the lower addresses and the 1024 MB stick following. Is this indeed RAM failure, or should I consider other things before purchasing more RAM?

  • Would a hybrid drive work after SSD failure?

    - by lulalala
    A hybrid hard drive combines an SSD with a traditional hard drive. I know that SSDs can fail much more often than traditional hard drives. So I want to ask: when the SSD part of the hybrid drive fails, would I still be able to use the traditional hard drive part? If it won't work like that, then I will consider add-in SATA cards instead, as that separates the risk much better. EDIT: I guess it differs from model to model, so if yes, which models would work? (I am evaluating the Seagate DX for now.)

  • Google Chrome doesn't stay logged in to Google sites when using pinned tabs

    - by Nick T
    Despite checking "stay logged in" or the like on Gmail or Docs, Chrome refuses to stay logged in when I close and re-open it with Google sites pinned. If they're not pinned, it works fine. The "Clear cookies and other site and plug-in data when I close my browser" checkbox in the settings is not checked, and I don't have any cookie exceptions; all settings are defaults, and incognito mode is not being used. This occurs on all my computers using Chrome. I have deleted my cookies file (%userprofile%\AppData\Local\Google\Chrome\User Data\Default\Cookies) with no effect (other than losing the logins that ordinarily work fine). Of note is that when I relaunch Chrome with Gmail pinned and it asks me to log in, doing so once will fail (it does nothing; no errors), then it will work on the second attempt. If I refresh the window before doing so, it will work on the first attempt.

  • How can I resolve this error when using net start / stop: "The service name is invalid" ?

    - by JosephStyons
    We have a server that hosts a service (I'll call it "tomato"). Up until now, a client PC has been able to start and stop this service. They just double-click a batch file, and inside that batch file is the command net stop tomato or net start tomato. They recently got a new physical computer, and now those same commands fail with the error:

        C:\>net stop tomato
        The service name is invalid.

        More help is available by typing NET HELPMSG 2185.

    What do I need to do to let the client PC start and stop this service remotely? Edit: I mis-remembered the original commands. They were not using "net start" and "net stop" remotely. They were using sc \\server_name start tomato and sc \\server_name stop tomato, and those commands do indeed still work.
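
    When a service name is rejected, listing the installed services and their real (short) names usually clears it up, since the display name and the service name often differ; a quick cmd sketch (the server and service names are placeholders):

        :: list all services on the remote box and filter for the one in question
        sc \\server_name query state= all | findstr /i "tomato"

        :: show the configuration (including the binary path) once the exact name is known
        sc \\server_name qc tomato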

  • Using a Dell PowerConnect to set up load balancing/redundancy: 1 switch, 2 routers, 2 private WAN links

    - by MarianoC
    Hi, we have different locations connected by two different WAN providers. Each site has a Dell PowerConnect 6224 and two Cisco routers with the WAN connections (we don't have admin access to the routers). The 6224 connects to each Cisco router's LAN port and to our LAN backbone. We would like the 6224 to provide the IP gateway address, load-balance across the two links, and provide redundancy if one of the routes fails. Is this possible? We can't find any examples of doing this and we have tried with no success. Any help or link to documentation regarding this will be greatly appreciated. Thanks, MarianoC

  • dig gets the right result from DNS server, but name still fails to resolve

    - by EMiller
    Under what conditions would the following occur? From a given OS X machine on an internal network:

        $ cat /etc/resolv.conf
        nameserver 10.102.120.7
        nameserver 10.102.120.2

    From the same machine:

        $ dig @10.102.120.7 in.local
        <snip>
        ;; QUESTION SECTION:
        ;in.local.                IN      A

        ;; ANSWER SECTION:
        in.local.         43200   IN      A       10.102.123.30
        <snip>

    And yet this workstation cannot ping in.local, nor load pages hosted by Apache on that machine. 10.102.123.30 is definitely up (two OS X machines I know of fail to resolve in.local, but other machines on the network can). I have also checked their /etc/hosts to see if anything there might interfere... Not sure what else to check...
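
    One thing worth checking on the affected Macs: dig talks straight to the name server, but ping, Safari, etc. go through the system resolver, and OS X normally hands .local names to multicast DNS (Bonjour) rather than unicast DNS. Two diagnostic commands that show what the system resolver is actually doing:

        # show the resolver configuration the OS really uses (note any special handling for "local")
        scutil --dns

        # resolve the name the way applications do, through the Directory Services cache
        dscacheutil -q host -a name in.local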

  • How can I disrupt my roommate's BitTorrent?

    - by bob
    We're on a 50 Mb/s Comcast connection, and right now it is coming in at under 1.5 Mb/s. Our roommate left for a week with BitTorrent running (Azureus client, we think). Our latency is approaching 300 ms. His door is locked up tight, and both his machine and the router for the house are located inside. I've even flipped the power breaker for the house, and that barely helps for two minutes: his laptop keeps on running, and once the cable modem and router come back up and the machine reconnects, the torrents resume in earnest. I've been running nmap and have identified his IP on our LAN. Is there anything I can do over the LAN to make his torrents start to fail or slow down?

  • Credentials needed for BT line on Netgear router

    - by Bali C
    I have recently bought a new Netgear router to replace my current BT Home Hub, as it doesn't support wireless. I did buy a WAP but figured it would be easier to use a router with wireless built in (it's a modem/router combo). I have got as far as setting up the router through the web interface, but then it asks for a username and password to connect to the internet; I can only assume this is for the phone line? I have tried some passwords I could find written down, but they don't work: the internet light comes on and then, when the credentials fail, goes off again. I have been through all the settings in the Home Hub's web interface to find the credentials it is using (which obviously work), but no joy. Is there anything obvious I am missing, or is there a way I can retrieve my settings from my existing router? Any pointers will be much appreciated.

  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysis of a single user upload can take 3 seconds to complete at 100% CPU. This caps our system capacity at about 1/3 of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, connecting to a common DB. The problem is that the analysis outputs files on disk. A load balancer would increase capacity, but then the files wouldn't be shared between servers, so subsequent client requests may fail. We are hosted on Rackspace; is there a way to configure some sort of "common" storage for all servers without having to rewrite our file persistence code? The current code relies on simple fopen()s, etc. What are our options?
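
    One shape this can take without touching the fopen() calls is a shared NFS export mounted at the same path on every web node; a minimal sketch for Debian/Ubuntu-style hosts (the NFS server hostname, export and mount point are placeholders - it assumes you run or rent an NFS server reachable from all nodes, and that the mount point matches the directory the app already writes to):

        # on each web node: install the client and mount the shared export where the app already writes
        sudo apt-get install -y nfs-common
        echo 'nfs01.example.internal:/srv/uploads  /var/www/uploads  nfs  defaults,noatime  0 0' | sudo tee -a /etc/fstab
        sudo mount -a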
