Search Results

Search found 21777 results on 872 pages for 'howard may'.


  • What would prevent a .BAT file from being run on a mapped drive?

    - by JBurace
    In WinXP SP3, I have a .BAT file on a mapped drive. When I try to run this .BAT file (or even right-click it and choose Edit), Windows shows the dialog:

        Windows cannot access the specified device, path, or file. You may not have the appropriate permissions to access the item.

    This happens with any .BAT file, no matter what is in the file. If the file is on my local computer (e.g. C:) it runs just fine. If someone else runs it from another computer (on the same mapped drive), it also runs just fine. I have full permissions on the drive; I can edit, delete, save, write, and create in that folder and in the .BAT file itself, so I've ruled out permissions as the issue. It seems like some security restriction, but I can't tell which one. It would have to be something on my PC, yet I don't use any 3rd-party software. What would cause this error?

    Read the article

  • Odd Language In a BIOS Message

    - by Josh
    So I started up my laptop today and was greeted with the following message (not a direct quote):

        The type of the AC adapter cannot be determined. This may interfere with your computer's performance. Try unplugging the AC adapter and then plugging it back in, thanks.

    The underlying problem was that I hadn't fully secured the plug into the back of the computer. However, I was a little taken aback that a message from the BIOS said "thanks." Is this normal? Any chance the message was illegitimate (e.g. a virus)?

    Read the article

  • Which version management design methodology to be used in a Dependent System nodes?

    - by actiononmail
    This is my first question, so please tell me if it is too vague. It is mostly about high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, with a Master Node (MN) and several subordinate nodes (SN). All nodes are connected via Ethernet and run Linux plus proprietary applications. I have to design a recovery framework so that any software entity, whether the Linux kernel, the ramdisk, or an application, can be rolled back to a previous known-good version if something goes wrong. My idea is to maintain a state/version matrix on the MN, where each state (1, 2, ..., n) records the known-good kernel, ramdisk, and application versions for each SN. One SN's version may depend on another SN's version; see the diagram in the original question. So I am in a dilemma: should I use a package-management approach like the one used by Debian-based distributions (such as Ubuntu), or a Git-repository approach, in order to roll back to previous good versions on either one SN or on all dependent SNs? The method should also make it easy to upgrade SNs along with the MN. The features I am trying to achieve are:
    1) An upgrade of a single software entity is possible without disturbing the others.
    2) Dependency checks are done before applying a rollback or upgrade on each SN.
    3) The user is prompted if a dependency check fails; if the user still goes ahead with the rollback, all dependent SNs are notified to roll back their own releases (if required).
    4) Binaries are distributed to the SNs in advance so that recovery is faster, rather than fetching everything from the MN each time.
    5) Release patches from developers (bug fixes, feature enhancements) can be applied to a running system.
    6) Each version can be easily tracked and distinguished.
    Thanks
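
    A minimal sketch of the dependency check described in point 2, assuming a hypothetical per-SN manifest file kept on the MN; the file format, node names, and /etc/release_version path are illustrative only and not part of the original design:

        #!/usr/bin/env bash
        # Hypothetical manifest: one line per entry, format
        #   node:component:good_version:depends_on_node:depends_on_version
        # e.g.  SN1:ramdisk:2.3:SN2:1.7
        MANIFEST=/etc/recovery/state_matrix

        node="$1"          # SN to roll back, e.g. SN1
        target_ver="$2"    # known-good version to roll back to

        # Check every dependency recorded for this node before rolling back.
        failed=0
        while IFS=: read -r n comp ver dep_node dep_ver; do
            [ "$n" = "$node" ] || continue
            [ -n "$dep_node" ] || continue
            # -n keeps ssh from eating the manifest on stdin
            current=$(ssh -n "$dep_node" cat /etc/release_version 2>/dev/null)
            if [ "$current" != "$dep_ver" ]; then
                echo "Dependency check failed: $node needs $dep_node at $dep_ver (found: $current)"
                failed=1
            fi
        done < "$MANIFEST"

        if [ "$failed" -eq 1 ]; then
            read -r -p "Dependencies failed. Roll back anyway and notify dependent SNs? [y/N] " ans
            [ "$ans" = "y" ] || exit 1
        fi

        echo "Rolling back $node to $target_ver ..."
        # actual rollback mechanism (dpkg/rpm downgrade, git checkout, image swap) goes here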

    Read the article

  • Sun Directory Server 5.2 performance

    - by tmow
    Hi all, I'm using logconv.pl (provided by Sun) to measure performance on my server. These two metrics are worrying me a bit:
    Binds: 192164
    Unbinds: 111569
    The difference between the two is quite big. How can I determine which connections never sent an unbind? As Lodovic pointed out, many applications just close their connections without sending an Unbind request, which by itself could explain the difference. But logconv.pl doesn't show details about those connections. Do you know any other tools, or can you suggest queries, that would help me find the root cause? And do you think performance would improve if the issue were fixed?
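
    A rough way to list the offending connections straight from the access log, assuming the usual Directory Server access-log format in which each connection logs a "connection from" line, optionally an UNBIND, and a final "closed" line (the log path below is just an example):

        #!/usr/bin/env bash
        # Print connection IDs that were closed without ever logging an UNBIND,
        # together with their initial "connection from" line when available.
        LOG=/var/opt/SUNWdsee/dsins1/logs/access   # adjust to your instance's access log

        awk '
            match($0, /conn=[0-9]+/) {
                id = substr($0, RSTART + 5, RLENGTH - 5)
                if ($0 ~ / connection from /) { from[id] = $0 }
                if ($0 ~ / UNBIND/)           { unbind[id] = 1 }
                if ($0 ~ / closed/)           { closed[id] = 1 }
            }
            END {
                for (id in closed)
                    if (!(id in unbind))
                        print "conn=" id (id in from ? "  " from[id] : "")
            }
        ' "$LOG"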

    Read the article

  • How do you use Time Capsule for MAC controlled Internet?

    - by Kevin Perttula
    I'm using an Internet provider that requires a username and password to log in; the prompt shows up as soon as you open a web browser. Because of this, I can only have one device online at a time. How can I get my Time Capsule to essentially log in as the user and allow other devices to connect through it? My Time Capsule has worked with other ISPs that don't require the user to log in, but I recently moved. I may be wrong about how the ISP grants access; I wasn't sure how best to describe the problem. Thanks.

    Read the article

  • xm console command is not working in XEN

    - by stillStudent
    I have the Xen 4.0.x RPM on CentOS. I have set it up and have many VMs on it. The problem is that when I execute the 'xm console <domain>' command from dom0, the command just hangs dom0; a 'y' appears on the next line but nothing else happens. Is this a bug in Xen 4.0 that requires an upgrade, or can I tweak a configuration file in /etc/xen/ to make it work? I found the following advice on some site, but it isn't working:

        In order to be able to log in to your domU from the console using
            xm create {your hostname}.cfg -c
        (to set the root password for ssh, for instance, or to see more output than just kernel output when debugging), it may be necessary to add the following line to your /etc/xen/{your hostname}.cfg:
            extra='xencons=tty'

    Is there any other way to solve it?
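
    For reference, a minimal PV guest config sketch showing where the extra= line from the quoted advice would sit; the name, kernel, and disk paths are placeholders, not taken from the original setup:

        # /etc/xen/guest01.cfg  -- illustrative values only
        name    = "guest01"
        memory  = 1024
        vcpus   = 1
        kernel  = "/boot/vmlinuz-xen-guest"
        ramdisk = "/boot/initrd-xen-guest.img"
        disk    = [ 'phy:/dev/vg0/guest01,xvda,w' ]
        vif     = [ 'bridge=xenbr0' ]
        root    = "/dev/xvda ro"
        # the line the quoted advice says to add, so "xm console" gets a tty
        extra   = "xencons=tty"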

    Read the article

  • Google Chrome with strange behavior

    - by user72274
    I'm a former Chromium-browser user, but after the PPA went two months without updates I switched to the Google Chrome browser yesterday. Everything is okay except for some strange rendering on certain pages and crashes after loading "chrome://" configuration pages. The best-known website showing the strange behaviour is YouTube (the original post includes a screenshot). When I open the user menu in the top right corner, it breaks in that way, and even after closing the menu some parts of it stay displayed. You may say it's a YouTube problem, but no, I have this problem on at least three other websites, for example Imgur. The problem doesn't always affect the whole page; sometimes it starts from the middle of the screen. The interesting part is that it always happens at the same distance from the right border. When I inspect the DOM with the developer tools, the overlay that shows an element's position is rendered where it should be. What is more, if there is an anchor inside the broken area, clicking it still works. Selecting text on a broken page is impossible. I hope this is enough information to give me some advice; thanks in advance. :)
    EDIT: Here is what the browser reports in "chrome://gpu-internals/":
    Graphics Feature Status
    Canvas: Software only, hardware acceleration unavailable
    Compositing: Hardware accelerated
    3D CSS: Hardware accelerated
    CSS Animation: Software animated.
    WebGL: Hardware accelerated
    WebGL multisampling: Hardware accelerated
    Problems Detected
    Accelerated CSS animation has been disabled at the command line.
    Accelerated 2d canvas is unstable in Linux at the moment.
    Ubuntu 12.04 | Gnome-shell 3.4.1 | ATI Radeon 4550 | Screen resolution 1024*768 | Chrome version 20.0.1132.57 (Official Build 145807)

    Read the article

  • How to run scripts within a telnet session?

    - by wenzi
    I want to connect to a remote host using telnet. There is no username/password verification, just telnet remotehost. Then I need to input some commands for initialization, and after that I need to repeat the command "cmd argument", where the argument is read from a local file. This file has many lines, each line being one argument. After running one "cmd argument", the remote host outputs some results: it may output a line containing the string "OK", or it may output many lines, one of which contains the string "ERROR", and I need to act according to the result. Basically, the script looks like this:

        initialization_cmd   # some initial commands
        while read line
        do
            cmd $line        # the remote host outputs results here;
                             # how can I put those results into a variable?
            # then I want to judge the results, like:
            if [[ $results == *OK* ]]; then
                echo $line >> good_result_log
            else
                echo $line >> bad_result_log
            fi
        done < local_file

    good_result_log and bad_result_log are local files. Is this possible or not? Thanks! NOTE: I can't control the remote host (B); I can only run the initial commands and "cmd $line" on B.
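
    One possible approach is to keep the telnet session open in a bash coprocess and read its output until an "OK" or "ERROR" line appears. This is only a sketch (bash 4+, hypothetical host name), and telnet may buffer its output when not attached to a terminal, in which case a tool like expect is the more reliable route:

        #!/usr/bin/env bash
        # Drive a telnet session from bash via a coprocess (sketch, not battle-tested).
        coproc TN { telnet remotehost 2>&1; }

        send() { printf '%s\r\n' "$1" >&"${TN[1]}"; }

        send "initialization_cmd"

        while read -r line; do
            send "cmd $line"
            result=""
            # collect output until a line containing OK or ERROR arrives
            while IFS= read -r -u "${TN[0]}" out; do
                result+="$out"$'\n'
                [[ $out == *OK* || $out == *ERROR* ]] && break
            done
            if [[ $result == *OK* ]]; then
                echo "$line" >> good_result_log
            else
                echo "$line" >> bad_result_log
            fi
        done < local_file

        # close the session
        kill "$TN_PID" 2>/dev/null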

    Read the article

  • What is the value of checking in failing unit tests?

    - by user20194
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it returns the enum value Animal.Cat. However, the desired behaviour of the method is to be case insensitive, so if the method were passed "cat" it might return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would fix this particular case, a more complex issue may take weeks to fix, while capturing the bug in a unit test is a much smaller task. The application currently being analyzed has 4 years of code that "works". However, recent discussions about unit tests have uncovered flaws in the code. Some flaws just need explicit documentation of the implementation (e.g. case sensitive or not), or live in code paths that never trigger the bug given how they are currently called. But unit tests can be written for specific, valid inputs that make the bug visible. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test will need to exercise the fixed code once someone fixes it. On one hand, checking the test in shows that identified bugs have not yet been fixed. On the other, there could be hundreds of failing unit tests in the logs, and weeding out the ones that are expected to fail from failures caused by a new check-in would be difficult.

    Read the article

  • Mac OS X 10.6 executable not found without full path

    - by Danack
    I just installed Apache via MacPorts. It seems that my Mac was thoroughly confused about which set of Apache executables to run. After moving the Apache executables that ship with the Mac to a directory that is not listed in the PATH variable, trying to run the httpd built by MacPorts fails even though the correct directory (/opt/local/apache2/bin) is listed in PATH. If I navigate to /opt/local/apache2/bin and type the command httpd, I still get the error message:
        -bash: httpd: command not found
    If I type the command with the full path /opt/local/apache2/bin/httpd it works fine. I've run the alias command to see if something was clashing, but the only thing listed is:
        alias wget='curl -O'
    How do I find what is intercepting the command and preventing the executable from being found, even when I'm inside the same directory? By the way, the httpd file is executable:
        -rwxr-xr-x 1 root admin 442496 9 May 2012 httpd
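
    A few commands that can show how bash is resolving the name (a sketch of the usual diagnosis, not specific to this machine): bash never looks in the current directory for a bare command name, only in PATH, and it also caches previously found locations in a hash table that survives a file being moved.

        # Show every place bash could resolve "httpd" from (aliases, functions, PATH hits)
        type -a httpd

        # Show what bash has cached for previously run commands; a stale entry here
        # would still point at the old, now-moved Apple httpd
        hash -l | grep httpd

        # Clear the cached lookup table and try again
        hash -r
        type -a httpd

        # Confirm /opt/local/apache2/bin really is in PATH, and early enough
        echo "$PATH" | tr ':' '\n' | grep -n apache2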

    Read the article

  • How to rename multiple files by replacing a word in the file name with values from shell script variables?

    - by fy6877
    This question is like this thread: "How to rename multiple files by replacing word in file name?" My example is more complex than that topic. The two variables, $name and $newname, come from another part of the shell script. $name and $newname may contain Unicode words or special symbols like []<?, etc., so could anyone suggest a piece of script I can add to my shell script to do the file-name replacement? BTW, I tried two kinds of commands to change the part of the file name, but they don't work:
        rename.ul '$name' '$newname' /home/fy6877/test/final/*
        ls /home/fy6877/test/final/ | xargs -I$ rename.ul '$name' '$newname' $
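
    For what it's worth, the single quotes in those commands stop the shell from expanding $name and $newname at all. A sketch of a pure-shell alternative that avoids both that problem and the special-character issues (the directory path is taken from the question, everything else is illustrative):

        #!/usr/bin/env bash
        # $name and $newname are assumed to be set earlier in the script.
        dir=/home/fy6877/test/final

        for f in "$dir"/*"$name"*; do
            [ -e "$f" ] || continue                 # nothing matched
            base=${f##*/}                           # file name without the directory
            newbase=${base//"$name"/"$newname"}     # literal replacement, safe for []<? etc.
            mv -- "$f" "$dir/$newbase"
        done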

    Read the article

  • Will we be penalized for having multiple external links to the same site?

    - by merk
    There seem to be conflicting answers to this question. The most relevant ones are at least a year or two old, so I thought it would be worth re-asking. My gut says it's OK, because there are plenty of sites out there that already do this. Every major retailer site has links to the manufacturer of whatever item it is selling; go to www.newegg.com and they have hundreds of links to the same site since they sell multiple items from the same brand. Our site allows people to list a specific genre of items for sale (not porn; I'm just keeping it generic since I'm not trying to advertise), and on each item's listing page we have a link back to the seller's website if they want one. Our SEO guy is saying this is really bad and that Google is going to treat us as a link farm. My gut says that when we have to start limiting useful user features on our site to boost our ranking, something is wrong, and the same goes for jumping through hoops like hiding text using JavaScript, etc. Some clients are only selling one to a handful of items, while a couple of our bigger clients have hundreds of items listed, and so will have hundreds of pages that link back to their site. I should also mention that there will be a handful of pages for the bigger clients that may appear to be duplicates, because they will be selling two or three of the same item and the only difference in the page content might be a stock number. The majority of the pages, though, will have unique content. So: will we be penalized in some way for having anywhere from a handful to a few hundred pages that all point to the same link? If we are penalized, what is the suggested way to handle this? We still want to give users the option to go to the client's site, and we would still like to give a link back to the client's site to help their own search-engine rankings.

    Read the article

  • Can I upgrade my Ubuntu version and make it the primary OS after originally installing with Wubi?

    - by Garrick Wann
    I recently installed Ubuntu 12.04 using Wubi 12.04 and I now wish to upgrade to a full installation of Ubuntu 14.04. Before attempting to upgrade through the update center I did some research on upgrading from a Wubi installation (alongside Windows) to a full installation making Ubuntu the primary and only OS, and found that it is in fact doable through the update center; it is just highly recommended to perform a full backup first. I have now finished backing up all the data I need to worry about and began the upgrade process through the update center, and received the following error:

        Your graphics hardware may not be fully supported in Ubuntu 14.04. Running the 'unity' desktop environment is not fully supported by your graphics hardware. You will maybe end up in a very slow environment after the upgrade. Our advice is to keep the LTS version for now. For more information see https://wiki.ubuntu.com/X/Bugs/UpdateManagerWarningForUnity3D Do you still want to continue with the upgrade?

    My questions are as follows:
    A. Isn't 14.04 an LTS version?
    B. What are your recommendations for ensuring my graphics driver is installed correctly so that I'm not stuck with a bad configuration/install?

    Read the article

  • Error connecting to the application.

    - by ahmed
    Hi guys, let me explain the current scenario. We have an ASP.NET application on Framework 2 running on our intranet (Windows Server 2003 and SQL Server 2000). Now we have an XP machine where we installed and configured IIS with a virtual directory pointing at the local XP machine, and this machine is connected to our intranet. We copied the same application files from the server to this XP machine, but the application's connection string/database still points at the intranet server. The problem is that when we try to run the application on the XP machine, we get this error:

        An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    Is this question better suited to this site or to Stack Overflow?

    Read the article

  • git-receive-pack: command not found.

    - by Philippe Mongeau
    I made a git repo on a local machine with "git init --bare" and added it as the remote origin of the project on my main computer over ssh:

        git remote add origin [email protected]:repoName.git

    I was able to commit and push from my main computer to the other computer on the day I created the repo, but today I tried and it didn't work. When I ran "git push origin" it returned this error:

        bash: line 1: git-receive-pack: command not found
        fatal: The remote end hung up unexpectedly

    The two machines are Macs, the main one running Leopard and the server one running Tiger. I think it may be related to the $PATH seen by git on the server, but I'm not sure. I used these instructions to create my git server: http://blog.commonthread.com/2008/4/14/setting-up-a-git-server
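
    A hedged sketch of how one might confirm and work around a PATH problem on the server side; the /usr/local/git path below is only an example of where a Mac git install often ends up, not the known location on this machine:

        # On the server, see what PATH a non-interactive ssh shell gets and where git lives
        ssh user@server 'echo $PATH; which git-receive-pack'

        # If it is outside that PATH, tell the client where to find the binaries
        git config remote.origin.receivepack /usr/local/git/bin/git-receive-pack
        git config remote.origin.uploadpack  /usr/local/git/bin/git-upload-pack

        # Or specify it for a single push
        git push --receive-pack=/usr/local/git/bin/git-receive-pack origin master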

    Read the article

  • How do you tell if advice from a senior developer is bad?

    - by learnjourney
    Recently I started my first job as a junior developer, and a more senior developer is in charge of mentoring me in this small company. However, there are several occasions when he gives me advice that I just can't agree with (it goes against what I learned from several good books on the topic written by experts, and questions I've asked on some Q&A sites also agree with me), and given our busy schedule we probably have no time for long debates. So far I have been trying to avoid the issue by listening to him and raising a counterpoint based on what I've learned to be current good practice. He then restates his original point (most of the time he will say "best practice" or "more maintainable" but won't go any further), I take a note (since he hasn't countered my counterpoint), think about it and research it at home, but I don't make any changes because I'm still not convinced. Recently, though, he approached me again, saw my code, and asked me why I hadn't changed it to his suggestion. This is the third time in two or three weeks. As a junior developer I know I should respect him, but at the same time I just can't agree with some of his advice, yet I'm being pressured to make changes that I think will make the project worse. Of course, as an inexperienced developer I could be wrong and his way might be better; it might be one of those exceptional cases. My question is: what can I do to better judge whether a senior developer's advice is good, bad, or good but outdated in today's context? And if it is bad or outdated, what tactics can I use to avoid implementing it his way, despite the pressure, while still showing that I respect him as a senior?

    Read the article

  • Do you think we will ever settle on a "standard" platform? [closed]

    - by GazTheDestroyer
    The recent explosion of phone platforms has depressed me (slightly), and made me wonder if we will ever reach any kind of standard for presentation. I don't mean language or IDE. Different languages have different strengths and I can see that there may always be a need for disparity, although I do note that languages are merging somewhat in functionality, with traditional imperative languages like C++ now supporting things like lambdas. What I'm really talking about is a common presentation mechanism. Before smartphones and tablets came along, the web finally seemed to be becoming a reasonable platform for presenting an application that was globally accessible, not just geographically but by platform too. Sure, there are still (sometimes infuriating) implementation differences and quirks, but if you wrote a decent site you knew it could be accessed on anything from a PC to a phone to a C64 running the right software. "Write once, run anywhere" finally seemed to be becoming a reality. However, in the last few years we've seen an explosion of mobile operating systems and the ubiquitous "app". A good site is no longer enough; you need a native "app", and of course we have a sudden massive disparity in the OS, language, and APIs needed to write them as each battles for supremacy. It's kind of weird how the cycle of popularity goes: mainframes with terminals - thin client; PC - thick client; web browser - thin client; phone app - thick(ish) client. I just wonder if you think there will ever be a global standard for clients, or whether the "shiny and different" cycle will always continue, along with the battle of the tech du jour.

    Read the article

  • XPS M1730 Freezing & Crashing unexpectedly

    - by evesirim
    I have a Dell XPS M1730, freshly installed with Windows 7 Professional 32-bit that every so often crashes completely unexpectedly. It need not be running any applications at all for this to happen, never mind anything particularly intensive. It also clearly is not a BSoD issue as no Minidump files are being created in the process. All drivers are up to date and have been both rolled back and then updated and the Device Manager reports no issues. There are very few applications installed on the machine and it has only been online once. It runs a minimal boot of Win7 and yet still after all of this something is making it crash. All of this suggests to me it may be more like a hardware problem but to be honest I really have no idea - so I thought I'd try here. Any thoughts...? Many thanks in advance

    Read the article

  • VMware Workstation 7&8&9 does not generate /etc/vmware/network upon installation

    - by dash17291
    When I install VMware Workstation on Arch Linux, virtual Ethernet does not work.

        $ sudo tail /var/log/vnetlib
        Aug 28 22:20:33 VNLFileExists - Cannot check for file or directory: /etc/vmware/networking , error: No such file or directory
        Aug 28 22:20:33 VNLNetCfgLoad - Import file does not exist
        Aug 28 22:20:33 VNL_Load - Error loading the vnet configuration, file used: /etc/vmware/networking
        Aug 28 22:20:33 VNLNetCfgUnload - Requested cache is not loaded
        Database file is not present. Failed to initialize
        Aug 28 22:20:41 VNLFileExists - Cannot check for file or directory: /etc/vmware/networking , error: No such file or directory
        Aug 28 22:20:41 VNLNetCfgLoad - Import file does not exist
        Aug 28 22:20:41 VNL_Load - Error loading the vnet configuration, file used: /etc/vmware/networking
        Aug 28 22:20:41 VNLNetCfgUnload - Requested cache is not loaded

    The required kernel modules are compiled. Previously I copied that file (or directory, I don't remember) from a working installation, but now I need a real solution. It's strange to me; it might even be a hardware issue, because the same thing happens with Ubuntu on the same computer.

    Read the article

  • Working with Git on multiple machines

    - by Tesserex
    This may sound a bit strange, but I'm wondering about a good way to work in Git from multiple machines networked together in some way. It looks to me like I have two options, and I can see benefits on both sides:
    1. Use git itself for sharing: each machine has its own repo and you fetch between them. You can work on either machine even if the other is offline, which by itself is pretty big, I think.
    2. Use one repo that is shared over the network between machines. No need to do git pulls every time you switch machines, since your code is always up to date. You never have to worry that you forgot to push code from the other, now out-of-reach machine, since you were working off a file share hosted on this one.
    My intuition says that everyone generally goes with the first option. But the downside I see is that I might not always be able to access code from my other machines, and I certainly don't want to push all my WIP branches to GitHub at the end of every day. I also don't want to have to leave my computers on all the time just so I can fetch from them directly. Lastly, a minor point: all the git commands needed to keep multiple branches up to date can get tedious. Is there a third handle on this situation? Maybe some third-party tools are available that help make this process easier? If you deal with this situation regularly, what do you suggest?
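
    For reference, a minimal sketch of the first option using plain ssh remotes (the host names, user, and paths are made up); each machine adds the other as a remote and fetches its branches directly, no hosting service involved:

        # on the desktop: add the laptop as a remote and pick up its work-in-progress branch
        git remote add laptop me@laptop.local:projects/myproject
        git fetch laptop
        git checkout -b wip-feature laptop/wip-feature

        # on the laptop: the same thing in reverse
        git remote add desktop me@desktop.local:projects/myproject
        git fetch desktop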

    Read the article

  • How can I fix the #c3284d# malvertising hack on my website?

    - by crm
    For the past couple of weeks, at semi-regular intervals, this website has had the #c3284d# malware code inserted into some of its .php files, and the .htaccess file has had its equivalent code inserted as well. I have, on many occasions, removed the malicious code, replaced files, changed the FTP password in my FTP client (CoreFTP), and changed the connection method to FTPS for more secure storage of the password (instead of plain text). I have also scanned my computer several times using AVG and Windows Defender, which found no malware that might have been stealing my FTP passwords. I used Sucuri SiteCheck to check my website, and it says the site is clean of malware, which is bizarre because I just attempted to click one of the links on the site a minute ago and it redirected me to another one of these random stats.php sites, even though it appears I have gotten rid of the #c3284d# code again (it will no doubt be re-inserted somehow in an hour or so). Has anyone found an actual viable solution for this malware hack? I have done just about all of the things suggested here and here, and the problem still persists. Currently, when I click on a link within the site's navigation menu in Google Chrome, I get Google's malware warning page:

        Warning: Something's Not Right Here!
        oxsanasiberians.com contains malware. Your computer might catch a virus if you visit this site. Google has found that malicious software may be installed onto your computer if you proceed. If you've visited this site in the past or you trust this site, it's possible that it has just recently been compromised by a hacker. You should not proceed. Why not try again tomorrow or go somewhere else? We have already notified oxsanasiberians.com that we found malware on the site. For more about the problems found on oxsanasiberians.com, visit the Google Safe Browsing diagnostic page.

    I'm wondering if it is possible that the Google Chrome browser I am using has itself been hacked? Does anyone else get redirected when clicking links on the website?
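
    A quick way to at least locate every file still carrying the injected block and to spot recently modified files on the server; the marker string is taken from the question, the docroot path is a placeholder:

        # Find every file that still carries the injected marker
        grep -rl 'c3284d' /path/to/site/docroot

        # .htaccess files are easy to miss; check them explicitly
        find /path/to/site/docroot -name '.htaccess' -exec grep -l 'c3284d' {} +

        # List files changed in the last two days, which often points at the re-infection vector
        find /path/to/site/docroot -type f -mtime -2 -exec ls -la {} +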

    Read the article

  • How do I find the correct Modeline to connect my computer to my Television

    - by Mikelane
    I've been trying to hook up a Ubuntu computer with my Panasonic TH-42PA60A plasma television for weeks now. My original question was asked here; it includes all the specifications of the television in detail and how I've connected the computer to it. So far I've tried hooking up three other computers: two Ubuntu computers with fairly new graphics cards, and a Windows XP computer. None of them has been able to display an image on the television. The closest I've come to getting a picture up is with my small laptop running Ubuntu, but the image came out all purple and distorted. I had gotten that image by adding extra modes via the xrandr command, using a process similar to the one described here. I realize it might be possible to get this working if I had the correct modeline. I've checked the Modeline database, but the Panasonic TH-42PA60A is not listed. How can I find the correct modeline for my television? What commands can I use? I've read that there may be a danger of damaging my TV when doing things like this. What should I avoid in order to prevent damaging my television?
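
    For what it's worth, a common way to generate a candidate modeline and test it with xrandr; the resolution, refresh rate, and output name (VGA1) below are only placeholders, and the real values have to come from the TV's specifications:

        # Generate a modeline for a given resolution and refresh rate
        cvt 1024 768 60          # or: gtf 1024 768 60

        # Register the modeline that cvt printed and assign it to the TV output
        xrandr --newmode "1024x768_60.00"  63.50  1024 1072 1176 1328  768 771 775 798 -hsync +vsync
        xrandr --addmode VGA1 "1024x768_60.00"
        xrandr --output VGA1 --mode "1024x768_60.00"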

    Read the article

  • Determine the architecture of a Mac from the command line or a script?

    - by Brian Postow
    I'm writing a shell script, and I need to know the architecture, i.e. PPC or Intel. Back in the day there was a program /bin/arch that told you, but my Mac doesn't seem to have it. Is there an easy way I can do this? Grep for something in a log file? Call some other program that spits it out as a side effect? It would be nice to know what OS version I'm running too, but that may not be necessary. Thanks.
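
    A sketch of the usual approach using tools that ship with OS X; the exact output strings vary between PPC and Intel releases, so the comparisons below are illustrative:

        #!/bin/sh
        # Report the processor family and OS version of this Mac
        cpu=$(uname -p)        # typically "powerpc" on PPC Macs, "i386" on Intel Macs
        case "$cpu" in
            powerpc*)    echo "This is a PPC Mac" ;;
            i386|x86_64) echo "This is an Intel Mac" ;;
            *)           echo "Unknown architecture: $cpu" ;;
        esac

        # OS X version, e.g. 10.5.8
        sw_vers -productVersion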

    Read the article

  • How do I install the latest Sun Java JRE on Ubuntu Server 9.10?

    - by blackrobot
    Unfortunately, if I try to install Sun Java via apt-get, it is not found in the repositories:

        # apt-get install sun-java6-jre
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package sun-java6-jre is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or is
        only available from another source
        E: Package sun-java6-jre has no installation candidate

    If I try to install it using the .bin from Sun's website, here's the issue:

        # ./jre-6u18-linux-i586.bin
        (license agreement...)
        Do you agree to the above license terms? [yes or no]
        yes
        Unpacking...
        Checksumming...
        Extracting...
        ./jre-6u18-linux-i586.bin: 366: ./install.sfx.10648: not found
        Failed to extract the files. Please refer to the Troubleshooting section of
        the Installation Instructions on the download page for more information.

    Thanks for the help.
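
    The "no installation candidate" message usually just means the repository component carrying the package isn't enabled. A sketch of enabling it on a 9.10 (karmic) server, under the assumption that sun-java6-jre is published in the multiverse component for that release:

        # Add the multiverse component for karmic (9.10); adjust the mirror if needed
        echo "deb http://archive.ubuntu.com/ubuntu karmic multiverse" | \
            sudo tee -a /etc/apt/sources.list

        sudo apt-get update
        sudo apt-get install sun-java6-jre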

    Read the article

  • Server froze and was restarted quickly, so how do I find what went wrong?

    - by Charlie
    I have a SQL Server database running on a Windows Server 2008 VM (VMware). Yesterday I could not RDP to it, so I ended some RDP sessions that had been left logged in, and that seemed to solve the problem. However, last night I learned that the database had been inaccessible and unresponsive to customers. My colleague checked the server but again was unable to create an RDP connection. He then restarted the server, and since then it has been fine. Looking at the CPU readings of the server, usage spiked to 100% before the original RDP problem; after I ended the extra sessions it dropped back to normal levels, but before the time of the customer complaint it had risen to 100% again, and stayed there until the server had to be restarted. Is there any way I can investigate which processes may have caused the problem in the first place? Would there be some kind of memory dump from when it was restarted? I would prefer to find out what is wrong now instead of waiting for it to happen again.

    Read the article
