Search Results

Search found 44309 results on 1773 pages for 'hp command view'.

Page 456/1773 | < Previous Page | 452 453 454 455 456 457 458 459 460 461 462 463  | Next Page >

  • How to change from own Internal/External DNS to use an outsourced service like DNS Made Easy?

    - by Joakim
    Our current setup is a co-located Linux box with an OpenVZ kernel and a handful of virtual containers for www, mail etc., with one container running Bind9 in a split-views configuration serving external and internal DNS. The HW node runs a Shorewall firewall and all containers use private IPs. The box (and DNS) basically handles web and mail for a handful of domains and it works well, but we still think it would be a good idea to outsource the public DNS, and now to my question... Although I am fairly comfortable with the server stuff and DNS, I'm far from a pro, and I basically need confirmation that I am thinking in the right direction: I just move the content of our external view (with its zone files) to the external service, keep the internal view (or actually drop the view construct entirely), update my registrar with the new provider's name servers and wait for propagation, or have I missed something? Maybe someone else here runs something similar already and can share some experiences? I found this question, which at least confirms it can be done.
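
    Should it help anyone following the same route: once the zones are loaded at the external provider and the registrar has been updated, the cutover can be sanity-checked from the command line. This is only a sketch; example.com, the provider's name server and the internal resolver address below are placeholders, not details from the question.

      # Confirm the registrar now delegates to the outsourced name servers
      dig +short NS example.com

      # Ask one of the new external name servers directly for a public record
      dig +short @ns1.external-provider.example www.example.com A

      # From inside the LAN, confirm the internal Bind9 view still answers with the private address
      dig +short @10.0.0.1 www.example.com A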

    Read the article

  • How to install NPM behind authentication proxy on Windows?

    - by Tobias
    I need to run the latest version of Node and NPM on Windows. I installed Node 0.5.8 and downloaded the NPM sources from GitHub. I followed the installation steps listed on its GitHub site, but the following command fails:

      node cli.js install npm -gf

    with this error message:

      Error: connect UNKNOWN
          at errnoException (net_uv.js:566:11)
          at Object.afterConnect [as oncomplete] (net_uv.js:557:18)
      System Windows_NT 5.1.2600
      command "...\\Node\\bin\\node.exe" "...\\npm\\cli.js" "install" "npm" "-gf"
      cwd ...\npm
      node -v v0.5.8
      npm -v 1.0.94
      code UNKNOWN

    I think the problem is that my proxy requires authentication to connect to the Internet, but I found no way to tell the installer to use my credentials. Is there a way to pass my proxy address and login information to the npm installation, perhaps via command-line arguments? I can provide the full log via pastebin if needed (it doesn't seem to contain anything more relevant).
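
    In case it points anyone in the right direction: npm reads proxy settings from its configuration, and config keys can also be passed as command-line flags. The host, port and credentials below are placeholders, and whether npm 1.0.x already honours every one of these keys is an assumption, so treat this as a sketch rather than a confirmed fix.

      npm config set proxy http://user:password@proxy.example.com:8080
      npm config set https-proxy http://user:password@proxy.example.com:8080

      # or as a one-off flag on the bootstrap command itself
      node cli.js install npm -gf --proxy http://user:password@proxy.example.com:8080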

    Read the article

  • Handling DataGrid.SelectedItems in an MVVM-friendly manner

    - by Laurent Bugnion
    An interesting question from one of the MVVM Light users today: Is there an MVVM-friendly way to get a DataGrid’s SelectedItems into the ViewModel? The issue there is as old as the DataGrid (that’s not very old, but still): SelectedItem (singular) is a DependencyProperty and can be databound to a property in the ViewModel. SelectedItems (plural) is not a DependencyProperty. Thankfully the answer is very simple: Use EventToCommand to call a Command in the ViewModel, and pass the SelectedItems collection as parameter. For example, if the command in the ViewModel is declared as follows:

      public RelayCommand<IList> SelectionChangedCommand { get; private set; }

    and (in the MainViewModel constructor):

      SelectionChangedCommand = new RelayCommand<IList>(
          items =>
          {
              if (items == null)
              {
                  NumberOfItemsSelected = 0;
                  return;
              }

              NumberOfItemsSelected = items.Count;
          });

    Then the XAML markup becomes:

      <sdk:DataGrid x:Name="MyDataGrid"
                    ItemsSource="{Binding Items}">
          <i:Interaction.Triggers>
              <i:EventTrigger EventName="SelectionChanged">
                  <cmd:EventToCommand Command="{Binding SelectionChangedCommand}"
                                      CommandParameter="{Binding SelectedItems, ElementName=MyDataGrid}" />
              </i:EventTrigger>
          </i:Interaction.Triggers>
      </sdk:DataGrid>

    I slapped a quick sample together and published it here (VS2010, SL4, but the concept works in SL3 and WPF too). Cheers! Laurent Bugnion (GalaSoft)

    Read the article

  • wired connection not working in ubuntu 12.04 on lenovo G580 laptop

    - by shravankumar
    I found a possible solution in http://www.zyxware.com/articles/2680/solved-wired-connection-eth0-not-detected-in-ubuntu-12-04 and downloaded compat-wireless-2012-07-03-p.tar.bz2. Here are the steps I followed, along with their output.

    1. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ scripts/driver-select alx

       Output:
       Processing new driver-select request...
       Backup exists: Makefile.bk
       Backup exists: Makefile.bk
       Backup exists: drivers/net/ethernet/broadcom/Makefile.bk
       Backup exists: drivers/net/ethernet/atheros/Makefile.bk
       Backup exists: Makefile.bk
       Backup exists: Makefile.bk
       Backup exists: drivers/net/ethernet/broadcom/Makefile.bk

    2. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ make

       Output:
       make -C /lib/modules/3.2.0-23-generic/build M=/home/shravankumar/Desktop/compat-wireless-2012-07-03-p modules
       make[1]: Entering directory `/usr/src/linux-headers-3.2.0-23-generic'
       scripts/Makefile.build:44: /home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx/Makefile: No such file or directory
       make[4]: *** No rule to make target `/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx/Makefile'. Stop.
       make[3]: *** [/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx] Error 2
       make[2]: *** [/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros] Error 2
       make[1]: *** [_module_/home/shravankumar/Desktop/compat-wireless-2012-07-03-p] Error 2
       make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-23-generic'
       make: *** [modules] Error 2

    3. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ make install

       Output:
       FATAL: Could not open /lib/modules/3.2.0-23-generic/modules.dep.temp for writing: Permission denied
       make: *** [uninstall] Error 1

    4. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ modeprobe alx

       Output:
       No command 'modeprobe' found, did you mean:
        Command 'modprobe' from package 'module-init-tools' (main)
       modeprobe: command not found

    I am new to Ubuntu, please help me. Thanks in advance.
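
    As a side note on the last two steps (a guess based purely on the error text above, not a verified fix for the driver build itself): the make install failure is a plain permission problem, and the final command is misspelled; the package hint in the output names modprobe. Something along these lines may get further:

      sudo make install   # modules.dep.temp could not be written because the command was run without root
      sudo modprobe alx   # 'modeprobe' is a typo; the module-loading command is modprobe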

    Read the article

  • Synaptic won't launch from menu in panel in fresh Lubuntu minimal desktop 12.04 install

    - by ven42
    I performed a fresh install of Lubuntu 12.04 with minimal desktop, as described here: https://help.ubuntu.com/community/Lubuntu/Documentation/MinimalInstall. To clarify, I did a command-line install from the Lubuntu alternative install disc, then I did an "apt-get install --no-install-recommends lubuntu-desktop". Everything is working fine, except that Synaptic will not run from the menu entry in the panel. I am not prompted for a password, and no window of any sort appears after clicking the menu entry. I installed lxshortcut to see what the shortcut was running, and the command is "synaptic-pkexec". If I type this command into the "Run" menu, I get the same behavior (or lack thereof). I can get Synaptic to open up just fine by typing "gksudo synaptic" at the "Run" menu. Also, if I run "synaptic-pkexec" from the terminal, then I am prompted for my password within the terminal, and after that Synaptic opens normally. Can someone please suggest the right way to get Synaptic working? I could just change the menu entry to "gksudo synaptic", but I'm guessing that it's set to "synaptic-pkexec" for a reason. I have a vague understanding that this pkexec business has something to do with PolicyKit, but I don't really know what PolicyKit is or how to tell if something is broken with it. Thanks.
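
    One possibility worth ruling out (an assumption based on the minimal-desktop setup described above, not a confirmed diagnosis): synaptic-pkexec relies on PolicyKit, and pkexec can only show a graphical password prompt if a PolicyKit authentication agent is running in the session, which a minimal install may not pull in. Installing one and logging back in is a low-risk thing to try:

      sudo apt-get install policykit-1-gnome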

    Read the article

  • Running 'sudo' over SSH

    - by Wesho
    I'm writing a script which is to log onto a bunch of remote machines and run a command on them. I've set up keys so the user running the script does not have to type the password for each machine, only the key passphrase at the beginning of the script. The problem is that the command on the remote machines requires sudo to run, while the whole point of the script is to rid the user of having to type in passwords multiple times. Is there a way to avoid typing in the password for sudo? Changing the permissions of the command on the remote machines is not an option.
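
    One approach commonly used for exactly this situation is to allow that one command to run under sudo without a password on each remote machine. The user name and command path below are placeholders, not details from the question, so take it as a sketch:

      # On each remote machine, create a drop-in sudoers rule (edited safely with visudo)
      sudo visudo -f /etc/sudoers.d/deploy-collect
      #   containing the single line:
      #   deploy ALL=(root) NOPASSWD: /usr/local/bin/collect-stats

      # The controlling script can then run the command non-interactively
      ssh deploy@remote-host 'sudo /usr/local/bin/collect-stats'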

    Read the article

  • Encrypting a non-Linux partition with LUKS

    - by linuxn00b
    I have a non-Linux partition I want to encrypt with LUKS. The goal is to be able to store it by itself on a device without Linux and access it from that device when needed with an Ubuntu live CD. I know LUKS can't encrypt partitions in place, so I created another, unformatted partition of the EXACT same size (using GParted's "Round to MiB" option) and ran this command, where xxx is the partition's device name:

      sudo cryptsetup luksFormat /dev/xxx

    Then I typed in my new passphrase and confirmed it. Oddly, the command exited immediately afterwards, so I guess it doesn't encrypt the entire partition right away? Anyway, I then ran:

      sudo cryptsetup luksOpen /dev/xxx xxx

    and tried copying the contents of the existing partition (call it yyy) to the encrypted one like this:

      sudo dd if=/dev/yyy of=/dev/mapper/xxx bs=1MB

    It ran for a while, but exited with this just before writing the last MB:

      dd: writing `/dev/mapper/xxx': No space left on device

    I take this to mean the contents of yyy were truncated when copied to xxx, because I have dd'd it before, and whenever I dd to a partition of the exact same size I never get that error (and fdisk reports they are the same size in blocks). After a little Googling I discovered that all luksFormat'ted partitions have a custom header followed by the encrypted contents, so it appears I need to create a partition exactly the size of the old one plus however many bytes a LUKS header is. First, what size should the destination partition be? And second, am I even on the right track here?

    UPDATE: I found this in the LUKS FAQ: "I think this is overly complicated. Is there an alternative? Yes, you can use plain dm-crypt. It does not allow multiple passphrases, but on the plus side, it has zero on disk description and if you overwrite some part of a plain dm-crypt partition, exactly the overwritten parts are lost (rounded up to sector borders)." So perhaps I shouldn't be using LUKS at all?
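
    On the size question, the header's on-disk footprint can be read straight from the device that was just formatted, which gives the exact number of bytes the destination partition needs over the plaintext one. A sketch, assuming the default LUKS1 header created by the luksFormat command above:

      # "Payload offset" is reported in 512-byte sectors; the encrypted data starts there,
      # so the destination must be at least (payload offset * 512) bytes larger than yyy.
      sudo cryptsetup luksDump /dev/xxx | grep -i 'payload offset'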

    Read the article

  • Windows 2008 R2 server cannot access shares on other servers

    - by Rob
    I have a problem on my new 2008 R2 64-bit server. Essentially, the server sometimes refuses to access shares on other servers, in the format \\servername\sharename. Sometimes it works, then for a few hours it doesn't, and then at random it comes back online. This is a local AD network and I have even put in a new gigabit switch between all the servers. All the old 2003 servers work fine, so I know DNS and WINS are ok. I get error 1006 in the event log saying that my R2 server can't contact the domain controller, when it clearly can. Just to add to the config: it is running on a Dell PowerEdge R410 under VMware ESXi 4.0, and R2 is configured as a terminal server. I can always view shares with the FQDN. This morning net view \\ did not work but net view \\ did. Very random and very frustrating. Any ideas? Thanks

    Read the article

  • Where is the Flash in Chrome?

    - by Daniel
    I installed Google Chrome. This is the first thing I did after installing Ubuntu. I went into firefox, and went to chrome.google.com, and hit the button. I don't like package managers, and avoid the command line like the pox. Then, I started using Google Chrome. I went to Kongregate, and clicked on a game. It told me I didn't have flash. A few different websites told me the same. I assumed that they must have been wrong. I hit the link to Adobe, to install Flash, and it reassured me; of course, Google Chrome includes Flash. I checked my version - Chrome 5.0.375.126. Of course, I just downloaded it. I scoured the internet for solutions. None worked. Many seemed to involve re-enabling Flash, or something like that. But insofar as I can tell, there is no Flash anywhere in my Chrome. I feel like I bought a Reese's cup, and found solid chocolate. I checked in the Chrome plugin manager, and everything. A few solutions told me to copy some garbage into my command line and hit enter (as almost all solutions to problems on linux entail). I did it, reluctantly, and it did nothing. I thought Flash was supposed to come with Chrome. But it didn't. Sooooo... What gives? Google Chrome version: Google Chrome 5.0.375.126 (Official Build 53802) WebKit 533.4 V8 2.1.10.15 User Agent Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.126 Safari/533.4 Command Line /opt/google/chrome/google-chrome Operating System: Ubuntu 10.4 64 bit.

    Read the article

  • Implementing dry-run in bash scripts

    - by Apikot
    How would one implement a dry-run option in a bash script? I can think of either wrapping every single command in an if and echoing the command instead of running it when the script is in dry-run mode, or defining a function and passing each command call through that function. Something like:

      function _run () {
          if [[ "$DRY_RUN" ]]; then
              echo $@
          else
              $@
          fi
      }

      _run mv /tmp/file /tmp/file2
      DRY_RUN=true _run mv /tmp/file /tmp/file2

    Is this just wrong, or is there a much better way of doing it?
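
    The wrapper approach does work as posted; one refinement worth noting (an addition of mine, not part of the question) is to quote the expansion so that arguments containing spaces survive both the echo and the real invocation:

      #!/bin/bash
      # Usage: DRY_RUN=true ./script.sh  -> prints the commands instead of executing them
      _run() {
          if [[ -n "$DRY_RUN" ]]; then
              printf '%q ' "$@"; printf '\n'   # print the command with shell quoting preserved
          else
              "$@"                             # execute it with the arguments intact
          fi
      }

      _run mv "/tmp/my file" /tmp/file2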

    Read the article

  • LinkedIn Woopsie with the Outlook 2010 Social Media Connector

    - by Martin Hinshelwood
    I have always used the LinkedIn toolbar for Outlook to sort out, upload and sync my contacts. Because of this I have over 2000 contacts in my contacts list that I sync with my phone, Plaxo, live, Google and others. I got a surprise the other day when my LinkedIn account was suspended and I was unable to login.   Figure: Bad, account suspended   So I contacted LinkedIn customer services to find out what the problem is, and here is the response: Dear Martin, We have recently noticed a large number of page searches and profile views through your LinkedIn account. We are aware that you may be using an automated or manual process to systematically view LinkedIn web pages. The information within LinkedIn is provided by our users for usage on the site only. In order to protect user privacy, our User Agreement prohibits using: 1. Automated or manual means to view an excessively high number of profiles or mini-profiles. 2. Automated means to run searches to collect or store data obtained from our site. We have placed a restriction on your account until you agree to stop using these or similar methods to view pages on LinkedIn. We look forward to your reply to discuss this further. Sincerely, LinkedIn Privacy Team It looks like LinkedIn has suspended my account because of something that their component is doing! I do not know if this is an isolated case, or if it will happen more as more users get on Outlook 2010 and update to the new software, but watch out. Has anyone else been suspended who has installed the Office 2010 RTM and the LinkedIn Add-On? Technorati Tags: Fail,LinkedIn,Outlook 2010

    Read the article

  • Troubleshoot broken ZFS

    - by BBK
    I have one zpool called tank, in RAIDZ1 with 5x1TB SATA HDDs. I'm using Ubuntu Server 11.10 Oneiric, kernel 3.0.0-15-server, with ZFS installed from the ppa, and I'm also using zfs-auto-snapshot. The ZFS file system, when the zfs module is loaded into the kernel, hangs my computer. Before that I had created a few new file systems:

      zfs create -V 10G tank/iscsi1
      zfs create -V 10G tank/iscsi2
      zfs create -V 10G tank/iscsi3

    I shared them through iSCSI via the /dev/tank/iscsiX paths, and my computer started hanging sometimes when I used tank/iscsiX over iSCSI; I don't know exactly why. I switched off iSCSI and started to remove these file systems:

      zfs destroy tank/iscsi3

    Since I'm also using zfs-auto-snapshot I had snapshots, and without the -r key my command would not destroy the FS, so I issued:

      zfs destroy tank/iscsi3 -r

    tank/iscsi3 was clean and contained nothing; it was destroyed without an issue. But tank/iscsi2 and tank/iscsi1 contained a lot of information. I tried:

      zfs destroy tank/iscsi2 -r

    After some time my computer hung. I rebooted; it didn't boot very fast, and the HDDs started working like crazy, making a lot of noise. After 15 minutes the HDDs calmed down and the OS booted at last. All seemed to be ok: tank/iscsi2 was destroyed, the file systems in tank were accessible, and zpool status showed no corruption. I issued a new command:

      zfs destroy tank/iscsi1 -r

    The situation repeated itself: after some time my computer hung. But this time ZFS seems not to have healed itself. After the computer was switched on it started to work (loading scripts and kernel modules), and once zfs started it hung the computer again. I need to recover the other ZFS file systems that live in the same zpool. A few months ago I backed up the OS to a flash drive; booting from the backed-up OS and importing gives the same result: the OS starts hanging. How can I recover my data from the ZFS tank?
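
    For what it's worth, one avenue sometimes suggested for pulling data off a pool that hangs the machine on import (this is an assumption on my part, not something from the question, and it only applies if the installed ZFS version supports read-only imports) is to boot a rescue environment that does not auto-import the pool and import it read-only, so that no destroy or replay operations are resumed:

      sudo zpool import                       # list importable pools without actually importing
      sudo zpool import -o readonly=on tank   # import read-only
      zfs list -r tank                        # then copy the surviving file systems elsewhere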

    Read the article

  • Corrupted Views when migrating document libraries from SharePoint 2003 to 2007

    - by Kelly Jones
    A coworker of mine ran into this error recently, while migrating a document library from SharePoint 2003 to 2007: “A WebPartZone can only exist on a page which contains a SPWebPartManager. The SPWebPartManager must be placed before any WebPartZones on the page.” He saw this when he tried to open the All Documents view for the library. After looking into it, we figured out what had happened. He was migrating documents using the Explorer View in SharePoint. He had copied the contents of the library from one server (a remote server that we didn’t have administrative access to) to his desktop, then opened an Explorer View of the new library and copied the files into it. Well, it turns out he had also copied the hidden “Forms” folder, which contains the files necessary to display the different views for the library. (He had set Explorer to show hidden files, which made them visible.) So he had copied the 2003 forms to the 2007 library, and the two are incompatible. We fixed it by simply deleting the new document library, recreating it, and then copying everything except that hidden Forms folder. Another option might have been to create a new document library on 2007 and copy the Forms folder from it into the broken library; but since we didn’t need to save anything in the broken one, recreating it was the simpler route. BTW, I confirmed my suspicion with this blog post: http://palmettotq.com/blog/?p=54

    Read the article

  • One page using querystring or many folders and pages?

    - by ClarkeyBoy
    I have an application where the 'core' code lives in one folder for which there is a virtual directory in the root, so I can include any core file using /myApp/core/bla.asp. I then have two folders outside of this, each with a default.asp, which currently use the querystring to define what page should be displayed. One page is for general users; the other will only be accessible to users who have permission to manage users / usergroups / permissions. The core code checks the querystring and then checks the permissions for that user. An example as it stands now is default.asp?action=view&viewtype=list&objectid=server. I am not worried about SEO, as this is an internal app and uses Windows Auth. My question is: is it better the way it is now, or would it be better to have something like the following?

      /server/view/list/
      /server/view/?id=123
      /server/create/
      /server/edit/?id=123
      /server/remove/?id=123

    In each of those folders I would have a home page which defines all the variables currently determined by the querystring; in /server/create/, for example, I would define the action as 'create', the object name as 'server' and so on. In terms of future development, I really have no idea which method would be best. I think the second method would be best in terms of following what page does what, but this is such a huge change to make at this stage that I would really like some opinions, preferably based on experience. PS: Sorry if the tags are wrong. I am new to this forum and thought this was a bit too much of a discussion for StackOverflow, as that is very much right/wrong answer based; I got the idea SE is more discussion based.

    Read the article

  • define software path

    - by shantanuo
    I have an older version already installed and have upgraded the package using the setup.py install command, but the path is not set correctly. When I type "s3cmd" it shows the older version of the software:

      # s3cmd
      s3cmd [options] <command> [arg(s)]  version 1.2.6
      --help -h --verbose -v --dryrun -n

      # which s3cmd
      /usr/local/bin/s3cmd

    The correct version is in a different folder, and I would like that one to be used whenever I type the command:

      # /usr/bin/s3cmd
      Consider using --configure parameter to create one.

    How do I set the path? I have added a path entry to the .bash_profile file but it does not work:

      PATH=$PATH:/usr/bin/s3cmd
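
    For what it's worth, the line added to .bash_profile points at the binary itself rather than its directory, and it appends to PATH, so /usr/local/bin still wins the lookup. A sketch of the usual fix (assuming the newer s3cmd really is the one living in /usr/bin):

      # In ~/.bash_profile: prepend the directory, not the file, so it is searched first
      export PATH=/usr/bin:$PATH

      # Reload the profile and clear bash's remembered command location
      source ~/.bash_profile
      hash -r
      which s3cmd && s3cmd --version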

    Read the article

  • ASP.NET MVC: Moving code from controller action to service layer

    - by DigiMortal
    I fixed one controller action in my application that didn't seem good enough to me. It wasn't a big move, but it is worth showing beginners how much nicer code can be when you use correct layering in your application. As an example I use code from my posting ASP.NET MVC: How to implement invitation codes support.

    Problematic controller action

    Although my controller action works well, I don't like how it looks; it is too much for a controller action, in my opinion.

      [HttpPost]
      public ActionResult GetAccess(string accessCode)
      {
          if (string.IsNullOrEmpty(accessCode.Trim()))
          {
              ModelState.AddModelError("accessCode", "Insert invitation code!");
              return View();
          }

          Guid accessGuid;

          try
          {
              accessGuid = Guid.Parse(accessCode);
          }
          catch
          {
              ModelState.AddModelError("accessCode", "Incorrect format of invitation code!");
              return View();
          }

          using (var ctx = new EventsEntities())
          {
              var user = ctx.GetNewUserByAccessCode(accessGuid);
              if (user == null)
              {
                  ModelState.AddModelError("accessCode", "Cannot find account with given invitation code!");
                  return View();
              }

              user.UserToken = User.Identity.GetUserToken();
              ctx.SaveChanges();
          }

          Session["UserId"] = accessGuid;

          return Redirect("~/admin");
      }

    Looking at this code, my first thought is that all this access code handling must be located somewhere else. We have working functionality in the wrong place and we should do something about it.

    Service layer

    I add layers to my applications very carefully because I don't like using a hand grenade to kill a fly. When I see a real need for some layer and it doesn't add too much complexity, I add it. Right now is a good time to add a service layer to my small application, move the code to it and inject the service class into the controller.

      public interface IUserService
      {
          bool ClaimAccessCode(string accessCode, string userToken,
                               out string errorMessage);

          // Other methods of user service
      }

    I need this interface when writing unit tests, because I need a fake service that doesn't communicate with the database and other external sources.

      public class UserService : IUserService
      {
          private readonly IDataContext _context;

          public UserService(IDataContext context)
          {
              _context = context;
          }

          public bool ClaimAccessCode(string accessCode, string userToken, out string errorMessage)
          {
              if (string.IsNullOrEmpty(accessCode.Trim()))
              {
                  errorMessage = "Insert invitation code!";
                  return false;
              }

              Guid accessGuid;
              if (!Guid.TryParse(accessCode, out accessGuid))
              {
                  errorMessage = "Incorrect format of invitation code!";
                  return false;
              }

              var user = _context.GetNewUserByAccessCode(accessGuid);
              if (user == null)
              {
                  errorMessage = "Cannot find account with given invitation code!";
                  return false;
              }

              user.UserToken = userToken;
              _context.SaveChanges();

              errorMessage = string.Empty;
              return true;
          }
      }

    For now I used a simple solution for errors and made the access code claiming method follow the usual TrySomething() pattern. This way I can keep error messages and their retrieval away from the controller; the controller just mediates the error message from the service to the view.

    Controller

    Now all the code has moved to the service layer and we also need some modifications to the controller code so it makes use of the user service. I don't show the DI/IoC details of how the service instance is given to the controller here. The GetAccess() action of the controller looks like this right now.

      [HttpPost]
      public ActionResult GetAccess(string accessCode)
      {
          var userToken = User.Identity.GetUserToken();
          string errorMessage;

          if (!_userService.ClaimAccessCode(accessCode, userToken,
                                            out errorMessage))
          {
              ModelState.AddModelError("accessCode", errorMessage);
              return View();
          }

          Session["UserId"] = Guid.Parse(accessCode);
          return Redirect("~/admin");
      }

    It's short and nice now, and it deals with the web site part of access code claiming. In the case of an error, the user is shown the access code claiming view with the error message that ClaimAccessCode() returns as an output parameter. If everything goes fine, the access code is reserved for the current user and the user is authenticated.

    Conclusion

    When a controller action grows big, you have to move code to the layers it actually belongs to. In this posting I showed how I moved access code claiming functionality from a controller action to a user service class that belongs to the service layer of my application. As a result I have a controller action that coordinates the user interaction during the access code claiming process, while the controller communicates with the service layer and gets back information about whether the claim succeeded.

    Read the article

  • Hostname Problem On WHM / cPanel Installation

    - by Eray
    My CentOS 5.6 server's hostname was "centos". I then changed it to my domain:

      hostname domain.com

    and started installing WHM / cPanel as explained here: http://etwiki.cpanel.net/twiki/bin/view/AllDocumentation/InstallationGuide/InstallingCpanel. It installed fine, and then I rebooted the server. After rebooting, I executed this command to open WHM's port 2087:

      iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2087 -j ACCEPT

    Now when I try to browse domain.com:2087 I get "Server (centos) not found"; I noticed it is forwarding to my old hostname (centos). I then ran hostname to verify, and it returned "centos" again. I'm not sure why it reverted to the old hostname (I think it reverted after rebooting). I changed it one more time:

      hostname domain.com

    so now my hostname is domain.com, but I'm still getting the "centos server not found" error. This is the result of the iptables -L command. P.S.: domain.com/cpanel is working.
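
    A note on the reverting hostname: setting it with the hostname command only affects the running system, so it is expected to come back as "centos" after a reboot. On CentOS 5 the persistent value lives in /etc/sysconfig/network. A sketch of the usual steps, with domain.com standing in for the real host name and SERVER_IP for the server's main IP address:

      # set it for the running system
      hostname domain.com

      # make it survive reboots (CentOS 5.x)
      sed -i 's/^HOSTNAME=.*/HOSTNAME=domain.com/' /etc/sysconfig/network

      # make sure the name also resolves locally
      echo 'SERVER_IP   domain.com' >> /etc/hosts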

    Read the article

  • Powershell overruling Perl binmode?

    - by hippietrail
    I have a Perl script which creates a binary file while scanning a very large text file. It outputs to STDOUT, which I redirect on the command line to a file. To optimize it, I'm making changes and then seeing how long it takes to run. On Linux I use the "time" command for this; on Windows the best way to time a program seemed to be PowerShell's "measure-command". That seemed to work fine, but I noticed the generated files were larger. On examination I found that the files generated from within PowerShell begin with a BOM and contain CRLF pairs! My Perl script has a "binmode STDOUT" directive and works correctly in a normal DOS box. Is this a bug or misfeature in PowerShell or measure-command? Has it affected others creating binary files by means other than Perl? Googling hasn't turned anything up so far. I'm using Perl 5.12, PowerShell v1.0 and Windows XP.

    Read the article

  • wmic output well formed xml on remote queries

    - by Mervin
    I want to use the WMI command line tool (wmic) to get information about windows computers on the network and output it as valid xml. However, I can't seem to find the right way to do this as the outputted xml currently contains invalid tokens for which I think I should use the /TRANSLATE:basicxml switch. The command: wmic /NODE:"tech-demo" /IMPLEVEL:Impersonate /USER:MyUser /PASSWORD:MyPassword /PRIVILEGES:DISABLE /AUTHLEVEL:Pkt /AUTHORITY:"ntlmdomain:companydomain.local" PATH Win32_LogicalDisk GET * /FORMAT:rawxml This command runs but returns invalid xml tokens ('<' and '' I think? edit: it appears to fail parsing at ‹) When I add the translate switch I get the message: Can not use credentials for local connections a bit strange that it tries to query the local pc when I add the switch.. Help would be greatly appreciated.

    Read the article

  • snmpd agent sends duplicate traps

    - by jsnmp
    I am on Ubuntu 10.04.4 LTS, and I cannot upgrade to a higher version. I have installed the snmpd agent (NET-SNMP version 5.4.2.1) with an apt-get install snmpd command. When an event occurs which sends a trap, two traps are sent for each such event instead of one. For example, when I shut down the agent with command /etc/init.d/snmpd stop, two shutdown traps are sent to the destination host. If I then start back up the agent with command /etc/init.d/snmpd start, then two cold start traps are sent to the destination host. Is this a known issue? Is there a fix for this, or is there a configuration change that is needed to prevent the sending of the duplicate trap? These are the contents of the /etc/snmp/snmpd.conf file:

      rocommunity public
      authtrapenable 1
      trap2sink <trap destination hostname> public

    These are the contents of the /etc/default/snmpd file:

      # This file controls the activity of snmpd and snmptrapd

      # MIB directories.  /usr/share/snmp/mibs is the default, but
      # including it here avoids some strange problems.
      export MIBDIRS=/usr/share/snmp/mibs

      # snmpd control (yes means start daemon).
      SNMPDRUN=yes

      # snmpd options (use syslog, close stdin/out/err).
      SNMPDOPTS='-Ls3d -Lf /dev/null -u snmp -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf'

      # snmptrapd control (yes means start daemon).  As of net-snmp version
      # 5.0, master agentx support must be enabled in snmpd before snmptrapd
      # can be run.  See snmpd.conf(5) for how to do this.
      TRAPDRUN=no

      # snmptrapd options (use syslog).
      TRAPDOPTS='-Lsd -p /var/run/snmptrapd.pid'

      # create symlink on Debian legacy location to official RFC path
      SNMPDCOMPAT=yes

    Read the article

  • Why can't I run Compiz?

    - by jasoncruz98
    I installed Ubuntu 11.10 on my laptop. The main reason I switched to Ubuntu is that I wanted to use Compiz. The first thing I did was go to Additional Drivers and install the ATI/AMD proprietary FGLRX graphics driver. There was also another one available, the ATI/AMD proprietary FGLRX graphics driver (post-release updates), but I didn't install that one, because it basically meant the same thing to me as the one I had already installed. Next, I went to the ubuntuguide.org Oneiric wiki (http://ubuntuguide.org/wiki/Ubuntu:Oneiric#Compiz_Fusion), followed the instructions there and ran this command in a terminal:

      sudo apt-get install compiz compizconfig-settings-manager compiz-fusion-plugins-main compiz-fusion-plugins-extra emerald librsvg2-common

    But the terminal said that the package "emerald" could not be found, so I ran this command instead:

      sudo apt-get install compiz compizconfig-settings-manager compiz-fusion-plugins-main compiz-fusion-plugins-extra

    After that, I installed Fusion Icon by running:

      sudo apt-get install fusion-icon

    I restarted my computer, searched for CompizConfig Settings Manager, clicked on it and activated Wobbly Windows. I logged off and back on again, but there was no wobbly windows effect. So I tried clicking on Fusion Icon, but it never started. Can someone please tell me what I did wrong here? Everyone seems to be able to run Compiz except me. I really need to get Compiz started, or else I think I'm going to uninstall Ubuntu.

    Read the article

  • Monitoring and terminating a hung process in Linux

    - by Yoav
    Hi, I'm writing a script that runs many simultaneous processes, each of which runs the "dig" command. Once in a while (relatively rarely, but it happens in every run since I run dig many times) the dig command hangs with 0% CPU, so my script never terminates. I've created a monitor process for each dig command I run, which terminates it after a while, but I was wondering if there isn't a simpler and more efficient way to run a process with a pre-determined "expiration date", i.e. if the process runs for more than X seconds it gets a signal that terminates it. Thanks!
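
    A couple of possibilities worth checking before rolling a custom monitor (these are general suggestions, not taken from the question): GNU coreutils ships a timeout command that does exactly this, and dig itself can bound its own query time. A sketch:

      # kill dig if it runs longer than 10 seconds (timeout exits with status 124 in that case)
      timeout 10 dig example.com @8.8.8.8

      # or let dig give up on its own
      dig example.com +time=5 +tries=1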

    Read the article

  • Unexpected behavior in Bash

    - by cYrus
    From man bash:

      A simple command is a sequence of optional variable assignments followed by blank-separated words and redirections, and terminated by a control operator. The first word specifies the command to be executed, and is passed as argument zero. The remaining words are passed as arguments to the invoked command.

    So it's perfectly legal to write:

      foo=bar echo $foo

    but it doesn't work as I expect (it prints just a newline). This is quite strange to me, since:

      $ foo=bar printenv
      foo=bar
      TERM=rxvt-unicode
      [...]

    Could someone please explain where I'm going wrong?
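
    For anyone puzzling over the same thing: the likely explanation is that $foo is expanded by the current shell before the temporary assignment is placed in the command's environment, so echo receives an empty argument even though foo really is set for the child process (as the printenv run shows). A small illustration of the difference, my own example rather than one from the post:

      # $foo is expanded by the parent shell, where it is unset, so this prints a blank line
      foo=bar echo "$foo"

      # defer the expansion to a child shell and the assignment is visible: prints "bar"
      foo=bar bash -c 'echo "$foo"'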

    Read the article

  • Screwed up terminal after modifying bashrc

    - by omgzor
    I ended up screwing up my terminal while setting up sbt for the Coursera Scala course. I can't summon gedit (or anything else) anymore. I get the following error:

      Command 'gedit' is available in '/usr/bin/gedit'
      The command could not be located because '/usr/bin' is not included in the PATH environment variable.

    Also, each new instance of Terminal prints these messages before any command is entered:

      -bash: :/home/antonio/jdk7/jdk1.7.0_07/bin: No such file or directory
      -bash: export: `/home/antonio/Desktop/Scala/install/sbt/bin:/home/antonio/jdk7/jdk1.7.0_07/bin': not a valid identifier

    I recently did a manual installation of JDK 7, which apparently works:

      java -version
      java version "1.7.0_07"
      Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
      Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)

    While setting up sbt, I made the mistake of editing bashrc by writing gedit ~/.bashrc in my terminal instead of writing gedit .bashrc, and I wrote the following lines at the end of the bashrc file that opened:

      export PATH=/PATH/TO/YOUR/jdk1.7.0-VERSION/bin:$PATH
      export PATH=/home/antonio/Desktop/Scala/install/sbt/bin:/home/antonio/jdk7/jdk1.7.0_07/bin:$PATH

    What is wrong here? How can I access my bashrc file and modify it again?
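
    To get a usable shell again without fixing anything permanently, the PATH can be restored for the current session and the file edited with a full path to the editor (a generic recovery sketch, not specific to this machine):

      # restore a sane PATH for this terminal session only
      export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

      # then open .bashrc and remove or correct the offending export lines
      /usr/bin/gedit ~/.bashrc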

    Read the article
