Search Results

Search found 5262 results on 211 pages for 'commands'.

Page 190/211 | < Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >

  • How to install Git on an offline RHEL?

    - by Stijn Vanpoucke
    I'm using the following commands from the manual to install Git:

        $ tar -zxf git-1.7.2.2.tar.gz
        $ cd git-1.7.2.2
        $ make prefix=/usr/local all
        $ sudo make prefix=/usr/local install

    but I'm receiving the following errors:

        cache.h: At top level:
        cache.h:746: error: expected declaration specifiers or '...' before 'time_t'
        cache.h:889: warning: 'struct timeval' declared inside parameter list
        cache.h:895: warning: 'struct timeval' declared inside parameter list
        cache.h:970: error: expected specifier-qualifier-list before 'off_t'
        cache.h:979: error: expected specifier-qualifier-list before 'off_t'
        cache.h:997: error: expected specifier-qualifier-list before 'off_t'
        cache.h:1057: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1063: error: expected declaration specifiers or '...' before 'uint32_t'
        cache.h:1064: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'nth_packed_object_offset'
        cache.h:1065: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'find_pack_entry_one'
        cache.h:1067: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1069: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1070: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1094: error: expected specifier-qualifier-list before 'off_t'
        cache.h:1168: error: expected ')' before '*' token
        cache.h:1177: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'read_in_full'
        cache.h:1178: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'write_in_full'
        cache.h:1179: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'write_str_in_full'
        cache.h:1252: error: expected declaration specifiers or '...' before 'FILE'
        In file included from credential-store.c:2:
        credential.h:28: error: expected declaration specifiers or '...' before 'FILE'
        credential.h:29: error: expected declaration specifiers or '...' before 'FILE'
        In file included from credential-store.c:4:
        parse-options.h:115: error: expected specifier-qualifier-list before 'intptr_t'
        credential-store.c: In function 'parse_credential_file':
        credential-store.c:13: error: 'FILE' undeclared (first use in this function)
        credential-store.c:13: error: 'fh' undeclared (first use in this function)
        credential-store.c:17: warning: implicit declaration of function 'fopen'
        credential-store.c:19: error: 'errno' undeclared (first use in this function)
        credential-store.c:19: error: 'ENOENT' undeclared (first use in this function)
        credential-store.c:24: error: too many arguments to function 'strbuf_getline'
        credential-store.c:24: error: 'EOF' undeclared (first use in this function)
        credential-store.c:39: warning: implicit declaration of function 'fclose'
        credential-store.c: In function 'print_entry':
        credential-store.c:44: warning: implicit declaration of function 'printf'
        credential-store.c:44: warning: incompatible implicit declaration of built-in function 'printf'
        credential-store.c: In function 'main':
        credential-store.c:132: warning: implicit declaration of function 'umask'
        credential-store.c:144: error: 'stdin' undeclared (first use in this function)
        credential-store.c:144: error: too many arguments to function 'credential_read'
        credential-store.c:147: warning: implicit declaration of function 'strcmp'

    Is this because I didn't install the dependencies?

        apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev

    How do I install them offline?
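
    One approach worth sketching (the package names are assumptions -- they are the usual RHEL build dependencies for Git and may differ by release): on a second RHEL machine of the same release and architecture that does have repository access, download the RPMs with their dependency chain, carry them over, and install them locally.

        # On the connected machine (yumdownloader is in the yum-utils package):
        yumdownloader --resolve curl-devel expat-devel gettext-devel \
            openssl-devel zlib-devel gcc make

        # Copy the resulting .rpm files to the offline server, then:
        rpm -Uvh *.rpm

    Note that the apt-get line quoted above is for Debian/Ubuntu and will not work on RHEL at all.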

    Read the article

  • Recognizing Dell EqualLogic with Nagios

    - by user3677595
    EDIT: All firmware and models are compatible, which is why nothing is posted about them. Okay, there will be a lot here, so please bear with me. I've been working on this for a few hours now (reading manuals and such), so I'm not just coming here right out of the blue. I am working on a PRE-EXISTING Nagios server where there are several other existing plugins and checks running and working. Now I want to add another device to check, so I made the following modifications. First and foremost, I added a file to /usr/local/nagios/libexec named check_equallogic.sh. The permissions are 755, the same as all the others. I have chowned it to nagios:nagios and the listing shows the owner as nagios. I then added a command to the commands.cfg file in /usr/local/nagios/etc/objects:

        # 'check_equallogic' command definition
        define command{
                command_name    check_equallogic
                command_line    $USER1$/check_equallogic -H $HOSTADDRESS$ -C $ARG1$ -t $ARG2$ $ARG3$
                }

    Following this, I created a file named equallogic.cfg in the objects directory, containing (more or less):

        define host{
                use             linux-server   ; Inherit default values from a template
                host_name       172.16.50.11   ; The name we're giving to this device
                alias           EqualLogic     ; A longer name associated with the device
                address         172.16.50.11   ; IP address of the device
                contact_groups  admins
                }

        # Check EqualLogic information
        define service{
                use                     generic-service
                host_name               172.16.50.11
                service_description     General Information
                check_command           check_equallogic!public!info
                }

    After ensuring that permissions are okay for all files, I restarted the nagios service with no errors. When I go into the web GUI, I get the following error AFTER the check runs:

        (Return code of 127 is out of bounds - plugin may be missing)

    Extra, probably unrelated problem: when I log into the EqualLogic server, under the audit logs I get the following error:

        Level:     AUDIT
        Time:      26/05/2014 3:59:13 PM
        Member:    ps4100-1
        Subsystem: agent
        Event ID:  22.7.1
        SNMP packet validation failed, request received from 172.16.10.11

    An snmpwalk receives a timeout, whereas others succeed. I will work on importing the MIBs tomorrow. The reason I mention it is that I want to make sure it is only a MIB issue for the SNMP. If it is, then ignore this area. I am entirely unsure of what to do here.
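
    Return code 127 is the shell's "command not found", so a sketch of the usual checks (the -H/-C/-t arguments just mirror the command definition above) is to run the plugin exactly as the nagios user would and inspect the script's interpreter line:

        # Run as the nagios user; if this also prints 127, the path,
        # shebang line, or interpreter is the problem, not Nagios itself.
        sudo -u nagios /usr/local/nagios/libexec/check_equallogic.sh -H 172.16.50.11 -C public -t info
        echo $?

        # A missing interpreter or DOS line endings also produce 127:
        head -1 /usr/local/nagios/libexec/check_equallogic.sh
        file /usr/local/nagios/libexec/check_equallogic.sh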

    Read the article

  • Failed to start up after upgrading software in Ubuntu 10.10

    - by Landy
    I've been running Ubuntu 10.10 on a physical x86-64 machine. Today Update Manager reminded me that there were some updates to install and I confirmed the action. I should have read the update list, but I didn't. I can only remember there was an update for cups. After the upgrade, Update Manager required a restart, which I also confirmed. But after the restart, the computer can't start up. There are errors in the console:

        Begin: Running /scripts/init-premount ... done.
        Begin: Mounting root file system ...
        Begin: Running /scripts/local-top ... done.
        [xxx] usb 1-8: new high speed USB device using ehci_hcd and address 3
        [xxx] usb 2-1: new full speed USB device using ohci_hcd and address 2
        [xxx] hub 2-1:1.0: USB hub found
        [xxx] hub 2-1:1.0: 4 ports detected
        [xxx] usb 2-1.1: new low speed USB device using ohci_hcd and address 3
        Gave up waiting for root device. Common problems:
         - Boot args (cat /proc/cmdline)
           - Check rootdelay= (did the system wait long enough?)
           - Check root= (did the system wait for the right device?)
         - Missing modules (cat /proc/modules; ls /dev)
        FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory
        FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory
        ALERT! /dev/sda1 does not exist. Dropping to a shell!

        BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) [cursor is here]

    At the moment, I can't input anything in the console. The keyboard doesn't work at all. What's wrong? How can I check boot args or "root=" as suggested? How can I fix this issue? Thanks.

    PS1: /dev/sda1 is type ext4 (rw,nosuid,nodev)
    PS2: /dev/sda1 can be mounted and accessed successfully under SUSE 11 SP1 x64.
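
    A common recovery path for a "modules.dep: No such file" initramfs failure (a sketch, assuming /dev/sda1 really is the root filesystem; adjust the device and kernel version to match): boot from a live CD/USB, chroot into the installed system, finish the interrupted upgrade, and rebuild the initramfs.

        # From a live CD/USB session:
        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys /mnt/sys
        sudo chroot /mnt

        # Inside the chroot:
        dpkg --configure -a                         # finish the half-done upgrade
        update-initramfs -u -k 2.6.35-22-generic    # regenerate the initramfs
        exit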

    Read the article

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1 TB hard drive and it filled the drive; there are only 4 KB free. The MDF file is 323 GB and the LDF is 653 GB. The disk this DB is on has no other files on it besides the MDF and LDF, so it's impossible to free up any space on that drive. The main hard disk is smaller, but there is enough room to transfer the MDF to that drive, in case that helps. This server is overseas at a customer site and it's not possible at the moment to add more disk space to the server. It's also not possible to delete any records because the DB is in a failed state (due to no disk space) and it doesn't respond to most commands. The DB is currently in full recovery mode, which is why the LDF file is so large. This DB really doesn't need to be in full recovery, so going forward we plan on switching it to simple mode, which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data. I've spent a lot of time looking for a way out of this problem, but everything I've found involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck and any help would be greatly appreciated. I get the following log when trying to switch the DB to online mode:

        Msg 945, Level 14, State 2, Line 3
        Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        Msg 5069, Level 16, State 1, Line 3
        ALTER DATABASE statement failed.
        Msg 1101, Level 17, State 12, Line 3
        Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I've found the following solutions, but none work due to having no disk space on that drive, and since the DB is in a failed state I can't run most commands:
    - DBCC SHRINKFILE: can't be run because doing a 'use DBNAME' fails
    - Detaching the DB and then changing the location of the MDF/LDF files: this fails because the DB is in an offline mode, so you can't run detach.

    I'm at a loss about what else to try. Thanks.
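
    One escape hatch to consider (a sketch only, not verified against this exact failure state -- DBNAME, the logical file name, and the path are placeholders, and copying the MDF somewhere safe first is strongly advised): the files, not the data, are pinned to the full drive, so take the database offline at the server level, copy the MDF to the smaller disk that has room, repoint SQL Server at it, and bring it back online.

        -- Run from a sysadmin connection (e.g. via sqlcmd):
        ALTER DATABASE DBNAME SET OFFLINE WITH ROLLBACK IMMEDIATE;

        -- Copy the MDF to the other drive at the OS level, then repoint:
        ALTER DATABASE DBNAME MODIFY FILE
            (NAME = DBNAME_Data, FILENAME = 'C:\Data\DBNAME.mdf');

        ALTER DATABASE DBNAME SET ONLINE;

    If the log cannot be brought along, EMERGENCY mode followed by DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS can rebuild a minimal log, but that is a last resort and can lose in-flight transactions.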

    Read the article

  • NFS4 / ZFS: revert ACL to clean/inherited state

    - by Keiichi
    My problem is identical to this Windows question, but pertains to NFSv4 (Linux) and the underlying ZFS (OpenIndiana) we are using. We have this ZFS shared via NFSv4 and CIFS for Linux and Windows users respectively. It would be nice for both user groups to benefit from ACLs, but the one missing puzzle piece goes thusly: each user has a home, where he sets a top-level, inherited ACL. He can later refine permissions for the contained files/folders iteratively. Over time, permissions sometimes need to be generalized again to avoid increasing pollution of ACL entries. You can tweak the ACL of every single file if need be to obtain the wanted permissions, but that defeats the purpose of inherited ACLs. So, how can an ACL be completely cleared, like in the question linked above? I have found nothing about what a blank, inherited ACL should look like. This use case simply does not seem to exist. In fact, the Solaris chmod manpage clearly states:

        A-    Removes all ACEs for current ACL on file and replaces current
              ACL with new ACL that represents only the current mode of the file.

    That is, we get three new ACL entries filled with stuff representing the permission bits, which is rather useless for cleaning up. If I try to manually remove every ACE, on the last one I get:

        chmod A0- <file>
        chmod: ERROR: Can't remove all ACL entries from a file

    Which, by the way, makes me think: why not? In fact, I really want the whole file-specific ACL gone. The same holds for Linux, which enumerates ACEs starting with 1(!), and verbalizes its woes less diligently:

        nfs4_setacl -x 1 <file>
        Failed setxattr operation: Unknown error 524

    So, what is the idea behind ACLs under Solaris/NFS? Can they never be cleaned up? Why does the recursion option for the ACL-setting commands pollute all children instead of setting a single ACL and making the children inherit? Is this really the intention of the designers? I can clean up the ACLs using a Windows client perfectly well, but am I supposed to tell the Linux users they have to switch OS just to consolidate permissions?

    Read the article

  • Clamdscan scans a file in 0 seconds

    - by SupaCoco
    I have to run ClamAV on large files. I was wondering which command is faster between clamscan and clamdscan, but it seems that clamdscan is not working properly: it "scans" a file larger than 1 GB in 0 seconds. Could you guys help me find why the heck clamdscan isn't working? And between clamscan and clamdscan, which one is less resource-consuming? I run ClamAV 0.97.8/18037 on Ubuntu 12.04.3 LTS. Please find below the execution result of both commands:

        clamscan myfile.zip

        ----------- SCAN SUMMARY -----------
        Known viruses: 2864504
        Engine version: 0.97.8
        Scanned directories: 0
        Scanned files: 1
        Infected files: 0
        Data scanned: 0.00 MB
        Data read: 1024.16 MB (ratio 0.00:1)
        Time: 9.145 sec (0 m 9 s)

        clamdscan myfile.zip

        /home/ubuntu/workspace/benchmark/myfile.zip: OK

        ----------- SCAN SUMMARY -----------
        Infected files: 0
        Time: 0.000 sec (0 m 0 s)

    And here is the ClamAV log file:

        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 4
        Wed Oct 30 10:26:32 2013 -> Got new connection, FD 9
        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 5
        Wed Oct 30 10:26:32 2013 -> fds_poll_recv: timeout after 5 seconds
        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 9
        Wed Oct 30 10:26:32 2013 -> got command CONTSCAN /home/ubuntu/workspace/benchmark/myfile.zip (51, 7), argument: /home/ubuntu/workspace/benchmark/myfile.zip
        Wed Oct 30 10:26:32 2013 -> mode -> MODE_WAITREPLY
        Wed Oct 30 10:26:32 2013 -> Breaking command loop, mode is no longer MODE_COMMAND
        Wed Oct 30 10:26:32 2013 -> Consumed entire command
        Wed Oct 30 10:26:32 2013 -> Number of file descriptors polled: 1 fds
        Wed Oct 30 10:26:32 2013 -> fds_poll_recv: timeout after 3600 seconds
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (single) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (bulk) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> /home/ubuntu/workspace/benchmark/myfile.zip: OK
        Wed Oct 30 10:26:32 2013 -> Finished scanthread
        Wed Oct 30 10:26:32 2013 -> Scanthread: connection shut down (FD 9)
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (single) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (bulk) crossed low threshold -> signaling
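
    The 0.000 sec "OK" is consistent with clamd skipping the file entirely: clamd enforces its own size limits from clamd.conf (clamscan takes separate command-line limits), and files above MaxFileSize are passed without being scanned. A sketch of the relevant settings (values illustrative; restart the daemon afterwards):

        # /etc/clamav/clamd.conf
        MaxScanSize 2000M
        MaxFileSize 2000M
        StreamMaxLength 2000M

        # then:
        sudo service clamav-daemon restart

    Resource-wise, clamdscan is the lighter of the two for repeated scans, since the signature database stays loaded in the daemon instead of being reloaded on every invocation.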

    Read the article

  • UNIX Questions to be answered??? [closed]

    - by Nits
    1. Create a tree structure named 'training' in which there are 3 subdirectories - 'level 1', 'level2' and 'cep'. Each one is again further divided into 3. The 'level 1' is divided into 'sdp', 're' and 'se'. From the subdirectory 'se', how can one reach the home directory in one step, and also how to navigate to the subdirectory 'sdp' in one step? Give the commands which do the above actions.
    2. How will you copy a directory structure dir1 to dir2 (with all the subdirectories)?
    3. How can you find out if you have the permission to send a message?
    4. Find the space occupied (in bytes) by the /home directory including all its subdirectories.
    5. What is the command for printing the current time in 24-hour format?
    6. What is the command for printing the year, month, and date with a horizontal tab between the fields?
    7. Create the following files: chapa, chapb, chapc, chapd, chape, chapA, chapB, chapC, chapD, chapE, chap01, chap02, chap03, chap04, chap05, chap11, chap12, chap13, chap14, and chap15.
    8. With reference to question 7, what is the command for listing all files ending in small letters?
    9. With reference to question 7, what is the command for listing all files ending in capitals?
    10. With reference to question 7, what is the command for listing all files whose last but one character is 0?
    11. With reference to question 7, what is the command for listing all files which end in small letters but not 'a' and 'c'?
    12. In an organisation one wants to know how many programmers there are. The employee data is stored in a file called 'personnel' with one record per employee. Every record has a field for designation. How can grep be used for this purpose?
    13. In the organisation mentioned in question 12, how can sed be used to print only the records of all employees who are programmers?
    14. In the organisation mentioned in question 12, how can sed be used to change the designation 'programmer' to 'software professional' everywhere in the 'personnel' file?
    15. Find out about the sleep command and start five jobs in the background, each one sleeping for 10 minutes.
    16. How do you get the status of all the processes running on the system? i.e. using what option?
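
    A few of these can be sketched directly (assuming a POSIX shell with GNU utilities; the glob patterns refer to the files created in item 7):

        cd ~                  # item 1: reach the home directory in one step
        cd ../sdp             # item 1: from 'se' to its sibling 'sdp'
        cp -r dir1 dir2       # item 2: copy a directory tree
        mesg                  # item 3: prints y/n for message permission
        du -sb /home          # item 4: space used in bytes (GNU du)
        date +%H:%M:%S        # item 5: current time, 24-hour format
        date +"%Y%t%m%t%d"    # item 6: year, month, date, tab-separated
        ls chap*[a-z]         # item 8: names ending in a small letter
        ls chap*[A-Z]         # item 9: names ending in a capital
        ls chap*0?            # item 10: last-but-one character is 0
        ls chap*[bd-z]        # item 11: small letters except 'a' and 'c'
        for i in 1 2 3 4 5; do sleep 600 & done   # item 15
        ps -ef                # item 16: all processes (-e), full format (-f)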

    Read the article

  • Windows 7 & Virtual PC and Internet (gateway) problems on host PC

    - by Mufasa
    I upgraded to Windows 7 on a PC that is a few years old. The CPU was one revision away from having Hyper-V on it, so I had to install Microsoft Virtual PC 2007 (v6.0.156.0) to run full XP instances instead of the seamless XP virtualization that is advertised so much. That's fine though; the 'older' version is useful since I use it to run different versions of the whole XP/IE stack for testing. (I'm a web developer.) ...And for the one 16-bit application we still use at the office for scheduling. *sigh* The virtual instances work fine, including networking. My issue is that after a reboot or coming out of sleep mode, my host Windows 7 won't connect to the Internet. It will connect to the local network fine. If I disable the "Virtual Machine Network Services" item (I'll call it "VMNS" from here on) in the LAN Connection properties box, it starts working, but then the Virtual PC instances lose their network connectivity. If I re-enable VMNS again in the same instance, everything works (Internet on host and in the virtualized instances), but after the next reboot/sleep cycle this starts over. The route table gave me a clue though. During a cycle with VMNS enabled:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask         Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0         On-link       10.0.3.51      20
                  0.0.0.0          0.0.0.0      10.0.10.10       10.0.3.51     276
        ...

    After VMNS is disabled, the first route goes away. I assume that route is there for VMNS to intercept the virtualized instances' network connections and forward them correctly? Just a guess though. More info: I checked my Firewall settings and Services (because I'm sort of a control nazi and turn off a lot) but couldn't find anything that made sense, or that changed anything if turned on. So it might be something there I'm missing, but I don't know what. My current hacked solution: I figured I'd mess with the routes myself to see if that helped, and it did. If I run route delete 0.0.0.0 on the universal (0.0.0.0) gateway routes, and add back in just the 2nd line with route add 0.0.0.0 mask 0.0.0.0 10.0.10.10 -- the one that points to my actual gateway (10.0.10.10) -- then I don't have to mess with the disable/enable cycle of VMNS, and everything works. Running those two commands is faster than bringing up connection options and disabling and re-enabling VMNS, but I still don't want to have to use that hack script every boot either. (Oh, and I also tried messing with hard-coding TCP/IP settings in my network adapter, including setting high metrics, etc., but that didn't help either.) Any suggestions on the right way to fix this?
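
    A sketch of one way to make that hack stick (assuming the gateway really is fixed at 10.0.10.10): the Windows route command accepts -p, which writes the route to the registry so it survives reboots.

        :: Run once from an elevated command prompt:
        route delete 0.0.0.0
        route -p add 0.0.0.0 mask 0.0.0.0 10.0.10.10

    If VMNS re-adds its own 0.0.0.0 route after each sleep cycle anyway, wrapping the same two commands in a scheduled task triggered at logon/wake gets the same effect without manual intervention.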

    Read the article

  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and am close, I feel, but I'm pulling my hair out... please help! I have an OpenVPN client whose server sets local routes and also changes the default gw (I know I can prevent that with --route-nopull). I'd like to have all outgoing http and ssh traffic go via the local gw, and everything else via the VPN. The local IP is 192.168.1.6/24, gw 192.168.1.1. The OpenVPN local IP is 10.102.1.6/32, gw 10.102.1.5. The OpenVPN server is at {OPENVPN_SERVER_IP}. Here's the route table after the OpenVPN connection:

        # ip route show table main
        0.0.0.0/1 via 10.102.1.5 dev tun0
        default via 192.168.1.1 dev eth0 proto static
        10.102.1.1 via 10.102.1.5 dev tun0
        10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6
        {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0
        128.0.0.0/1 via 10.102.1.5 dev tun0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1

    This makes all packets go via the VPN tunnel except those destined for 192.168.1.0/24. Doing wget -qO- http://echoip.org shows the VPN server's address, as expected; the packets have 10.102.1.6 as source address (the VPN local IP) and are routed via tun0, as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic). What I tried was:

    - create a 2nd routing table holding the 192.168.1.0/24 routing info (copied from the main table above)
    - add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80
    - add an ip rule to match on the marked packets and point them to the 2nd routing table
    - add ip rules for "to 192.168.1.6" and "from 192.168.1.6" pointing to the 2nd routing table (though this is superfluous)
    - disable IPv4 reverse-path filtering with net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0

    I also tried an iptables mangle OUTPUT rule and an iptables nat PREROUTING rule. It still fails and I'm not sure what I'm missing:

    - iptables mangle PREROUTING: packet still goes via the VPN
    - iptables mangle OUTPUT: packet times out

    Is it not the case that to achieve what I want, when doing wget http://echoip.org, I should change the packet's source address to 192.168.1.6 before routing it off? But if I did that, wouldn't the response from the http server be routed back to 192.168.1.6 and wget not see it, as it is still bound to tun0 (the VPN interface)? Can a kind soul please help? What commands would you execute after the OpenVPN connects to achieve what I want? Looking forward to hair regrowth...
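
    The two missing pieces are usually (a) marking in OUTPUT rather than PREROUTING, since PREROUTING never sees locally generated packets, and (b) SNATing the marked packets to the LAN address so replies come back via eth0 -- which answers the source-address question above, because conntrack un-NATs the replies before they reach wget. A sketch (the table number and mark value are arbitrary choices):

        # 1) Second routing table that exits via the local gateway:
        ip route add default via 192.168.1.1 dev eth0 table 100

        # 2) Mark locally generated http and ssh packets in OUTPUT:
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,22 \
            -j MARK --set-mark 1

        # 3) Send marked packets through table 100:
        ip rule add fwmark 1 table 100

        # 4) Rewrite their source address so replies return to eth0; without
        #    this they leave stamped with the tun0 address and time out:
        iptables -t nat -A POSTROUTING -o eth0 -p tcp -m multiport --dports 80,22 \
            -j SNAT --to-source 192.168.1.6

        # 5) Keep reverse-path filtering relaxed, as already done:
        sysctl -w net.ipv4.conf.eth0.rp_filter=0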

    Read the article

  • PHP 5.2 to 5.3 not upgrading, no errors

    - by Webnet
    I'm following this guide: http://atik97.wordpress.com/2010/06/12/how-to-upgrade-to-php-5-3-in-ubuntu-9-10/ -- I've done all the steps, but it's still showing PHP 5.2.6. Any ideas? I have also tried -cgi instead of -cli; neither has any effect.

    Update: I've tried rebooting the server to see if that would have any effect, and unfortunately it didn't.

    Update: Output of dpkg -l *php*:

        Desired=Unknown/Install/Remove/Purge/Hold
        | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend
        |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
        ||/ Name                        Version                  Description
        +++-===========================-========================-==========================================================
        un  libapache2-mod-php4         <none>                   (no description available)
        ii  libapache2-mod-php5         5.2.6.dfsg.1-3ubuntu4.6  server-side, HTML-embedded scripting language (Apache 2 module)
        un  libapache2-mod-php5filter   <none>                   (no description available)
        ii  php-pear                    5.2.6.dfsg.1-3ubuntu4.6  PEAR - PHP Extension and Application Repository
        un  php4-cli                    <none>                   (no description available)
        un  php4-dev                    <none>                   (no description available)
        un  php4-mysql                  <none>                   (no description available)
        un  php4-pear                   <none>                   (no description available)
        ii  php5                        5.2.6.dfsg.1-3ubuntu4.6  server-side, HTML-embedded scripting language (metapackage)
        ii  php5-cgi                    5.2.6.dfsg.1-3ubuntu4.6  server-side, HTML-embedded scripting language (CGI binary)
        ii  php5-cli                    5.2.6.dfsg.1-3ubuntu4.6  command-line interpreter for the php5 scripting language
        ii  php5-common                 5.2.6.dfsg.1-3ubuntu4.6  Common files for packages built from the php5 source
        ii  php5-curl                   5.2.6.dfsg.1-3ubuntu4.6  CURL module for php5
        un  php5-dev                    <none>                   (no description available)
        ii  php5-gd                     5.2.6.dfsg.1-3ubuntu4.6  GD module for php5
        ii  php5-imap                   5.2.6-0ubuntu5.1         IMAP module for php5
        un  php5-json                   <none>                   (no description available)
        ii  php5-mcrypt                 5.2.6-0ubuntu2           MCrypt module for php5
        ii  php5-mysql                  5.2.6.dfsg.1-3ubuntu4.6  MySQL module for php5
        un  php5-mysqli                 <none>                   (no description available)
        ii  php5-xsl                    5.2.6.dfsg.1-3ubuntu4.6  XSL module for php5
        un  phpapi-20060613+lfs         <none>                   (no description available)
        ii  phpmyadmin                  4:3.1.2-1ubuntu0.2       MySQL web administration tool

    Update: The following commands and their outputs:

        grep php53 /etc/apt/sources.list
        deb http://php53.dotdeb.org stable all
        deb-src http://php53.dotdeb.org stable all

        apt-cache search -f "libapache2-mod-php5"
        http://pastebin.com/XNXdsXYC

    Update: I've updated the question with more details on installed packages.
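
    A sketch of the usual diagnosis (assuming the dotdeb lines above are the intended source): check which candidate version apt actually resolves; if it is still 5.2.6, the new repository either was never fetched or loses on priority.

        sudo apt-get update
        apt-cache policy php5-cli    # shows installed vs. candidate version

        # If the dotdeb packages lose on priority, pin them higher in
        # /etc/apt/preferences (illustrative values):
        #   Package: *
        #   Pin: origin php53.dotdeb.org
        #   Pin-Priority: 700

        sudo apt-get install php5 php5-cli php5-common

    Note also that dotdeb builds target Debian, so on Ubuntu the dependencies may simply fail to resolve; apt only reports that when the install is attempted explicitly.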

    Read the article

  • A proper way to create non-interactive accounts?

    - by AndreyT
    In order to use password-protected file sharing in a basic home network, I want to create a number of non-interactive user accounts on a Windows 8 Pro machine in addition to the existing set of interactive accounts. The users corresponding to those extra accounts will not use this machine interactively, so I don't want their accounts to be available for logon and I don't want their names to appear on the welcome screen. In older versions of Windows Pro (up to Windows 7) I did this by first creating the accounts as members of the "Users" group, and then including them in the "Deny log on locally" list in the Local Security Policy settings. This always had the desired effect. However, my question is whether this is the right/best way to do it. The reason I'm asking is that even though this method works in Windows 8 Pro as well, it has one little quirk: interactive users from the "Users" group are still able to see these extra user names when they go to the Metro screen and hit their own user name in the top-right corner (i.e. open the "Sign out/Lock" menu). The command list that drops out contains the "Sign out" and "Lock" commands as well as the names of other users (for "switch user" functionality). For some reason that list includes the extra users from the "Deny log on locally" list. It is interesting to note that this happens when the current user belongs to the "Users" group, but it does not happen when the current user is from "Administrators". For example, let's say I have three accounts on the machine: "Administrator" (from "Administrators", can log on locally), "A" (from "Users", can log on locally), and "B" (from "Users", denied logon locally). When "Administrator" is logged in, he can only see user "A" listed in his Metro "Sign out/Lock" menu, i.e. all works as it should. But when user "A" is logged in, he can see both "Administrator" and user "B" in his "Sign out/Lock" menu. Expectedly, in the above example trying to switch from user "A" to user "B" by hitting "B" in the menu does not work: Windows jumps to a welcome screen that lists only "Administrator" and "A". Anyway, on the surface this appears to be an interface-level bug in Windows 8. However, I'm wondering if going through the "Deny log on locally" setting is the right way to do it in Windows 8. Is there any other way to create a hidden non-interactive user account?
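
    One long-standing complement to the policy approach (a sketch; the SpecialAccounts key usually has to be created by hand, and it hides the account from the logon and switch-user lists rather than denying logon):

        :: Hide account "B" from the welcome screen and user lists
        :: (value 0 = hidden); run from an elevated prompt:
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v B /t REG_DWORD /d 0 /f

    Combined with the "Deny log on locally" right, that should keep the share-only accounts out of both the welcome screen and the Metro menu.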

    Read the article

  • How can I remove old log entries from a log file and archive them somewhere else in Linux?

    - by Mike B
    CentOS 4.x. I apologize in advance if this is not the appropriate place to ask this question; it pertains to a Linux server / IT admin task. I've got a log file on an old CentOS 4.x server and I want to remove log entries older than a certain date and place them in a new file for archive. Here's an example of the log format:

        2012-06-07 22:32:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:03,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:04,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:10,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:12,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:15,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:40,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:58,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:02,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|

    Essentially, I'm looking for a one-liner that will do the following:

    1. Find any events older than a provided YYYY-MM-DD and remove them from the primary log file.
    2. Take the deleted events from step 1 and put them in a new log file.
    3. (Optional) Compress the new archive log file holding the deleted events.

    I'm aware that there are log rotation tools that do this, but this should just be a one-time task so I'd prefer not to set that up. Additional notes: if the date part is tricky or too resource intensive, an alternative would be to just keep the last X number of lines and move the rest. I was originally thinking of something like tail -n 10000 > newfile.txt, but that would mean moving the "good" logs to a new file and then doing a name swap... and then I'd still need to remove the "good" entries from the archive. This particular log file is pretty large (1 GB) so I'd prefer the task to be as resource- and time-efficient as possible. The extra pipes in the log concern me and I'm not sure if I'd need extra protection in the commands to keep them from causing problems.
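
    Since the timestamp is the first field and sorts lexically in ISO order, a plain string comparison is enough; a sketch (single pass over the file, and the extra pipes are harmless because awk only ever looks at field 1):

        # Entries before the cutoff go to the archive, the rest to the new log:
        awk -v cutoff="2012-06-01" \
            '{ print > ($1 < cutoff ? "archive.log" : "current.log") }' big.log

        mv current.log big.log    # swap the trimmed log back into place
        gzip archive.log          # optional step 3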

    Read the article

  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use, as they only copy files, not partitions, attributes, groups etc., i.e. they do not produce a result which can simply be re-imaged to a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this, e.g. Clonezilla (looks difficult to install and invasive) and Mondo (only seems to support backup if you are local to the machine). Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup tools? Any ideas? This must be a pretty standard problem which googling doesn't give an obvious answer to.
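
    For what it's worth, dd is not limited to local disk-to-disk copies; piped through ssh it can stream an image to the backup host, so a remotely initiated, per-server image is possible without installing anything (a sketch -- host names are placeholders, and for a consistent image a server should ideally be idle or booted from rescue media):

        # Run on the backup server; pulls a compressed raw image per host:
        for host in server01 server02; do
            ssh root@$host "dd if=/dev/sda bs=1M | gzip -c" \
                > /backup/$host-sda.img.gz
        done

        # Restore by booting the target from a live CD and reversing the pipe:
        # gunzip -c /backup/server01-sda.img.gz | ssh root@target "dd of=/dev/sda bs=1M"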

    Read the article

  • Apache error log interpretation

    - by HTF
    It looks like someone gained access to my server. How can I find out which Apache vhost this log relates to? How are these commands from the log invoked, and how/why are they printed to the log file -- is this some remote shell or a PHP script?

    /var/log/httpd/error_log:

        mkdir: cannot create directory `/tmp/.kdso': File exists
        --2014-06-13 13:29:17--  http://updates.dyndn-web.com/abc.txt
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 5055 (4.9K) [text/plain]
        Saving to: `abc.txt'
        0K ....  100%  303K=0.02s
        2014-06-13 13:29:17 (303 KB/s) - `abc.txt' saved [5055/5055]
        % Total % Received % Xferd Average Speed Time Time Time Current
                                   Dload  Upload Total Spent Left Speed
        0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
        101 5055 101 5055 0 0 79686 0 --:--:-- --:--:-- --:--:-- 154k
        minerd64: no process killed
        minerd32: no process killed
        named: no process killed
        kernelupdates: no process killed
        kernelcfg: no process killed
        kernelorg: no process killed
        ls: cannot access /tmp/.ICE-unix: No such file or directory
        mkdir: cannot create directory `/tmp': File exists
        --2014-06-13 13:29:18--  http://updates.dyndn-web.com/64.tar.gz
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 205812 (201K) [application/x-tar]
        Saving to: `64.tar.gz'
        0K .......... .......... .......... .......... .......... 24%  990K 0s
        50K .......... .......... .......... .......... .......... 49% 2.74M 0s
        100K .......... .......... .......... .......... .......... 74% 2.96M 0s
        150K .......... .......... .......... .......... .......... 99% 3.49M 0s
        200K 100% 17.4M=0.1s
        2014-06-13 13:29:18 (1.99 MB/s) - `64.tar.gz' saved [205812/205812]
        sh: ./kernelupgrade: Permission denied
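
    Output like this lands in error_log because the stderr of whatever the web server executed goes there -- typically commands injected through a vulnerable PHP script rather than a full remote shell. A few sketch commands to tie it to a vhost and find the entry point (stock CentOS paths assumed):

        # Vhosts without their own ErrorLog share the global error_log:
        grep -Ri "errorlog\|documentroot" /etc/httpd/conf /etc/httpd/conf.d

        # Look for the request that triggered the download in the access logs:
        grep -R "abc.txt\|dyndn-web" /var/log/httpd/

        # See what the apache user is running and what was dropped in /tmp:
        ps -fu apache
        ls -la /tmp /var/tmp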

    Read the article

  • SQL Transactional Replication snapshot not applying

    - by dmch2
    Hi, I'm using SQL transactional replication with pull subscriptions to replicate databases (hosting their own distribution database) from several servers across a VPN to a central server. I've got the first 2 databases working fine, but the 3rd one is causing me problems. My subscription server is SQL 2008; the source systems are all SQL 2005. The source databases are a few hundred MB in size and contain audit data, so they simply grow slowly by adding new records at approx 1 KB a second. As far as the replication monitor, agent logs and event logs show, everything is working fine - except that no data appears in my subscription database. The distribution agent doesn't seem to want to read the snapshot (and hence the initial state and schema) from the publisher. New transactions aren't applied, although they do seem to be arriving OK, as the replication monitor shows things like '5 transactions with 10 commands were delivered'. I would expect (as in previous times) to see statements about data being BCPed in the replication monitor. The snapshot is on the publisher in a shared folder. The subscriber can view the snapshot OK (\\repldata) and the alt snapshot folder is pointing at it, but the distribution agent doesn't seem to be making an attempt to read it. I tried changing the snapshot path to something incorrect and didn't even get an error saying it couldn't be accessed. After lots of googling etc. I found that sp_MSget_repl_commands is called by the subscriber on the distribution database on the publisher. Running a profiler, I can see that it's only called for one agent ID. After a reinit it's called for sequence number 0x0 as expected, so I thought that would mean it would look for the snapshot. However, looking on the publisher I see that there's data for two agents - the snapshot agent and the log reader agent (which is being queried). So I guess I need to tell the distribution agent to get the data for both. But how? And more importantly - why? It worked fine on the other two servers I've replicated. I'm not an SQL novice, but this is pretty much my first go at replication, so don't be afraid to accuse me of missing something obvious/stupid! I can get log files (e.g. from the distribution agent) if you want, but they don't seem to have any errors in them - it just starts up and starts applying log reader agent changes. Cheers, Dave

    Read the article

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2-based web servers, so my experience is many hours, but is limited to CentOS; specifically Amazon's distribution. I'm installing Apache using yum, therefore getting Amazon's default build of Apache. I want to implement canonical redirects from the non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite, BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache.

    TEST CASE: Launch an EC2 instance, e.g. Amazon Linux AMI 2013.03.1. SSH to the server and run:

        $ sudo yum install httpd
        $ sudo apachectl start
        $ sudo vi /etc/httpd/conf/httpd.conf
        $ sudo apachectl restart
        $ sudo vi /var/www/html/.htaccess

    In httpd.conf I changed the following, in the DocumentRoot section/scope:

        AllowOverride All

    In .htaccess, I added (EDIT: I added RewriteEngine On later):

        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    Permissions on .htaccess are correct, as far as I can tell:

        $ ls -al /var/www/html/.htaccess
        -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess

    Other info:

        $ httpd -v
        Server version: Apache/2.2.24 (Unix)
        Server built: May 20 2013 21:12:45

        $ httpd -M
        Loaded Modules:
         core_module (static)
         ...
         rewrite_module (shared)
         ...
         version_module (shared)
        Syntax OK

    EXPECTED BEHAVIOR:

        $ curl -I domain.com
        HTTP/1.1 301 Moved Permanently
        Date: Wed, 19 Jun 2013 12:36:22 GMT
        Server: Apache/2.2.24 (Amazon)
        Location: http://www.domain.com/
        Connection: close
        Content-Type: text/html; charset=UTF-8

    ACTUAL BEHAVIOR:

        $ curl -I domain.com
        HTTP/1.1 200 OK
        Date: Wed, 19 Jun 2013 12:34:10 GMT
        Server: Apache/2.2.24 (Amazon)
        Connection: close
        Content-Type: text/html; charset=UTF-8

    TROUBLESHOOTING STEPS: In .htaccess, I added:

        BLAH BLAH BLAH ERROR
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an error log entry:

        [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration

    Since I have root access on the server, I then tried moving my rewrite rule directly into the httpd.conf file. THIS WORKED. This tells us several important things are working:

        $ curl -I domain.com
        HTTP/1.1 301 Moved Permanently
        Date: Wed, 19 Jun 2013 12:36:22 GMT
        Server: Apache/2.2.24 (Amazon)
        Location: http://www.domain.com/
        Connection: close
        Content-Type: text/html; charset=UTF-8

    HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
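
    One detail that fits these symptoms exactly (a sketch of the usual fix): in per-directory (.htaccess) context, mod_rewrite strips the leading slash from the path it matches, so a pattern of ^/(.*) matches in httpd.conf but never in .htaccess. Making the slash optional works in both contexts:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/?(.*) http://www.domain.com/$1 [R=301,L]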

    Read the article

  • cd Command Linux and Mystery Flags

    - by Jason R. Mick
    Platform: CentOS 6.2. Shell: tcsh. I'm playing around with cd for a bash script and noticed the wondrous cd - option, but was left with many questions...

    1. Why the cd -? Isn't this redundant with cd ..? EDIT: As FatalError points out, these two commands don't do the same things... so the answer is "no".
    2. Can you delve farther back into your history with the - flag, a la a browser? E.g., when I type cd -, it takes me to my previous directory, but if I enter that command again, it takes me to the directory I just came from, creating a sort of loop. Is a shorthand for going back multiple levels supported? EDIT: I realize I can go back with cd .., but was hoping this could be a gateway to a less verbose deep back, e.g. cd -3 vs. cd ../../../. Hopefully that clarifies what I'm asking. EDIT2: As to the current feedback, while .. is a special directory, I don't see a reason why the shell's built-in cd couldn't use a shorthand for ../../ ... ../, e.g. cd ..5, or why the built-in couldn't have a history (a la auto pushd/popd) that could be turned on and used like cd -3. I get that this could be somewhat of a security/privacy risk, but I don't see how it's any worse than storing a command history, which most shells/terminals do.
    3. The manpage for cd, accessible via man cd and help cd (it's the same for either command), only lists the -L and -P flags. However, when I type cd --help it outputs: Usage: cd [-plvn][-|<dir>]. Am I right in assuming the other flags and the - (back) option are nonstandard?
    4. What are the -n and -v flags for? Both seem to take me back to my home directory; that's all I've been able to figure out via experimentation.

    A quick read of web resources [1][2] offered just the same sort of info that the man page did and didn't answer my questions. Note: the second Linux-centric resource above claimed cd only had two options (obviously not true in current CentOS), hence my assumption that this functionality could be non-standard.
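
    For the record, a short sketch of what is happening: cd - is shorthand for cd "$OLDPWD" and only remembers one directory (hence the two-directory loop), while a real multi-level history is what the pushd/popd directory stack provides.

        $ cd /tmp && cd /etc
        $ cd -               # back to /tmp; equivalent to: cd "$OLDPWD"
        /tmp
        $ cd -               # and back to /etc -- only one level is stored
        /etc

        $ pushd /var/log     # directory stacks give the deeper history
        $ pushd /usr/share
        $ dirs -v            # list the stack; in bash, cd ~2 jumps to entry 2
        $ popd               # return to /var/log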

    Read the article

  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system, running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications - especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require administrator access (e.g. Seagate BlackArmor) or simply fail without it -- since these programs are sending raw commands to a device, this is to be expected. I would like to be able to add the hashes of these particular programs to a whitelist and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems. What I'd like to be able to do:

    1. Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, which would lack the deny token and provide admin access.
    2. Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/) but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.

    Non-solutions:
    - RunAs: requires the administrator password! (but everyone calls it "sudo" anyway)
    - RunAs /savecred: nice idea, but appears to be completely insecure.
    - RUNASSPC: same concept as RunAs, uses "encrypted" files with credentials, but checks in user-space.
    - Scheduled Tasks: "fixed" permissions make this difficult, and it doesn't support interactive processes even if it did.
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"
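
    Despite the Scheduled Tasks reservation above, that route may be worth sketching, because a task set to "run only when the user is logged on" does start its program interactively in the current session (the task name and path are hypothetical, and a standard user may first need read/execute permission on the task):

        :: One-time setup from an elevated prompt:
        schtasks /create /tn "UnlockDrive" /tr "X:\unlock.exe" /sc once /st 00:00 /rl highest /f

        :: A standard user then triggers it without a UAC prompt:
        schtasks /run /tn "UnlockDrive"

    The command line is fixed at creation time, which is also what keeps the 'crafty' user from pointing it at something else.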

    Read the article

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface. To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):

        $ cd /path/to/data
        $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar"

    This runs for a while and then suddenly stops after transferring somewhere from 2-6 GB. The terminal on the source.host machine displays a "Write failed: broken pipe" error. The odd part is this: after this occurs, the source.host machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from it, and I cannot ping it from any other host on the network. I am equally unable to access any of the web services hosted on source.host. Running ifconfig on source.host shows the network adapter to be up and running as usual, with the correct IP address and everything. I tried restarting the networking service with

        $ /etc/init.d/networking restart

    but the problem does not go away. Restarting the machine makes it capable of talking to the network again -- it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal -- but attempting the copy operation again results in the same failure, requiring another restart. As an experiment, I tried replacing the tar/ssh pipeline above with a straight scp:

        $ scp -r datafolder/ [email protected]:~

    but to no avail. Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with

        # echo 12582911 > /proc/sys/net/core/wmem_max

    but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!
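
    One knob with a track record for virtual NICs dying under sustained transfer (a sketch; whether hv_netvsc on this kernel honors all of these toggles is an assumption to verify): disable the offload features so the guest kernel does segmentation and checksumming itself, then retry the copy.

        # Show current offload settings for the synthetic adapter:
        sudo ethtool -k eth0

        # Disable segmentation and checksum offload, then retry:
        sudo ethtool -K eth0 tso off gso off tx off rx off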

    Read the article

  • Detection of battery status totally messed up

    - by Faabiioo
    I already posted this question in the Ubuntu forum and on Stack Overflow. I forward it here with the hope of finding some different opinions about the problem. I have an Acer TravelMate 5730, which is 3 years old, running Ubuntu 10.04 LTS. One year ago I changed the battery because the old one died. Since then, everything worked like a charm. A week ago I was using my laptop running on battery; it was charged up to 60%. Suddenly it shut down, and for about 24h it was like the battery was totally broken: it didn't charge anymore and 'upower --dump' said state: critical. I was kind of resigned to buying a new battery when suddenly the orange light became green: the battery was charged and actually working; strangely, the battery indicator was stuck at 100%, even after 2 hours running. I tried again with the 'upower --dump' or 'acpi -b' commands and they kept saying the battery was discharging, though maintaining the percentage at 100%. Thus, the battery worked fine for up to 3 hours, without any warning when it was almost empty, likely to result in a brute shutdown. Today something different. The 'upower --dump' command says:

        ...
        present:             yes
        rechargeable:        yes
        state:               fully-charged
        energy:              0 Wh
        energy-empty:        0 Wh
        energy-full:         65.12 Wh
        energy-full-design:  65.12 Wh
        energy-rate:         0 W
        voltage:             14.481 V
        percentage:          0%
        capacity:            100%
        technology:          lithium-ion

    I tried to boot WinXP and the problem is pretty much the same, with the battery fully charged, percentage equal to 0%, and no way to fix it. While writing, the situation has changed again:

        present:             yes
        rechargeable:        yes
        state:               charging
        energy:              0 Wh
        energy-empty:        0 Wh
        energy-full:         65.12 Wh
        energy-full-design:  65.12 Wh
        energy-rate:         0 W
        voltage:             14.474 V
        percentage:          0%
        capacity:            100%
        technology:          lithium-ion

    ...charging, but it does not charge up. (Recall, the battery lasted 3 hours until yesterday!) So, the big question is: is it a hardware issue, like a dedicated internal circuit being broken? Or maybe it is just the battery that must be changed. Or, rather, some BIOS problem that could be fixed in some way. I'd appreciate any help that can shed some light on this annoying problem. Thanks.

    Read the article

  • Network unreachable on Ubuntu guest after trying to set up a host only network on Virtualbox

    - by gkb0986
    I have a Mac OS X host and a bunch of guests, including Ubuntu and Arch Linux. I was trying to set up a host-only network at eth1 to let me ssh into the system, but now eth0 isn't working properly either. Ubuntu can no longer connect to remote hosts or browse the internet; it tells me that the network is unreachable. What's gone wrong here? I've included some diagnostics below.

        $ ifconfig
        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:10968 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10968 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:897264 (897.2 KB)  TX bytes:897264 (897.2 KB)

    Other diagnostic commands and their output:

        $ sudo lspci -n
        00:00.0 0600: 8086:1237 (rev 02)
        00:01.0 0601: 8086:7000
        00:01.1 0101: 8086:7111 (rev 01)
        00:02.0 0300: 80ee:beef
        00:03.0 0200: 8086:100e (rev 02)
        00:04.0 0880: 80ee:cafe
        00:05.0 0401: 8086:2415 (rev 01)
        00:06.0 0c03: 106b:003f
        00:07.0 0680: 8086:7113 (rev 08)
        00:0d.0 0106: 8086:2829 (rev 02)

        $ sudo lshw -c network
        *-network DISABLED
             description: Ethernet interface
             product: 82540EM Gigabit Ethernet Controller
             vendor: Intel Corporation
             physical id: 3
             bus info: pci@0000:00:03.0
             logical name: eth0
             version: 02
             serial: 08:00:27:7d:22:df
             size: 1Gbit/s
             capacity: 1Gbit/s
             width: 32 bits
             clock: 66MHz
             capabilities: pm pcix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full firmware=N/A latency=64 link=no mingnt=255 multicast=yes port=twisted pair speed=1Gbit/s
             resources: irq:19 memory:f0000000-f001ffff ioport:d010(size=8)

        $ lsmod
        Module                  Size  Used by
        nls_utf8               12557  1
        isofs                  40257  1
        vboxsf                 43743  2
        vesafb                 13844  1
        snd_intel8x0           38570  2
        snd_ac97_codec        134869  1 snd_intel8x0
        ac97_bus               12730  1 snd_ac97_codec
        snd_pcm                97275  2 snd_intel8x0,snd_ac97_codec
        snd_seq_midi           13324  0
        snd_rawmidi            30748  1 snd_seq_midi
        snd_seq_midi_event     14899  1 snd_seq_midi
        rfcomm                 47604  0
        snd_seq                61929  2 snd_seq_midi,snd_seq_midi_event
        bnep                   18281  2
        bluetooth             180113  10 rfcomm,bnep
        ppdev                  17113  0
        psmouse                97519  0
        snd_timer              29990  2 snd_pcm,snd_seq
        joydev                 17693  0
        snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        vboxvideo              12622  1
        serio_raw              13211  0
        snd                    79041  11 snd_intel8x0,snd_ac97_codec,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore              15091  1 snd
        vboxguest             235498  7 vboxsf
        parport_pc             32866  0
        drm                   241971  2 vboxvideo
        i2c_piix4              13301  0
        snd_page_alloc         18529  2 snd_intel8x0,snd_pcm
        mac_hid                13253  0
        lp                     17799  0
        parport                46562  3 ppdev,parport_pc,lp
        usbhid                 47238  0
        hid                    99636  1 usbhid
        e1000                 108589  0
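
    Note that lshw reports eth0 as DISABLED and ifconfig lists only lo, so the adapters exist but are simply not configured in the guest. A sketch for Ubuntu's /etc/network/interfaces (assuming eth0 is the NAT/bridged adapter and eth1 the host-only one; 192.168.56.x is VirtualBox's default host-only range):

        # /etc/network/interfaces
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp           # internet-facing adapter

        auto eth1
        iface eth1 inet static         # VirtualBox host-only network
            address 192.168.56.10
            netmask 255.255.255.0

    Then bring them up with sudo /etc/init.d/networking restart (or sudo ifup eth0 eth1).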

    Read the article

  • Windows 7 virtual wireless adapter keeps going to sleep

    - by conners
    Just a quick question that I can't see mentioned anywhere online. I have a Windows 7 box configured like these guys recommend (http://www.itgeekdiary.com/windows-7-as-an-wi-fi-access-point/) simply so that I can have my Windows 7 box act as a Wi-Fi access point or Wi-Fi emitter. It's also called a Microsoft Virtual WiFi Miniport Adapter. But it powers off and shuts down automatically and stops working. Basically everything works as intended, and then it stops working when I am not at the Windows 7 PC for a long time. The problem seems to be that every time my PC goes to power save / sleep, the Windows 7 machine "wakes" but blooming heck the Wi-Fi has stopped, and you have to power cycle the PC (which is very uncool). When I power cycle, I have to run the following as administrator:

        C:\Windows\System32\netsh.exe wlan start hostednetwork

    I then tried a gazillion things involving services and power management, and eventually discovered that if I run the following commands as administrator it will be OK (for a bit):

        C:\Windows\System32\netsh.exe wlan stop hostednetwork
        C:\Windows\System32\netsh.exe wlan start hostednetwork

    But every 3rd or 4th time I try this trick, it simply fails. Why does this only work some of the time? What else I did by myself: in every "manage adapter properties" dialog that relates to the Wi-Fi, I right-clicked [Configure] -> [Power Management] and disabled "allow the computer to power off this device to save power" -- this made no difference. Also (and this is a bit annoying) there is no system tray app/GUI for the Microsoft Virtual WiFi Miniport Adapter output signal... none... so (lame as it sounds) the ONLY way I can check if it's on is to physically go to another device and scan. Lame. So my question can probably be solved by any of the following:

    a) Can I stop Windows 7 sleeping this Wi-Fi when the machine sleeps?
    b) Can I force Windows to wake this process on wake? If so, how?
    c) What is the service/process REALLY called, and how do I restart it if it crashes?
    d) How can I reset the Wi-Fi properly rather than power cycling the host machine?
    e) Does anyone have a link to a program or app that can sit in the system tray and show the Windows 7 Wi-Fi hotspot emission status (on/off etc.)?

    Since I am a programmer, I can easily write a VBS script / Windows exe to fix this (and I will share the solution), and the GUI problem too, if I can work out the actual service that is running and how netsh stops/starts it.
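
    A sketch of option (b) using the Task Scheduler (the trigger is the standard resume event -- Power-Troubleshooter event 1 in the System log; the task name is made up):

        :: Run once from an elevated prompt:
        schtasks /create /tn "RestartHostedNetwork" /rl highest /f ^
            /sc onevent /ec System ^
            /mo "*[System[Provider[@Name='Microsoft-Windows-Power-Troubleshooter'] and EventID=1]]" ^
            /tr "netsh wlan start hostednetwork"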

    Read the article

  • bluetooth connection using pybluez

    - by srj0408
    I am working on Bluetooth -- not exactly on Bluetooth stack development, but using Bluetooth in one of my projects. I had done all that before using some of the BlueZ commands like hciconfig and hcitool scan, then simple-agents, and the serial module inside Python. But that was quite ad hoc: we were able to connect only one specific device based on its Bluetooth address, and there was no facility for reconnection once the devices were disconnected. Now I want to try this in a sequential manner (I am doing all this on an RPi, and at present on Ubuntu 12.04):

    i) Store some names in a file along with some other information with respect to each device.

    ii) Run a script to find devices in the locality with those names, and report any that are found. For this step I took a reference from the Bluetooth book made available by MIT. Below is the script, but it only searches for a single name:

        from bluetooth import *

        target_name = "XT1033"
        target_address = None

        nearby_devices = discover_devices()
        for address in nearby_devices:
            if target_name == lookup_name(address):
                target_address = address
                break

        if target_address is not None:
            print "found target bluetooth device with address ", target_address
            connect_socket(target_address)
        else:
            print "could not find target bluetooth device nearby"

    iii) Connect the device using a client socket. But I don't have any device on which I can write a simple Python script; my client can be any device that will be publishing data. Now I came across a script in the same book that actually connects to a client requesting permission to connect to the server:

        from bluetooth import *

        port = 1
        server_sock = BluetoothSocket(RFCOMM)
        server_sock.bind(("", port))
        server_sock.listen(1)

        client_sock, client_info = server_sock.accept()
        print "Accepted connection from ", client_info

        data = client_sock.recv(1024)
        print "received [%s]" % data

        client_sock.close()
        server_sock.close()

    Here client_sock, client_info = server_sock.accept() provides the client address and the port requested to be connected. Can I pass the address obtained from the earlier script to this, so that the server connects to the client?

    iv) Then if the client gets disconnected, reconnect (simple polling can be used). All this stuff can be done using bash and PyBluez functions, but I want to do it in a sequential manner. I am not a master in Python, but I can do some small stuff. Can anyone guide me on this, or direct me to more useful resources with which I can continue my coding after finding the "X", "Y" named devices?
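
    For step iii, the server's accept() cannot be pointed at a scanned address -- it waits for whoever dials in -- but the address from step ii can drive an outgoing RFCOMM client connection instead. A sketch (PyBluez; the port number, polling interval and address are assumptions):

        import time
        from bluetooth import BluetoothSocket, RFCOMM

        def connect_and_read(target_address, port=1):
            # Outgoing RFCOMM connection to the address found by the scan.
            sock = BluetoothSocket(RFCOMM)
            sock.connect((target_address, port))
            try:
                while True:
                    data = sock.recv(1024)
                    if not data:
                        break             # peer closed the connection
                    print "received [%s]" % data
            finally:
                sock.close()

        # Step iv: simple polling reconnect.
        while True:
            try:
                connect_and_read("00:11:22:33:44:55")   # address from step ii
            except IOError:
                time.sleep(5)             # out of range -- wait and retry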

    Read the article

  • ffmpeg - join / merge on top of each other

    - by AisIceEyes
    I'm trying to join two videos together, one on top of the other. I already ran these two ffmpeg commands:

        ffmpeg -i 2_Out_of_Control.VOB -aspect 16:9 \
            -vf "yadif=0:-1:0,crop=w=714:h=476:x=6:y=0,scale=1280:720,boxblur=lp=13" \
            -c:v libx264 -preset medium \
            -c:a copy \
            '2(blurred)Out_of_Control.mp4'

        ffmpeg -i 2_Out_of_Control.VOB \
            -vf "yadif=0:-1:0,crop=w=714:h=476:x=6:y=0,scale=1080:720" \
            -c:v libx264 -preset medium \
            -c:a copy \
            '2(clear)Out_of_Control.mp4'

    I'm currently stuck on putting the "clear" version on top of the "blurred" version. I'm not sure how to do that. Can anybody help please? I've been googling for around 2 days already. I only achieved it by using OpenShot, but I would prefer an ffmpeg command that overlays the two videos. Edit: I want the "clear" video to sit at the top of the "blurred" video, horizontally centered. Edit 2: the console output would be the same shape as above:

        ffmpeg -i 2(blurred)Out_of_Control.mp4 \
            -i 2(clear)Out_of_Control.mp4 \
            -aspect 16:9 \
            -vf <something that joins the two: the blurred at the bottom, the clear centered on top> \
            -c:v libx264 -preset medium \
            -c:a copy \
            '2_Out_of_Control_VOB.mp4'

    Edit 3: here is the output of ffmpeg -i 2_Out_of_Control.VOB:

        $ ffmpeg -i 2_Out_of_Control.VOB
        ffmpeg version git-2013-10-03-c7fe2a3 Copyright (c) 2000-2013 the FFmpeg developers
          built on Oct 4 2013 05:22:06 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
          configuration: --prefix=/home/username/ffmpeg_build --extra-cflags=-I/home/username/ffmpeg_build/include --extra-ldflags=-L/home/username/ffmpeg_build/lib --bindir=/home/username/bin --extra-libs=-ldl --enable-gpl --enable-libass --enable-libfdk-aac --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-x11grab
          libavutil      52. 46.100 / 52. 46.100
          libavcodec     55. 34.100 / 55. 34.100
          libavformat    55. 19.100 / 55. 19.100
          libavdevice    55.  3.100 / 55.  3.100
          libavfilter     3. 88.101 /  3. 88.101
          libswscale      2.  5.100 /  2.  5.100
          libswresample   0. 17.103 /  0. 17.103
          libpostproc    52.  3.100 / 52.  3.100
        Input #0, mpeg, from '2_Out_of_Control.VOB':
          Duration: 00:05:00.01, start: 0.500000, bitrate: 4574 kb/s
            Stream #0:0[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, smpte170m), 720x480 [SAR 8:9 DAR 4:3], max. 9334 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
            Stream #0:1[0x80]: Audio: ac3, 48000 Hz, stereo, fltp, 384 kb/s
        At least one output file must be specified
        $
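
    A sketch of the overlay (two inputs call for -filter_complex rather than -vf; in the overlay filter, W and w are the widths of the bottom and top videos, so (W-w)/2 centers the clear copy along the top edge; filenames with parentheses need quoting, and audio follows ffmpeg's default stream selection -- add -map 0:a to force the blurred input's track):

        ffmpeg -i '2(blurred)Out_of_Control.mp4' -i '2(clear)Out_of_Control.mp4' \
            -filter_complex "[0:v][1:v]overlay=x=(W-w)/2:y=0" \
            -c:v libx264 -preset medium \
            -c:a copy \
            '2_Out_of_Control_VOB.mp4'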

    Read the article

< Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >