Search Results



  • Error while installing boost_1_54

    - by Farhat
    On trying to install Boost I get this error during the configuration checks. Googling did not give any pointers.

        [root@heracles boost_1_54_0]# ./b2 install
        Performing configuration checks

            - 32-bit                   : no  (cached)
            - 64-bit                   : yes (cached)
            - arm                      : no  (cached)
            - mips1                    : no  (cached)
            - power                    : no  (cached)
            - sparc                    : no  (cached)
            - x86                      : yes (cached)

        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched

            - has_icu builds           : no  (cached)

        warning: Graph library does not contain MPI-based parallel components.
        note: to enable them, add "using mpi ;" to your user-config.jam

            - zlib                     : yes (cached)
            - iconv (libc)             : yes (cached)
            - icu                      : no  (cached)
            - icu (lib64)              : no  (cached)
            - compiler-supports-ssse3  : yes (cached)
            - compiler-supports-avx2   : no  (cached)
            - gcc visibility           : yes (cached)
            - long double support      : yes (cached)

        warning: skipping optional Message Passing Interface (MPI) library.
        note: to enable MPI support, add "using mpi ;" to user-config.jam.
        note: to suppress this message, pass "--without-mpi" to bjam.
        note: otherwise, you can safely ignore this message.

        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched

            - zlib                     : yes (cached)

    How can the alternative for allocator_sources be located? Thanks.
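    A possible workaround, not from the question: Boost.Build can skip an individual library with --without-<name>, so if Boost.Coroutine is not needed by your project, the failing targets can be left out of the build entirely:

        # sketch -- assumes nothing you build links against Boost.Coroutine
        ./b2 --without-coroutine install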


  • Copying compressed files from Server 2008 R2 network share to XP client via VPN fails

    - by Dejan Janjuševic
    At first sight the question looks similar to this one. I have experienced an odd behavior while trying to copy a certain file from a Windows Server 2008 R2 network share to a Windows XP Professional client via VPN. The VPN was set up using RRAS on the server machine. I will try to provide as much information as possible in order to make the issue clear.

    When trying to copy the compressed file, sized ~2.5 MB (via Explorer or CMD, it doesn't matter), the process stalls after some 20% and produces an error message after a few seconds:

        Cannot copy filename: The specified network name is no longer available.

    If I start the command ping -t 192.168.2.1 (where the IP address specified belongs to the server) side by side with the copy command, I can clearly see that the ping command times out for a few seconds as the copy process stalls. When this happens, all network activity is frozen. After a few seconds the network recovers and ping continues to run normally, but the copy process stands still before it displays the above error message.

    Copying other files (I tried 4-5 files), of which some are larger and some are smaller, succeeds. It seems I can copy all uncompressed files; as soon as I try to copy an archive, the process freezes. Even a 707 KB archive can't be copied. I can only reproduce this behavior on 2 machines, both Windows XP Professional, one with SP2 and the other with SP3. Other XP clients don't have this problem, and neither do Windows 7 clients. If I connect to the server using Remote Desktop Connection without using VPN from either of these 2 machines (using the same user account), I can copy anything I want normally, even these "problematic" files. Does anyone have any clue about what could possibly be going on?
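    Not from the question, but a common first check when only compressed files fail over a VPN is an MTU problem: archives are incompressible, so the tunnel carries consistently full-size packets for them, which is exactly where fragmentation issues surface. Probing the path MTU from the affected XP client is cheap (the packet sizes below are just examples):

        ping -f -l 1400 192.168.2.1
        ping -f -l 1300 192.168.2.1

    If the larger size fails with "Packet needs to be fragmented but DF set", lowering the MTU on the VPN interface would be the next experiment.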


  • tap interfaces always disabled in linux bridge

    - by Dani Camps
    I have a physical interface eth0, and I want to create two virtual interfaces and bridge them with eth0. For this purpose I do:

        # Create the virtual interfaces
        tunctl -t tap0
        tunctl -t tap1
        ifconfig tap0 up
        ifconfig tap1 up

        # Create the bridge
        brctl addbr br0
        brctl stp br0 off
        brctl addif br0 eth0
        brctl addif br0 tap0
        brctl addif br0 tap1

        # Bring up the bridge
        ifconfig br0 up

    However, my problem is that the tap interfaces always appear disabled in the bridge, and no traffic flows to them.

        $ brctl show br0
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.080027cabeba       no              eth2
                                                                tap0
                                                                tap1

        $ brctl showstp br0
        br0
          bridge id              8000.080027cabeba
          designated root        8000.080027cabeba
          root port                 0                  path cost                  0
          max age                  20.00               bridge max age            20.00
          hello time                2.00               bridge hello time          2.00
          forward delay            15.00               bridge forward delay      15.00
          ageing time             300.01
          hello timer               0.00               tcn timer                  0.00
          topology change timer     0.00               gc timer                 298.42
          flags

        eth2 (1)
          port id                8001                  state              forwarding
          designated root        8000.080027cabeba     path cost                  4
          designated bridge      8000.080027cabeba     message age timer       0.00
          designated port        8001                  forward delay timer    12.97
          designated cost           0                  hold timer              1.24
          flags

        tap0 (2)
          port id                8002                  state                disabled
          designated root        8000.080027cabeba     path cost                100
          designated bridge      8000.080027cabeba     message age timer       0.00
          designated port        8002                  forward delay timer     0.00
          designated cost           0                  hold timer              0.00
          flags

        tap1 (3)
          port id                8003                  state                disabled
          designated root        8000.080027cabeba     path cost                100
          designated bridge      8000.080027cabeba     message age timer       0.00
          designated port        8003                  forward delay timer     0.00
          designated cost           0                  hold timer              0.00
          flags

    Is there any way to set the tap interfaces to the forwarding state? I do not understand why they are not, given that STP is disabled. Cheers, Daniel
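    A likely explanation, not stated in the question: a tap device reports no carrier until some process actually holds it open, and the bridge keeps a carrier-less port in the disabled state regardless of STP. This is easy to verify with standard tools:

        cat /sys/class/net/tap0/carrier   # 0 means no carrier, so the port stays disabled
        ip link show tap0                 # look for NO-CARRIER in the flags

    Once an application (a VM, for example) attaches to tap0, the port should move to forwarding on its own.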


  • Win7 Credential manager and accessing SQL Server from outside of the domain

    - by David Lively
    My SQL Server is set to use Windows authentication. If I am connected to the domain directly from my Win7 Ultimate x64 machine, SQL Server Management Studio (SSMS) will let me authenticate with Windows authentication. However, if I am connected via the VPN (from a different machine that is not joined to the domain), it won't. If I start SSMS with the following command line:

        C:\Windows\system32>runas /netonly /user:domainname\username "C:\Program Files (x86)\Microsoft SQL...\ssms.exe"

    then connecting to the SQL Server (which is in the domain) with Windows authentication works fine. I'd like to save these credentials so that I don't have to launch SSMS from the command line or modify the shortcut. I know I can use the SysInternals ShellRunAs extension to do this, but I again have to enter my domain username and password each time, and shift+right-click to see that menu option. The Windows Credential Manager seems designed to solve this problem, and it works for network shares. However, it doesn't seem to work for SSMS. Any suggestions? I've tried using the /savecred option with runas to create the necessary credentials, but that appears to be incompatible with the /netonly option: running the above command line with the addition of /savecred just displays the runas help screen. Grrr. Argh.
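    One avenue not covered above, as an assumption to test rather than a confirmed fix: Credential Manager entries can also be created from the command line with cmdkey, which sometimes behaves differently from entries added through the GUI. The server name here is a placeholder:

        cmdkey /add:sqlserver.domainname.local /user:domainname\username /pass

    When /pass is given without a value, cmdkey prompts for the password, so nothing ends up in the shell history.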


  • "Error: Unknown error" when trying to start virtual machine from VMware server

    - by slhck
    Problem: We are running VMware Server 2.0.0 build-116503 on an Ubuntu 10.04 LTS server. There is a virtual machine installed, running Lotus Domino on Windows Server 2003. Ever since a sudden power failure last week, the virtual machine won't properly start up. When I run the command:

        vmrun -T server -h https://127.0.0.1:8333/sdk -u root -p jk2x2208 start "[standard] lotus/test.vmx"

    ... after 30 seconds it displays:

        Error: Unknown error

    That's about everything I get. I know the command is right, since that's what we've used all the time. This happened last Saturday after a scheduled backup shutdown, and somehow I was able to start it again. This week it happened again, and I can't get it back up. Occasionally, I also get:

        Error: Cannot connect to the virtual machine

    When I get this and I then run the start command, it seemingly works. Why is this so random? Which configuration could have been messed up?

    What I've tried / other info: I already shut down VMware itself with /etc/init.d/vmware stop. This works. I tried to start VMware again with /etc/init.d/vmware start. It complains that it's "not configured", which is why I had to rm /etc/vmware/not_configured, and then try to start again. There have been no software updates on the machine, and no configuration changes.
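    Not from the question, but the per-VM log usually says far more than vmrun's "Unknown error": every VMware VM keeps a vmware.log next to its .vmx file. Assuming the default datastore location for the "[standard]" datastore on Linux (the path is an assumption and may differ on this install):

        tail -n 50 "/var/lib/vmware/Virtual Machines/lotus/vmware.log"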


  • Two parts: linux startup script to connect to bluetooth and cron to keep it connected

    - by D.R.
    I have a mini Bluetooth keyboard and a Raspberry Pi running a Debian-based distro. I know the MAC address of the keyboard, but for this question let's just use AA:BB:CC:DD:EE:FF. Right now I have to have a wired keyboard connected, as well as my Bluetooth dongle for the mini keyboard, and on the wired keyboard I have to run the following when the device boots up:

        sudo hidd --connect AA:BB:CC:DD:EE:FF

    If the device goes idle for too long, the Bluetooth disconnects and I have to pull out my wired keyboard and retype that same command. What I'm looking for is a way to have that command run at startup, and a way to sense if it gets disconnected so that it will auto-reconnect. The annoying thing is that the keyboard has to be in pairing mode (even though it has already been paired) when I run that command, otherwise it tells me the host is down. So perhaps the script needs to prevent it from disconnecting due to inactivity; otherwise I'll have to put it back in pairing mode to reconnect. So to recap:

    - A script to connect at startup (I can make sure to put the keyboard into pairing mode before turning it on)
    - A script to prevent it from disconnecting (maybe some sort of signal to send to it every 60 seconds or something?)

    Any help with this is greatly appreciated! StackOverflow is always the best place to find answers to weird questions! I've been searching long and hard for an answer, but finally had to resort to coming here! Thanks!
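    A minimal sketch of both parts, assuming the old BlueZ hidd tool used above; the script name and location are made up for illustration:

        #!/bin/sh
        # /usr/local/bin/bt-kbd.sh (hypothetical) -- reconnect the keyboard if it dropped
        MAC="AA:BB:CC:DD:EE:FF"
        hidd --show | grep -qi "$MAC" || hidd --connect "$MAC"

    Calling this once from /etc/rc.local covers startup, and a crontab entry re-runs it every minute to catch disconnects:

        * * * * * root /usr/local/bin/bt-kbd.sh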


  • tar Cannot open: No such file or directory

    - by Jakobud
    Fresh install of CentOS 5.4. I downloaded the following:

        http://prdownloads.sourceforge.net/webadmin/webmin-1.510.tar.gz

    The MD5 sum is correct (cdcc09d71d85d81914a90413eaf21d3f). The file is located at /tmp/webmin-1.510.tar.gz; tmp and webmin-1.510.tar.gz both have chmod 777, and I am logged in as root.

    Command:

        tar -zxfv webmin-1.510.tar

    Result:

        tar: v: Cannot open: No such file or directory
        tar: Error is not recoverable: exiting now
        tar: Child returned status 2
        tar: webmin-1.510.tar: Not found in archive
        tar: Error exit delayed from previous errors

    I've never run across this before. It's like it thinks that v is a file I want to extract, but it's one of the command arguments... If I leave out the v...

        tar -zxf webmin-1.510.tar.gz

    ...the command stalls. It doesn't do anything; it just goes to the next line and no prompt comes up. I have to Ctrl-C to get back to the prompt, and an ls verifies that it didn't extract anything. My first reaction is that it's not a valid tar/gz file or something, but the MD5 matches just fine. So I'm at a loss just a bit...
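    The first error, at least, has a mechanical explanation (my reading, not from the post): in a bundled option group, f consumes the very next argument as the archive name, so -zxfv makes tar look for an archive literally named "v". Putting f last, with the full filename, avoids that:

        tar -zxvf webmin-1.510.tar.gz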


  • How To Find Reasons Why a Site Goes Online/Offline

    - by HollerTrain
    Seems today a website I manage has been going online and offline throughout the entire day. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. So here is what I DO know:

    I use a program that pings the server every minute and emails me when the server is not responding, so I know exactly when the site is online and offline. The site went offline between 8pm and 12pm on 12.28, and around the 1am hour early morning on 12.29 (New York City timezone; all times below are in the same timezone).

    At the time of the ups/downs I see a lot of strain on memory usage. Look at the load average while the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). I then ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi), after which the memory usage goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much.

    While the site was going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site goes up/down, may this be one of the reasons?

    I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.
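    Not from the question, but two standard commands make it easier to capture the culprit at the moment of an outage than reading top interactively, since their output can be logged from cron:

        free -m                            # memory/swap snapshot
        ps aux --sort=-%mem | head -15     # top 15 processes by memory use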


  • Tracking down source of duplicate email messages in Outlook / Exchange environment

    - by Ken Pespisa
    I have a few users, who are also BlackBerry users, that occasionally have duplicate emails generated from their "mailbox". I put mailbox in quotes because I'm not exactly sure where the duplicates are created. One of these users is in non-cached mode and the other is in cached mode, and both experience the problem. In fact, the non-cached-mode user was originally experiencing the problem while in cached mode, and I made the switch a few weeks ago to attempt to solve the problem. Today I discovered the issue still exists. I'm not sure whether the fact that they are BlackBerry users could be causing the problem at all; I don't see how, but I felt I should mention it anyway.

    Does anyone have ideas on how I might begin to troubleshoot this? I can see in the non-cached user's "Sent Items" that the message was sent only once. I confirmed the message does not state that there was a conflict, and in fact that makes sense because they are in non-cached mode. On the server, we have a mail journaling feature turned on for our third-party mail archiving system, and I can see that that system sees two sent messages. Likewise, the recipient does in fact have two messages in their inbox with consecutive message IDs ([email protected] and [email protected]). It would seem that the duplicates are generated on the client, but is there a way to tell for sure?


  • Sporadic email delivery to one user

    - by minamhere
    I have a user that occasionally does not receive emails from outside our organization. It does not seem to matter whether the other person is replying to an initial email or sending a new message. I have checked Exchange System Manager and there is no record of the sender at all during the time period, and no record of the message getting captured by the spam software (GFI Mail Essentials). The sender does not receive an NDR or any other indication that the message didn't arrive. It seems to me that these messages are not reaching our servers at all. But this is only impacting one user (that I am aware of), and not all the time: some messages get through without any problem, others just disappear.

    The senders are not related at all. One is in another country, one uses AOL, one uses a corporate Exchange server locally. I can't seem to find a pattern. Where else can I look to try to figure out where these messages are going or getting captured? Are there additional logs that I can enable, either within GFI or Exchange, that might shed some light on this? Thanks. We are using Exchange 2003 on Server 2003. The desktop client is Outlook 2003 on Windows XP Pro.
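    One check that bypasses the senders entirely (my suggestion, not from the question): hand-deliver a test message to the server over SMTP and watch each response; if this works reliably while outside mail still vanishes, the loss is upstream of Exchange. The hostname and addresses are placeholders, and each line after telnet is typed inside the session:

        telnet mail.example.com 25
        HELO test.local
        MAIL FROM:<[email protected]>
        RCPT TO:<[email protected]>
        DATA
        Subject: delivery test

        test body
        .
        QUIT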


  • NRPE: Unable to read output with check_connections plugin

    - by Wlodzimierz
    I'm using a plugin which gives warnings or criticals based on established connections. If I run it on the local machine it gives:

        root@graber:/usr/lib/nagios/plugins# ./check_connections -w 1 -c 5 -C sshd
        CRITICAL Established connections: 6

    I know, I run as root. But:

    Rights on the file:

        root@graber:/usr/lib/nagios/plugins# ls -all check_connections
        -rwxr-xr-x 1 nagios nagios 5459 2012-07-06 10:19 check_connections

    /etc/sudoers:

        Defaults env_reset
        root    ALL=(ALL:ALL) ALL
        %admin  ALL=(ALL) ALL
        nagios  ALL=(ALL) NOPASSWD: /usr/bin/lsof
        nagios  ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

    /etc/nagios/nrpe.cfg:

        nrpe_user=nagios
        nrpe_group=nagios
        dont_blame_nrpe=1
        command_prefix=/usr/bin/sudo
        command[check_connections]=/usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd

    Log from the remote host:

        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Handling the connection...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host address is in allowed_hosts
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host is asking for command 'check_connections' to be run...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Running command: /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd
        2012-07-06T11:19:11+02:00 graber nrpe[26100]: Return Code: 2, Output: NRPE: Unable to read output

    Why is this happening? I'm out of ideas; I've searched Google for 2 days now :)
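    Not from the post, but two standard checks for "NRPE: Unable to read output": first reproduce the call exactly as NRPE runs it, then rule out sudo's requiretty default (a frequent culprit on some distros), since sudo failing without a tty leaves no stdout for NRPE to read:

        # run the command as the nagios user, the way NRPE would
        sudo -u nagios /usr/bin/sudo /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd

        # in /etc/sudoers (edit via visudo), only if "Defaults requiretty" is in effect:
        Defaults:nagios !requiretty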


  • Cannot install git-core using macports

    - by robUK
    Hello. Snow Leopard 10.6.4, MacPorts 1.9.1. I have just installed MacPorts and I want to install git-core. However, I get the following errors:

        --->  Computing dependencies for git-core
        --->  Dependencies to be installed: python26 db46 gdbm readline sqlite3 rsync popt
        --->  Building db46
        Error: Target org.macports.build returned: shell command failed
        Log for db46 is at: /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_databases_db46/main.log
        Error: The following dependencies failed to build: python26 db46 gdbm readline sqlite3 rsync popt
        Error: Status 1 encountered during processing.
        To report a bug, see <http://guide.macports.org/#project.tickets>

    I have tried doing a port selfupdate and a port clean all and then trying to install again, but I still get the same problem. I have also tried installing the dependency db46 on its own. Here is the log message:

        :error:build Target org.macports.build returned: shell command failed
        :debug:build Backtrace: shell command failed
            while executing
        "command_exec build"
            (procedure "portbuild::build_main" line 8)
            invoked from within
        "$procedure $targetname"
        :info:build Warning: the following items did not execute (for db46):
            org.macports.activate org.macports.build org.macports.destroot org.macports.install

    This is my first time using MacPorts. Many thanks for any suggestions.
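    The backtrace alone doesn't say what failed; the actual compiler error usually sits a few lines earlier in the log file the error message names. A way to retry with full output and then inspect that log (commands only, no fix implied):

        sudo port clean db46
        sudo port -v install db46
        less /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_databases_db46/main.log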


  • Estimating compressed file size using a list parameter

    - by Sai
    I am currently compressing a list of files from a directory in the following format:

        tar -cvjf test_1.tar.gz -T test_1.lst --no-recursion

    The above command will compress only those files mentioned in the list. I am doing this because the list is generated such that it fits a DVD. However, during compression the compression rate pushes the result below the estimated file size, and abundant space is left on the DVD. This is something like a knapsack problem. I would like to estimate the compressed file size and add some more files to the list. I found that it is possible to estimate file size using the following command:

        tar -cjf - Folder/ | wc -c

    This command does not take a list parameter. Is there a way to estimate compressed file size? I am also looking into options like Perl scripts, etc.

    Edit: I think I should provide more information, since I have been doing a lot of web searching. I came across a Perl script (link) that sort of emulates the knapsack algorithm. The current problem with that script is that it splits the files in their original state. When I compress the files after splitting them, there are opportunities for adding more files, which I consider inefficient. There are 2 ways I could resolve the inefficiency:

    a) Compress individual files and save them in a directory using a script. The compressed files could provide a best estimate: I could generate a list from a folder of compressed files and use it on the uncompressed ones.

    b) Check whether the compressed file's size is less than the required size. If so, keep adding files until the requirement is met. However, the addition of new files to the compressed file is an optimization problem by itself.
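    Combining the two commands above should answer the literal question, since -T works just as well when the archive is written to stdout (my suggestion, untested on the asker's data):

        tar -cjf - -T test_1.lst --no-recursion | wc -c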


  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use on an unencumbered player. The curl aspect of this is working, as I can run the browser and curl side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is --sout behaving one way in a script (non-interactive shell?) vs. another way in the foreground (interactive shell)?
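    My reading, not from the post: the failed stream chain in the log still contains literal single quotes (dst="'#standard{...}'"), the classic symptom of quotes stored inside a variable. The shell does not re-parse quotes on expansion, so cvlc receives them as data. Storing the command as an array sidesteps this:

        # sketch: keep each argument separate instead of one flat string
        CMD=(cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' "mms://...")
        exec "${CMD[@]}"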


  • Can't connect remotely to Windows Server 2008 R2

    - by JohnyD
    I have a new Dell R710 server running Windows Server 2008 R2. I have one of its 4 NICs set up and the rest are not being used. I have successfully given it an IP address, network mask, and DNS servers, and I can ping and resolve this machine from anywhere else in the network. However, when I try to connect to it via RDP, it does one of several things:

    1) It might just outright refuse me with the message "This computer can't connect to the remote computer. Try connecting again."
    2) It might connect me and let me choose the account I would like to log on as... but when you select an account, you receive the same message as in #1.
    3) It might actually allow you to connect, but only for about 1 minute, and then you receive the same message and it closes your session.

    I have configured the firewall service to allow RDP over the domain network connection. This had no noticeable effect. I have now disabled the firewall for all 3 networks and have even stopped the Windows Firewall service, and I am still having the same issue. I am new to Server 2008 R2 and things are very different. Please give me any advice you can on how to resolve this issue and/or any other gotchas that are sure to come my way. The 2003 to 2008 learning curve seems steep. Thanks
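    Not from the question, but for completeness: on 2008 R2 the RDP firewall exception can be enabled for all profiles in one command, which rules out a half-applied GUI change (run from an elevated prompt):

        netsh advfirewall firewall set rule group="remote desktop" new enable=Yes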


  • Ubuntu Laptop as a wireless hotspot on bridge mode

    - by nixnotwin
    I have a wired router to which my Ubuntu laptop connects via Ethernet. The wireless NIC of the laptop acts as a wireless hotspot in master mode; I use hostapd for this. I have bridged eth0 and wlan0, so my wireless clients that connect to my laptop over wifi get an IP from the wired router via DHCP, and the devices get registered at the wired router (the laptop is just an access point). I use the following commands to get my laptop+accesspoint working:

        sudo brctl addbr br0
        sudo brctl addif br0 eth0
        sudo hostapd /etc/hostapd/hostapd.conf &
        sudo dhclient -d br0 &
        sudo ifconfig wlan0 192.168.1.15 netmask 255.255.255.0 up
        sudo brctl addif br0 wlan0

    These commands enable me to access the internet on my wireless clients and also on the laptop which is acting as the access point. But if I reboot the wired router (without rebooting the laptop that is acting as access point), internet access on the laptop+accesspoint gets lost, while on the wireless clients it works fine. I have not been able to figure out a command which will reset the laptop interfaces to default settings, so every time the router reboots I have to reboot the laptop too to get back to default settings, so that I can re-enter the commands above.

    My first question is: how can I keep my bridge+accesspoint up and running even though the router reboots? And is there a command to set the interfaces to a default state? (ifdown -a doesn't work; after issuing the command the bridge still remained.)
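    For the second question, a hedged sketch: a bridge is not touched by ifdown -a and has to be dismantled explicitly before the interfaces return to a normal state:

        sudo ifconfig br0 down
        sudo brctl delif br0 wlan0
        sudo brctl delif br0 eth0
        sudo brctl delbr br0
        sudo dhclient eth0   # re-acquire a lease on the plain interface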


  • Resuming downloads in Firefox

    - by Kim
    Unfortunately, Firefox has still failed to add the option to resume downloads. I've run into this problem SO MANY times, and in my previous searches I found posts saying Firefox was going to fix that. As of 3.6.3, they haven't.

    I just tried Free Download Manager (FDM) again, having the Firefox addon FlashGot use it. The download gets passed to FDM and fails, giving the error message "access denied, invalid username or password". No password was required. The site I'm trying to get the file from is turbobit.net, which limits download speeds to 100 KB/sec and has a 59-second countdown before you get the link. I guess it's transparently using a password on their end. If I just download normally (save to disk), the download starts fine, but it fails after 30 minutes to 1 hour (always different), my wi-fi connection stops briefly, and I have to start all over. So I will never be able to download a large file. I also tried DTA instead of FDM with FlashGot, and I get an "access denied" message in DTA. Again, I reload, wait the 59 seconds, and download with Firefox, and the download starts fine. The failure message in the Firefox Downloads window is "source file at http... could not be read".

    Any help would be greatly appreciated. When is Firefox going to finally add the ability to resume downloads????? Is there some other software I haven't found using Google that will work?
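    One stopgap worth mentioning (my addition, and it may well hit the same session-password wall the download managers did): command-line downloaders have long supported resuming a partial file:

        wget -c "http://.../large-file.zip"   # placeholder URL; -c continues a partial download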


  • get-eventlog issue

    - by Jim B
    I wanted to get a quick report of some log entries I saw on a server, so I ran:

        Get-EventLog -LogName System -Newest 10 -ComputerName fs1 | fl

    I got events back, but the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event: 'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the EventID property, it's correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer, both locally and remotely.

    Here is the PowerShell version info:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                      Value
        ----                      -----
        CLRVersion                2.0.50727.3603
        BuildVersion              6.0.6002.18111
        PSVersion                 2.0
        WSManStackVersion         2.0
        PSCompatibleVersions      {1.0, 2.0}
        SerializationVersion      1.1.0.1
        PSRemotingProtocolVersion 2.1
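    A standard workaround, not from the post: event descriptions are rendered from message DLLs registered on the machine that logged them, so formatting the message on the remote side avoids the local lookup entirely. Assuming PowerShell remoting is enabled on fs1:

        Invoke-Command -ComputerName fs1 -ScriptBlock { Get-EventLog -LogName System -Newest 10 | Format-List }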


  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up, and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them, for each and every page?

    Here is an example of one of the wget commands that my shell script generates (into another shell script) to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now I realize the script is not as savvy as it could be, but it is doing what I need at the moment, except that, as you can see from the rm command, I would like to prevent wget from downloading those files in the first place if possible.

    I almost forgot to mention: there are two wget commands because the first one downloads the page as index.html, and for some reason it does not open in my browser. However, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page (the same file really, with an alternate name) opens up fine. Fixing that would also help streamline the process.
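    wget's reject list looks like the right tool here (my suggestion, worth testing since -R/--reject interacts with -p): it takes a comma-separated list of file names, suffixes, or patterns, and matching files are skipped:

        wget -p -nd -A jpg,html -k -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/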


  • Why is 32-bit mode required in IIS 7.5 for my app?

    - by Jonas Lincoln
    I have a .NET 4 web application running on a 64-bit 2008 server. I can only get it to run when I set the app pool's "Enable 32-Bit Applications" setting to true. All DLLs are compiled for .NET 4 (verified with corflags.exe). How can I figure out why Enable 32-bit is required?

    The error message from the event log when starting as a 64-bit app pool:

        Event code: 3008
        Event message: A configuration error has occurred.
        Event time: 2011-03-16 08:55:46
        Event time (UTC): 2011-03-16 07:55:46
        Event ID: 3c209480ff1c4495bede2e26924be46a
        Event sequence: 1
        Event occurrence: 1
        Event detail code: 0

        Application information:
            Application domain: removed
            Trust level: Full
            Application Virtual Path: removed
            Application Path: removed
            Machine name: NMLABB-EXT01

        Process information:
            Process ID: 4324
            Process name: w3wp.exe
            Account name: removed

        Exception information:
            Exception type: ConfigurationErrorsException
            Exception message: Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format.
                at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective)
                at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory()
                at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal()
                at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig)
                at System.Web.Compilation.BuildManager.CallPreStartInitMethods()
                at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException)

            Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format.
                at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
                at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks)
                at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
                at System.Reflection.Assembly.Load(String assemblyString)
                at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective)

        Request information:
            Request URL: "our url"
            Request path: "url"
            User host address: ip-address
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: "app-pool"

        Thread information:
            Thread ID: 6
            Thread account name: "app-pool"
            Is impersonating: False
            Stack trace:
                at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective)
                at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory()
                at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal()
                at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig)
                at System.Web.Compilation.BuildManager.CallPreStartInitMethods()
                at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException)

        Custom event details:
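    A hedged way to hunt for the offender, not from the source: BadImageFormat on 'System.Data' in a 64-bit worker process usually means the loader hit an x86-only image first, often a native DLL sitting in bin. Scanning bin with corflags helps, since managed DLLs report their PE type and 32BITREQ flag while native DLLs make corflags fail with an invalid-managed-header error, which is itself the clue (path is a placeholder; interactive cmd syntax, double the % in a batch file):

        cd C:\path\to\site\bin
        for %f in (*.dll) do @(echo %f & corflags "%f" | findstr /i "PE 32BITREQ")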


  • Samba between Ubuntu server 10.10 and Windows Vista, Windows 7

    - by chepukha
    I have a Linux box running Ubuntu Server 10.10. I have installed Samba on it and want to share files with my laptops, which run Windows Vista Home and Windows 7 Home. I have been struggling with the setup for almost a month but couldn't get it right.

    If I try to access the share folder from Windows Vista, I get the message: "Windows cannot access \\server_ip_address". Error code: 0x80070035. The network path was not found.

    If I access from Windows 7, then after entering the password to log in I can see the list of share folders on the Linux box. But if I click on a share folder, I get the same error message as above. Tailing /var/log/samba/log.windows7-pc gives the following message:

        [2011/03/16 00:17:41.427238, 0] smbd/service.c:988(make_connection_snum)
          canonicalize_connect_path failed for service sharemedia, path /root/sharemedia

    Here are my settings in smb.conf:

        [global]
        share modes = yes
        netbios name = Samba
        workgroup = WORKGROUP
        wins support = yes
        encrypt passwords = true

        [sharemedia]
        comment = Testing sharing using Samba
        path = /root/sharemedia/
        public = yes
        valid users = samba_usr_name
        ; make sure all files have sensible permissions
        create mask = 0660
        force create mask = 0660
        directory mask = 2770
        force directory mask = 2770
        directory security mask = 0000
        ; normal share parameters
        read only = no
        browseable = yes
        writable = yes
        guest ok = no
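    A hedged observation, not from the post: the canonicalize_connect_path failure points at the share living under /root, a directory smbd usually cannot traverse on behalf of an ordinary user. Moving the share somewhere world-traversable is a cheap test (the new path is my choice):

        sudo mkdir -p /srv/sharemedia
        sudo chown samba_usr_name /srv/sharemedia
        # set "path = /srv/sharemedia" in smb.conf, validate, then restart:
        testparm
        sudo service smbd restart   # service name may be "samba" on some releases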


  • Adding Netem Filter Rules

    - by fontsix
    I am new to programming and to Linux. My question is: is it possible to add netem filter rules later? I want to create a PHP interface for netem, and I don't know how many filters will be required; this should be somewhat dynamic.

    For example: a user with a static IP starts a netem command (latency) with the PHP interface. This means these five commands are executed by PHP in the first step ($classid = 11, $handle = 10):

        sudo tc qdisc add dev eth0 handle 1: root htb
        sudo tc class add dev eth0 parent 1: classid 1:1 htb rate 100Mbps
        sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps
        sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms
        sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid

    Now, if there were a second user who wants to use netem independently of the first user, I only want to execute the last 3 commands:

        sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps
        sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms
        sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid

    There is an algorithm for increasing the variables $classid and $handle, so that should work. Now my question: is it possible just to add these 3 commands to add a new class with a new qdisc and a new filter rule? Or how can I realize it? The Apache error_log tells me "sh: line 1: flowid: command not found", but I can't find any mistake. I hope you can help. Best regards, fontsix
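    Two notes, neither from the post but both easy to check: tc accepts new classes, qdiscs, and filters on an existing hierarchy at any time, so re-running only the last three commands per user is fine in principle. And "sh: line 1: flowid: command not found" is what the shell prints when the command string contains a line break right before "flowid", so the PHP side is probably splitting the filter command across lines. A hypothetical sketch of building it as one line:

        // keep the whole tc filter rule on one logical line before exec()
        $cmd = "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 "
             . "u32 match ip dst $dest flowid 1:$classid";
        exec($cmd, $output, $ret);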


  • Install and enforce a scheduled task across a Windows domain

    - by Ricket
    We have a small domain of about 70 Windows computers (XP and 7). We want to schedule a command (an update mechanism) to run on all computers periodically, and we want the task to run regardless of the computer's connection to our network (i.e. the task should run even on a laptop that isn't connected to our VPN). We have a Microsoft System Center Essentials 2010 server, so that might come in handy. The options I see are these:

    1. Do it completely manually: install the scheduled task by hand, or remotely using psexec (and the at command?), for each computer in our network, and enforce that newly imaged computers have the task installed before being deployed to the employee (or bake the task into the image). High initial cost (having to do this for each of 70 computers), but building it into the image might work... though there is some maintenance in making sure the task is added to everything. And I fear that a year or two down the road we will have forgotten about it, or gotten sloppy, or have new IT employees who miss this step, and some computers won't have the task.

    2. Have one of our servers run a script that loops through all computers and psexec's the command on each computer in the network. But it would only run on running, connected computers, so this solution wouldn't work.

    I suspect SCE could do something like this too, but again this is not a good solution. Neither of these is ideal, and I'm certain there is a better way to do it -- right? What is the best way to accomplish this task?
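    For what it's worth (not from the question): option 1 can at least be scripted, since schtasks creates a task on a remote machine in one line, and the task then runs locally whether or not the laptop is on the VPN. Task name, schedule, and command are placeholders:

        schtasks /Create /S computername /RU SYSTEM /SC DAILY /ST 09:00 /TN "UpdateCheck" /TR "C:\scripts\update.cmd"

    Group Policy Preferences (Server 2008 and later) can also deploy scheduled tasks domain-wide, which removes the per-machine step entirely.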


  • Outlook 2007 "Mark as Not Junk" Dialog Confusion

    - by David
    Outlook 2007's "Not Junk" button opens the "Mark as Not Junk" dialog. The dialog works correctly if I keep the "Always trust e-mail from <email address>" option checked; that is, the message is removed from the Junk folder and returns to the Inbox. However, if I uncheck the "Always trust" box, pressing OK dismisses the dialog but nothing else happens. Why not?

    According to Outlook help, "When you mark a message as not junk, you are given the option of adding the sender or the mailing list name to your Safe Senders List or Safe Recipients List." That sure makes it sound like this is just an option, not necessary for the core functionality of the action. I really don't want to trust a (possibly forged) From: address, but I do want my mail back in the Inbox. I could manually drag it, but I'm assuming that marking a message as not junk also trains some kind of Bayesian filter. Am I mistaken? Thanks.

