Search Results

Search found 2368 results on 95 pages for 'jeff smith'.


  • Timezone reset on Amazon AMI update

    - by Jeff
    I have a server set up through Amazon AWS. It's Amazon's RHEL-based AMI. When I initially booted up the machine, I set the proper timezone by creating a symlink:

        ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime

    This works just fine, but whenever Amazon provides an update to their AMI, the timezone automatically reverts to UTC. Is there a way around this, or does it have to be manually set after each update?
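    A plausible way to make the setting survive updates — a sketch only, assuming Amazon's AMI follows the stock RHEL convention of regenerating /etc/localtime from /etc/sysconfig/clock during tzdata updates (worth verifying on the instance before relying on it):

        # Record the zone where RHEL's update tooling looks for it
        # (edit the existing ZONE line instead if the file already has one)
        echo 'ZONE="America/Los_Angeles"' >> /etc/sysconfig/clock
        # Copy rather than symlink, so a package update that replaces the
        # zoneinfo target can't silently flip the system clock back to UTC
        cp /usr/share/zoneinfo/America/Los_Angeles /etc/localtime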

    Read the article

  • apache webserver unresponsive with server-status showing all child processes waiting for connection

    - by Jeff
    My setup: I have 3 nearly identical webserver machines serving the same heavily loaded dynamic website, with simple load balancing over DNS. The service has been working for over two years with the same Apache config: apache2, php5, Ubuntu 8.04, Linux 2.6.24-29-server.

    My problem: for about two weeks I've been experiencing problems with this config. Nearly every day there is one short period of about 5 minutes in which the website is unreachable. I'm still able to log in to the servers over SSH. If I run htop, I see the machine simply doing nothing. I have about 1000 Apache processes running, but no CPU activity. I've used Apache's mod_status to debug this situation. The process scoreboard looks like this:

        _C.___K_______________________R._______.__K_K____K___C_______.__
        _______C__________.___________________________________.________C
        _.____K__________K___K_WK_____._K_____________________________._
        W______K__________K________.____________________._______C_______
        _C_.__K__K____.._.._____________________________________C_______
        _R___________K___.______C________.C_________.______._____C______
        ____________KKC____K_____K__WC_________________C_____.__.____.__
        _____________________C_________K______.____C______._____________
        _.___C____.___.___________________________.K______.____K________
        W__.___________________C.__.____K________K_______R_._.__._______
        __C__C_.__________C__C_______._____W______________C_.___C_______
        ____.______C_____________C________.____C____________.________._K
        __.__________.K_____________K_________._____C____.K__________KW_
        __K.W________R_________._______.___W___________.____.__K_____W__
        W___.___..________W____K

    Scoreboard Key: "_" Waiting for Connection, "S" Starting up, "R" Reading Request, "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup, "C" Closing connection, "L" Logging, "G" Gracefully finishing, "I" Idle cleanup of worker, "." Open slot with no current process

    So most of the processes are just waiting for a connection. After about 5 minutes the situation returns to normal: there are far fewer processes on every machine, most workers are in the "." state (meaning they are open to process a request), and of course the website is reachable.

    So I'm trying to find something in the logs, but there is simply nothing: the Apache access log is silent for about 4 minutes, and the same goes for the error log. I also cannot find anything wrong in the other system logs. The situation is the same on all 3 webservers (all of them have this load peak and unresponsiveness at the same time), so I do not think this is hardware related. But I do think this might be related to some network (TCP) issue. Any ideas?

    EDIT: some more information that I have just discovered: it has just happened again, and I was able to verify that I'm also not able to connect locally when the problem occurs. I gathered some connection statistics with the following command after it happened:

        netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c

          109 CLOSE_WAIT
         2652 ESTABLISHED
            2 FIN_WAIT1
           11 LAST_ACK
           12 LISTEN
           91 SYN_RECV
            1 SYN_SENT
           16 TIME_WAIT

    If I execute the same command some time later, I get something like this:

            4 CLOSING
          108 ESTABLISHED
           18 FIN_WAIT1
          182 FIN_WAIT2
           37 LAST_ACK
           12 LISTEN
           50 SYN_RECV
        11276 TIME_WAIT

    So in the normal situation I have only 100-200 open client connections being handled by Apache at any given moment. During this "crash" I have a lot more connections. What is the best way to analyse this?

    EDIT2: the important lines in apache2.conf are:

        KeepAlive On
        MaxKeepAliveRequests 20
        KeepAliveTimeout 1

        <IfModule mpm_prefork_module>
            ServerLimit          920
            StartServers          30
            MinSpareServers       80
            MaxSpareServers      120
            MaxClients           920
            MaxRequestsPerChild  700
        </IfModule>

    It is an Apache2 prefork with mod_php. The server has 8 GB of RAM and a 4 GB swap partition.
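    For the "how to analyse this" part, one approach (a sketch, not a tested recipe: it assumes mod_status is reachable on localhost and that /var/log/apache-stall.log is an acceptable place to write) is to sample the scoreboard and the TCP state counts in a loop, so the minutes around the next stall are captured automatically and can be lined up against the access and error logs afterwards:

        #!/bin/sh
        # Sample Apache's machine-readable status page and the TCP
        # connection-state histogram every 10 seconds, with timestamps.
        while true; do
            date                                          >> /var/log/apache-stall.log
            curl -s 'http://localhost/server-status?auto' >> /var/log/apache-stall.log
            netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c >> /var/log/apache-stall.log
            sleep 10
        done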

    Read the article

  • Do RAID controllers commonly have SATA drive brand compatibility issues?

    - by Jeff Atwood
    We've struggled with the RAID controller in our database server, a Lenovo ThinkServer RD120. It is a rebranded Adaptec that Lenovo / IBM dubs the ServeRAID 8k. We have patched this ServeRAID 8k up to the very latest and greatest:

        - RAID BIOS version
        - RAID backplane BIOS version
        - Windows Server 2008 driver

    This RAID controller has had multiple critical BIOS updates even in the short 4 months we've owned it, and the change history is just... well, scary.

    We've tried both write-back and write-through strategies on the logical RAID drives. We still get intermittent I/O errors under heavy disk activity. They are not common, but they are serious when they happen, as they cause SQL Server 2008 I/O timeouts and sometimes failure of SQL connection pools. We were at the end of our rope troubleshooting this problem. Short of hardcore stuff like replacing the entire server, or replacing the RAID hardware, we were getting desperate.

    When I first got the server, I had a problem where drive bay #6 wasn't recognized. Switching out hard drives to a different brand, strangely, fixed this -- and updating the RAID BIOS (for the first of many times) fixed it permanently, so I was able to use the original "incompatible" drive in bay 6. On a hunch, I began to assume that the Western Digital SATA hard drives I chose were somehow incompatible with the ServeRAID 8k controller.

    Buying 6 new hard drives was one of the cheaper options on the table, so I went for 6 Hitachi (aka IBM, aka Lenovo) hard drives under the theory that an IBM/Lenovo RAID controller is more likely to work with the drives it's typically sold with. Looks like that hunch paid off -- we've been through three of our heaviest load days (Mon, Tue, Wed) without a single I/O error of any kind. Prior to this we regularly had at least one I/O "event" in that time frame. It sure looks like switching brands of hard drive has fixed our intermittent RAID I/O problems!

    While I understand that IBM/Lenovo probably tests their RAID controller exclusively with their own brand of hard drives, I'm disturbed that a RAID controller would have such subtle I/O problems with particular brands of hard drives. So my question is: is this sort of SATA drive incompatibility common with RAID controllers? Are there some brands of drives that work better than others, or are "validated" against particular RAID controllers? I had sort of assumed that all commodity SATA hard drives were alike and would work reasonably well with any given RAID controller (of sufficient quality).

    Read the article

  • Why does my computer slow down so much when attaching Bluetooth dongle?

    - by Jeff Yates
    I have a Bluetooth dongle and I plugged it into my work laptop (a Dell Latitude D830). Windows detects the Generic Bluetooth USB or similar and then proceeds to go incredibly slow with a process, avp.exe¹, taking 50% CPU. The System Idle process is getting most of the other 50% CPU and the avp.exe process is only at Normal priority. The machine doesn't seem to recover, so I had to turn the power off and reboot. Now, I haven't installed the drivers yet for the device, which I am doing now and I expect it to resolve the problem, so I am not asking how to fix this. I would rather know why Windows goes so slow in the first place. What is it trying to do and failing at so badly that it barely crawls? ¹ Part of Kaspersky Internet Security suite

    Read the article

  • Manually accessing GMail via IMAP

    - by Jeff Mc
    I'm trying to connect to Gmail IMAP, but I am unable to execute any commands after login. I'm running

        openssl s_client -connect imap.gmail.com:993

    to connect. Then:

        * OK Gimap ready for requests from 128.146.221.118 42if6514983iwn.40
        . CAPABILITY
        * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA XLIST CHILDREN XYZZY SASL-IR AUTH=XOAUTH
        . OK Thats all she wrote! 42if6514983iwn.40
        . LOGIN {email removed} {password removed}
        * CAPABILITY IMAP4rev1 UNSELECT LITERAL+ IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE
        . OK {email removed} authenticated (Success)
        . CAPABILITY

    at which point it simply hangs with the connection open. I'm guessing Gmail pushes you off to a node in a cluster after it authenticates me?
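    One detail that may be worth ruling out (an assumption on my part, not a confirmed diagnosis): IMAP expects CRLF line endings, and s_client sends a bare LF for each terminal line unless told otherwise, which can leave some servers waiting mid-session. Reconnecting with the -crlf flag makes s_client translate line feeds into CR+LF:

        openssl s_client -crlf -connect imap.gmail.com:993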

    Read the article

  • How do you add more space to a Fedora (LVM) partition?

    - by Trevor Boyd Smith
    In a nutshell, I have a VM that ran out of space. I increased the size of the VM's hard drive to be 4 times bigger, but the OS partition is still only using 1x the space. I need to change the LVM partition to take up the extra space, but I don't know how to extend the LVM partition. (NOTE: To make the screenshots given below I had to boot from a live CD for gnome-partition-manager (aka gparted). Very unfortunately, gparted is only able to "detect LVM" and can't do any LVM operations.) Here is what gparted shows. Please notice that the "resize" option is not available. The problem: I can't find good directions on how to grow the LVM partition via GUI or command line! How do you grow an LVM partition that was created by the default Fedora install? If you are giving command-line directions, please explain what each command does.
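    For what it's worth, here is a sketch of the sequence that usually applies to a default Fedora install, with each step's purpose noted. The device and volume names (/dev/sda2, VolGroup00/LogVol00) are placeholders — check yours with pvs and lvs first — and the underlying partition must already have been grown (e.g. with fdisk or parted) to cover the new disk space:

        pvs                                              # list physical volumes; find the one backing your volume group
        pvresize /dev/sda2                               # grow the physical volume to fill the enlarged partition
        lvextend -l +100%FREE /dev/VolGroup00/LogVol00   # hand all newly freed extents to the root logical volume
        resize2fs /dev/VolGroup00/LogVol00               # grow the ext3/ext4 filesystem to match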

    Read the article

  • Xml Serialization and the [Obsolete] Attribute

    - by PSteele
    I learned something new today: Starting with .NET 3.5, the XmlSerializer no longer serializes properties that are marked with the Obsolete attribute. I can’t say that I really agree with this. Marking something Obsolete is supposed to be something for a developer to deal with in source code. Once an object is serialized to XML, it becomes data. I think using the Obsolete attribute as both a compiler flag as well as controlling XML serialization is a bad idea. In this post, I’ll show you how I ran into this and how I got around it.

    The Setup

    Let’s start with some make-believe code to demonstrate the issue. We have a simple data class for storing some information. We use XML serialization to read and write the data:

        public class MyData
        {
            public int Age { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public List<String> Hobbies { get; set; }

            public MyData()
            {
                this.Hobbies = new List<string>();
            }
        }

    Now a few simple lines of code to serialize it to XML:

        static void Main(string[] args)
        {
            var data = new MyData
            {
                FirstName = "Zachary",
                LastName = "Smith",
                Age = 50,
                Hobbies = {"Mischief", "Sabotage"},
            };

            var serializer = new XmlSerializer(typeof(MyData));
            serializer.Serialize(Console.Out, data);
            Console.ReadKey();
        }

    And this is what we see on the console:

        <?xml version="1.0" encoding="IBM437"?>
        <MyData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Age>50</Age>
          <FirstName>Zachary</FirstName>
          <LastName>Smith</LastName>
          <Hobbies>
            <string>Mischief</string>
            <string>Sabotage</string>
          </Hobbies>
        </MyData>

    The Change

    So we decided to track the hobbies as a list of strings. As always, things change and we have more information we need to store per-hobby. We create a custom “Hobby” object, add a List<Hobby> to our MyData class and we obsolete the old “Hobbies” list to let developers know they shouldn’t use it going forward:

        public class Hobby
        {
            public string Name { get; set; }
            public int Frequency { get; set; }
            public int TimesCaught { get; set; }

            public override string ToString()
            {
                return this.Name;
            }
        }

        public class MyData
        {
            public int Age { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }

            [Obsolete("Use HobbyData collection instead.")]
            public List<String> Hobbies { get; set; }

            public List<Hobby> HobbyData { get; set; }

            public MyData()
            {
                this.Hobbies = new List<string>();
                this.HobbyData = new List<Hobby>();
            }
        }

    Here’s the kicker: This serialization is done in another application. The consumers of the XML will be older clients (clients that expect only a “Hobbies” collection) as well as newer clients (that support the new “HobbyData” collection). This really shouldn’t be a problem – the obsolete attribute is metadata for .NET compilers. Unfortunately, the XmlSerializer also looks at the compiler attribute to determine what items to serialize/deserialize.

    Here’s an example of our problem:

        static void Main(string[] args)
        {
            var xml = @"<?xml version=""1.0"" encoding=""IBM437""?>
        <MyData xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
          <Age>50</Age>
          <FirstName>Zachary</FirstName>
          <LastName>Smith</LastName>
          <Hobbies>
            <string>Mischief</string>
            <string>Sabotage</string>
          </Hobbies>
        </MyData>";

            var serializer = new XmlSerializer(typeof(MyData));
            var stream = new StringReader(xml);
            var data = (MyData) serializer.Deserialize(stream);

            if (data.Hobbies.Count != 2)
            {
                throw new ApplicationException("Hobbies did not deserialize properly");
            }
        }

    If you run the code above, you’ll hit the exception. Even though the XML contains a “<Hobbies>” node, the obsolete attribute prevents the node from being processed. This will break old clients that use the new library, but don’t yet access the HobbyData collection.

    The Fix

    This fix (in this case), isn’t too painful. The XmlSerializer exposes events for times when it runs into items (Elements, Attributes, Nodes, etc…) it doesn’t know what to do with. We can hook in to those events and check and see if we’re getting something that we want to support (like our “Hobbies” node).

    Here’s a way to read in the old XML data with full support of the new data structure (and keeping the Hobbies collection marked as obsolete):

        static void Main(string[] args)
        {
            var xml = @"<?xml version=""1.0"" encoding=""IBM437""?>
        <MyData xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
          <Age>50</Age>
          <FirstName>Zachary</FirstName>
          <LastName>Smith</LastName>
          <Hobbies>
            <string>Mischief</string>
            <string>Sabotage</string>
          </Hobbies>
        </MyData>";

            var serializer = new XmlSerializer(typeof(MyData));
            serializer.UnknownElement += serializer_UnknownElement;
            var stream = new StringReader(xml);
            var data = (MyData)serializer.Deserialize(stream);

            if (data.Hobbies.Count != 2)
            {
                throw new ApplicationException("Hobbies did not deserialize properly");
            }
        }

        static void serializer_UnknownElement(object sender, XmlElementEventArgs e)
        {
            if (e.Element.Name != "Hobbies")
            {
                return;
            }

            var target = (MyData) e.ObjectBeingDeserialized;
            foreach (XmlElement hobby in e.Element.ChildNodes)
            {
                target.Hobbies.Add(hobby.InnerText);
                target.HobbyData.Add(new Hobby { Name = hobby.InnerText });
            }
        }

    As you can see, we hook in to the “UnknownElement” event. Once we determine it’s our “Hobbies” node, we deserialize it ourselves – as well as populating the new HobbyData collection. In this case, we have a fairly simple solution to a small change in XML layout. If you make more extensive changes, it would probably be easier to do some custom serialization to support older data.

    A sample project with all of this code is available from my repository on bitbucket.

    Technorati Tags: XmlSerializer,Obsolete,.NET

    Read the article

  • Loss of network connectivity when playing video on Optoma HD180 projector

    - by Jeff Fohl
    Hi Folks - New to Super User, so I hope this question fits in with the guidelines. Very strange problem I am having, and I am at a loss as to how to continue troubleshooting this one. The basic problem is that when I attempt to watch streamed video on a particular display device (an Optoma HD180 projector), my network connectivity drops like a stone to barely measurable levels.

    This is my setup: I have a Dell H2C 730x running Windows 7 64-bit. This particular computer has two ATI Radeon HD 4800 video cards. I have two Samsung 22" monitors connected to one card, and an Optoma HD180 digital projector connected to the other card via an HDMI cable. My internet connection is normally a reliable 6 Mbps.

    The problem I am having occurs when I stream video (or even just browse the web) on the Optoma projector. When I do this, my internet connection drops to practically zero (just a few kilobits per second). When I move the browser away from the projector, and over to one of my Samsung monitors, the internet connection comes right back. Note that the Optoma projector is on and enabled as a third monitor all this time. I can move the mouse around on the projector without triggering the problem.

    I tried pinging my router when I was playing a movie on one of the monitors, and I get a 1 millisecond response. However, when I have the movie playing on the Optoma projector, pinging the router gives me response times in the hundreds of milliseconds, or times out completely. So, it clearly is something local to my machine - and not some sort of throttling occurring down the line.

    I would think that it is possibly something to do with the HDMI driver conflicting somehow with my network driver (which is a USB-based wireless connection). This one has me really stumped. Anyone have any ideas?

    Read the article

  • "Outlook must be online or connected to complete this action" windows XP, outlook 2007, connect to exchange using HTTP

    - by bob franklin smith harriet
    Hey, I can't connect to an Exchange server using Windows XP and Outlook 2007 with the "connect anywhere over HTTP" process. It had been working until recently, and the user reports no recent changes to his environment. The error is "Outlook must be online or connected to complete this action".

    It will prompt me for the username and password, which I can enter, and then it will give the error; however, this only happens when I delete the account and enter all details for the Exchange server again. The client computer that is unable to connect using Outlook can connect to the HTTPS mail service, log in, and send/receive fine. Nobody else has reported issues.

    Making a test environment with a clean install of XP and Outlook 2007 gives the same error, but using Windows 7 and Outlook 2007 connects perfectly fine every time. I also removed all stored passwords using control keymgr.dll, which didn't help.

    Any assistance or ideas would be appreciated; at this point nothing I've tried from TechNet or Google works <_<

    Read the article

  • Website doesn't work when missing "www"

    - by jeff
    Hello everyone, does anyone know the solution to this problem? I checked my zone file and there are 2 A records:

        mydomainname.com.    14400    IN    A    ip.address.x.x
        localhost            14400    IN    A    127.0.0.1

    I'm on CentOS 5.2, by the way. Thanks for the help!!
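    For reference (a generic sketch, not a diagnosis — "mydomainname.com" and the address are placeholders from the question), a zone that answers for both forms of the name usually carries a pair of records like:

        mydomainname.com.    14400    IN    A        ip.address.x.x
        www                  14400    IN    CNAME    mydomainname.com.

    And since the apex A record is already present here, it may be worth confirming what the nameserver actually returns for each form:

        dig mydomainname.com A
        dig www.mydomainname.com A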

    Read the article

  • Group policy applied to AD OU attributes

    - by Eric Smith
    I'm not well-versed in AD, so I would like to resolve a question I have with regard to AD information. I understand that it is possible to apply group policy to OUs, thereby restricting access. What I'd like to know is: is it possible to do the same with OU attributes? Some context would help. There's a requirement to store address information in AD (IMO, a natural fit), but for various reasons, although obviously things like name should be globally accessible, access restrictions are desired on the address. In this case, is it possible to apply security to the address portion of the OU attributes, or does each address have to be broken into a separate OU (a solution that feels smelly, given that an address doesn't have identity)?

    Read the article

  • Create Virtual Image of Laptop before Formatting

    - by Simon Mark Smith
    I have a 3 year old laptop running Windows XP that I used for business. Although I have not used the laptop in over a year, I now want to re-commission it with Windows 7 and a fresh install. Before I do the fresh install I want to create a Virtual Image of the laptop that I can keep and potentially run on my desktop machine should I ever need to access any of the old files/projects that it contains currently. I know that most people will say just copy the files over to your desktop, but my concern is the configuration of the laptop. I used to use it for development and it has older versions of Visual Studio, SQL Server, Active X controls etc, etc than I currently use so I really want to preserve the environment not just the files. So really I am asking what is the best tool-set/method to achieve this? I understand there are free VM tools available but I have never done this before and would appreciate any help.

    Read the article

  • exchange 2003 to 2007, recipient update services

    - by Jeff
    Going to try my best to explain: our Exchange 2003 was migrated to Exchange 2007 (I was not here for the migration). Looking in Active Directory Sites and Services - Services - Microsoft Exchange - company - Address List Container - Recipient Update Services, I have two Recipient Update Services:

    1) Recipient Update Service (Enterprise Configuration)
    2) Recipient Update Service (my-domain)

    My guess is one of these is left over from the old Exchange 2003. (I'm guessing this because the migration was not done properly; the Exchange server was never actually uninstalled, it was just taken offline and then later disposed of.) Could having two of these cause me issues, and if so, how do I know which one I could get rid of? I'm asking because I honestly have no idea; it might even be nothing. Thanks

    Read the article

  • data replication from a production web server back to the staging web server

    - by Dennis Smith
    We have two web servers, development/staging and production. Code and some documentation are moved from the staging area to production either through on-demand jobs or nightly via a global replication job. The production server of course sits isolated in a DMZ. There is some content that gets uploaded to the live server that needs to be replicated back to staging. Our security team is locking the network down (and they should) and restricting access to the live server. What are the best suggestions for replicating "live" data back to "stage", and for backing up the live server as well?
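    One pattern that tends to fit a locked-down DMZ (a sketch under assumptions: the host name, user, and paths are placeholders, and it presumes the security team will allow an SSH connection initiated from the inside) is to have the staging box pull the uploaded content, so the DMZ host needs no credentials for, or route into, the internal network:

        # Run from staging on a cron schedule; transfers only changed files.
        rsync -az -e ssh produser@live.example.com:/var/www/uploads/ /var/www/uploads/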

    Read the article

  • Sending an Email from 2 Mail Servers

    - by Ted Smith
    We are currently attempting to move away from using a "local" mail (Exchange) server to a cloud-based offering for all our automated emails. The problem is that we send and receive thousands of emails a day and uptime is quite critical, so the business does not want to put all its eggs in one basket: if we use a cloud-based offering (Mailgun), they would like a backup in case it goes down.

    So my question is: would it be possible to set multiple A, TXT and CNAME records pointing to multiple IP addresses, so that if one mail server goes down we can automatically start sending emails from the failover (without them being blocked by a reverse DNS lookup)? I know we will still need to adjust the MX record for incoming emails, but it is acceptable to not receive emails for a short time (1-2 hours). Does this make sense?
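    On the outbound side, the record that usually matters for not getting blocked is the SPF TXT record, which can authorize several senders at once. A hedged illustration only (the IP and the Mailgun include are placeholder values to adapt, not a verified configuration):

        example.com.    IN    TXT    "v=spf1 ip4:203.0.113.10 include:mailgun.org ~all"

    Reverse DNS itself, though, is set by whoever owns each sending IP, so the PTR records for the local server and for the cloud provider's IPs have to be handled separately.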

    Read the article

  • Which Twitter app do you use?

    - by Jeff Fritz
    It seems like everyone is writing their own Twitter front-end application nowadays. So I must ask: What is your preferred Twitter front-end management application? Please discuss: Form Factor: Desktop, Mobile, Web based OS Support: Windows, Mac, Linux, iPhone, BlackBerry, etc Killer Feature that made you convert Please try to format your responses using the bullet points above. This way, we can all easily compare features. Please list 1 app per response

    Read the article

  • 2 subnets off of 1 PC with 2 NICs

    - by Jeff
    I have a general setup I'd like to do with some IP cameras. This seems like it will work, but I think I may be missing something. Our system consists of a video recorder PC connected to a switch, which is connected to a number of IP cameras. I'd like to connect this system into an existing network, but I want it on a different subnet. The main reason is that the cameras use a lot of bandwidth that I don't want slowing down the existing network.

    My idea was to install 2 NICs in the video recorder PC. One NIC connects to the existing network on 192.169.1.x, for example, and the other NIC connects to the switch with the cameras. This NIC would be 192.168.100.x. Then we could remote to the video recorder PC with a GoToMyPC type thing for administration via the existing network.

    I've included a diagram of how I see this working, but I'm a little fuzzy on the setup of the NICs (if this can work at all). My problem may be trying to deal with 2 subnets without a router, but it really doesn't seem like one is necessary in this situation. BTW, gliffy is cool.
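    This kind of dual-homed setup generally hinges on giving only the NIC on the existing network a default gateway. A sketch of the Windows side (the interface names and addresses are placeholders matching the question; run from an elevated prompt):

        rem NIC facing the existing network: static address plus the default gateway
        netsh interface ip set address "Office NIC" static 192.169.1.50 255.255.255.0 192.169.1.1
        rem NIC facing the camera switch: static address and no gateway, so camera
        rem traffic stays on its own subnet and never tries to cross over
        netsh interface ip set address "Camera NIC" static 192.168.100.1 255.255.255.0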

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts the function that requires the cloudservers gem:

        package { "rubygems":
          ensure   => installed,
          provider => "gem";
        }

        package { "cloudservers":
          ensure   => installed,
          provider => "gem",
          require  => Package["rubygems"];
        }

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          $parts   = split($name, ',')
          $address = $parts[0]
          $ip      = $parts[1]
          $aliases = $parts[2]

          host { $address:
            ip           => $ip,
            host_aliases => $aliases,
          }
        }

    Is there a way to stop the function getting run so early, or at least to have its run depend upon the library being installed? Alternatively, is there a way that I can add dependencies somewhere in the functions folder that will be available to the function?

    Read the article

  • Where do I connect the two red SATA cables on my motherboard?

    - by Dennis Smith
    I have a Compaq Presario PC SR5110NX. The Processor is AMD Athlon 64 3800+. It has 512 MB of RAM and a 40GB Hard Drive. I'm running Windows XP Professional on it. I have 2 SATA drives, one is black and the other is white. I have 2 red little cables, and they have the letters and numbers on them. On one side of the cable it says "HP P/N:5188-2897 0720". My motherboard is a MCP61PM-HM Rev 1.0B. Where do I connect the two SATA connectors?

    Read the article

  • dd-wrt router firmware QoS troubleshooting

    - by Jeff Atwood
    I've been using the dd-wrt firmware on my router and I like it a lot! But -- I'm not sure the quality of service (QoS) is working on it. I have it set up as follows:

        http, port 80 -- Premium
        bittorrent, port 6969 -- Bulk
        https, port 443 -- Premium
        dns, port 53 -- Premium

    Per the QoS documentation, these levels are:

        bandwidth is allocated based on the following percentages of uplink and downlink values for each class:
        Exempt: 100mbps - ignores global limits.
        Premium: 75% - 100%
        Express: 15% - 100%
        Standard: 10% - 100%
        Bulk: 1.5% - 100%

    This doesn't entirely seem to work, though -- with busy torrents going I get major pauses in my web browsing, which sucks! The QoS documentation gives some steps to check the QoS:

        What you'll be interested to look at will be the first set of source and destination IP, including the port numbers. Next the presence of l7proto and the "mark" field. The entries indicate the current live connection QoS priority applied on them based on the "mark" field. The "mark" values correspond to the following
        Exempt: 100
        Premium: 10
        Express: 20
        Standard: 30
        Bulk: 40
        (no QoS matched): 0
        You may see "mark=0" for some l7proto service even though they are configured in the list of QoS rules. This may mean that the layer 7 pattern matching system didn't match a new or changed header for that protocol. Custom service on port matches will usually take care of these.

    On port 6969 (bittorrent) I see a weird mixture of stuff with mark=0 and mark=40, like so:

        cat /proc/net/ip_conntrack

        udp 17 105 src=98.162.182.42 dst=1.2.3.4 sport=64512 dport=6969 packets=3 bytes=290 src=10.0.0.2 dst=98.162.182.42 sport=6969 dport=64512 packets=4 bytes=202 [ASSURED] mark=0 secmark=0 use=1
        tcp 6 117 TIME_WAIT src=98.248.173.174 dst=1.2.3.4 sport=51114 dport=6969 packets=12 bytes=704 src=10.0.0.2 dst=98.248.173.174 sport=6969 dport=51114 packets=10 bytes=440 [ASSURED] mark=40 secmark=0 use=1
        tcp 6 598 ESTABLISHED src=165.132.128.201 dst=1.2.3.4 sport=57218 dport=6969 packets=8024 bytes=9919881 src=10.0.0.2 dst=165.132.128.201 sport=6969 dport=57218 packets=4211 bytes=239607 [ASSURED] mark=0 secmark=0 use=1
        tcp 6 586 ESTABLISHED src=68.46.9.24 dst=1.2.3.4 sport=64688 dport=6969 packets=6 bytes=490 src=10.0.0.2 dst=68.46.9.24 sport=6969 dport=64688 packets=8 bytes=944 [ASSURED] mark=40 secmark=0 use=1
        udp 17 45 src=222.254.228.38 dst=1.2.3.4 sport=25438 dport=6969 packets=5 bytes=454 src=10.0.0.2 dst=222.254.228.38 sport=6969 dport=25438 packets=3 bytes=154 [ASSURED] mark=0 secmark=0 use=1

    (full file visible at http://pastebin.com/AZE6EtWm)

    I've been playing around with this log for a little while and I can't see any patterns! Why is some port 6969 bittorrent traffic tagged mark=0 (not matched) by dd-wrt's QoS while other traffic is tagged mark=40 (Bulk)... any ideas?

    Read the article

  • Autodiscover service seems to reply with User Principal Name instead of email address

    - by Jeff McJunkin
    After this latest round of Windows updates (on 1/11/11, in fact) my Exchange 2007 server of course rebooted. This may have had the side effect of making any changes I'd inadvertently made take effect. Since then, the Autodiscover service in Exchange 2007 from Outlook 2007 seems to reply with the User Principal Name ([email protected] instead of [email protected]). I'm specifically seeing this from within the "Test Email AutoConfiguration" tool in Outlook (the UPN appears in the first text box labeled "E-mail") and when creating a new profile in Outlook. If I disregard the UPN and instead fill in my email address, Autodiscover works as expected and I can connect without issue. I've confirmed using ADSI Edit that the SMTP email address is properly set for my users. I even went a bit crazy and set the UPN to the email address using ADSI Edit. I've re-installed the Client Access role on the server in question. Exchange server is Server 2008, 64-bit of course. Clients are mostly XP 32-bit, though the issue happens from a Windows 7 machine as well.

    Read the article

  • Compiling Assembly Manually [migrated]

    - by John Smith
    I am having trouble with translating a specific line in assembly to machine code for the Nios II. I have successfully compiled these lines:

        START_TIMER = 0xF68C
        r0 = 0x0
        r8 = 0x8
        label = 50000

        addi r8, r8, %lo(label)    -  01000 01000 1100001101010000 000100
        subi r8, r8, 1             -  01000 01000 1111111111111111 000100
        bne  r8, r0, START_TIMER   -  01000 00000 1111011010001100 011110

    The line in question that I have trouble with is this one:

        orhi r8, r0, %hiadj(label)

    As explained in the handbook linked above, "%lo" means "Extract bits [15..0] of immed32" and "%hiadj" means "Extract bits [31..16] and adds bit 15 of immed32". However, 50000 in binary is 1100001101010000, and is therefore a 16-bit number. As far as I can see, it doesn't contain any bits between 16 and 31. I tried with 0000000000000001, but it's incorrect. What am I doing wrong?
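    As a sanity check on the quoted rule (my own arithmetic, not a verified answer): label = 50000 = 0x0000C350, so bits [31..16] are 0x0000 and bit 15 is 1, which would make %hiadj(label) = 0x0000 + 1 = 0x0001. That is the same 0000000000000001 tried above, so if the assembler still disagrees, the discrepancy presumably lies in the encoding of the other instruction fields rather than in the immediate itself.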

    Read the article

  • PHP on IIS7 not showing pages

    - by Jeff
    I have a PHP website on a Windows 7 machine I'm working with, and it cannot be viewed by any browser - IE, Chrome, or Firefox. When navigating to the root of the website (default index.php), the browser reports it cannot find the address - not a 404 error from the webserver, just as if it cannot resolve the name. Other websites in the same default web application that are also PHP work perfectly. I've aligned all folder permissions and everything else, but this has got me stumped. I even went as far as to create a new folder and throw in a test phpinfo() page, and it worked. I copied this website's content to the new folder and it cannot find the index.php page. I've checked all the settings I know and can't seem to find what I'm missing. Has anyone else encountered this issue? Remember the fix for it?

    Read the article

  • How do I get Windows 7 wallpaper to display the company logo properly?

    - by David Silva Smith
    Windows 7 is not displaying our company background properly. Curves show pixelation and straight lines are jagged. I'm working with a scalable vector graphics (SVG) image that I've exported to the same resolution (pixel dimensions, to be technical) as the desktop, which is 1440x900. I have tried exporting the image as a .png, .jpg, and .bmp. All of these look correct in an image viewing program, such as Windows Photo Viewer and Paint, but when I set the Windows background to these images, curves show pixelation and straight lines are jagged. Reading online, it seems that behind the scenes, Windows is converting the image to a .jpg with low quality compression, which is causing the issue. I've tried setting the image as a background through Internet Explorer, saving it as a .jpg, and putting the file in the Windows photo directory as suggested in some online forums, but none of those solutions have fixed my issue.

    Read the article
