Search Results

Search found 16987 results on 680 pages for 'second'.


  • Sort order in Windows Explorer

    - by Haim H.
    The behaviour described below occurs on Windows-7 systems and on Windows XP. We operate in a dual-language environment - English and Hebrew. When in Windows Explorer we sort files by name, the order in which they are listed is not what we would expect. Here is a list of file names as sorted by Windows Explorer (all of the files have a .pdf suffix):

        1G110033H-PP
        19C050G-PP-ORB
        19C050H-PPRM
        19C100H-PPRM
        19C-MBPS-PP
        19C-MBPS-PP-1
        29AAC050-PP
        29AAC100-PP
        29AAC100-PPUL
        29B004064-PP
        101AC050-PP
        101AC100-PP
        101B100-PPE
        1091003G-PPFSUL
        10108033G-PPSA
        10125033H-PPM

    It looks to me that first the items are sorted according to the position of the first alphabetic character in the name, and then, within those groups, they are sorted in "normal" alpha-numeric order. That is, all the files with an alpha character in the first position are on top of the list, followed by those with the first alpha character in the second position, followed by those with the first alpha character in the third position, and so on. An alternate way of looking at this is that, in a file name composed of numbers and letters, the sort treats the first group of numbers in the name as the major sort node, with the rest of the name being the secondary sort node. Now that I understand the sequencing logic, it's not a big problem, but I was wondering why this happens?
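
    What Explorer is doing (on Windows XP and later) is a "numerical" name sort: each run of digits is compared as a number rather than character by character, so the leading groups 1, 19, 29, 101, 1091003, 10108033, 10125033 come out in numeric order. A minimal Python sketch that approximates this comparison (it ignores Explorer's locale and case rules, so it is only an illustration):

        import re

        def explorer_like_key(name):
            # Split into alternating text / digit runs; digit runs compare as numbers.
            # re.split with a capture group guarantees the alternation, so corresponding
            # key positions always have the same type when two keys are compared.
            parts = re.split(r'(\d+)', name)
            return [int(p) if p.isdigit() else p.lower() for p in parts]

        names = ["101AC050-PP", "19C050G-PP-ORB", "1G110033H-PP", "29AAC050-PP", "10125033H-PPM"]
        print(sorted(names))                         # plain lexicographic order
        print(sorted(names, key=explorer_like_key))  # 1G..., 19C..., 29AAC..., 101AC..., 10125033H...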


  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use capistrano for website deployments and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's kind of like Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to "scan" for changes to its served files quicker? Thoughts: I read that the disks on virtual machines are much slower (since they are network attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done? The website runs PHP code. Perhaps there are some PHP config differences between RHEL and Ubuntu? I checked realpath_cache_ttl but both servers have it commented out:

        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120

    We do use the APC opcode cache but don't think it's the issue due to experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1. Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me. One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
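
    One thing worth ruling out in this kind of setup is whether the symlink swap itself is atomic: recreating the link in place (ln -sfn style) unlinks and recreates it, and combined with PHP's realpath cache (the commented-out default above is 120 seconds) that can leave stale paths in play for a while. A rough Python sketch of an atomic swap, with hypothetical paths and release names:

        import os

        def switch_release(releases_dir, release, current_link="/var/www/current"):
            """Point `current_link` at a new release atomically.

            Creating a temporary link and rename()-ing it over the old one means there
            is never a moment when the path is missing or half-updated. The paths here
            are examples only.
            """
            target = os.path.join(releases_dir, release)
            tmp_link = current_link + ".tmp"
            if os.path.lexists(tmp_link):
                os.remove(tmp_link)
            os.symlink(target, tmp_link)
            os.replace(tmp_link, current_link)  # rename(2) is atomic on the same filesystem

        switch_release("/var/www/releases", "20120401.1")

    This only narrows the window on the filesystem side; if the stale path persists for minutes, the realpath/opcode caches or a graceful Apache reload remain the things to look at.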


  • Workstations cannot see new MS Server 2008 domain, but can access DHCP.

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (server 2003) domain controller. The old_server is not connected to the network. I have DHCP working with the same scope as the old_server. In my "before-asking" search for a solution I came across the following two articles, and I recall doing things as suggested by the articles. http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/ http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/ The only possible issue is: I was under the impression that the domain netbios needed to match the DC's netbios. The DC netbios is city01 while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org) But, the second link led me to a post which I believe answers my question. I did as they instructed by opening Local Area Connection Properties, then selecting TCP/IPv4 and setting the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" for the post I'm referencing: http://forums.techarena.in/active-directory/1032797.htm Have I misunderstood their instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating comps as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out it is because I did not know it was needed. PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation, thank you for your help.


  • Monitor connected with WiDi just shows a black screen

    - by Pops
    I have a Dell XPS 18 and want to use an external monitor with it. The monitor has VGA and DVI-D inputs. The XPS 18 has no video output ports, but it does support Intel WiDi. I have a Netgear P2TV2000 WiDi receiver, but it only has composite and HDMI outputs. I'm connecting the receiver to the monitor with an HDMI cable and an HDMI/DVI-D adapter. So, in short, the video path is: Computer → WiDi → WiDi receiver → HDMI cable → HDMI/DVI-D adapter → Monitor After setting all this up, I can't get anything to display on the monitor. At the moment I power the monitor on, I can see my desktop for a brief fraction of a second, and then everything goes black. All of the relevant drivers have been updated to the latest versions. When I use a different WiDi-enabled computer to connect to the same monitor, everything works fine. When I use the original computer and receiver to connect to a TV, everything works fine. It's only when I connect the original computer to the monitor I want to use that the connection fails. What could be going wrong here, and how can I get the video to work consistently?


  • Requiring SSH-key Login From Specific IP Ranges

    - by Sean M
    I need to be able to access my server (Ubuntu 8.04 LTS) from remote sites, but I'd like to worry a bit less about password complexity. Thus, I'd like to require that SSH keys be used for login instead of name/password. However, I still have a lot to learn about security, and having already badly broken a test box when I was trying to set this up, I'm acutely aware of the chance of screwing myself while trying to accomplish this. So I have a second goal: I'd like to require that certain IP ranges (e.g. 10.0.0.0/8) may log in with name/password, but everyone else must use an SSH key to log in. How can I satisfy both of these goals? There already exists a very similar question here, but I can't quite figure out how to get to what I want from that information. Current tactic: reading through the PAM documentation (pam_access looks promising) and looking at /etc/ssh/sshd_config. Edit: Alternatively, is there a way to specify that certain users must authenticate with SSH keys, and others may authenticate with name/password? Solution that's currently working:

        # Globally deny logon via password, only allow SSH-key login.
        PasswordAuthentication no

        # But allow connections from the LAN to use passwords.
        Match Address 192.168.*.*
            PasswordAuthentication yes

    The Match Address block can also usefully be a Match User block, answering my secondary question. For now I'm just chalking the failure to parse CIDR addresses up to a quirk of my install, and resolving to try again when I go to Ubuntu 10.04 not too long from now. PAM turns out not to be necessary.


  • multiple folder structures/views

    - by Sojourner
    Newbie. Setting up a server for a law firm. Want to set up the folder structure as follows:

        Client 1 Name
        -- Matter 1 (i.e. setting up corporation)
        -- Matter 2 (i.e. divorce)
        -- Matter 3 (i.e. setting up trust)
        Client 2 Name
        -- Matter 1
        Client 3 Name
        -- Matter 1

    and so on. But the attorneys prefer navigating a folder structure based more on case type:

        Civil
        -- Client 1 Name (i.e. Smythe)
        -- Client 2 Name (i.e. Jones)
        -- Client 3 Name (i.e. Johnson)
        -- Landlord/Tenant
        -- Client 1 Name (i.e. Jones)
        -- Client 2 Name (i.e. Johnson)
        -- Class Action Suits
        -- Suit 1
        -- Suit 2
        Personal Injury
        -- Client 1 Name
        -- Client 2 Name
        -- Client 3 Name
        Criminal
        -- Client 1 Name (i.e. Smythe)

    I'd like to know if it's possible to set up the server with the first folder structure (it's more organized and easier to employ scripts), while having the second folder structure available for users who find it easier to deal with the same types of cases grouped together.
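
    One way to offer both views without duplicating data is to keep the by-client tree as the real storage and generate the by-case-type tree as links into it. A rough Python sketch of the idea; the paths and the mapping of matters to case types are hypothetical, and on a Windows file server the same approach would use directory symlinks or NTFS junctions (mklink /J), which need the appropriate privilege:

        import os

        # Hypothetical mapping of case types to the real matter folders in the canonical
        # by-client tree; in practice this could come from a naming convention or a small index.
        MATTERS_BY_TYPE = {
            "Civil":           ["/srv/clients/Smythe/Matter 1", "/srv/clients/Jones/Matter 2"],
            "Personal Injury": ["/srv/clients/Johnson/Matter 1"],
        }

        VIEW_ROOT = "/srv/by-case-type"   # the alternate tree the attorneys browse

        def rebuild_view():
            """Recreate the by-case-type view as symlinks into the by-client tree."""
            for case_type, matters in MATTERS_BY_TYPE.items():
                type_dir = os.path.join(VIEW_ROOT, case_type)
                os.makedirs(type_dir, exist_ok=True)
                for matter in matters:
                    client = os.path.basename(os.path.dirname(matter))
                    link = os.path.join(type_dir, client + " - " + os.path.basename(matter))
                    if not os.path.lexists(link):
                        os.symlink(matter, link, target_is_directory=True)

        if __name__ == "__main__":
            rebuild_view()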


  • Does the Mountain Lion overheating issue have to do with launchd/Python?

    - by Christopher Jones
    So, ever since I installed ML, my MacBook Air has been running SUPER hot. Opened up Activity Monitor, and everything seemed to be pretty normal, until I had it refresh every .5 seconds... and then I started seeing some interesting things. A 'Python' process appears and is terminated several times a second, and uses TONS of CPU (70-110%). Its parent process is 'launchd' - and when I sample the process, there is a lot going on with Python. http://db.tt/ovuX3hZM These appear and disappear too quickly to get one... this one only happened to be using 70-ish percent of CPU... but they consistently hit 100-110%. http://db.tt/ovuX3hZMg The parent process... launchd. Lots of context switches and UNIX system calls... What is the deal here? (photo goes here when I earn the street cred) The sample of launchd. ANY help here could be of help to not only me, but possibly many others experiencing decreased battery life and warmer laps these days because of this Mountain Lion weirdness. PLEASE HELP! PS - I'd put the screen grabs inline, but I don't have enough street cred yet.


  • Is it possible to rate limit based on host headers, i.e. not just on IP address?

    - by Blankman
    I have a web service endpoint that I am building where people will post an xml file to, and it will really get pounded with over 1K requests per second. Now they are sending in these xml files via http post, but a good majority of them will be rate limited. The problem is, the rate limiting will be done by the web application by looking up the source_id in the xml, and if it is over x requests per minute, it will not be processed further. I was wondering if I could do rate limit checking earlier in the processing somehow and thus save the 50K file going through the pipeline to my web servers and eating up resources. Could a load balancer make a call out to verify rate usage somehow? If this is possible, I could maybe put the source_id in a host header so even the XML file doesn't have to be parsed and loaded into memory. Is it possible to just look at host headers and not load up the entire 50K xml file into memory? I really appreciate your insights as this takes more knowledge of the entire tcp/ip stack etc.
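
    The check itself only needs the header value, not the body, so it can run in whatever sits in front of the web tier. As an illustration of the idea (not a feature of any particular load balancer), here is a minimal Python sketch of a sliding-window limiter keyed on a hypothetical X-Source-Id request header; the header name and the 60-requests-per-minute limit are assumptions:

        import time
        from collections import defaultdict, deque

        WINDOW_SECONDS = 60
        MAX_REQUESTS = 60                # hypothetical per-source limit
        _hits = defaultdict(deque)       # source_id -> timestamps of recent requests

        def allow(source_id: str) -> bool:
            """Return True if this source is under its per-minute budget.

            Only the header value is needed; the 50K XML body never has to be
            parsed (or even read) to make this decision.
            """
            now = time.monotonic()
            window = _hits[source_id]
            while window and now - window[0] > WINDOW_SECONDS:
                window.popleft()
            if len(window) >= MAX_REQUESTS:
                return False
            window.append(now)
            return True

        # e.g. in the front end: reject with 429 when allow(request.headers["X-Source-Id"]) is False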


  • mysql mass insert data

    - by user12145
    Edit: I realized that if I construct a large multi-row query in memory, the speed increases almost tenfold:

        insert ignore into xxx (col1, col2) values ('a', 1), ('b', 1), ('c', 1), ...

    Edit: since I have an index on the first column, the insert time creeps up as I insert more. Can I delay the index until the end? Original: I'm using the following to batch insert 10 million rows into a mysql db (not all at once, since they don't all fit into memory), and it's too slow (taking many hours). Should I use LOAD DATA INFILE to improve performance? I would have to create a second file to store all the 10 million rows, then load that into the db. Are there better ways?

        PreparedStatement st = con.prepareStatement(
            "insert ignore into xxx (col1, col2) values (?, 1)");
        Iterator<String> d = data.iterator();
        while (d.hasNext()) {
            st.clearParameters();
            st.setString(1, d.next().toLowerCase());
            st.addBatch();
        }
        int[] updateCounts = st.executeBatch();


  • How to change mod_rewrite to avoid {REQUEST_FILENAME} in order to get around 255 character URL limit?

    - by Jeremy Reimer
    According to this answer ("max length of url 257 characters for mod_rewrite?"), there is a maximum 255 character hard limit, based on the file system, for using mod_rewrite. According to the accepted answer, there are two solutions:

        1. Change the URL format of your application to a max of 255 characters between each slash.
        2. Move the Rewrite rules into the apache virtual host config and remove the REQUEST_FILENAME.

    I cannot use the first method, so I am trying to figure out the second. I have put the Rewrite rules into the Apache virtual host config as requested. However I cannot figure out how to remove the REQUEST_FILENAME and still have my web application framework (Dragonfly) work. Here is the portion of the rewrite rules that I moved from .htaccess into the virtual host config file of Apache:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteCond %{REQUEST_FILENAME} !-f [OR]
        # if you don't want Dragonfly to process html files, comment
        # out the line below (you may need to remove the [OR] above too).
        RewriteCond %{REQUEST_FILENAME} \.(html|nl)$
        # Main URL rewriting.
        RewriteRule (.*) index.cgi?$1 [L,QSA]

    I've tried removing {REQUEST_FILENAME} and it just breaks the framework in various ways. How do I rewrite this without using {REQUEST_FILENAME}?


  • Configure J2EE Agent with OpenAM behind Reverse Proxy

    - by Troy
    I have a reverse proxy with two SSL enabled NamedVirtualHosts on different ports. Both containers on the internal hosts are GF 2.1.1. Proxy configuration as follows (Proxy URL -> Internal URL):

        https://apps.mydomain.com            -> http://apps.internal.com
        https://secure.otherdomain.com:8080/ -> http://secure.internal.com

    I initially tried configuring the J2EE agent in OpenAM and the web app container to use the internal URLs (I appended /openam and /agentapp respectively). However, I received the following error when trying to access a secured application such as https://apps.mydomain.com/webapp:

        java.lang.RuntimeException: Failed to load configuration: ApplicationSSOTokenProvider.getApplicationSSOToken(): Unable to get Application SSO Token

    A second attempt gives the following error:

        java.lang.NoClassDefFoundError: Could not initialize class com.sun.identity.agents.filter.AmFilterManager

    Along with these in the agent debug.out:

        ERROR: Failed to obtain auth service url from server: null://null:null
        ...
        SiteMonitor: Site URL http://secure.internal.com/openam/namingservice is not available.

    If I specify the server and agent urls using the proxy urls, then the agent appears to be working and I am redirected to the OpenAM login page. However, the goto in the URL is http://apps.mydomain.com/webapp instead of https://apps.mydomain.com/webapp (missing https). So after authentication, the redirect fails. Now I could possibly get by with mod_rewrite, but it feels hackish and I really want to know what's going on. Any ideas?


  • Force SSL on one page via .htaccess without looping

    - by Will Martin
    Okay, I have this code:

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/borrowing/ill/request\.php$
        RewriteRule ^.*$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]

    The way I would expect this to work is:

        1. A request for /borrowing/ill/request.php comes in on HTTP.
        2. The rule matches. The server redirects to HTTPS.
        3. The rule does not match, because HTTPS is now on.

    The way it actually works is:

        1. A request for /borrowing/ill/request.php comes in on HTTP.
        2. The rule matches. The server redirects to HTTPS.
        3. The rule matches. The server redirects to HTTPS.
        4. The rule matches. The server redirects to HTTPS.
        ... and so on.

    I know that the second condition (matching the file name) is working, because the redirect loop only hits that specific page. The question is, why isn't the switch to HTTPS causing the first condition to not match? EDIT: I put the exact same .htaccess rules into a test area on another server -- same file and path info. And they worked just fine. There's got to be something wrong with the server configuration elsewhere.


  • How to tackle Dell's support system? [closed]

    - by Nishant Kumar
    We have purchased a Dell Optiplex 9010 SSFV for our organization's work. Since the first installation, two of the USB keyboard keys have not worked properly. I had to press those keys twice: the first press did nothing, and the second press printed two characters (as if it were buffering the first character). The two keys that were not working properly:

        Grave accent (below the Esc key: `)
        Double quote (left of the Enter key: ")

    We registered our complaint with Dell and they suggested (in a hard-to-understand and weird English accent) some tests and tricks, such as switching to different ports and checking the keyboard on a different PC, and it worked well with the different PC (with Windows 7 Home Premium installed). It was clear that it is an OS fault, hence they suggested reinstalling the OS. The problem began here: we have a project on the run and currently a video editing project set up on our system, so we can't reinstall the system in a hurry, and the Dell people were not providing any other solution, such as updating the keyboard driver, etc. Arguments: I am a Software Engineer and don't think it is a feasible solution to reinstall the entire system for simple problems. This problem has been present since the fresh system installation, so I don't think reinstalling will solve it. Finally, I had to find the solution myself and got it here; now I want to show my disappointment to the Dell people, or at least tell them that they should improve their support system and not advise reinstalling the entire system for such simple problems. Notes: We have purchased 5 years of NEXT business day support from Dell for around 8000 INR (not for that kind of solution from Dell). So can anyone tell me how to tackle the Dell support system officially, so that they will pay more attention in the near future? Thanks


  • Cannot set up dual monitors correctly in Fedora 15 with KDE.

    - by adivasile
    I have 2 monitors:

        24" LCD connected via DVI (primary)
        19" LCD connected via VGA (secondary)

    Every time Fedora starts, the second display is set to clone the first one and they both run at 1280x1024, and I always have to disable the 19" monitor in order for the bigger one to run at 1920x1080. I want to set them up so that my secondary monitor extends the primary one. The problem is that no matter what kind of configuration I choose, it has no effect. My secondary monitor remains disabled. I've tried using both the Display manager from KDE and the ATI Control Panel and the behaviour is always the same. The moment I click apply, the screen flickers and nothing changes. I've successfully used the extended setup in Fedora 15 with Gnome 3. I have a RadeonHD 4300 series videocard and I'm using the drivers downloaded from the AMD site. This is the output of xrandr -q:

        Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 1920 x 1920
        VGA-0 connected (normal left inverted right x axis y axis)
           1280x1024      75.0     60.0
           1280x960       60.0
           1152x864       75.0
           1024x768       75.0     70.1     66.0     60.0
           832x624        74.6
           800x600        72.2     75.0     60.3     56.2
           640x480        75.0     72.8     66.7     59.9
           720x400        70.1
        DVI-0 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 477mm x 268mm
           1920x1080      60.0*+   60.0
           1680x1050      59.9
           1600x900       60.0
           1280x1024      75.0     60.0
           1280x960       60.0
           1152x864       75.0
           1280x720       60.0
           1152x720       60.0
           1024x768       75.0     60.0
           832x624        74.6
           800x600        75.0     60.3
           640x480        75.0     59.9
           720x400        70.1

    Later edit: The problem seems to come from the ATI drivers. I managed to set up the monitors like I wanted after I uninstalled the drivers. Unfortunately I'm working on an OpenCL project, so I had to reinstall them. The moment I did that, all my previous settings were forgotten and I was back to square one.


  • Changing IP address in IIS for SharePoint site results in Directory listing error

    - by Dan
    I have a server here that has 2 roles. One is Exchange 2007 and the other is MOSS 2007. In IIS I have a site, go.domain.com, which has our OWA. The other is internal.domain.com, which is the MOSS site. I have given the NIC local IPs and each site is using host headers. The GO site has an SSL cert from NetSol, and the MOSS site has a self-signed one. Right now going to either shows the NetSol site, which browsers complain about when going to the internal.domain.com site, obviously, since they are on the same IP in IIS. Both sites have always run off the original IP of 10.0.0.3 in IIS. When I added the second IP to the NIC (10.0.0.6) and changed the SharePoint site in IIS to use this for http and https access, I now get this message in a browser when trying to connect:

        Directory Listing Denied
        This Virtual Directory does not allow contents to be listed.

    Changing the IP back to 10.0.0.3 and the internal site is back up. What am I missing here? Do I need to fool around with Alternate Access Mappings in Central Admin? Am I completely missing the point with multiple SSL certs and host headers?


  • Configuring apache and php to handle many connections

    - by Marc
    My preliminary setup is like this:

        Two QuadCore 8GB servers running Debian 6, with PHP and Apache
        One QuadCore 16GB server running Debian 6, with MySQL

    My plan is to have one 8GB server act as a proxy server, using vertx java to handle connections. I will let vertx use HttpClient to send web requests to the second 8GB server. This server would have Apache installed and use PHP to deliver any information that it gets from the MySQL server on the third, 16GB server. The main reason I want this setup is to have things separated, so the proxy will be the only way to access the system, as the other two servers will only be reachable from the local network. I can have the vertx proxy handle 5000+ concurrent connections, but I don't know how to configure Apache to handle all the requests coming from the proxy. PHP will connect over mysqli with a persistent connection pool of 500-800 connections; the MySQL server seems not to have any issues on this part. In previous projects, the Apache part was always causing issues, no matter how I set it up. I might not fully understand how to set up Apache, since normally Apache should handle many concurrent connections, but it doesn't seem to here.


  • Troubleshooting iptables and configuring it to drop the priority of long-term connections

    - by intuited
    I'm somewhat familiar with the general concepts of iptables, and would like to learn it in more detail. I'm hoping that my learning experience can also be useful. The situation: I'm running dd-wrt on my router. Despite its purported QoS skills, I'm still seeing connection latency shoot up hugely whenever there's an ongoing http connection, eg some large download. Under such conditions, it can take 10 seconds or more to load a basic webpage; sometimes the connections are dropped entirely. I've tried adjusting the parameters, dropping the allotted bandwidth for up and download to well under my limit, but nothing seems to work. dd-wrt is configured to use HTB as the QoS algorithm; HFSC, although presented as an option, seems to cause the router to crash, and is rumoured to not actually work on any linux system. I'd like to be able to troubleshoot this issue and hopefully improve the settings that dd-wrt is using, but I'm finding the learning curve a bit overwhelming. For starters I am not sure what HTB actually specifies: is this a set of iptables commands, or do some of those commands specify how HTB is to be used? I would like it to prioritize based on protocol the way that it already supposed to, and in addition I'd like to have it drop the priority of connections which have a high total byte count, say over 400KB. Also tips on utilities that can be run under dd-wrt to get more info on what's going on in there are appreciated. I've tried to get iftop to work but there were issues running curses. I'm leaning towards replacing dd-wrt with openwrt; comments on this strategy are also welcome. I suspect that I would be well advised to get a second router as a standin before trying that. It may be worth noting that my total bandwidth is pretty limited (256Kbit/s).


  • Resource reference passing in puppet

    - by paweloque
    Is it possible to pass puppet resource references to other resources? My use-case is to build a jenkins build pipeline with puppet. To chain jenkins jobs into a pipeline I need to pass the successor job to a job. A subset of the definition is:

        jobs::build { "Build ${release_name}":
          release           => $release_name,
          jenkins_jobs_path => $jenkins_jobs_path,
          successors        => 'Deploy',
        }

        jobs::deploy { "Deploy ${release_name}":
          release           => $release_name,
          jenkins_jobs_path => $jenkins_jobs_path,
          successors        => 'Smoke Test',
        }

    In the definition you see that I define the successors by name, i.e. 'Deploy' and, in the case of the second job, 'Smoke Test'. What I'd like to do is to pass a reference to a resource and extract the name from it:

        jobs::build { "Build ${release_name}":
          release           => $release_name,
          jenkins_jobs_path => $jenkins_jobs_path,
          successors        => Jobs::Deploy["Deploy ${release_name}"],
        }

        jobs::deploy { "Deploy ${release_name}":
          release           => $release_name,
          jenkins_jobs_path => $jenkins_jobs_path,
          successors        => Jobs::Smoke_test["Smoke Test ${release_name}"],
        }

    And then within the jobs::deploy and jobs::build definitions I'd access the resource by reference and query for its type, etc. Is it possible to achieve this in puppet?


  • fail2ban log parsing too slow on Raspberry Pi - options? [migrated]

    - by Gordon Morehouse
    I'm running fail2ban on a Raspberry Pi at 950MHz which I cannot overclock further. The Pi is occasionally subject to SYN floods on particular ports. I've set up iptables to throttle the rate of SYNs on the port of interest; when the throttle limits are exceeded, hosts which send SYNs are dropped into the REJECT chain and the particular SYN packet which exceeded the limit is logged. fail2ban then watches for these logged SYNs and, after seeing a few, temporarily bans the host for a short time (this is a transient issue in the app I'm working with). The problem is that the SYN floods can occasionally reach rates which are too fast for fail2ban to keep up with; I'll see 20-40 log messages per second, and eventually fail2ban falls behind and becomes ineffective. To add insult to injury, it continues consuming a LOT of CPU as it tries to catch up. I have verified that DROP chained packets from hosts already banned by fail2ban are not logged, and thus do not add to its load. What are my options here? I have a few ideas, but no clear path forward:

        1. Could I make the log-parse regex "easier" so it takes fewer cycles? Would using iptables --log-prefix to put a token near the start of the log message, and/or otherwise simplifying/altering the fail2ban regex help? Here is the current fail2ban config line containing a regex:

           failregex = kernel:.*?SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST.*?SYN

        2. Is there a faster way for fail2ban to watch for the packets exceeding the limits than parsing kern.log?
        3. Could fail2ban be run under PyPy instead of CPython with minimal nonstandard wizardry (the OS is Raspbian 7, so, mostly Debian 7)?
        4. Is there something better than fail2ban that I could use to watch for the packets which exceed the SYN limits, and after N exceeds in X seconds, temporarily put the offending IP into the iptables DROP bucket, and take it out when the ban timer expires?

    Again, I'd vastly prefer a solution that uses as much software available in Debian as possible, though I can build Debian packages in a pinch.
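
    On the first idea: most of the per-line cost is the full regex attempted against every kern.log line, so a distinctive --log-prefix token plus a cheap literal check before the regex can cut the work dramatically. A small Python sketch of that pre-filter idea; the "FW-SYN " prefix is hypothetical and fail2ban's real matching loop is more involved:

        import re

        # Hypothetical token added with: iptables ... -j LOG --log-prefix "FW-SYN "
        PREFIX = "FW-SYN "
        SYN_RE = re.compile(r"SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST.*?SYN")

        def offending_hosts(lines):
            """Yield source IPs from log lines that carry the firewall's SYN-limit tag.

            The substring test is far cheaper than the regex, so the vast majority of
            kern.log lines are discarded before the expensive match is attempted.
            """
            for line in lines:
                if PREFIX not in line:      # cheap literal check first
                    continue
                m = SYN_RE.search(line)
                if m:
                    yield m.group("host")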


  • What do "Unknown SSAP" and "Unknown DSAP" mean in tcpdump?

    - by lacker
    While trying to fix a problem with intermittently losing internet connection on a machine with a wireless connection to a router, I ran tcpdump and noticed packets with "Unknown SSAP" and "Unknown DSAP" errors coming at a rate of a few per second:

        20:27:21.703178 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe2 Information, send seq 0, rcv seq 16, Flags [Response], length 171
        20:27:21.724726 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe2 Information, send seq 0, rcv seq 16, Flags [Response], length 104
        20:27:21.746449 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe4 Information, send seq 0, rcv seq 16, Flags [Response], length 88
        20:27:21.970963 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe8 Information, send seq 0, rcv seq 16, Flags [Response], length 76
        20:27:22.016565 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xea Information, send seq 0, rcv seq 16, Flags [Response], length 88
        20:27:22.038471 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xea Information, send seq 0, rcv seq 16, Flags [Response], length 171

    What do "Unknown SSAP" and "Unknown DSAP" mean, and do they indicate a problem?


  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple monitors' world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up to the HDMI port of an ATI Radeon HD4300 / 4500 Series display adaptor. I left the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same display adaptor. Both monitors displayed the boot sequence after I fired good old Sarastro2 (Asus P5Q Pro Turbo – Dual Core E5300 – 2.60 GHz) up. The Asus lagged one half of a second behind the Princeton until the Windows 7 Ultimate SP1 boot-up was complete. Then the Asus displayed "HDMI NO SIGNAL" and went into hibernation. The Princeton stayed lit up as before. Both monitors are displayed on the "Screen Resolution Setup Display" and I played around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton to the still hibernating Asus. The "Multiple displays:" option is set to "Extend these displays", the Orientation is "Landscape" and the Resolutions are set on both to the "recommended" one. Both monitors show that they work properly in the advanced Properties display. What am I doing wrong, what am I missing? Never mind the opinions about the different resolutions of the two monitors. I always can unhook the Princeton and give it to a Goodwill Store if I do not like the setup. I just would like to make it work. Any constructive help is very much appreciated, thank you.


  • btrfs: can I create a btrfs file system with data as JBOD and metadata mirrored?

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1TB hdd for the OS etc. and two 2TB hdd's for data. I plan to have the 2TB hdd's set up as a JBOD btrfs system to which I can add hdd's as needed later, basically growing the filesystem online. The way I had set up the file system for testing was: while installing the OS, have only one of the HDD's connected and have btrfs on it mounted as /data; later on, add another hdd to this file system. When the second disk was added, btrfs made the data RAID 0, with metadata being RAID 1. However, this presents a problem: even if one of the disks fails I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the HDD's can be spun down. When an access request comes in with the current RAID 0 setup, both disks will spin up; in the case of a JBOD, only the disk that has the file needs to be spun up. This should hopefully reduce the wear on each disk. So, is there a way in which I can have btrfs set up such that metadata is mirrored but data stays in a JBOD formation? Another question I have is this: I understand that a full drive failure in JBOD will lose the data on that drive, but with metadata mirrored across all drives, will this help the filesystem correct errors that might creep in (e.g. bit rot), and is btrfs capable of doing this?


  • How can I get my routers to forward ports correctly?

    - by Giffyguy
    My network currently looks like this (simplified): Note that Router #2 is connected to the LAN interface of Router #1. This should be familiar to anyone who has seen a standard static-IP setup with an additional firewall for a residence or other small building. Router #1 is actually my cable gateway, but since it is a fully functional router/firewall, I am going to refer to it as a router. Now, I need to open various ports in both firewalls for incoming communication to my server - port 80 is a good example. So I've opened up port 80 in Router #2, and so far all incoming traffic at the public IP X.X.X.129 is being routed correctly. The problem is that I also need my server to respond to incoming traffic at the public IP X.X.X.130 on the WAN interface of Router #1. Naturally, I can't just tell Router #1 to forward port 80 to another public IP. Port forwarding is only supported when the traffic is being directed to the LAN subnet. I am willing to restructure my network topology if required, with the following conditions: Router #1 cannot have its WAN IP reassigned - X.X.X.130 is mandatory. Router #1 cannot be moved or disconnected from the cloud. The server cannot be given a second IP address. I would prefer the server to have a private IP address - e.g. 10.0.0.10 I'd like to keep Router #2, but it can have a private IP - e.g. 10.0.1.10 Following these rules, I need to get my server to receive incoming traffic on port 80 from both public IP addresses. Does anyone on SU know if this is possible? So far my only theories have been to set up a static route on either router, or to somehow combine my two subnets into a single subnet.


  • Symbolic directory link shared in domain

    - by Sabre
    We have a file server that is 2008 R2 STD; it is a member server in a 2008 AD. I need to relocate some of the files and directories and would like to do it behind the scenes, more or less without impacting the users. (The reason for this is that some of the files, due to recent software changes, HAVE to be located locally on one of the workstations, but they can be accessed by other applications remotely.) So symbolic links seem the panacea here. I moved a directory to another network share in the same domain (Windows 7 Professional), created a symlink to it in the location it used to be in, named it the same thing, and to the local user it seems almost transparent. I.e., when logged into the desktop of the file server, I can go to the directory, open the link, and it leaps to the other share as if it were local, exactly what would be expected. Then I tried it from another client computer (Windows 7 Professional as well), went through the normal provisioning of R2R and L2R with fsutil... No joy. What I am getting is an access denied "Logon failure: Unknown username or bad password." using the same account that I log on locally to the file server with (which happens to be the domain admin). So I cannot believe it is telling the truth, or... I assume it is not passing the credentials I am using to connect to the first share all the way through the symlink. The end result is I want users on the domain to browse to share A; inside share A is a mixture of directories/files that reside there, and symlinks to directories/files on the second machine over the network in the same domain. Possible? Or am I misunderstanding how the symlink should work?


  • Instabilities with Bridged and bonded interfaces

    - by Henry-Nicolas Tourneur
    I posted yesterday to get a working setup with several bridged interfaces used for virtual machines (KVM/libvirt). One of the bridged interfaces is just using eth3 as its port, while the second one (public traffic) is using an ethernet bonded interface. That setup is working, but not all the time! I can start a download from a vm, then it will stop and freeze! So I don't know if my bridge parameters are correct; could you check the config below?

        iface eth3 inet manual

        auto bond0
        iface bond0 inet manual
            slaves eth1 eth2
            pre-up ip link set bond0 up
            down ip link set bond0 down

        auto br0
        iface br0 inet static
            address 10.160.0.7
            netmask 255.255.255.128
            bridge_ports eth3
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp on

        auto br0:1
        iface br0:1 inet static
            address 10.160.0.9
            netmask 255.255.255.255

        auto br0:2
        iface br0:2 inet static
            address 10.160.0.10
            netmask 255.255.255.255

        auto br1
        iface br1 inet static
            address 217.4.40.242
            netmask 255.255.255.240
            gateway 217.4.40.241
            pre-up /etc/network/firewall start
            bridge_ports bond0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp on

        auto br1:1
        iface br1:1 inet static
            address 217.4.40.252
            netmask 255.255.255.255

        auto br1:2
        iface br1:2 inet static
            address 217.4.40.253
            netmask 255.255.255.255

    And yes, it also sometimes logs martian-source warnings on the host:

        kernel: [249146.055172] martian source 10.160.0.17 from 10.160.0.10, on dev vnet2
        kernel: [249146.073122] ll header: ff:ff:ff:ff:ff:ff:54:52:00:76:c3:5c:08:06

