Search Results

Search found 18092 results on 724 pages for 'matt long'.


  • How to submit Windows 8 bugs on Microsoft Connect?

    - by ahmd1
    I was testing the Windows 8 Release Preview and came across several bugs. One of my friends suggested letting Microsoft know through Microsoft Connect, so I went to this web site: connect.microsoft.com. I don't think that I'm a stupid person; I'm pretty good at figuring software and websites out, but this one beats me.... So I click on the "Products currently accepting bugs" link and get a long list of products. First off, most of those names are ones I've never heard before. Then I try searching the page for Windows 8 and ... find nothing :) Can someone explain in one or two sentences how to submit bugs to Microsoft?


  • Autoscaling in a modern world… Part 1

    - by Steve Loethen
    It has been a while since I have had time to sit down and blog. I need to make sure I take the time; it helps me focus on technology and not let the administrivia keep me from doing the things I love. I have been focusing on the cloud for the last couple of years, specifically the PaaS platform from Microsoft called Azure. Time to dig in. I wanted to explore autoscaling. Autoscaling is not a native part of Azure. The platform has the needed connection points: you can write code that looks at the health and performance of your application components and reacts with the needed scaling changes. But that means you have to write all the code. Luckily, an add-on to the Enterprise Library provides a lot of code that gets you a long way toward autoscaling without having to start from scratch. The tool set is primarily composed of an Autoscaler object that you need to host. This object, when hosted and configured, looks at the performance criteria you specify and adjusts your application based on your needs. Sounds perfect. I started with a set of HOLs (hands-on labs) that gave me a good basis for understanding the mechanics. I worked through labs 1 and 2 just to get the feel, but let's start our saga at the end of lab 3. Lab 3 ends with a web application hosted in Azure and a console app running on premises. The web app has a few buttons on it. One set adds messages to a queue, another removes them. A second set of buttons drives processor utilization to 100%. If you want to guess, a safe bet is that the Autoscaler is configured to react to a queue that has filled up or to high CPU usage. We will continue our saga in the next post…
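
    As a conceptual aside: the block's real API is .NET-based and rule-driven, but the monitor-and-react loop at its heart looks roughly like this Python sketch, where the thresholds and the get_*/scale_to callbacks are hypothetical placeholders rather than anything from the Enterprise Library.

        import time

        QUEUE_BACKLOG_LIMIT = 10   # hypothetical: scale up past this backlog
        CPU_LIMIT = 80.0           # hypothetical: scale up past this percent CPU

        def autoscale_loop(get_queue_length, get_avg_cpu, get_instance_count, scale_to):
            # Sample the performance criteria on a fixed cadence and adjust
            # the deployment's instance count in response.
            while True:
                qlen, cpu = get_queue_length(), get_avg_cpu()
                instances = get_instance_count()
                if qlen > QUEUE_BACKLOG_LIMIT or cpu > CPU_LIMIT:
                    scale_to(instances + 1)    # react: add an instance
                elif qlen == 0 and cpu < CPU_LIMIT / 2 and instances > 1:
                    scale_to(instances - 1)    # relax: shed an idle instance
                time.sleep(60)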


  • How do I see what connections are being made?

    - by Coldblackice
    My DD-WRT router is showing that my computer has a connection count around 600! The router is at 100% CPU use. How can I see what's making all the connections? So far, I've opened up Resource Monitor and checked the Network tab. I can sort by which program is using the most network bandwidth (Pale Moon browser), but I can't see what's making all of the connections, or rather, where all the connections are going (I'm trying to find what tab must be making all of them). I've also tried netstat -A, but it apparently doesn't show the actual number of connections being made. At least, the list of established connections isn't very long, by any means (not nearly enough to account for the 500+ connections apparently being made).
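
    For what it's worth, here is a rough diagnostic sketch using Python with the third-party psutil library (run it elevated; older psutil versions expose raddr as a plain tuple rather than a named one) that tallies established connections per process and per remote address:

        from collections import Counter
        import psutil

        by_process, by_remote = Counter(), Counter()
        for conn in psutil.net_connections(kind="inet"):
            if conn.status != psutil.CONN_ESTABLISHED:
                continue
            if conn.pid:
                try:
                    by_process[psutil.Process(conn.pid).name()] += 1
                except psutil.NoSuchProcess:
                    pass  # process exited between the snapshot and the lookup
            if conn.raddr:
                by_remote[conn.raddr.ip] += 1  # which remote hosts we talk to

        print(by_process.most_common(10))
        print(by_remote.most_common(10))

    One caveat: DD-WRT's connection count comes from the router's NAT connection-tracking table, which holds entries for a while after they close, so the router can legitimately report far more connections than the host shows at any instant.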


  • Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model #ssas #tabular #bism

    - by Marco Russo (SQLBI)
    Alberto, Chris, and I spent many months (many nights, holidays, and also working days of the last months) writing the book we would have liked to read when we started working with Analysis Services Tabular: a book that explains how to use Tabular, how to model data with Tabular, how Tabular internally works, and how to optimize a Tabular model. All those things you need to start on a real project in order to make a happy customer. You know, we're all consultants after all, so customer satisfaction is really important if we want to be paid for our job! Now the writing is finished, we're in the final stage of editing and reviews, and we look forward to getting our print copies. Its title is very long: Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model. But the important thing is that you can already (pre)order it. This is the list of chapters:

    01. BISM Architecture
    02. Guided Tour on Tabular
    03. Loading Data Inside Tabular
    04. DAX Basics
    05. Understanding Evaluation Contexts
    06. Querying Tabular
    07. DAX Advanced
    08. Understanding Time Intelligence in DAX
    09. VertiPaq Engine
    10. Using Tabular Hierarchies
    11. Data Modeling in Tabular
    12. Using Advanced Tabular Relationships
    13. Tabular Presentation Layer
    14. Tabular and PowerPivot for Excel
    15. Tabular Security
    16. Interfacing with Tabular
    17. Tabular Deployment
    18. Optimization and Monitoring

    Have a good read!


  • Temporarily Utilizing 304 Header on Apache for Crawlers

    - by Volomike
    I have a client with a hosting arrangement of 400 customer sites, all hosted through suPHP in CGI mode on Apache. The sysop is now gone and the client is calling on me to roll out a new PHP thing. Trouble is, server load is very high right now and we have found that it's due to the crawlers. We had one customer in particular who complained of slow websites; we engaged a 304-header plugin on his site against most crawlers, and his site perked right up. We'd like to lower that load by issuing a global 304 header to all the crawlers while letting human visitors through. I have a long list of user-agent keywords to trap for. What's the best way to temporarily engage that global 304 header while allowing human visitors to get right on through? I mean, I could roll out 400 .htaccess file changes, but it would be ideal to make this change in one central Apache config and have it automatically affect all the sites at once.
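
    One central-config approach worth testing (this is a sketch, not a verified recipe: the user-agent list is a stand-in for the real one, and whether mod_rewrite's R flag emits a clean bodyless 304 on this Apache build should be confirmed before rollout) is to tag crawlers once with mod_setenvif and short-circuit them globally:

        # In httpd.conf, or a single file pulled in with Include:
        BrowserMatchNoCase "googlebot|bingbot|slurp|baiduspider" is_crawler

        # mod_rewrite directives don't inherit into vhosts by default, so each
        # vhost still needs RewriteEngine On (one change, via the same include).
        RewriteEngine On
        RewriteCond %{ENV:is_crawler} =1
        RewriteRule .* - [R=304,L]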


  • Change in Job Title and Responsibilities

    - by John Conwell
    I've spent the past 7 years focused primarily on code and database performance. It's an area that I have a passion for, as well as a propensity. But what I've found is that it's very hard to change the culture of a development environment. You can teach performance, you can encourage performance, you might see a slight shift in how devs think about performance. But without full management backing and support you won't get long-lasting changes in the development culture. And in the end, you are back to being the "Perf Guy", fixing performance design flaws, after the fact, one by one by one. Which is why last year I asked my boss to change my title and responsibilities to more naturally align with the team I was working for. So now I'm a Computing Research Engineer (vague, I know), researching in the field of Big Data analytics and visualization. I've found this change revitalizing and a lot of fun. And given the nature of Big Data (it's, um… big), the performance aspects are always ever-present.


  • Duplicate IP address detection with multiple NICs

    - by sfink
    I am using arping -D to detect duplicate IP addresses within a network when setting up servers. (The network is controlled by someone else, and we have had many issues with IP allocation in the past.) It works fine as long as my host has a single NIC on a given VLAN, but when my host has more than one (I have one machine with 9 NICs on one VLAN and 1 on another), arping -D always returns false collisions. The problem is that all 9 of my NICs respond to an ARP request for any of the IPs on those NICs. (These are real physical NICs, not aliases or anything.) I send out one ARP request packet and get nine ARP is-at replies, one for each MAC address. I could implement my own solution by sniffing packets and checking for any replies with a MAC address other than the local NICs', but it seems like there ought to be an easier way.
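
    If it does come down to rolling your own, a minimal sketch of that sniff-and-filter idea with the third-party Scapy library might look like this (the interface names and probed IP are placeholders; run as root):

        from scapy.all import ARP, Ether, get_if_hwaddr, srp

        def find_foreign_claimants(ip, probe_iface, local_ifaces):
            # MACs belonging to this host; replies from these are not conflicts.
            local_macs = {get_if_hwaddr(i).lower() for i in local_ifaces}
            # Broadcast a who-has for `ip` out of one NIC and gather all replies.
            answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip),
                              iface=probe_iface, timeout=2, verbose=False)
            return [rcv[Ether].src for _, rcv in answered
                    if rcv[Ether].src.lower() not in local_macs]

        # Example: probe from eth0, ignoring replies from any of our nine NICs.
        conflicts = find_foreign_claimants("192.168.1.50", "eth0",
                                           ["eth%d" % n for n in range(9)])
        print("duplicate IP holders:", conflicts)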


  • Trouble with setting up Mac SSH with TP-LINK router

    - by arxanas
    I have a Mac running OS X 10.7.2 and a TP-Link TL-WR740N router. Remote Login is on in the Mac's System Preferences, and port 22 is set to forward on the router. I can access my Mac as a web server using the external IP on port 80, which I have set up through the same port-forwarding mechanism provided by the router, but when I try to ssh server@external-ip, it just times out after a long while. (The same thing happens when I try VNC.) I can, however, SSH and VNC successfully into that computer while I'm on the same network, using its internal IP. Since SSH appears to work and port forwarding appears to work, I can't figure out what's causing the problem. Does anyone have any idea what might cause this?


  • MySQL Workbench on Ubuntu 12.04 doesn't start after latest (Jun 12) updates

    - by Atul Kakrana
    MySQL Workbench was working fine till today. I installed the regular updates and now it just doesn't start. When started, it just shows the opening screen and nothing happens. I tried re-installing it from Synaptic but no luck. I use it all the time and am now suffering a lot. Any help will be appreciated. When run from the terminal with:

        mysql-workbench --log-level=debug3 --verbose

    it gives a long log; please see it at http://pastebin.com/Z2t8pdZF. I see these errors in the log but don't know what they mean or how it stopped working by itself:

        /home/atul/.mysql/workbench/wb_state.xml:1: parser error : Document is empty
        ^
        /home/atul/.mysql/workbench/wb_state.xml:1: parser error : Start tag expected, '<' not found
        ^
        /home/atul/.mysql/workbench/user_starters.xml:1: parser error : Document is empty
        ^
        /home/atul/.mysql/workbench/user_starters.xml:1: parser error : Start tag expected, '<' not found
        ^
        /home/atul/.mysql/workbench/starters_settings.xml:1: parser error : Document is empty
        ^
        /home/atul/.mysql/workbench/starters_settings.xml:1: parser error : Start tag expected, '<' not found

    Atul


  • dpkg E: Sub-process /usr/bin/dpkg returned an error

    - by user81269
    I decided to shift around the partitions on my hard drive for a fresh install of Kubuntu. I booted my Ubuntu 10.10 live disc, shifted everything around, and attempted to install GRUB, and it didn't work, so I burnt an Ubuntu 12.04 disc and installed that. I got the computer working and wanted to install some packages, but didn't have an internet connection at the time. So (I know this was stupid) I got some debs from previous versions of Ubuntu, as I needed my music, and the other install took a long time to boot. Once I got my internet connection back, everything worked OK for a little while. Then I stumbled upon this problem after removing ten broken packages using Synaptic:

        drhax@Spamotard:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          libgtk2.0-cil
        0 upgraded, 0 newly installed, 1 to remove and 417 not upgraded.
        1 not fully installed or removed.
        After this operation, 2,638 kB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 103052 files and directories currently installed.)
        Removing libgtk2.0-cil ...
        E: File does not exist: /usr/share/cli-common/packages.d/policy.2.6.gtk-dotnet.installcligac
        dpkg: error processing libgtk2.0-cil (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         libgtk2.0-cil
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Help would be appreciated. This is my first post, but I do know a fair bit about Ubuntu, so feel free to point out any stupid mistakes I have made.


  • Keeping file for personal use with GPG

    - by trixcit
    I have a small text file with personal (sensitive) information. I'm currently encrypting/decrypting it with a Makefile, as described at http://www.madboa.com/geek/gpg-quickstart/; the relevant section is:

        edit:
            @umask 0077;\
            $(GPG) --output $(FILEPLAIN) --decrypt $(FILECRYPT)
            @emacs $(FILEPLAIN)
            @umask 0077;\
            $(GPG) --encrypt --recipient $(GPGID) $(FILEPLAIN)
            @$(RM) $(FILEPLAIN)

        view:
            @umask 0077; $(GPG) --decrypt $(FILECRYPT) | less

    This works fine for viewing, but not for editing: I first have to enter my password, then edit the file, but to encrypt it afterwards I again have to enter my password twice (and it's a long one). Is there a better way to do this?


  • AutoCAD 11 and network file shares

    - by gravyface
    Small network of perhaps half a dozen engineers, currently working on local copies of AutoCAD project files, which are then copied back up to the file server (2008 Standard, 1-2 year old Dell server hardware, RAID 5 SAS disks (10k? not positive)) at the end of the day. To me, this sounds horribly inefficient and error-prone; however, I've been told that "AutoCAD and network files = bad idea" and this is gospel. The network is currently 10/100 (perhaps this is the reason for the "gospel"), but all the workstations are under 2 years old and have GbE NICs, so an upgrade of the core switch is long overdue. However, I know certain applications don't like network access at all, and any sign of latency or disruption brings the whole thing crashing down. Anyone care to chime in?


  • apache2 mod_proxy configuration for single threaded servers

    - by The Doctor What
    I have multiple instances of Thin running behind Apache 2.2's mod_proxy. The problem I have is that a couple of pages, by design, take a while to run. If I just configure Apache the obvious way (just add the Thin URLs as BalancerMember lines with no other configuration), then if someone hits a long-running page and enough web requests arrive while it is running, someone eventually gets routed to that same Thin server and has to wait. Does anyone have some best practices or a suggested configuration for mod_proxy and Thin? Ciao!
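
    A commonly suggested starting point is to cap Apache at one connection per backend, so a busy single-threaded Thin never gets a second request queued behind a long-running one. A sketch, assuming Thin instances on localhost ports 3000-3002; note that with the prefork MPM the max= limit is tracked per Apache child rather than globally, so check the mod_proxy docs for your MPM:

        <Proxy balancer://thincluster>
            # max=1: at most one connection per member, so requests wait in
            # the balancer instead of behind a busy single-threaded Thin.
            # acquire=100: wait up to 100 ms for a member to free up before
            # trying another one.
            BalancerMember http://127.0.0.1:3000 max=1 acquire=100
            BalancerMember http://127.0.0.1:3001 max=1 acquire=100
            BalancerMember http://127.0.0.1:3002 max=1 acquire=100
        </Proxy>
        ProxyPass / balancer://thincluster/
        ProxyPassReverse / balancer://thincluster/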


  • Transitioning to Transaction Base

    - by Glen McCallum
    I was actually hired at Oracle Health Sciences to work on the HTB application. Long story short, when HL7 version 3 was relatively new ... Canada made an initial sprint at adoption. Since then, progress has slowed. I was part of that initial adoption and learned a lot about the Reference Information Model. At that time we worked mostly with CDA R2 Level 3 (fully coded/structured XML) documents. HTB is an HL7 v3 RIM-based repository. Love it or hate it, the product is unique in the marketplace. One of the advantages is the flexibility of the model: you can aggregate information from literally any source system without any HTB data-model modification and then use that data in a semantically meaningful way. That's extremely powerful. There is a minor speed bump getting up to speed with HL7 v3, there's no doubt about that. I believe that is why Oracle recruited me from Canada originally, so I could have a running start at HTB. In the near future I'm looking forward to an application deep dive with John Hatem.


  • How can you invert the colors of a PDF?

    - by legr3c
    I need to invert all the colors of a PDF document (background, text, graphics, and images). I want it persistent in the file, so the inverted viewing options that some viewers offer won't help. Rasterizing the document and using image-manipulation software is also not an option. I read somewhere that this can be done with the Enfocus PitStop plugin for Acrobat; however, I didn't see a corresponding command anywhere. Am I missing something? Then I read that the ARTS PDF Crackerjack plugin for Acrobat offers negative printing, so I tried that too. The option is there, but it simply doesn't work. I have been searching for a very long time for a way to do this. It seems like a common enough task, but I just can't find out how to do it. Are there maybe any virtual printer drivers or something of the sort that support negative printing? Can anyone help?


  • How to enable connection security for WMI firewall rules when using VAMT 2.0?

    - by Ondrej Tucny
    I want to use VAMT 2.0 to install product keys and activate software on remote machines. Everything works fine as long as the ASync-In, DCOM-In, and WMI-In Windows Firewall rules are enabled and the action is set to "Allow the connection". However, when I try using "Allow the connection if it is secure" (regardless of the connection-security option chosen), VAMT won't connect to the remote machine. I tried using wbemtest, and the error is always "The RPC server is unavailable", error code 0x800706ba. How do I set up at least some level of connection security for remote WMI access for VAMT to work? I googled for the correct VAMT setup and read the Volume Activation 2.0 step-by-step guide, but had no luck finding anything about connection security.


  • Gigabyte GA-P55A-UD4P won't boot from USB drive with RAID enabled

    - by Daniel Schaffer
    I've got a Windows 7 installation image on my USB hard drive, which is set up to be bootable. I know it works because I've used it on several computers, and it works on the computer I'm trying to install to as long as the RAID controller is disabled. However, when I enable the RAID controller and attempt to boot from the USB drive, it hangs for 30-60 seconds and then gives me the "disk boot failure, insert system disk" error, as if it can't find any OS. Just for laughs, I disabled the RAID controller again and it booted fine. I'm having separate, unrelated issues burning a DVD with the ISO, so I would prefer to get this working.


  • What happened to Alan Cooper's Unified File Model?

    - by PAUL Mansour
    For a long time Alan Cooper (in the three editions of his book "About Face") has been promoting a "unified file model" to, among other things, dispense with what he calls the most idiotic message box ever invented: the one that pops up when you hit the close button on an app or form, saying "Do you want to discard your changes?" I like the idea and his arguments, but I also have the knee-jerk reaction against it that most seasoned programmers and users have. While Cooper's book seems quite popular and respected, there is remarkably little discussion of this particular issue on the Web that I can find. Petter Hesselberg, the author of "Programming Industrial Strength Windows", mentions it, but that seems about it. I have an opportunity to implement this in the (desktop) project I am working on, but I face resistance from customers and co-workers, who are of course familiar with the MS Word and Excel way of doing things. I'm in a position to override their objections, but am not sure if I should. My questions are: Are there any good discussions of this that I have failed to find? Is anyone doing this in their apps? Is it a good idea that is unfortunately not practical to implement until, say, Microsoft does it?


  • Linux / Apache web server segmentation fault warnings

    - by jeroen
    Lately I have been receiving a lot of segmentation fault warnings on my web server. The warnings look like:

        [notice] child pid xxxx exit signal Segmentation fault (11)

    I have consulted with the server provider (it is a dedicated Red Hat Enterprise server) and they could not find anything. What I have done so far: since the errors started, I have added more RAM, and I have turned off / turned on several PHP modules (they sent me to a web page where someone had the same problem, caused by an excessive number of PHP modules). At the moments the warnings occur, there seems to be plenty of free RAM left and the number of processes is very low (the number of httpd processes is about a quarter of the maximum allowed). What can be causing these errors?

    Edit: current versions: Apache 2.0.52, PHP 5.2.8, RHEL 4.

    Edit 2: Although I asked this a long time ago, I was never able to solve it until I upgraded to PHP 5.3.


  • Redirect users to internal webpage on first visit

    - by Sihan Zheng
    I have a LAN of maybe 20-30 computers and a Windows Server 2003 server on hand (I can also run any x86 Linux distro). What I am trying to do is redirect users to a web server inside the LAN the first time they visit certain domains. For example, the first time a user visits "google.com", they will be redirected to 192.168.1.2 (a web server, where they will be shown a custom webpage); attempts after that will go to Google. Pretty much what I am trying to do is provide a captive-portal-like service, showing people a custom webpage the first time they try to access certain websites (but not others). I'm pretty flexible on how this can be done, as long as it works. Can you guys give me an idea of how to approach this problem? I am looking for (hopefully) a free solution. Thanks
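
    To make the interstitial side concrete, here is a minimal sketch of a first-visit web server using only Python's standard library. It deliberately glosses over the hard part, which is steering only the first request to this box (via DNS or a transparent proxy), and since the 302 sends the browser back to the same hostname, the DNS side has to stop pointing that name here after the first visit. The bind address, port, and page body are placeholders:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        seen = set()  # client IPs that have already been shown the page

        class FirstVisitHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                client = self.client_address[0]
                if client in seen:
                    # Already saw the notice once: bounce to the real site.
                    host = self.headers.get("Host", "google.com")
                    self.send_response(302)
                    self.send_header("Location", "http://%s%s" % (host, self.path))
                    self.end_headers()
                else:
                    seen.add(client)
                    body = b"<h1>Custom first-visit page goes here</h1>"
                    self.send_response(200)
                    self.send_header("Content-Type", "text/html")
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)

        HTTPServer(("192.168.1.2", 80), FirstVisitHandler).serve_forever()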


  • What are the options for hosting a small Plone site?

    - by Tina Russell
    I've developed a portfolio website for myself using Plone 4, and I'm looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and fits a small, single-admin website. My understanding is that my basic options are thus:

    1. I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I'd have two stipulations for a Plone hosting service: (a) it needs to use Plone 4, for which I've developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such.

    2. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level.

    3. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I'm a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don't know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic for a new website? Which one is more desirable for when a rush of visitors drives up my bandwidth bill?

    One last note: I know Plone isn't common for websites for individuals, but please don't try to talk me out of it here; that's a completely different subject. For now, assume I'm sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org; it's twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn't much help. Thank you!


  • Cablemodem frequent connection loss

    - by LVDave
    I have a Linksys BEFCMU10 cable modem and a WRT54GL router with Tomato 1.27 firmware on Cox cable. My question is this: I get what seem to be random disconnects from the internet, where the cable modem lights are still normal but I can connect nowhere, either via a URL or an IP address. At the same time these disconnects are happening, I can go to the router's Tomato management webpage and release/renew my external IP address from Cox's DHCP server. I've had Cox look at the signal levels on the cable modem, and they say they look fine. What brings the modem back, sometimes for as long as 17 days, is several power cycles of the modem. I don't understand the underlying cable modem technology too well, but I do know that if I'm able to release/renew the DHCP-provided WAN address, I'd expect that the cable modem was working OK... Anybody have any ideas?


  • Please help me, I need some solid career advice, put myself in a dumb situation

    - by Kevin
    Hi, first off, I just want to say thank you in advance for looking at my question; I would really value your input on this subject. My core question is how to proceed from the following predicament. I will be honest with you: I wasted my college experience. I slacked off and didn't take any of my comp sci classes that seriously; somehow I still got out with a 3.25 GPA. But truth be told, I learned nothing. I befriended most of my professors, who went pretty lenient on me in terms of grading. However, I basically came out of college knowing how to program a simple calculator in VB.NET. I was (to my great surprise) hired by a very large, respected company in Denver as a junior developer. Well, the long and the short of it is that I knew so little about programming that I quickly became the office pariah and was almost fired due to my incompetence. It has been 8 months now, and I feel I have learned some basic things and am not as picked on as I used to be by the other developers. However, everyone hates me, and the first few months have given the other developers a horrible perception of me. I am no longer afraid of code or learning, but I have put myself in the precarious position of being the scapegoat of our department. I hate going to work every day because no one there is my friend and pretty much everyone is hostile to me. What should I do? Any advice?


  • vsftpd: chroot_local_user causes GNU/TLS-error

    - by akrosikam
    Distro: Ubuntu 12.04.2 Server 32-bit
    FTP server: vsftpd 2.3.5 (from the default "main" repository)

    Problem: since upgrading from Ubuntu 10.04 to Ubuntu 12.04 (nothing changed on the client side), vsftpd has refused to make chroot jails with the "chroot_local_user" directive on FTP(e/i)S connections. Here's my vsftpd.conf:

        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=022
        dirmessage_enable=YES
        xferlog_enable=YES
        xferlog_std_format=YES
        ftpd_banner=How are you gentlemen.
        listen=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        userlist_deny=NO
        tcp_wrappers=YES
        connect_from_port_20=YES
        ftp_data_port=20
        listen_port=21
        pasv_enable=YES
        pasv_promiscuous=NO
        pasv_min_port=4242
        pasv_max_port=4252
        pasv_addr_resolve=YES
        pasv_address=your.domain.com
        ssl_enable=YES
        allow_anon_ssl=NO
        force_local_logins_ssl=YES
        force_local_data_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=NO
        ssl_sslv3=NO
        rsa_cert_file=/home/maw/ssl_ftp_test/vsftpd.pem
        rsa_private_key_file=/home/maw/ssl_ftp_test/vsftpd.pem
        debug_ssl=YES
        log_ftp_protocol=YES
        ssl_ciphers=HIGH
        chroot_local_user=NO

    How to reproduce:

    1. Have a working SSL/TLS-secured vsftpd configuration (I suggest one similar to the above) ready.
    2. Try to connect with an FTP user client and upload some files. With my setup, the config above works well at this point.
    3. Edit /etc/vsftpd.conf and set chroot_local_user= to YES. Make sure that chroot_list_enable= and/or chroot_list_file= are not set; comment them out if they are. Save and exit.
    4. Run sudo restart vsftpd (or sudo service vsftpd restart if you like) in a terminal.
    5. Try to connect with an FTP user client. You should see a message more or less like this:

        GnuTLS error -15: An unexpected TLS packet was received.

    This is an issue for me, as I do not want FTP sessions to be able to list files outside the user's home folder. I have checked with several client-side apps, and I get the same results with every one of them. FileZilla is not so good regarding cipher methods nowadays, but as I am able to make an FTP(e)S connection over TLS (as long as chrooting is disabled and ssl_ciphers is set to HIGH), I have a feeling ciphers are not the issue this time, and that I won't find the answer by tweaking configs on the client side. My vsftpd.log stays empty, even though debug_ssl and log_ftp_protocol are enabled, so no info there either.


  • Wireless detected but can't connect to network, WiFi card Intel Pro Wireless N100 BGN

    - by Alexandre777
    Thanks in advance. I installed Linux Mint, kernel 3.0.0-12 x64. Wireless is detected and I configured it in Network Manager, but it can't connect. Here are the results from the command line:

        $ iwconfig
        lo        no wireless extensions.
        eth0      no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated  Tx-Power=off
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off

        $ sudo lshw -C network
          *-network
               description: Wireless interface
               product: Centrino Wireless-N 1000
               vendor: Intel Corporation
               physical id: 0
               bus info: pci@0000:03:00.0
               logical name: wlan0
               version: 00
               serial: 00:1e:64:51:c9:d6
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
               resources: irq:48 memory:d7400000-d7401fff
          *-network
               description: Ethernet interface
               product: AR8131 Gigabit Ethernet
               vendor: Atheros Communications
               physical id: 0
               bus info: pci@0000:06:00.0
               logical name: eth0
               version: c0
               serial: 48:5b:39:99:8f:16
               size: 100Mbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.1.2 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
               resources: irq:51 memory:d3800000-d383ffff ioport:8000(size=128)

        $ rfkill list all
        0: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no

    Thank you.

