Search Results

Search found 36645 results on 1466 pages for 'local content'.

Page 441/1466 | < Previous Page | 437 438 439 440 441 442 443 444 445 446 447 448  | Next Page >

  • MySQL #2002 problem

    - by Systeem Faillure
    When I try to log in through Apache I get a #2002 error. When I try to log in via the terminal with mysql -u root, it asks for my password and then gives: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). I tried removing MySQL and installing it again, and nothing changed (I can't even remove it). I tried restarting Apache, but still nothing. I rebooted my PC and of course it still isn't working. Running sudo start mysql in the terminal gives: start: Job failed to start. mysql -ubob -hlocalhost -P3306 -p gives the same ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). sudo service mysql stop gives: stop: Unknown instance:. The basic settings in my.cnf are:

        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking

    I checked whether the file mysqld.sock exists in /var/run/mysqld/ or /var/tmp/ and couldn't find it. I looked in /var/log/mysql/ and there is nothing in it at all. Can anyone help? I have searched Google for hours and found nothing that helps.

    -------EDIT----------

    salem: sudo service mysql start gives start: Job failed to start. The output of cat /var/log/syslog | grep mysql is here: http://paste.ubuntu.com/1335984/. The folder /var/run/mysqld/ exists but has nothing in it; listing it gives:

        total 0
        drwxr-xr-x  2 mysql root  40 Nov  5 22:31 .
        drwxr-xr-x 25 root  root 860 Nov  5 22:32 ..

    Sorry for posting everything my terminal gives, but I am a noob at this. I hope this information will do.
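
    The socket never appears because mysqld itself is failing to start, so the socket error is a symptom rather than the cause. A minimal diagnostic sketch, assuming an Ubuntu system with an upstart-managed MySQL (which the start/stop commands above suggest); paths match the my.cnf quoted in the question:

        # on Ubuntu, mysqld logs startup failures to syslog when its own log is empty
        grep mysqld /var/log/syslog | tail -n 20

        # check ownership and free space for the data directory, two common
        # reasons mysqld dies before it can create /var/run/mysqld/mysqld.sock
        ls -ld /var/lib/mysql /var/run/mysqld
        df -h /var /var/lib

        # run mysqld in the foreground to see the real error message
        sudo -u mysql mysqld --print-defaults
        sudo -u mysql mysqld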

    Read the article

  • Running psql in Linux Command Line

    - by Mr Shoubs
    I've just installed Postgres 9 and it is up and running without any issues. There is one thing, however, that is confusing me: if I type /usr/local/pgsql/bin/psql test, the Postgres command line loads and I can use it as expected. However, if I cd /usr/local/pgsql/bin and then type psql test, I get the following error: The program 'psql' is currently not installed. To run 'psql' please ask your administrator to install the package 'postgresql-client-common'. Does anyone know why? (Please don't say install postgresql-client-common, as this doesn't solve the problem.)
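
    That message comes from Ubuntu's command-not-found handler: the shell looks commands up on $PATH, which does not include the current directory, so sitting in /usr/local/pgsql/bin changes nothing. A short sketch of the two usual fixes:

        # run the binary explicitly from the current directory
        ./psql test

        # or put the Postgres bin directory on your PATH (e.g. in ~/.bashrc)
        export PATH="/usr/local/pgsql/bin:$PATH"
        psql test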

    Read the article

  • Full-speed internal switch bandwidth but per-port limits on external bandwidth?

    - by garg
    I am in an environment where all the machines are behind a switch that I don't have access to. Each Ethernet wall port has limited bandwidth depending on how much has been paid for that port. The problem is that some people have 10 Mbps connections and some have 100 Mbps connections, and this causes problems with local intranet file transfers and operating system/software deployments. Operating systems can take hours to deploy if the machine is on 10 Mbps. Do you know if it is possible, with most switches, to set a rule that would limit bandwidth coming in from or going out to an extranet, but keep full bandwidth if the packets are destined for a local machine? For example, internet traffic might be limited to 10 Mbps, but internal servers would get gigabit speeds? Thanks
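
    Whether this is possible on the switch itself depends entirely on the vendor: many managed switches can rate-limit per port or apply QoS policies keyed on destination subnet, but the configuration is model-specific. As an illustration of the idea on a Linux router sitting between the LAN and the uplink (the interface name and subnet are placeholders), traffic to local addresses can be classed separately from everything else:

        # htb with two classes: full speed for the local subnet, 10 Mbit for the rest
        tc qdisc add dev eth0 root handle 1: htb default 20
        tc class add dev eth0 parent 1: classid 1:10 htb rate 1000mbit   # intranet
        tc class add dev eth0 parent 1: classid 1:20 htb rate 10mbit     # extranet
        tc filter add dev eth0 protocol ip parent 1: prio 1 u32 \
            match ip dst 192.168.0.0/16 flowid 1:10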

    Read the article

  • How's your Momma an' them?

    - by Bill Jones Jr.
    When a Southern “boy” like me sees somebody that used to be, or should be, a close friend or relative that they haven’t seen in a long time, that’s a typical greeting.  Come to think of it, we were often related to close friends. So “back in the day”, we not only knew people but everybody close to them.  When I started driving, my Dad told me to always drive carefully in Polk county.  He said if I ran into anybody there, it was likely they would be related or close family friends. Not so much any more… the cities have gotten bigger and more people come south and stay.  One of the curses of air conditioning I guess. Anyway, it’s been a while.  So “How’s your Momma and them”?  Have you been waiting for me to blog again?  Too bad, I’m back anyway <smile>. Here in Charlotte we just had another great code camp.  The Enterprise Developers Guild is going strong, thanks to the help of a lot of dedicated people.  Mark Wilson, Brian Gough, Syl Walker, Ghayth Hilal, Alberto Botero, Dan Thyer, Jean Doiron, Matt Duffield all come to mind.  Plus all the regulars who volunteer for every special event we have. Brian Gough put on a successful SharePoint Saturday.  Rafael Salas and our friends at the local Pass SQL group had a great SQL Saturday.  Brian Hitney and Glen Gordon keep on doing their usual great job for developers in the southeast as our local Microsoft reps. Since my last post, I have the honor of being designated the INetA Membership Mentor for Georgia in addition to mentoring the groups in the Carolinas for the past several years.  Georgia could be a really good thing since my wife likes shopping in Atlanta, not to mention how much we both like Georgia in general.  As I recall, my Momma had people in Georgia.  Wonder how their “Mommas an’ them” are doing?   Bill J

    Read the article

  • Options for installing software on Amazon EC2 Windows instances

    - by gareth_bowles
    I've been running Linux servers on Amazon EC2 for a while now; the experience has been great. I've recently needed to bring up a Windows server to run some Windows-only software that our product needs to use, and am running into a problem figuring out how to install the software, which is only available on DVD. With Linux I can just install packages from a web-based repository and take advantage of EC2's fast network throughput, but so far on the Windows instance I've had to upload my ISO images to EC2 and mount them from the Windows EC2 instance. For some reason I'm getting really slow upload speeds to EC2, even though the regular upload speed from our office is pretty good (around 7 Mbps). I've also tried mounting the DVD drive on my machine as a local drive on the EC2 instance via Remote Desktop, and then running the software install from the local drive, but I run into the same slow upload speed issue. Does anyone have a better way to install software from physical media onto an EC2 instance?
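
    One option is to stage the ISO in S3 rather than pushing it straight at the instance: an S3 upload can be retried and resumed, and the instance then pulls the image over AWS's internal network. A sketch assuming the s3cmd tool and a bucket you own (the names are placeholders):

        # rip the DVD to an ISO on the local machine
        dd if=/dev/cdrom of=software.iso

        # upload it to S3 from the office
        s3cmd put software.iso s3://my-staging-bucket/software.iso

        # on the Windows instance, fetch the object with any S3 client,
        # then mount the ISO or unpack it with a tool like 7-Zip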

    Read the article

  • SharePoint 2010 not seeing Active Directory users

    - by user117927
    I'm pretty new to Active Directory and SharePoint, but I was given to understand they are supposed to play well together. I have successfully set up AD with multiple user accounts that work on any member computer. I have also successfully installed SharePoint 2010 Server on an AD machine. The AD server and the SharePoint server are on separate machines (VMs running on ESXi, to be precise). I can only log on with user accounts I create on the local server. Furthermore, the user browser for adding users will only see local users. I've followed the advice here http://technet.microsoft.com/en-us/library/cc262350.aspx#section2 for classic authentication and also NTLM claims-based authentication, but to no avail. Is there something fundamental I am getting wrong here? I'd be really thankful for any help you can lend me; I've been googling and scratching my head for a couple of days now. P

    Read the article

  • Microsoft Entourage/Exchange Server problem: all objects disappeared from server - still in some form locally

    - by splattne
    One of our employees works with Entourage on his MacBook Pro (OS X 10.6), accessing Exchange Server 2007. Last Friday morning, I think while he was working over a VPN, Entourage (I think it was Entourage) deleted all his objects (mail, calendar, contacts) on the server while creating a lot of strange folders (starting with underscores) on the client. The local data seems to be there, but not in a consistent form. Since the user's mailbox is rather big, I suspect that there was some kind of "move" operation which did not complete. I tried to export the data, but the export stops because of a corrupted object. Is there a tool or another way to export or retrieve the local data? Edit - FYI: we solved the problem by restoring his data from the previous night's backup.

    Read the article

  • What are my options for sharing music between Windows & Ubuntu on the same network?

    - by jgbelacqua
    We have a few Windows (XP & 7) and Ubuntu machines in the house sharing a wireless connection, and want to share music between them. If possible, I would like to be able to serve music from both Windows and Ubuntu (but it doesn't have to be at the same time). I don't know much about sharing folders or streaming, but I'm guessing both would be options (that is, using a local client to access a shared song or a local client to access a shared stream). I want to be able to share the music between the systems as simply as possible. Bonus points (but not requirements) for:

    - cross-platform (same application on both Windows and Ubuntu)
    - available on startup (via daemon or autostart or whatnot)
    - open source

    More info:

    - All systems have dynamic addresses (DHCP) supplied by the ISP-supplied wireless router.
    - There are several gigabytes of music on one Windows XP box and one Ubuntu 10.10 box.
    - The music is not well sorted (I'm thinking this might have an impact on UI usability).
    - It only has to be available internally (private address space behind the wireless router); bandwidth is not a problem.
    - We don't have (legitimate) admin access to the wireless router.

    Read the article

  • Re-streaming RTMP stream

    - by Yvan JANSSENS
    I have a set of local RTMP stream servers on my network, but I want them to be reachable from outside. The bandwidth is too narrow to serve multiple clients from the stream servers on my network, so the idea is to pull the local RTMP streams onto a computer serving as a gateway, which in turn pushes them to a hosted streaming provider. It is not possible to have the sources of the streams push directly to the server outside, due to network policy restrictions. Scheme of what I'm trying to accomplish:

        Internal network                 |  External network
        ------------      ------------   |   -----------------------
        | internal | <--- | Gateway  | -----> | streamserver outside |
        | streams  |      ------------   |   -----------------------
        ------------                     |              ^
                                         |              |
                                         |         -----------
                                         |         | clients |
                                         |         -----------

    My question now is: which application can pull a live stream from an RTMP source (Flash Media Server) and push it to another one (the Flash Media Server at the hosting provider)?
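
    A minimal sketch of the relay using ffmpeg, which can read an RTMP feed and republish it without re-encoding (the URLs and stream names are placeholders); rtmpdump piped into an encoder is another common combination:

        # pull from the internal server, push to the hosted provider;
        # "-c copy" relays the packets as-is, so the gateway does no transcoding
        ffmpeg -i rtmp://internal-server/live/stream1 \
               -c copy -f flv rtmp://hosted-provider.example/live/stream1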

    Read the article

  • Skinning with DotNetNuke 5 Super Stylesheets Layouts - 12 Videos

    In this tutorial we demonstrate how to use Super Stylesheets in DotNetNuke for quickly and easily designing the layout of your DotNetNuke skin. Super Stylesheets are ideal for both beginner and experienced skin designers; the advantage of Super Stylesheets is that you can easily create a skin layout which works in all browsers without the need to learn complex CSS techniques. We show you how to build a skin from the very beginning using Super Stylesheets. The videos contain:

    Video 1 - Introduction to the Super Stylesheets DNN Layouts and Initial Setup
    Video 2 - Setting Up the Skin Layout Template Code
    Video 3 - Using the ThreeCol-Portal Layout Template for a Skin
    Video 4 - How to Add Tokens to the Skin
    Video 5 - Setting Background Colors for Content Panes and Creating CSS Containers
    Video 6 - How to Create a Footer Area and Reset the Default Styles
    Video 7 - How to Style the Text in the Content, Left and Right Panes
    Video 8 - SEO Skin Layouts for DotNetNuke Tokens
    Video 9 - Creating Several Skin Layouts Using the Layout Templates
    Video 10 - Further Layout Templates and MultiLayout Templates
    Video 11 - SEO Layout Template Skins
    Video 12 - Final SEO Positioning of the Skin Code

    Total Time Length: 97min 53secs

    Did you know that DotNetSlackers also publishes .NET articles written by well-known .NET authors? We already have over 80 articles in several categories, including Silverlight. Take a look: here.

    Read the article

  • How to benchmark apache/nginx setup

    - by Saif Bechan
    I am planning to set up nginx as a reverse proxy: Apache will deliver my dynamic content, and nginx will deliver the static content. The configuration I have now is just Apache with FastCGI. This gives me no configuration problems and runs great. After I have set up nginx, I want to run some benchmarks to see if I really got a performance increase; otherwise I will switch back. Does anyone know how I can benchmark this type of setup? Or maybe someone did this already and has some canned results; I would be glad to hear them.
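
    A simple sketch using ApacheBench (ab), which ships with Apache; siege is a common alternative. Run identical tests before and after the switch and compare requests per second and the latency percentiles. The URL and the numbers are placeholders; benchmark a static asset and a dynamic page separately, since the reverse proxy mainly changes how static files are served:

        # 1000 requests, 50 concurrent, against a static file and a dynamic page
        ab -n 1000 -c 50 http://yoursite.example/images/logo.png
        ab -n 1000 -c 50 http://yoursite.example/index.php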

    Read the article

  • Problem locating glaux.h

    - by Rodnower
    Hello, I am trying to compile code that begins with:

        #include <stdlib.h>
        #include <GL/gl.h>
        #include <glaux.h>

    with the command:

        cc -o test test.c -I/usr/local/include -L/usr/local/lib -lMesaaux -lMesatk -lMesaGL -lXext -lX11 -lm

    But one of the errors I get is:

        test.c:3:18: error: glaux.h: No such file or directory

    Then I tried yum provides glaux.h, but yum didn't find anything. Before all this I installed Mesa with yum install mesa*. So, can anyone tell me where I can get the header file? Thanks in advance.
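
    For what it's worth, glaux was SGI's old OpenGL auxiliary library, and modern Mesa packages don't ship it, which would explain why yum finds no provider. A quick check, plus the usual porting route (the package names assume a Fedora/RHEL-style system, since the question uses yum):

        # confirm the header really isn't anywhere on the system
        find /usr -name 'glaux.h' 2>/dev/null

        # most tutorials that used glaux port cleanly to GLUT/freeglut
        yum search freeglut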

    Read the article

  • New technical product guide for Sun Ray clients

    - by Jaap
    In the Oracle online documentation system a new Sun Ray Clients Technical Product guide has been published. The document provides detailed information about the similarities and differences between the three Sun Ray client hardware models: Sun Ray 3, Sun Ray 3 Plus and Sun Ray 3i. From the Technical Product guide I want to quote the following section: "......Since Sun Ray 3 Series Clients have no local operating system and require no local management, they eliminate the complexity, expenses, and security vulnerabilities associated with other thin client and PC solutions......" This has always been one of the great advantages of Sun Ray clients compared to other thin clients (which are actually low-fat PCs where you have to manage thin client OS images). The guide lists the features and technical specifications of the Sun Ray clients, such as number of ports, chassis, graphics, network interfaces, power supply, operating conditions, MTBF, reliability, and other standards. The guide also contains a separate chapter about environmental data. As you may know, the Sun Ray 3 Series clients are designed specifically to be sensitive to a spectrum of environmental concerns and standards, from materials to manufacturing processes to shipping, operation, and end of life. The Sun Ray 3 Series clients comply with environmental standards and certifications such as Energy Star 5.0, EPEAT, WEEE and RoHS (see the Oracle policy for RoHS and REACH).

    Read the article

  • Is there a way I can use $PATH as defined by my bash profile?

    - by Adam Backstrom
    I spend most of my day ssh'd into servers. I have a series of aliases/functions/scripts that allow me to type p hostname from the terminal and execute GNU screen(1) on the remote side, using the following command: exec ssh hostname -t 'screen -RD'. I've only recently noticed that ssh -t does not get my custom $PATH. Here's some terminal output:

        adam@workstation:~:0$ ssh server 'echo $PATH'
        /home/adam/bin:/usr/local/bin:/bin:/usr/bin:/opt/git/bin:/opt/git/libexec/git-core
        adam@workstation:~:0$ ssh server -t 'echo $PATH'
        /usr/local/bin:/bin:/usr/bin
        Connection to uranus.plymouth.edu closed.

    My biggest problem is that my custom aliases only try to execute screen, since I can't guarantee an absolute path, and my $PATH is structured so the shell should find the correct one. If my $PATH settings aren't honored, my scripts don't work. Is there a way I can use $PATH as defined by my .bashrc/.bash_profile? I believe PermitUserEnvironment is disabled.
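
    When ssh is given a command to run, the remote bash is neither interactive nor a login shell, so ~/.bash_profile is never read (exactly what gets sourced depends on the server's bash and sshd configuration). A sketch of one workaround: have the alias start a login shell explicitly, so the profile, and with it the PATH, is loaded before screen runs:

        # force a login shell on the remote side; bash -l reads the profile,
        # -c runs the command with the resulting environment
        exec ssh hostname -t 'bash -lc "exec screen -RD"'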

    Read the article

  • How do I find a qualified web designer in my area?

    - by Incognito
    I just sent out emails to five local web design companies in my area, asking them to take drawings to HTML/CSS/jQuery. None of the ones who accepted the deal seem suitable to me. Others rejected the offer because they wanted to 'provide an end-to-end solution' or are 'booked till June'. The local companies did not seem suitable because my review process is this: go to their website and do a view-source. I'll see really weird things (contact-us forms that go nowhere), really old things (mm_menu.js), and portfolios that are non-existent (aren't on the site, don't link anywhere, or otherwise). The company would like to hire as locally as possible rather than outsource to another country.

    Answers I'm looking for:

    - Processes you use when searching for someone
    - How you qualify their aptitude for the project
    - Anything that you think I'm doing wrong, or should be doing as well

    Answers I'm not looking for:

    - "Hello sir please contact me we do everything for 10 dolla."
    - "My bud's great at this stuff, call him."
    - "example.com is the best for this."

    Read the article

  • Apache redirecting: reason unknown

    - by Sinan
    I have a simple PHP script. The script is not important; it just prints out $_SERVER. When I request a URL like www.server.com/?ref=bar everything is fine. However, if the request contains something like www.server.com/?ref=http://www.test.com (?ref=http%3A%2F%2Fwww.test.com), the server redirects to 403.shtml. No redirect for http://x, but it redirects for http://x.y. As far as I can understand, somehow the server doesn't like "http://x". It always redirects to 403.shtml when there is a valid URL in the query string. My .htaccess file is the same on both my server and my local test server, and the local test server behaves as expected (no redirects), so I don't think it is related to .htaccess. I'm on shared hosting at Hostgator. Can anyone help? Edit: Here's the .htaccess file:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?$1 [L,QSA]

    When there is an http://xx.x in the query string it redirects to 403 even if there is a physical file. However, if I remove the .htaccess file, the redirect to 403 also disappears. But I need the above .htaccess file. Is there a way to get around this?
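
    A 403 that appears only when the query string contains a full URL, and only on the shared host, is the classic signature of mod_security, which shared hosts like Hostgator commonly run: a URL in a query string matches its remote-file-inclusion attack rules. Treating that as an assumption to verify with the host, a diagnostic sketch:

        # reproduce the difference from the command line
        curl -sI 'http://www.server.com/?ref=bar' | head -n 1
        curl -sI 'http://www.server.com/?ref=http%3A%2F%2Fwww.test.com' | head -n 1

        # if the host confirms mod_security, either ask them to whitelist the
        # rule for your account, or avoid raw URLs in the query string, e.g.:
        php -r 'echo urlencode(base64_encode("http://www.test.com")), "\n";'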

    Read the article

  • Prevent Eclipse Java Builder from Compiling Java-Like Source

    - by redjamjar
    I'm in the process of writing an Eclipse plugin for my programming language Whiley (see http://whiley.org). The plugin is working reasonably well, although there's lots to do. Two pieces of the jigsaw are: (1) I've created a "Whiley Builder" by subclassing the incremental project builder, which handles building and cleaning of "*.whiley" files; (2) I've created a content type called "Whiley Source Files" for "*.whiley" files, which extends "org.eclipse.jdt.core.javaSource" (this follows Andrew Eisenberg's suggestion). The advantage of having the content type extend javaSource is that it immediately fits into the package explorer, etc. In principle, I could flesh out ICompilationUnit to provide more useful info, although I haven't done that yet. The disadvantage is that the Java builder is trying to compile my Whiley files ... and it obviously can't. Originally, I had the Java builder run first, then the Whiley builder. Superficially, this actually worked out quite well, since all of the errors from the Java builder were discarded by the Whiley builder (for Whiley files). However, I actually want the Whiley builder to run first, as this is the best way for me to resolve dependencies between Java and Whiley files. Which leads me to my question: can I stop the Java builder from trying to compile certain Java-like resources? Specifically, in my case, those with the "*.whiley" extension. As an alternative, I was wondering whether my Whiley Builder could somehow update the resource delta to remove those files which it has dealt with. Thoughts?

    Read the article

  • Problem with installing the sqlite3 module for Python 2.6 on an Ubuntu system

    - by Hoang
    I need to run the sqlite3 module on Python 2.6 on an Ubuntu system. How do I install this module for Python 2.6? Somehow I don't have it; importing raises this error:

        >>> import sqlite3
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/usr/local/lib/python2.6/sqlite3/__init__.py", line 24, in <module>
            from dbapi2 import *
          File "/usr/local/lib/python2.6/sqlite3/dbapi2.py", line 27, in <module>
            from _sqlite3 import *
        ImportError: No module named _sqlite3
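
    The /usr/local path in the traceback suggests this Python 2.6 was built from source, and the _sqlite3 C extension is only built when the SQLite development headers are present at compile time. A sketch of the fix under that assumption (the source path is a placeholder):

        # install the headers, then rebuild Python so _sqlite3 gets compiled
        sudo apt-get install libsqlite3-dev
        cd /path/to/Python-2.6.x
        ./configure && make && sudo make install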

    Read the article

  • OSX 10.6 integration into NIS/netgroup/automount infrastructure

    - by mdpc
    I have an existing infrastructure where accounts are maintained under NIS (yp), with no local Unix accounts. All the standard maps, including hosts, mail aliases, netgroups, etc., are also maintained in this form. There is extensive use of the UNIX/Linux automounter, with items scattered over the network on NFS servers. There are no ACLs on any local or shared files. All mail needs to use basically the nullclient sendmail configuration, feeding into a different system. I now have a requirement to integrate an OS X 10.6 system into this environment and make it run seamlessly. My initial reading and second-hand information seem to indicate that this may not be possible on a native OS X 10.6 system. I'm concerned. Any ideas as to how to accomplish this task and make everybody happy? Thanks. PS: I have never used an Apple OS X system.

    Read the article

  • New cloud development workflow using Github, Cloud9ide and CloudFoundry.

    - by weng
    So times are changing towards cloud development/computing. I'm trying to work out the new "cloud" workflow based on the services I'm going to use: GitHub, Cloud9 IDE and CloudFoundry. Here is what is on my mind: GitHub acts as the central (main) repo, just like yesterday's local filesystem, and every service bases its work on this main repo. Workflow:

    GitHub: I create a new GitHub repo that serves as the main repo for the project.

    Cloud9 IDE: I open my GitHub repo and write my tests and implementation (BDD/TDD). When I'm ready, I save (commit) it to the main repo on GitHub.

    X: A running instance of Jenkins detects that someone has committed, fetches the latest commit, builds, deploys, tests (yeti and/or selenium) and reports whether the tests passed. If not, I make another commit until all tests pass.

    X: I run the CloudFoundry commands to push the main GitHub repo to CloudFoundry's server, and it deploys my app automatically (see the sketch below).

    What I'm still confused about is where this X environment will be. On a local server where I have to install Jenkins? Or could I install it on Cloud9 IDE (when Java is supported), or will it be on another cloud service? Also, that X environment has to be able to fetch (clone) the GitHub repo and run the build scripts. And since the concept of Cloud9 IDE is very new and there haven't been any predecessors, I really wonder what the workflow will look like. We all know GitHub's workflow. We now know CloudFoundry's workflow (deploy/scale with a RESTful API/command-line tool). But how Cloud9 IDE will operate is still somewhat unclear to me. Someone on Cloud9 IDE mentioned that there will be buttons like deploy, so I can deploy with one click. But that, I guess, will depend on what services the deploy process hooks into, etc. Could someone shed light on this cloud workflow topic and fill in the gaps? Thanks.
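
    A sketch of the deploy step with the vmc command-line tool that CloudFoundry provides (the repo and app names are placeholders). In this setup the "X" environment is simply whichever machine runs Jenkins, since all it needs is git, the build scripts and vmc:

        # clone the main repo, then push the app to CloudFoundry
        git clone git://github.com/you/yourapp.git && cd yourapp
        vmc target api.cloudfoundry.com
        vmc login            # prompts for credentials
        vmc push yourapp     # prompts for app settings on the first push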

    Read the article

  • GWT: reporting crawling errors for non existing links

    - by pixeline
    Google Webmaster Tools is reporting crawl errors for links that never existed, and if I check the "Linked from" tab for a given error link, it shows another URL that never existed. They all mention joomla/, which is not the CMS used on this domain (it's WordPress, FYI). Example:

        http://example.com/joomla/index.php/component/user/register
        Linked from: http://example.com/joomla/component/user/login?return=L2######

    What is going on?

    UPDATE 1: I tried something: I provided one of the faulty URLs to the "Fetch as Google" feature. Instead of returning a 404, it returns a 301 to another Joomla page:

        HTTP/1.1 301 Moved Permanently
        Server: Apache/2.4.3
        X-Powered-By: PHP/5.4.4-10
        X-Pingback: http://example.com/xmlrpc.php
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, max-age=0
        Pragma: no-cache
        Set-Cookie: PHPSESSID=1fgr5v2oip39miibuptd51s8h0; path=/
        Set-Cookie: woocommerce_items_in_cart=0; expires=Sat, 12-Jan-2013 11:44:01 GMT; path=/
        Location: http://example.com/joomla/component/user/register
        Content-Type: text/html; charset=iso-8859-1
        Content-Length: 387
        Date: Sat, 12 Jan 2013 12:44:01 GMT
        Via: 1.1 varnish
        Connection: keep-alive
        Accept-Ranges: bytes
        Age: 0

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="http://example.com/joomla/component/user/register">here</a>.</p>
        <p>Additionally, a 301 Moved Permanently error was encountered while trying to use an ErrorDocument to handle the request.</p>
        </body></html>
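
    A WordPress site answering /joomla/ URLs with a 301 instead of a 404 suggests a stray or injected rewrite rule, a leftover Joomla installation, or a compromised .htaccess. A diagnostic sketch (the paths are placeholders for the actual document root):

        # look for anything referencing joomla in rewrite rules or plugin code
        grep -rn "joomla" /var/www/html/.htaccess /var/www/html/wp-content 2>/dev/null

        # confirm what the server does with one of the phantom URLs
        curl -sI 'http://example.com/joomla/component/user/register' | head -n 5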

    Read the article

  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to a Server 2008 IIS 7.5 box. From remote it gives this error: 401 - Unauthorized: Access is denied due to invalid credentials. (Remote = desktops on the same LAN.) I have tried several remote clients using different browsers (IE, FF, and Chrome), all with the same result. Hitting the application from the desktop of the server itself works flawlessly. However, I have not tried Firebug on the server desktop; I would assume it's still issuing a 401 status code yet returning the content anyway. See Update #2. The application uses Anonymous Authentication and is written in ASP.NET 4.0 using the MVC framework. Static content works fine, for example: http://server.com/content/image.jpg. Sysinternals procmon returns these two results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have two other MVC apps running fine on the same server. I have checked the security on the folders and they all match. The app runs fine on a Server 2008 IIS 7.0 box. Nothing shows up in the event log on the server related to this. Pulling my hair out here, any troubleshooting tips?

    UPDATE #1: This just gets more WTF as I dig. If I click on the application in IIS Manager - Error Pages - Edit Feature Settings and select Detailed Errors, the app works remotely. I'm not leaving this on, so the problem is not solved yet; it's just more confusing.

    UPDATE #2: Using Firebug, I see that the status is still 401 Unauthorized, but the response is returning the application's correct HTML.

    UPDATE #3: Playing around with Failed Request Tracing, here is the WARNING request trace that is causing the 401:

        ModuleName           ManagedPipelineHandler
        Notification         128
        HttpStatus           401
        HttpReason           Unauthorized
        HttpSubStatus        0
        ErrorCode            0
        ConfigExceptionInfo
        Notification         EXECUTE_REQUEST_HANDLER
        ErrorCode            The operation completed successfully. (0x0)

    UPDATE #4: The regular IIS log is showing this:

        #Software: Microsoft Internet Information Services 7.5
        #Version: 1.0
        #Date: 2010-07-20 19:17:22
        #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
        2010-07-20 19:17:22 10.10.1.10 GET /Purchasing/Home - 80 - 10.10.1.12 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US;+rv:1.9.2.6)+Gecko/20100625+Firefox/3.6.6 401 0 0 4414

    Read the article

  • ubuntu eth0 not reconnecting after cable unplugged

    - by Alex
    I'm running Kubuntu 9.10 with GNOME. I have a static IP defined in /etc/network/interfaces. When I unplugged my network cable and rebooted, then reconnected the network cable, I was not able to connect. I tried sudo ifup eth0, and then ifconfig, and it seemed as though the IP address had been assigned and I was connected, but I wasn't. I then did ifdown eth0, and again ifup eth0. For some reason I'm not able to access the network. Furthermore, I also attempted to connect via wlan, and was able to connect to the wireless network, but cannot "see" the network. I can't transfer data or access the internet or anything on the network, including the router. How do I resolve this? (By "access the network" I mean the local network as well as the internet.)

        topsy@monolyth:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:1c:25:1c:df:70
                  inet addr:192.168.1.145  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::21c:25ff:fe1c:df70/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5720 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:565 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:378035 (378.0 KB)  TX bytes:46832 (46.8 KB)
                  Memory:fe000000-fe020000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:240 (240.0 B)  TX bytes:240 (240.0 B)

        topsy@monolyth:~$ ping 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=9.14 ms
        64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=1.24 ms
        64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=1.01 ms
        64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=1.00 ms
        [snip... all OK, icmp_seq from 5-30, time between 0.981-1.25ms]
        ^C
        --- 192.168.1.1 ping statistics ---
        30 packets transmitted, 30 received, 0% packet loss, time 29035ms
        rtt min/avg/max/mdev = 0.971/1.300/9.140/1.458 ms

        topsy@monolyth:~$ route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     *               255.255.255.0   U     0      0        0 eth0
        link-local      *               255.255.0.0     U     1000   0        0 eth0
        default         192.168.1.1     0.0.0.0         UG    100    0        0 eth0

        root@monolyth:~# cat /etc/resolv.conf
        # Generated by NetworkManager
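
    Note that the router pings fine and the routing table has a default gateway, while /etc/resolv.conf contains only a comment and no nameserver lines, so "can't access the internet" may simply be name resolution failing. A diagnostic sketch under that assumption (the nameserver address is a placeholder; the router's own address often works):

        # if a raw IP works but a hostname doesn't, DNS is the problem
        ping -c 3 8.8.8.8
        ping -c 3 google.com

        # quick test: add a nameserver (NetworkManager may overwrite this file)
        echo "nameserver 192.168.1.1" | sudo tee -a /etc/resolv.conf

        # for a permanent static setup, configure DNS where the static IP is
        # defined, e.g. a dns-nameservers line in /etc/network/interfaces
        # (with the resolvconf package) instead of editing resolv.conf directly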

    Read the article

  • Conditional attribute in XML - most concise solution?

    - by Lech Rzedzicki
    I am tasked with setting up conditional profiling: a method of tagging chunks of XML with an attribute, which will then be used as a conditional value to extract a subset of that XML. Have a look at another definition/example: DITA profiling. The XML is documents that are equivalent to printed books, i.e. documents that are often looked at by a human, even if indirectly. Therefore I am looking at a few requirements here:

    1. keeping the value list brief, so it doesn't affect the readability of the document
    2. being able to process it with standard XML tools; a space-separated list inside an attribute is still probably fine, but I'd rather not use too much regexp for this
    3. being obvious for various users, including 3rd parties, about which content goes where
    4. being easy to maintain going forward

    Therefore one easy solution is a space-separated list of target values in an attribute. The problems with this:

    1. As the list grows, the attribute value can be a bit verbose.
    2. One needs to explicitly state every value, even in a scenario of "this versus everything else".

    Therefore I am also looking at other approaches, such as:

    1. Using + and - modifiers, Apache htaccess style, to override the default cascading of profiling: by default all content goes everywhere, and if we want to exclude a bit we just say "-kindle". This does require parsing the whole tree, is not supported by editing tools, and one needs to regexp the attribute value a bit deeper...
    2. Using an intermediate file to define groups of values such as "other" or "non-print" (there is an example of this in DITA). This allows concise XML as well as different grouping and values for each document, but it does create a certain level of abstraction, which may make it a little less obvious for a 3rd party?

    Altogether, if you received such XML and were tasked to process it, which option would you rather receive? If you have any experiences like that, even in unrelated areas such as builds, don't hesitate to comment!
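
    For the simple attribute approach, extraction can stay within standard XPath. A sketch using xmlstarlet (the attribute name "audience", the value "kindle" and the file names are placeholders): it deletes every element that carries a profiling list not including the requested target, so untagged content cascades into every output by default:

        # the concat/space trick makes the match token-safe, so "kindle"
        # does not accidentally match a value like "kindle-fire"
        xmlstarlet ed \
          -d "//*[@audience and not(contains(concat(' ', @audience, ' '), ' kindle '))]" \
          book.xml > book-kindle.xml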

    Read the article

  • Not able to save Global navigation - SharePoint 2007

    - by Ryan
    I have migrated my site collection (migsitecollection) to a different farm using a content deployment job: http://vsmoss/sites/migsitecollection. I used the collaboration portal to create it. It's working fine from where I migrated it, but after running the content deployment jobs, the migrated site's global navigation settings are not getting saved when I try to change them by going to Site Settings - Navigation, and in the logs I can see this error: The SPNavigation store is likely corrupt. I saw on the net that the solution for this problem is changing onet.xml and running a script on the SQL database for the site. I am eager for a better answer than that, but if it is the same, I have a few doubts. First, as my site template is not customised (it is the collaboration portal), I am not sure where exactly to change onet.xml. Second, I am using the same database as my web application; would running that script not affect anything else on my main site?

    Read the article
