Search Results

Search found 13892 results on 556 pages for 'employee info starter kit'.

  • Use Windows 7 inside VirtualBox (as guest) to create a Windows 7 USB with the "Windows 7 USB/DVD Download Tool"? (Linux as host)

    - by Abel Coto
    I want to download the Windows 7 Professional ISO (32-bit) from Microsoft, and I can do one of two things: buy a new burner, since mine doesn't work (I'm trying to decide which DVD writer to buy), or copy the ISO to a USB stick and install from that. I want to install Windows 7 on a netbook that currently runs Debian, and on my PC. I think I only have to buy a license for the PC, since the netbook came with Windows 7 preinstalled, so I suppose I can use that serial to activate Windows, although I don't know how to install Windows 7 Starter instead of Professional (I think if you remove a file from the ISO, Windows lets you choose which edition to install). The problem is that neither PC has any Windows on it, only Debian.

    My father has a netbook with Windows 7 Starter, but I think it has no antivirus (at least until the Kaspersky Internet Security licence for 3 PCs is bought), and I don't trust making the USB there if I don't know whether there is any virus or malware on it. So I'm trying to find a way to create a Windows 7 USB installer, to at least be able to install Windows 7 on the netbook without an external DVD writer.

    I know that with dd on Linux you can copy a debian.iso to a USB stick and then install Debian from it (I've done it), using something like dd if=win7.iso of=/dev/sdb, but I don't know whether this would work for the Windows 7 ISO, and whether dd will copy the ISO to the USB correctly. I suppose that if you can boot and install Windows 7 from the USB, the method works, and you can forget about problems later with the installation (problems because some files were not copied, or the like).

    So I remembered that Microsoft created a tool to copy the ISO to a USB stick from within Windows. I thought I could install VirtualBox on my PC, as it has VT and 8 GB of RAM, download the ISO from Microsoft, install Windows 7 in the virtual machine, copy the ISO inside the machine, download the ISO tool, attach a USB stick to the PC, connect it to the guest, and use the tool to copy the ISO to the USB. But I don't know whether it is possible to use a virtual machine for this, or whether virtualization could cause problems with the USB, or something.

    I found this a few minutes ago: How to make a windows 7 usb flash install media, from linux? The first method (dd) is the one I like and trust more (I don't know whether the second method, using ms-sys, works well and whether I can trust it). I understand that an ISO is like a .rar but uncompressed, only containing the files, so mounting the ISO and cp'ing the data over might be OK. Still, the method I like most is the Microsoft one (mostly because it is from Microsoft, and I suppose they know what they are doing, at least with this USB-related thing). Perhaps it's worth more just to buy an external DVD writer, haha... Should the virtual machine method work?
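    For reference, here is a minimal sketch of the dd approach mentioned above. The device name /dev/sdX is a placeholder; double-check it with lsblk or sudo fdisk -l before writing, because dd will overwrite whatever it points at. Note that, unlike most Linux ISOs, a plain Windows 7 ISO is generally not a hybrid image, so a raw dd copy may well not produce a bootable stick; treat this as an experiment rather than a guaranteed method:

        # write the ISO to the stick, byte for byte (destructive!)
        sudo dd if=win7.iso of=/dev/sdX bs=4M
        # flush buffers before unplugging the stick
        sync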

    Read the article

  • Will this netbook allow me to run multiple programs without issues?

    - by erik
    I'd like to use a netbook to run mIRC, Skype and Messenger, pretty much all at the same time. Is this netbook a good choice? http://www.notebookzone.co.za/default/sony-vpc-w216.html Quick overview: Intel Atom N450 (1.66 GHz), 2 GB RAM, 320 GB HDD, 10.1" WXGA LCD (1366 x 768, LED), Windows 7 Starter 32-bit, only 1.19 kg, webcam, wireless, BT. The combination of the high-resolution wide 10.1" screen and isolation keyboard helps to put the Internet at your fingertips anytime you want it. Available in: white / pink / blue / brown.

    Read the article

  • How to disable my netbook's touchpad when a USB mouse is connected?

    - by overmann
    This is the first computer I have ever bought, and I couldn't bring it home without a mouse of its own. I'm trying to disable the touchpad, but the only option I can find is uninstalling the drivers, which I think is a bit drastic; the buttons for activating and deactivating it are disabled (I'm using Windows 7 Starter). Do you have any idea how to disable the touchpad when an external mouse is hooked up?

    Read the article

  • what does "openssl FIPS mode(0) unavailable" mean?

    - by fisherman
    I compiled and installed the strongSwan IPsec VPN successfully, as shown by the fact that the service starts without problems:

        as3:~# ipsec restart
        Stopping strongSwan IPsec...
        Starting strongSwan 5.0.4 IPsec [starter]...
        as3:~#

    But when I run the command ipsec pki --gen --outform pem > caKey.pem I see the error:

        as3:~# ipsec pki --gen --outform pem > caKey.pem
        openssl FIPS mode(0) unavailable
        as3:~#

    What does "openssl FIPS mode(0) unavailable" mean? How do I fix it?
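    For what it's worth, this message is commonly reported as an informational notice from strongSwan's OpenSSL plugin when OpenSSL was built without FIPS support; in that case the key is usually still generated. A quick, hedged way to check that the output is a usable key with plain OpenSSL (assuming the default RSA key type):

        # verify the generated private key parses and is internally consistent
        openssl rsa -in caKey.pem -check -noout
        # the file should also be non-empty
        ls -l caKey.pem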

    Read the article

  • Do cheap color laserjets come with toner?

    - by jblocksom
    I'm thinking of getting a budget color laserjet. Will I need to buy four toner cartridges at the same time I get the printer, or do they come with starter toner cartridges? If there is toner with it, any idea how many pages I can get on what's in the box? The Samsung CL-315 or the HP Color LaserJet CP1215, both under $200, are good examples of the class of printer I'm looking at. Thanks!

    Read the article

  • Wondering about the Windows 7 serial number my laptop has, and its uses.

    - by overmann
    So that's the serial number of my pre-installed Windows copy, I take it. But am I allowed to use it again if, say, my system gets crippled by a sneaky virus? If I format my computer and install Windows Starter again from a USB drive (speculating here; I've never formatted before, but I suppose it's completely possible), is that serial number still valid? I'm talking about the number printed on the back of my laptop.

    Read the article

  • Linksys WPSM54G Print Server on Windows 7

    - by user20285
    I'm running an Asus Eee PC with Windows 7 Starter Edition. I already have the Linksys print server set up for my Windows Vista laptop. When I open the setup wizard on the Eee PC, as soon as I click on "set up computer" I get the prompt: "The OS not support". There don't seem to be any Windows 7 drivers on Linksys' website (http://www.linksysbycisco.com/US/en/support/WPSM54G/download).

    Read the article

  • DotNetOpenAuth OpenID Provider "Sequence contains more than one element"

    - by Matthew Johnson
    Hello, all, I'm having trouble implementing my OpenID provider with DNOA 3.4.3. Everything was going absolutely peachy until I needed AX support as well. I set AXFetchAsSregTransform in the web config, as recommended by Andrew at http://groups.google.com/group/dotnetopenid/browse_thread/thread/5629a24c0a7e8d99. Doing this caused me to get the exception "Sequence Contains More Than One Element" on my decide.aspx page, however, and I haven't been able to get past it. The following line is throwing the exception: Edit: Strangely enough, this is not the line throwing the error anymore. The SendResponse() is now triggering the exception ClaimsRequest requestedFields = ProviderEndpoint.PendingRequest.GetExtension(); ProviderEndpoint.SendResponse() Any thoughts on why this may be? Any help would be greatly appreciated! The logs leading up to the error are as follows: 2010-04-28 12:38:20,247 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/provider.ashx?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.mode=checkid_setup&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext1.mode=fetch_request&openid.ext1.type.email=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ext1.type.fullname=http%3A%2F%2Faxschema.org%2FnamePerson&openid.ext1.type.language=http%3A%2F%2Faxschema.org%2Fpref%2Flanguage&openid.ext1.required=email&openid.return_to=http%3A%2F%2Fmyrelyingparty%2Flogin.jsp%3Foidreturn%3D%252Fhome&openid.assoc_handle=%7B634080802953194640%7D%7BHxjFNw==%7D%7B20%7D&openid.realm=http%3A%2F%2Fmyrelyingparty 2010-04-28 12:38:20,285 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Processing incoming CheckIdRequest (2.0) message: openid.claimed_id: http://specs.openid.net/auth/2.0/identifier_select openid.identity: http://specs.openid.net/auth/2.0/identifier_select openid.assoc_handle: {634080802953194640}{HxjFNw==}{20} openid.return_to: http://myrelyingparty/login.jsp?oidreturn=%2Fhome openid.realm: http://myrelyingparty/ openid.mode: checkid_setup openid.ns: http://specs.openid.net/auth/2.0 openid.ns.ext1: http://openid.net/srv/ax/1.0 openid.ext1.mode: fetch_request openid.ext1.type.email: http://axschema.org/contact/email openid.ext1.type.fullname: http://axschema.org/namePerson openid.ext1.type.language: http://axschema.org/pref/language openid.ext1.required: email 2010-04-28 12:38:22,773 (GMT-7) [14] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/login.aspx?ReturnUrl=%2fdecide.aspx 2010-04-28 12:38:36,167 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/login.aspx?ReturnUrl=%2fdecide.aspx 2010-04-28 12:38:38,147 (GMT-7) [14] ERROR DotNetOpenAuth.Messaging - Protocol error: An HTTP request to the realm URL (http://myrelyingparty/) resulted in a redirect, which is not allowed during relying party discovery. 
at DotNetOpenAuth.Messaging.ErrorUtilities.VerifyProtocol(Boolean condition, String message, Object[] args) at DotNetOpenAuth.OpenId.Realm.Discover(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) at DotNetOpenAuth.OpenId.Realm.DiscoverReturnToEndpoints(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverableCore(OpenIdProvider provider) at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverable(OpenIdProvider provider) at OpenIdProviderWebForms.decide.Page_Load(Object src, EventArgs e) at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) at System.Web.UI.Control.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.decide_aspx.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) at System.Web.HttpApplication.PipelineStepManager.ResumeSteps(Exception error) at System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) at System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) 2010-04-28 12:38:38,149 (GMT-7) [14] INFO DotNetOpenAuth.Yadis - Relying party discovery at URL http://myrelyingparty/ failed. DotNetOpenAuth.Messaging.ProtocolException: An HTTP request to the realm URL (http://myrelyingparty/) resulted in a redirect, which is not allowed during relying party discovery. 
at DotNetOpenAuth.Messaging.ErrorUtilities.VerifyProtocol(Boolean condition, String message, Object[] args) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\Messaging\ErrorUtilities.cs:line 235 at DotNetOpenAuth.OpenId.Realm.Discover(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Realm.cs:line 446 at DotNetOpenAuth.OpenId.Realm.DiscoverReturnToEndpoints(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Realm.cs:line 424 at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverableCore(OpenIdProvider provider) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\HostProcessedRequest.cs:line 142 2010-04-28 12:38:42,076 (GMT-7) [8] ERROR OpenIdProviderWebForms.Global - An unhandled exception was raised. Details follow: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.InvalidOperationException: Sequence contains more than one element at System.Linq.Enumerable.SingleOrDefault[TSource](IEnumerable`1 source) at DotNetOpenAuth.OpenId.Provider.Request.GetExtension[T]() in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\Request.cs:line 176 at DotNetOpenAuth.OpenId.Extensions.ExtensionsInteropHelper.ConvertSregToMatchRequest(IHostProcessedRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Extensions\ExtensionsInteropHelper.cs:line 180 at DotNetOpenAuth.OpenId.Behaviors.AXFetchAsSregTransform.DotNetOpenAuth.OpenId.Provider.IProviderBehavior.OnOutgoingResponse(IAuthenticationRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Behaviors\AXFetchAsSregTransform.cs:line 139 at DotNetOpenAuth.OpenId.Provider.OpenIdProvider.ApplyBehaviorsToResponse(IRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\OpenIdProvider.cs:line 482 at DotNetOpenAuth.OpenId.Provider.OpenIdProvider.SendResponse(IRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\OpenIdProvider.cs:line 325 at OpenIdProviderWebForms.decide.Yes_Click(Object sender, EventArgs e) in C:\Projects\OpenIdProviderWebForms\decide.aspx.cs:line 130 at System.Web.UI.WebControls.Button.OnClick(EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.decide_aspx.ProcessRequest(HttpContext context) in c:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\root\7f580b93\b3e4d917\App_Web_tulh9ymv.1.cs:line 0 at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, 
Boolean& completedSynchronously)

    Read the article

  • Errors trying to run MongoDB

    - by SomeKittens
    I'm running Ubuntu Server 12.04 (32 bit) on an old (1998) computer. Everything's working fine until I try and start MongoDB. somekittens@DLserver01:~$ mongo MongoDB shell version: 2.2.2 connecting to: test Sun Dec 16 22:47:50 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91 exception: connect failed Googling the error lead me to all sorts of "repair" options, none of which fixed anything. I've also removed MongoDB and installed it again (using apt-get, have not built from source). Mongo's log shows the following error: Thu Dec 13 18:36:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. Thu Dec 13 18:36:32 Thu Dec 13 18:36:32 [initandlisten] MongoDB starting : pid=758 port=27017 dbpath=/var/lib/mongodb 32-bit host=DLserver01 Thu Dec 13 18:36:32 [initandlisten] Thu Dec 13 18:36:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Thu Dec 13 18:36:32 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Thu Dec 13 18:36:32 [initandlisten] ** with --journal, the limit is lower Thu Dec 13 18:36:32 [initandlisten] Thu Dec 13 18:36:32 [initandlisten] db version v2.2.2, pdfile version 4.5 Thu Dec 13 18:36:32 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Thu Dec 13 18:36:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Thu Dec 13 18:36:32 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log" } Thu Dec 13 18:36:32 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/var/lib/mongodb/journal" ************** Unclean shutdown detected. Please visit http://dochub.mongodb.org/core/repair for recovery instructions. ************* Thu Dec 13 18:36:32 [initandlisten] exception in initAndListen: 12596 old lock file, terminating Thu Dec 13 18:36:32 dbexit: Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close listening sockets... Thu Dec 13 18:36:32 [initandlisten] shutdown: going to flush diaglog... Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close sockets... Thu Dec 13 18:36:32 [initandlisten] shutdown: waiting for fs preallocator... Thu Dec 13 18:36:32 [initandlisten] shutdown: closing all files... Thu Dec 13 18:36:32 [initandlisten] closeAllFiles() finished Thu Dec 13 18:36:32 dbexit: really exiting now Running through the recovery instructions lead to the following adventure: somekittens@DLserver01:/var/log/mongodb$ mongod --repair Sun Dec 16 22:42:54 Sun Dec 16 22:42:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. 
Sun Dec 16 22:42:54 Sun Dec 16 22:42:54 [initandlisten] MongoDB starting : pid=1887 port=27017 dbpath=/data/db/ 32-bit host=DLserver01 Sun Dec 16 22:42:54 [initandlisten] Sun Dec 16 22:42:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Sun Dec 16 22:42:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Sun Dec 16 22:42:54 [initandlisten] ** with --journal, the limit is lower Sun Dec 16 22:42:54 [initandlisten] Sun Dec 16 22:42:54 [initandlisten] db version v2.2.2, pdfile version 4.5 Sun Dec 16 22:42:54 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Sun Dec 16 22:42:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Sun Dec 16 22:42:54 [initandlisten] options: { repair: true } Sun Dec 16 22:42:54 [initandlisten] exception in initAndListen: 10296 ********************************************************************* ERROR: dbpath (/data/db/) does not exist. Create this directory or give existing directory in --dbpath. See http://dochub.mongodb.org/core/startingandstoppingmongo ********************************************************************* , terminating Sun Dec 16 22:42:54 dbexit: Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close listening sockets... Sun Dec 16 22:42:54 [initandlisten] shutdown: going to flush diaglog... Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close sockets... Sun Dec 16 22:42:54 [initandlisten] shutdown: waiting for fs preallocator... Sun Dec 16 22:42:54 [initandlisten] shutdown: closing all files... Sun Dec 16 22:42:54 [initandlisten] closeAllFiles() finished Sun Dec 16 22:42:54 dbexit: really exiting now somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data/db somekittens@DLserver01:/var/log/mongodb$ mongod --repair Sun Dec 16 22:43:51 Sun Dec 16 22:43:51 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. Sun Dec 16 22:43:51 Sun Dec 16 22:43:51 [initandlisten] MongoDB starting : pid=1909 port=27017 dbpath=/data/db/ 32-bit host=DLserver01 Sun Dec 16 22:43:51 [initandlisten] Sun Dec 16 22:43:51 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Sun Dec 16 22:43:51 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Sun Dec 16 22:43:51 [initandlisten] ** with --journal, the limit is lower Sun Dec 16 22:43:51 [initandlisten] Sun Dec 16 22:43:51 [initandlisten] db version v2.2.2, pdfile version 4.5 Sun Dec 16 22:43:51 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Sun Dec 16 22:43:51 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Sun Dec 16 22:43:51 [initandlisten] options: { repair: true } Sun Dec 16 22:43:51 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating Sun Dec 16 22:43:51 dbexit: Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close listening sockets... Sun Dec 16 22:43:51 [initandlisten] shutdown: going to flush diaglog... Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close sockets... Sun Dec 16 22:43:51 [initandlisten] shutdown: waiting for fs preallocator... 
Sun Dec 16 22:43:51 [initandlisten] shutdown: closing all files... Sun Dec 16 22:43:51 [initandlisten] closeAllFiles() finished Sun Dec 16 22:43:51 [initandlisten] shutdown: removing fs lock... Sun Dec 16 22:43:51 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor Sun Dec 16 22:43:51 dbexit: really exiting now somekittens@DLserver01:/var/log/mongodb$ service mongodb stop stop: Unknown instance: somekittens@DLserver01:/var/log/mongodb$ sudo mongod --repair Sun Dec 16 22:45:04 Sun Dec 16 22:45:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. Sun Dec 16 22:45:04 Sun Dec 16 22:45:04 [initandlisten] MongoDB starting : pid=1921 port=27017 dbpath=/data/db/ 32-bit host=DLserver01 Sun Dec 16 22:45:04 [initandlisten] Sun Dec 16 22:45:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Sun Dec 16 22:45:04 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Sun Dec 16 22:45:04 [initandlisten] ** with --journal, the limit is lower Sun Dec 16 22:45:04 [initandlisten] Sun Dec 16 22:45:04 [initandlisten] db version v2.2.2, pdfile version 4.5 Sun Dec 16 22:45:04 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Sun Dec 16 22:45:04 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Sun Dec 16 22:45:04 [initandlisten] options: { repair: true } Sun Dec 16 22:45:04 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/journal" Sun Dec 16 22:45:04 [initandlisten] finished checking dbs Sun Dec 16 22:45:04 dbexit: Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close listening sockets... Sun Dec 16 22:45:04 [initandlisten] shutdown: going to flush diaglog... Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close sockets... Sun Dec 16 22:45:04 [initandlisten] shutdown: waiting for fs preallocator... Sun Dec 16 22:45:04 [initandlisten] shutdown: closing all files... Sun Dec 16 22:45:04 [initandlisten] closeAllFiles() finished Sun Dec 16 22:45:04 [initandlisten] shutdown: removing fs lock... Sun Dec 16 22:45:04 dbexit: really exiting now Which didn't change anything. What can I do to resolve this? It's an old computer (640MB RAM, single-core P2). Could that be causing it?
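    A minimal sketch of the usual recovery path for this situation, assuming the Ubuntu package layout the log itself shows (dbpath /var/lib/mongodb, service user mongodb); adjust the names if your install differs:

        # stop the service and clear the stale lock file the log complains about
        sudo service mongodb stop
        sudo rm /var/lib/mongodb/mongod.lock
        # repair against the real dbpath, as the mongodb user so no root-owned files are left behind
        sudo -u mongodb mongod --repair --dbpath /var/lib/mongodb
        # make sure ownership is still correct, then restart
        sudo chown -R mongodb:mongodb /var/lib/mongodb
        sudo service mongodb start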

    Read the article

  • [SOLVED] Integrating MWFeedParser in my app gives EXC_BAD_ACCESS (code=1, address=0xa0040008)

    - by Pranoy C
    SOLVED- Got it! The problem was that since I am creating the DoParsingStuff *parseThisUrl object in the viewDidLoad method, it's scope was only within that method. So after the method finished, the object got deallocated. I changed it to an instance variable instead and now it works. It gives a different error but that it an entirely different issue. Issue was: I have been struggling with trying to integrate the mwfeedparser library in my app for parsing RSS and ATOM feeds. It throws a gives EXC_BAD_ACCESS error which I can't seem to troubleshoot. //My Class looks like - My interface looks like: #import <Foundation/Foundation.h> #import "MWFeedParser.h" #import "NSString+HTML.h" @protocol ParseCompleted <NSObject> -(void)parsedArray:(NSMutableArray *)parsedArray; @end @interface DoParsingStuff : NSObject<MWFeedParserDelegate> @property (nonatomic,strong) NSMutableArray *parsedItems; @property (nonatomic, strong) NSArray *itemsToDisplay; @property (nonatomic,strong) MWFeedParser *feedParser; @property (nonatomic,strong) NSURL *feedurl; @property (nonatomic,strong) id <ParseCompleted> delegate; -(id)initWithFeedURL:(NSURL *)url; @end //And Implementaion: #import "DoParsingStuff.h" @implementation DoParsingStuff @synthesize parsedItems = _parsedItems; @synthesize itemsToDisplay = _itemsToDisplay; @synthesize feedParser = _feedParser; @synthesize feedurl=_feedurl; @synthesize delegate = _delegate; -(id)initWithFeedURL:(NSURL *)url{ if(self = [super init]){ _feedurl=url; _feedParser = [[MWFeedParser alloc] initWithFeedURL:_feedurl]; _feedParser.delegate=self; _feedParser.feedParseType=ParseTypeFull; _feedParser.connectionType=ConnectionTypeAsynchronously; } return self; } -(void)doParsing{ BOOL y = [_feedParser parse]; } # pragma mark - # pragma mark MWFeedParserDelegate - (void)feedParserDidStart:(MWFeedParser *)parser { //Just tells what url is being parsed e.g. http://www.wired.com/reviews/feeds/latestProductsRss NSLog(@"Started Parsing: %@", parser.url); } - (void)feedParser:(MWFeedParser *)parser didParseFeedInfo:(MWFeedInfo *)info { //What is the Feed about e.g. "Product Reviews" NSLog(@"Parsed Feed Info: “%@”", info.title); //self.title = info.title; } - (void)feedParser:(MWFeedParser *)parser didParseFeedItem:(MWFeedItem *)item { //Prints current element's title e.g. “An Arthropod for Your iDevices” NSLog(@"Parsed Feed Item: “%@”", item.title); if (item) [_parsedItems addObject:item]; } - (void)feedParserDidFinish:(MWFeedParser *)parser {//This is where you can do your own stuff with the parsed items NSLog(@"Finished Parsing%@", (parser.stopped ? @" (Stopped)" : @"")); [_delegate parsedArray:_parsedItems]; //[self updateTableWithParsedItems]; } - (void)feedParser:(MWFeedParser *)parser didFailWithError:(NSError *)error { NSLog(@"Finished Parsing With Error: %@", error); if (_parsedItems.count == 0) { //self.title = @"Failed"; // Show failed message in title } else { // Failed but some items parsed, so show and inform of error UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Parsing Incomplete" message:@"There was an error during the parsing of this feed. Not all of the feed items could parsed." delegate:nil cancelButtonTitle:@"Dismiss" otherButtonTitles:nil]; [alert show]; } //[self updateTableWithParsedItems]; } @end //I am calling this from my main viewcontroller as such: #import "DoParsingStuff.h" @interface ViewController : UIViewController <ParseCompleted> .... 
//And I have the following methods in my implementation: DoParsingStuff *parseThisUrl = [[DoParsingStuff alloc] initWithFeedURL:[NSURL URLWithString:@"http://www.theverge.com/rss/index.xml"]]; parseThisUrl.delegate=self; [parseThisUrl doParsing]; I have the method defined here as- -(void)parsedArray:(NSMutableArray *)parsedArray{ NSLog(@"%@",parsedArray); } //I stepped through breakpoints- When I try to go through the breakpoints, I see that everything goes fine till the very last [parseThisUrl doParsing]; in my delegate class. After that it starts showing me memory registers where I get lost. I think it could be due to arc as I have disabled arc on the mwfeedparser files but am using arc in the above classes. If you need the entire project for this, let me know. I tried it with NSZombies enabled and got a bit more info out of it: -[DoParsingStuff respondsToSelector:]: message sent to deallocated instance 0x6a52480 I am not using release/autorelease/retain etc. in this class...but it is being used in the mwfeedparser library.

    Read the article

  • Randomly displayed flashing lines, no response to any shortcuts, only power off works [syslog included]

    - by B. Roland
    Hello! I have an old machine, and I want to use for that to learn employees how to use Ubuntu, and to be easyer to switch from Windows. I've been installed 10.04, and updated, but this strange stuff is happend. Graphical installion failed, same strange thing. With alternate workd. Sometimes, when I boot up, a boot message displayed: Keyboard failure..., often diplayed after reboot, and after shutdown, when I haven't plugged off from AC. I replaced the keyboard yet, same failure... If I powered off, and plugged off from AC, no keyboard problems displayed in boot time. Details Configuration: Dell OptiPlex GX60 - in original cover, no changes. 256 MB DDR 166 MHz Intel® Celeron® Processor 2.40 GHz Dell 0C3207 Base Board I know, that is not enough, but I have three other Nec compuers, with nearly similar config, and they works well with 9.10, 10.04, 10.10. Live CDs I've been tried with 10.04 and 10.10, but the problem is displayed too. With 9.10 no strange things displayed, but it froze, during a simple apt-get install. Syslog An error loop is logged here, but I paste the whole startup and error lines. The flashing lines are displayed sometimes immediately after login, but sometimes after 10 minutes, but once occured, that nothing happend. Strange thing is displayed immediately after login: here. An other boot, after some minutes, strange lines, and loop in log appeard: here. The loop should be that: Jan 23 00:20:08 machine_name kernel: [ 46.782212] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck Jan 23 00:20:08 machine_name kernel: [ 47.100033] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung Jan 23 00:20:08 machine_name kernel: [ 47.100045] render error detected, EIR: 0x00000000 Jan 23 00:20:08 machine_name kernel: [ 47.101487] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 16 at 9) Jan 23 00:20:11 machine_name kernel: [ 49.152020] [drm:i915_gem_idle] *ERROR* hardware wedged Jan 23 00:20:11 machine_name gdm-simple-slave[1245]: WARNING: Unable to load file '/etc/gdm/custom.conf': No such file or directory Jan 23 00:20:11 machine_name acpid: client 1239[0:0] has disconnected Jan 23 00:20:11 machine_name acpid: client connected from 1247[0:0] Jan 23 00:20:11 machine_name acpid: 1 client rule loaded UPDATE Added syslog things: before errors, error loop, the complete shutdown(after the big updates): Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully called chroot. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully dropped privileges. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully limited resources. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Running. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Watchdog thread running. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Canary thread running. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully made thread 1337 of process 1337 (n/a) owned by '1001' high priority at nice level -11. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Supervising 1 threads of 1 processes of 1 users. Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Sucessfully made thread 1345 of process 1337 (n/a) owned by '1001' RT at priority 5. Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Supervising 2 threads of 1 processes of 1 users. Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Sucessfully made thread 1349 of process 1337 (n/a) owned by '1001' RT at priority 5. 
Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Supervising 3 threads of 1 processes of 1 users. Jan 28 20:40:37 machine_name pulseaudio[1337]: ratelimit.c: 2 events suppressed Jan 28 20:41:33 machine_name AptDaemon: INFO: Initializing daemon Jan 28 20:41:44 machine_name kernel: [ 167.691563] lo: Disabled Privacy Extensions Jan 28 20:47:33 machine_name AptDaemon: INFO: Quiting due to inactivity Jan 28 20:47:33 machine_name AptDaemon: INFO: Shutdown was requested Jan 28 20:59:50 machine_name kernel: [ 1253.840513] lo: Disabled Privacy Extensions Jan 28 21:17:02 machine_name CRON[1874]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jan 28 21:17:38 machine_name kernel: [ 2321.553239] lo: Disabled Privacy Extensions Jan 28 22:07:44 machine_name kernel: [ 5327.840254] lo: Disabled Privacy Extensions Jan 28 22:17:02 machine_name CRON[2665]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: Called Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: username = [some_user] Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: /home/some_user is already mounted Jan 28 22:57:03 machine_name kernel: [ 8286.641472] lo: Disabled Privacy Extensions Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: Called Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: username = [some_user] Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: /home/some_user is already mounted Jan 28 23:07:42 machine_name kernel: [ 8925.272030] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung Jan 28 23:07:42 machine_name kernel: [ 8925.272048] render error detected, EIR: 0x00000000 Jan 28 23:07:42 machine_name kernel: [ 8925.272093] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171453 at 171452) Jan 28 23:07:45 machine_name kernel: [ 8928.868041] [drm:i915_gem_idle] *ERROR* hardware wedged Jan 28 23:08:10 machine_name acpid: client 925[0:0] has disconnected Jan 28 23:08:10 machine_name acpid: client connected from 8127[0:0] Jan 28 23:08:10 machine_name acpid: 1 client rule loaded Jan 28 23:08:11 machine_name kernel: [ 8955.046248] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck Jan 28 23:08:12 machine_name kernel: [ 8955.364016] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung Jan 28 23:08:12 machine_name kernel: [ 8955.364027] render error detected, EIR: 0x00000000 Jan 28 23:08:12 machine_name kernel: [ 8955.364407] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171457 at 171452) Jan 28 23:08:14 machine_name kernel: [ 8957.472025] [drm:i915_gem_idle] *ERROR* hardware wedged Jan 28 23:08:14 machine_name acpid: client 8127[0:0] has disconnected Jan 28 23:08:14 machine_name acpid: client connected from 8141[0:0] Jan 28 23:08:14 machine_name acpid: 1 client rule loaded Jan 28 23:08:15 machine_name kernel: [ 8958.671722] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck Jan 28 23:08:15 machine_name kernel: [ 8958.988015] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... 
GPU hung Jan 28 23:08:15 machine_name kernel: [ 8958.988026] render error detected, EIR: 0x00000000 Jan 28 23:08:15 machine_name kernel: [ 8958.989400] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171459 at 171452) Jan 28 23:08:16 machine_name init: tty4 main process (848) killed by TERM signal Jan 28 23:08:16 machine_name init: tty5 main process (856) killed by TERM signal Jan 28 23:08:16 machine_name NetworkManager: nm_signal_handler(): Caught signal 15, shutting down normally. Jan 28 23:08:16 machine_name init: tty2 main process (874) killed by TERM signal Jan 28 23:08:16 machine_name init: tty3 main process (875) killed by TERM signal Jan 28 23:08:16 machine_name init: tty6 main process (877) killed by TERM signal Jan 28 23:08:16 machine_name init: cron main process (890) killed by TERM signal Jan 28 23:08:16 machine_name init: tty1 main process (1146) killed by TERM signal Jan 28 23:08:16 machine_name avahi-daemon[644]: Got SIGTERM, quitting. Jan 28 23:08:16 machine_name avahi-daemon[644]: Leaving mDNS multicast group on interface eth0.IPv4 with address 10.238.11.134. Jan 28 23:08:16 machine_name acpid: exiting Jan 28 23:08:16 machine_name init: avahi-daemon main process (644) terminated with status 255 Jan 28 23:08:17 machine_name kernel: Kernel logging (proc) stopped. Jan 28 23:09:00 machine_name kernel: imklog 4.2.0, log source = /proc/kmsg started. Jan 28 23:09:00 machine_name rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="516" x-info="http://www.rsyslog.com"] (re)start Jan 28 23:09:00 machine_name rsyslogd: rsyslogd's groupid changed to 103 Jan 28 23:09:00 machine_name rsyslogd: rsyslogd's userid changed to 101 Jan 28 23:09:00 machine_name rsyslogd-2039: Could no open output file '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ] When I hit the On/Off button, the system shuts down normally. May be it a hardware problem, but I don't know... Can you say something useful to solve my problem?

    Read the article

  • 13.10 - Weird WiFi connection problems - WMP300N - Broadcom BCM4321

    - by user1898041
    Just installed 13.10 on my desktop and I really like it. After having problems with getting the wifi to work, I installed it connected to the internet with an ethernet cable and added in the 3rd party software and updates as per the installation procedure. After installation was completed, I saw the wifi icon in the upper right hand corner, but it was not seeing any wifi networks. Some Googling brought me to use the 'Additional Drivers' application. It found the WMP300N Broadcom BDM4321 based pci wifi card and installed the proprietary Broadcom STA wireless driver, which may have been installed before. I'm not sure. Here is the weird part: when I start my system, wifi seems to be in some sort of suspended state where the system sees that the card exists but the card will not detect any wifi networks. It will work after booting once I 'Additional Drivers' application and then start FireFox. I know it seems weird, but this is the process I've got down to get the card to recognize wifi networks. After those applications are open for a few seconds, the card starts to function like normal (although maintaining the wifi connection is problem but most likely a seperate issue). The reason this is a problem is because this is supposed to just be a headless box managed through SSH. Here are the readouts from the common network diagnosis programs BEFORE I open 'Additional Drivers' and 'FireFox'. All commands were done with sudo. lspci 00:00.0 Host bridge: Intel Corporation 82G35 Express DRAM Controller (rev 03) 00:01.0 PCI bridge: Intel Corporation 82G35 Express PCI Express Root Port (rev 03) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 02) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92) 00:1f.0 ISA bridge: Intel Corporation 82801IR (ICH9R) LPC Interface Controller (rev 02) 00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02) 01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 220] (rev a2) 01:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1) 02:00.0 Ethernet controller: Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0) 03:00.0 IDE interface: JMicron Technology Corp. JMB368 IDE controller 05:00.0 Network controller: Broadcom Corporation BCM4321 802.11b/g/n (rev 01) 05:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. 
VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0) - lshw *-network description: Ethernet interface product: Attansic L1 Gigabit Ethernet vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: b0 serial: 00:22:15:00:a8:12 capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1 driverversion=2.1.3 latency=0 link=no multicast=yes port=twisted pair resources: irq:46 memory:feac0000-feafffff memory:feaa0000-feabffff *-network description: Wireless interface product: BCM4321 802.11b/g/n vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:05:00.0 logical name: eth1 version: 01 serial: 00:23:69:d8:2b:16 width: 32 bits clock: 33MHz capabilities: bus_master ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=6.30.223.141 (r415941) latency=64 multicast=yes wireless=IEEE 802.11abg resources: irq:16 memory:febfc000-febfffff - ifconfig eth0 Link encap:Ethernet HWaddr 00:22:15:00:a8:12 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth1 Link encap:Ethernet HWaddr 00:23:69:d8:2b:16 inet6 addr: fe80::223:69ff:fed8:2b16/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:24 errors:0 dropped:0 overruns:0 frame:0 TX packets:24 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1856 (1.8 KB) TX bytes:1856 (1.8 KB) - iwconfig eth1 IEEE 802.11abg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=200 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off - iwlist scan eth1 No scan results - Here are the various commands AFTER I open 'Additional Drivers' and 'FireFox' lspci 00:00.0 Host bridge: Intel Corporation 82G35 Express DRAM Controller (rev 03) 00:01.0 PCI bridge: Intel Corporation 82G35 Express PCI Express Root Port (rev 03) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 02) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel 
Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92) 00:1f.0 ISA bridge: Intel Corporation 82801IR (ICH9R) LPC Interface Controller (rev 02) 00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02) 01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 220] (rev a2) 01:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1) 02:00.0 Ethernet controller: Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0) 03:00.0 IDE interface: JMicron Technology Corp. JMB368 IDE controller 05:00.0 Network controller: Broadcom Corporation BCM4321 802.11b/g/n (rev 01) 05:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0) - lshw *-network description: Ethernet interface product: Attansic L1 Gigabit Ethernet vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: b0 serial: 00:22:15:00:a8:12 capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1 driverversion=2.1.3 latency=0 link=no multicast=yes port=twisted pair resources: irq:46 memory:feac0000-feafffff memory:feaa0000-feabffff *-network description: Wireless interface product: BCM4321 802.11b/g/n vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:05:00.0 logical name: eth1 version: 01 serial: 00:23:69:d8:2b:16 width: 32 bits clock: 33MHz capabilities: bus_master ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=6.30.223.141 (r415941) ip=192.168.1.103 latency=64 multicast=yes wireless=IEEE 802.11abg resources: irq:16 memory:febfc000-febfffff - ifconfig eth0 Link encap:Ethernet HWaddr 00:22:15:00:a8:12 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth1 Link encap:Ethernet HWaddr 00:23:69:d8:2b:16 inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::223:69ff:fed8:2b16/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:85 errors:0 dropped:0 overruns:0 frame:11901 TX packets:132 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:52641 (52.6 KB) TX bytes:19058 (19.0 KB) Interrupt:16 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:76 errors:0 dropped:0 overruns:0 frame:0 TX packets:76 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:6084 (6.0 KB) TX bytes:6084 (6.0 KB) - iwconfig eth1 IEEE 802.11abg ESSID:"BU" Mode:Managed Frequency:2.447 GHz Access Point: 00:26:F2:1F:81:02 Bit Rate=54 Mb/s Tx-Power=200 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=59/70 Signal level=-51 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 - iwlist scan A LOT OF SSIDs FOUND! - I'd like to have this problem fixed, but I'm not quite sure where to go. Been Googling a lot and can't seem to find anyone else with this problem.
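    A hedged workaround sketch for the headless case: instead of opening the GUI tools, try reloading the proprietary wl driver by hand (or from a boot script) and see whether scanning starts working. The module names below are the usual Broadcom ones and may need adjusting on your machine:

        # unload drivers that can conflict with the Broadcom STA driver (some may not be loaded; that's fine)
        sudo modprobe -r b43 ssb bcma wl 2>/dev/null
        # reload the proprietary Broadcom STA (wl) driver
        sudo modprobe wl
        # check whether networks are now visible on eth1
        sudo iwlist eth1 scan | head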

    Read the article

  • Ivy and Snapshots (Nexus)

    - by Uberpuppy
    Hey folks, I'm using ant, ivy and nexus repo manager to build and store my artifacts. I managed to get everything working: dependency resolution and publishing. Until I hit a problem... (of course!). I was publishing to a 'release' repo in nexus, which is locked to 'disable redeploy' (even if you change the setting to 'allow redeploy' (really lame UI there imo). You can imagine how pissed off I was getting when my changes weren't updating through the repo before I realised that this was happening. Anyway, I now have to switch everything to use a 'Snapshot' repo in nexus. Problem is that this messes up my publish. I've tried a variety of things, including extensive googling, and haven't got anywhere whatsoever. The error I get is a bad PUT request, error code 400. Can someone who has got this working please give me a pointer on what I'm missing. Many thanks, Alastair fyi, here's my config: Note that I have removed any attempts at getting snapshots to work as I didn't know what was actually (potentially) useful and what was complete guff. This is therefore the working release-only setup. Also, please note that I've added the XXX-API ivy.xml for info only. I can't even get the xxx-common to publish (and that doesn't even have dependencies). Ant task: <target name="publish" depends="init-publish"> <property name="project.generated.ivy.file" value="${project.artifact.dir}/ivy.xml"/> <property name="project.pom.file" value="${project.artifact.dir}/${project.handle}.pom"/> <echo message="Artifact dir: ${project.artifact.dir}"/> <ivy:deliver deliverpattern="${project.generated.ivy.file}" organisation="${project.organisation}" module="${project.artifact}" status="integration" revision="${project.revision}" pubrevision="${project.revision}" /> <ivy:resolve /> <ivy:makepom ivyfile="${project.generated.ivy.file}" pomfile="${project.pom.file}"/> <ivy:publish resolver="${ivy.omnicache.publisher}" module="${project.artifact}" organisation="${project.organisation}" revision="${project.revision}" pubrevision="${project.revision}" pubdate="now" overwrite="true" publishivy="true" status="integration" artifactspattern="${project.artifact.dir}/[artifact]-[revision](-[classifier]).[ext]" /> </target> Couple of ivy files to give an idea of internal dependencies: XXX-Common project: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.myorg.xxx" module="xxx_common" status="integration" revision="1.0"> </info> <publications> <artifact name="xxx_common" type="jar" ext="jar"/> <artifact name="xxx_common" type="pom" ext="pom"/> </publications> <dependencies> </dependencies> </ivy-module> XXX-API project: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.myorg.xxx" module="xxx_api" status="integration" revision="1.0"> </info> <publications> <artifact name="xxx_api" type="jar" ext="jar"/> <artifact name="xxx_api" type="pom" ext="pom"/> </publications> <dependencies> <dependency org="com.myorg.xxx" name="xxx_common" rev="1.0" transitive="true" /> </dependencies> </ivy-module> IVY Settings.xml: <ivysettings> <properties file="${ivy.project.dir}/project.properties" /> <settings defaultResolver="chain" defaultConflictManager="all" /> <credentials host="${ivy.credentials.host}" realm="Sonatype Nexus Repository Manager" username="${ivy.credentials.username}" 
passwd="${ivy.credentials.passwd}" /> <caches> <cache name="ivy.cache" basedir="${ivy.cache.dir}" /> </caches> <resolvers> <ibiblio name="xxx_publisher" m2compatible="true" root="${ivy.xxx.publish.url}" /> <chain name="chain"> <url name="xxx"> <ivy pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/ivy-[revision].xml" /> <artifact pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <ibiblio name="xxx" m2compatible="true" root="${ivy.xxx.repo.url}"/> <ibiblio name="public" m2compatible="true" root="${ivy.master.repo.url}" /> <url name="com.springsource.repository.bundles.release"> <ivy pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> <artifact pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <url name="com.springsource.repository.bundles.external"> <ivy pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> <artifact pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> </chain> </resolvers> </ivysettings>

    Read the article

  • cURL authentication being lost?

    - by John Sloan
    I am authenticating a login via CURL just fine. I have a variable I am using to display the returned HTML, and it is returning my user control panel as if I am logged in. After authenticating, I want to communicate variables with a form on another page within the site; but for some reason the HTML from that page is returning a non-authenticated version of the header (as if the original authentication never took place.) I have a cookies.txt file with 777 permissions, and have tried just getting the contents of the same page shown when I authenticate and it is as if I am losing any associated session/cookie data somewhere along the way. Here is my curl.class file - <? class Curl { public $cookieJar = ""; // Make sure the cookies.txt file is read/write permissions public function __construct($cookieJarFile = 'cookies.txt') { $this->cookieJar = $cookieJarFile; } function setup() { $header = array(); $header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,"; $header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; $header[] = "Cache-Control: max-age=0"; $header[] = "Connection: keep-alive"; $header[] = "Keep-Alive: 300"; $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7"; $header[] = "Accept-Language: en-us,en;q=0.5"; $header[] = "Pragma: "; // browsers keep this blank. curl_setopt($this->curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.1.7) Gecko/20070914 Firefox/2.0.0.7'); curl_setopt($this->curl, CURLOPT_HTTPHEADER, $header); curl_setopt($this->curl, CURLOPT_COOKIEJAR, $this->cookieJar); curl_setopt($this->curl, CURLOPT_COOKIEFILE, $this->cookieJar); curl_setopt($this->curl, CURLOPT_AUTOREFERER, true); curl_setopt($this->curl, CURLOPT_COOKIESESSION, true); curl_setopt($this->curl, CURLOPT_FOLLOWLOCATION, true); curl_setopt($this->curl, CURLOPT_RETURNTRANSFER, true); } function get($url) { $this->curl = curl_init($url); $this->setup(); return $this->request(); } function getAll($reg, $str) { preg_match_all($reg, $str, $matches); return $matches[1]; } function postForm($url, $fields, $referer = '') { $this->curl = curl_init($url); $this->setup(); curl_setopt($this->curl, CURLOPT_URL, $url); curl_setopt($this->curl, CURLOPT_POST, 1); curl_setopt($this->curl, CURLOPT_REFERER, $referer); curl_setopt($this->curl, CURLOPT_POSTFIELDS, $fields); return $this->request(); } function getInfo($info) { $info = ($info == 'lasturl') ? curl_getinfo($this->curl, CURLINFO_EFFECTIVE_URL) : curl_getinfo($this->curl, $info); return $info; } function request() { return curl_exec($this->curl); } } ?> And here is my curl.php file - <? include('curl.class.php'); // This path would change to where you store the file $curl = new Curl(); $url = "http://www.site.com/public/member/signin"; $fields = "MAX_FILE_SIZE=50000000&dado_form_3=1&member[email]=email&member[password]=pass&x=16&y=5&member[persistent]=true"; // Calling URL $referer = "http://www.site.com/public/member/signin"; $html = $curl->postForm($url, $fields, $referer); echo($html); ?> <hr style="clear:both;"/> <? $html = $curl->postForm('http://www.site.com/index.php','nid=443&sid=733005&tab=post&eval=yes&ad=&MAX_FILE_SIZE=10000000&ip=63.225.235.30','http://www.site.com/public/member/signin'); echo $html; // This will show you the HTML of the current page you and logged into ?> Any ideas?

    Read the article

  • springTestContextBeforeTestMethod failed in Maven spring-test

    - by joejax
    I try to setup a project with spring-test using TestNg in Maven. The code is like: @ContextConfiguration(locations={"test-context.xml"}) public class AppTest extends AbstractTestNGSpringContextTests { @Test public void testApp() { assert true; } } A test-context.xml simply defined a bean: <bean id="app" class="org.sonatype.mavenbook.simple.App"/> I got error for Failed to load ApplicationContext when running mvn test from command line, seems it cannot find the test-context.xml file; however, I can get it run correctly inside Eclipse (with TestNg plugin). So, test-context.xml is under src/test/resources/, how do I indicate this in the pom.xml so that 'mvn test' command will work? Thanks, UPDATE: Thanks for the reply. Cannot load context file error was caused by I moved the file arround in different location since I though the classpath was the problem. Now I found the context file seems loaded from the Maven output, but the test is failed: Running TestSuite May 25, 2010 9:55:13 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions INFO: Loading XML bean definitions from class path resource [test-context.xml] May 25, 2010 9:55:13 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh INFO: Refreshing org.springframework.context.support.GenericApplicationContext@171bbc9: display name [org.springframework.context.support.GenericApplicationContext@171bbc9]; startup date [Tue May 25 09:55:13 PDT 2010]; root of context hierarchy May 25, 2010 9:55:13 AM org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory INFO: Bean factory for application context [org.springframework.context.support.GenericApplicationContext@171bbc9]: org.springframework.beans.factory.support.DefaultListableBeanFactory@1df8b99 May 25, 2010 9:55:13 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1df8b99: defining beans [app,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor]; root of factory hierarchy Tests run: 3, Failures: 2, Errors: 0, Skipped: 1, Time elapsed: 0.63 sec <<< FAILURE! 
Results : Failed tests: springTestContextBeforeTestMethod(org.sonatype.mavenbook.simple.AppTest) springTestContextAfterTestMethod(org.sonatype.mavenbook.simple.AppTest) Tests run: 3, Failures: 2, Errors: 0, Skipped: 1 If I use spring-test version 3.0.2.RELEASE, the error becomes: org.springframework.test.context.testng.AbstractTestNGSpringContextTests.springTestContextPrepareTestInstance() is depending on nonexistent method null Here is the structure of the project: simple |-- pom.xml `-- src |-- main | `-- java `-- test |-- java `-- resources |-- test-context.xml `-- testng.xml testng.xml: <suite name="Suite" parallel="false"> <test name="Test"> <classes> <class name="org.sonatype.mavenbook.simple.AppTest"/> </classes> </test> </suite> test-context.xml: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd" default-lazy-init="true"> <bean id="app" class="org.sonatype.mavenbook.simple.App"/> </beans> In the pom.xml, I add testng, spring, and spring-test artifacts, and plugin: <dependency> <groupId>org.testng</groupId> <artifactId>testng</artifactId> <version>5.1</version> <classifier>jdk15</classifier> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-test</artifactId> <version>2.5.6</version> <scope>test</scope> </dependency> <build> <finalName>simple</finalName> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <suiteXmlFiles> <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile> </suiteXmlFiles> </configuration> </plugin> </plugins> Basically, I replaced 'A Simple Maven Project' Junit with TestNg, hope it works. UPDATE: I think I got the problem (still don't know why) - Whenever I extends AbstractTestNGSpringContextTests or AbstractTransactionalTestNGSpringContextTests, the test will failed with this error: Failed tests: springTestContextBeforeTestMethod(org.sonatype.mavenbook.simple.AppTest) springTestContextAfterTestMethod(org.sonatype.mavenbook.simple.AppTest) So, eventually the error went away when I override the two methods. I don't think this is the right way, didn't find much info from spring-test doc. If you know spring test framework, please shred some light on this.
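    For what it is worth, the "is depending on nonexistent method null" failure is the symptom typically reported when AbstractTestNGSpringContextTests runs against an old TestNG release; the springTestContext* configuration methods rely on TestNG features newer than the 5.1/jdk15 artifact declared above. A hedged sketch of the dependency change follows; the exact version number is an assumption, not something stated in the post.
        <!-- Sketch: swap the TestNG 5.1/jdk15 artifact for a newer release so the
             configuration methods declared by AbstractTestNGSpringContextTests are
             recognised. The version below is illustrative; recent TestNG builds
             need no classifier. -->
        <dependency>
          <groupId>org.testng</groupId>
          <artifactId>testng</artifactId>
          <version>5.14.10</version>
          <scope>test</scope>
        </dependency>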

    Read the article

  • I have 6 updates that won't install on Ubuntu 12.04?

    - by Taylor
    I'm an Ubuntu novice, so any help here is greatly appreciated! I'm running Ubuntu 12.04, and I have six updates that just won't install. I've tried Update Manger, sudo apt-get upgrade, and sudo apt-get update. Nothing has worked so far. Here are the details I get from Update Manager: installArchives() failed: Setting up linux-image-3.2.0-24-generic-pae (3.2.0-24.37) ... Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-24-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-27-generic-pae (3.2.0-27.43) ... No apport report written because MaxReports is reached already Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-27-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 No apport report written because MaxReports is reached already Setting up linux-image-3.2.0-29-generic-pae (3.2.0-29.46) ... Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-29-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 No apport report written because MaxReports is reached already Setting up udev (175-0ubuntu9.1) ... udev stop/waiting udev start/running, process 3685 /var/lib/dpkg/info/udev.postinst: 87: /var/lib/dpkg/info/udev.postinst: update-initramfs: not found dpkg: error processing udev (--configure): subprocess installed post-installation script returned error exit status 127 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of xserver-xorg-core: xserver-xorg-core depends on udev (= 149); however: Package udev is not configured yet. dpkg: error processing xserver-xorg-core (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of fglrx: fglrx depends on xserver-xorg-core; however: Package xserver-xorg-core is not configured yet. dpkg: error processing fglrx (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of fglrx-amdcccle: fglrx-amdcccle depends on fglrx; however: Package fglrx is not configured yet. dpkg: error processing fglrx-amdcccle (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of linux-image-generic-pae: linux-image-generic-pae depends on linux-image-3.2.0-24-generic-pae; however: Package linux-image-3.2.0-24-generic-pae is not configured yet. dpkg: error processing linux-image-generic-pae (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of linux-generic-pae: linux-generic-pae depends on linux-image-generic-pae (= 3.2.0.24.26); however: Package linux-image-generic-pae is not configured yet. 
dpkg: error processing linux-generic-pae (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of xserver-xorg-video-intel: xserver-xorg-video-intel depends on xorg-video-abi-11; however: Package xorg-video-abi-11 is not installed. Package xserver-xorg-core which provides xorg-video-abi-11 is not configured yet. xserver-xorg-video-intel depends on xserver-xorg-core (= 2:1.10.99.901); however: Package xserver-xorg-core is not configured yet. dpkg: error processing xserver-xorg-video-intel (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of fglrx-dev:No apport report written because MaxReports is reached already fglrx-dev depends on fglrx; however: Package fglrx is not configured yet. dpkg: error processing fglrx-dev (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-24-generic-pae linux-image-3.2.0-27-generic-pae linux-image-3.2.0-29-generic-pae udev xserver-xorg-core fglrx fglrx-amdcccle linux-image-generic-pae linux-generic-pae xserver-xorg-video-intel fglrx-dev Error in function: Setting up linux-image-3.2.0-24-generic-pae (3.2.0-24.37) ... Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-24-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-29-generic-pae (3.2.0-29.46) ... Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-29-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-27-generic-pae (3.2.0-27.43) ... Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-27-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 Setting up udev (175-0ubuntu9.1) ... udev stop/waiting udev start/running, process 3782 /var/lib/dpkg/info/udev.postinst: 87: /var/lib/dpkg/info/udev.postinst: update-initramfs: not found dpkg: error processing udev (--configure): subprocess installed post-installation script returned error exit status 127 dpkg: dependency problems prevent configuration of linux-image-generic-pae: linux-image-generic-pae depends on linux-image-3.2.0-24-generic-pae; however: Package linux-image-3.2.0-24-generic-pae is not configured yet. dpkg: error processing linux-image-generic-pae (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of xserver-xorg-core: xserver-xorg-core depends on udev (= 149); however: Package udev is not configured yet. dpkg: error processing xserver-xorg-core (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of fglrx: fglrx depends on xserver-xorg-core; however: Package xserver-xorg-core is not configured yet. dpkg: error processing fglrx (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic-pae: linux-generic-pae depends on linux-image-generic-pae (= 3.2.0.24.26); however: Package linux-image-generic-pae is not configured yet. 
dpkg: error processing linux-generic-pae (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of xserver-xorg-video-intel: xserver-xorg-video-intel depends on xorg-video-abi-11; however: Package xorg-video-abi-11 is not installed. Package xserver-xorg-core which provides xorg-video-abi-11 is not configured yet. xserver-xorg-video-intel depends on xserver-xorg-core (= 2:1.10.99.901); however: Package xserver-xorg-core is not configured yet. dpkg: error processing xserver-xorg-video-intel (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of fglrx-amdcccle: fglrx-amdcccle depends on fglrx; however: Package fglrx is not configured yet. dpkg: error processing fglrx-amdcccle (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of fglrx-dev: fglrx-dev depends on fglrx; however: Package fglrx is not configured yet. dpkg: error processing fglrx-dev (--configure): dependency problems - leaving unconfigured
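    Every failure in that log traces back to the same missing command, /usr/sbin/update-initramfs: not found, which the kernel and udev maintainer scripts invoke. A hedged recovery sequence is sketched below; the package names are the standard Ubuntu ones, but treat it as a starting point rather than a guaranteed fix.
        sudo apt-get update
        sudo apt-get install --reinstall initramfs-tools initramfs-tools-bin
        sudo dpkg --configure -a        # finish configuring the half-installed packages
        sudo apt-get install -f         # let apt resolve the remaining dependencies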

    Read the article

  • Thoughts on Build 2013

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/30/153294.aspxAnd so another Build conference has come to an end. Below are my thoughts/perspectives on various aspects of the event. I’ll do a separate blog post on my thoughts of the Build message for developers. The Good Moscone center was a great venue for Build! Easy to get around, easy to get to, and well maintained, it was a very comfortable conference venue. Yeah, the free swag was nice. Build has built up an expectation that attendees will always get something; it’ll be interesting to see how Microsoft maintains this expectation over the next few Build events. I still maintain that free swag should never be the main reason one attends an event, and for me this was definitely just an added bonus. I’m planning on trying to use the Surface as a dedicated 2nd device at work for meetings, I’ll share my experiences over the next few months. The hackathon event was a great idea, although personally I couldn’t justify spending the money on a conference registration just to spend the entire conference coding. Still, the apps that were created were really great and there was a lot of passion and excitement around the hackathon. I wonder if they couldn’t have had the hackathon on the Monday/Tuesday for those that wanted to participate so they didn’t miss any of the actual conference over Wed/Thurs. San Francisco was a great city to host Build. Getting from hotels to the conference center was very easy (well especially for me, I was only 3 blocks away) and the city itself felt very safe. However, if I never have to fly into SFO again I’ll be alright with that! Delays going into and out of SFO and both apparently were due to the airport itself. The Bad Build is one of those oddities on the conference landscape where people will pay to commit to attending an event without knowing anything about the sessions. We got our list of conference sessions when we registered on Tuesday, not before. And even then, we only got titles and not descriptions (those were eventually made available via the conference’s mobile application). I get it…they’re going to make announcements and they don’t want to give anything away through the session titles. But honestly, there wasn’t anything in the session titles that I would have considered a surprise. Breakfasts were brutal. High-carb pastries, donuts, and muffins with fruit and hard boiled eggs does not a conference breakfast make. I can’t believe that the difference between a continental breakfast per person and a hot breakfast buffet would have been a huge impact to a conference fee that was already around $2000. The vendor area was anemic. I don’t know why Microsoft forces the vendors into cookie-cutter booth areas (this year they were all made of plywood material). WPC, TechEd – booth areas there allow the vendors to be creative with their displays. Not so much for Build. Really odd was the lack of Microsoft’s own representation around Bing. In the day 1 keynote Microsoft made a big deal about Bing as an API. Yet there was nobody in the vendor area set up to provide more information or have discussions with about the Bing API. The Ugly Our name badges were NFC enabled. The purpose of this, beyond the vendors being able to scan your info, wasn’t really made clear. An attendee I talked to showed how you could get a reader app on your phone so you can scan other members cards and collect their contact info – which is a kewl idea; business cards are so 1990’s. 
But I was *shocked* at the amount of information that was on our name badges! Here’s what’s displayed on our name badge: - Name - Company - Twitter Handle I’m ok with that. But here’s what actually gets read: - Name - Company - Address Used for Registration - Phone Number Used for Registration So sharing that info with another attendee, they get way more of my info than just how to find me on Twitter! Microsoft, you need to fix this for the future. If vendors want to collect information on attendees, they should be able to collect an ID from the badge, then get a report with corresponding records afterwards. My personal information should not be so readily available, and without my knowledge! Final Verdict Maybe its my older age, maybe its where I’m at in life with family, maybe its where I’m at in my career, but when I consider whether a conference experience was valuable I get to the core reason I attend: opportunities to learn, opportunities to network, opportunities to engage with Microsoft. Opportunities to Learn:  Sessions I attended were generally OK, with some really stand out ones on Day 2. I would love to see Microsoft adopt the Dojo format for a portion of their sessions. Hands On Labs are dull, lecture style sessions are great for information sharing. But a guided hands-on coding session (Read: Dojo) provides the best of both worlds. Being that all content is publically available online to everyone (Build attendee or not), the value of attending the conference sessions is decreased. The value though is in the discussions that take part in person afterwards, which leads to… Opportunities to Network: I enjoyed getting together with old friends and connecting with Twitter friends in person for the first time. I also had an opportunity to meet total strangers. So from a networking perspective, Build was fantastic! I still think it would have been great to have an area for ad-hoc discussions – where speakers could announce they’d be available for more questions after their sessions, or attendees who wanted to discuss more in depth on a topic with other attendees could arrange space. Some people have no problems being outgoing and making these things happen, but others are not and a structured model is more attractive. Opportunities to Engage with Microsoft: Hit and miss on this one. Outside of the vendor area, unless you cornered or reached out to a speaker, there wasn’t any defined way to connect with blue badges. And as I mentioned above, Microsoft didn’t have full representation in the vendor area (no Bing). All in all, Build was a fun party where I was informed about some new stuff and got some free swag. Was it worth the time away from home and the hit to my PD budget? I’d say Somewhat. Build is a great informational conference, but I wouldn’t call it a learning conference. Considering that TechEd seems to be moving to more of an IT Pro focus, independent developer conferences seem to be the best value for those looking to learn and not just be informed. With the rapid development cycle Microsoft is embracing, we’re already seeing Build happening twice within a 12 month period. If that continues, the value of attending Build in person starts to diminish – especially with so much content available online. If Microsoft wants Build to be a must-attend event in the future, they need to start incorporating aspects of Tech Ed, past PDCs, and other conferences so those that want to leave with more than free swag have something to attract them.

    Read the article

  • Where to look for the real URL

    - by smallB
    I'm trying to write simple application for downloading videos from youtube. My code for getting file (http://www.youtube.com/watch?v=pViMzR_ylXg) looks like: bool FD_core::get_file() { QNetworkRequest request; request.setUrl(QUrl("http://www.youtube.com/watch?v=pViMzR_ylXg")); connect(network_access_manager_, SIGNAL(finished(QNetworkReply*)), this, SLOT(onRequestCompleted(QNetworkReply *))); network_access_manager_->get(request); return true; } void FD_core::onRequestCompleted(QNetworkReply * reply) { QByteArray data_ = reply->readAll(); cout << data_.constData(); qDebug() << "size: " << data_.size(); } In the above function data_.constData() produces lots of text, part (very small) of it: <!DOCTYPE html> <html lang="en" dir="ltr" > <head> <script> var yt = yt || {};yt.timing = yt.timing || {};yt.timing.tick = function(label, opt_time) {var timer = yt.timing['timer'] || {};if(opt_time) {timer[label] = opt_time;}else {timer[label] = new Date().getTime();}yt.timing['timer'] = timer;};yt.timing.info = function(label, value) {var info_args = yt.timing['info_args'] || {};info_args[label] = value;yt.timing['info_args'] = info_args;};yt.timing.info('e', "907050,906359,927900,919320,914021,916611,922401,920704,912806,927201,925706,928001,922403,913546,913556,920201,911116,901451");yt.timing.wff = true;yt.timing.info('pr', "1");yt.timing.info('an', "dclk,aftv,afv");if (document.webkitVisibilityState == 'prerender') {document.addEventListener('webkitvisibilitychange', function() {yt.timing.tick('start');}, false);}yt.timing.tick('start');yt.timing.info('li','0');try {yt.timing['srt'] = window.gtbExternal && window.gtbExternal.pageT() ||window.external && window.external.pageT;} catch(e) {}if (window.chrome && window.chrome.csi) {yt.timing['srt'] = Math.floor(window.chrome.csi().pageT);}if (window.msPerformance && window.msPerformance.timing) {yt.timing['srt'] = window.msPerformance.timing.responseStart - window.msPerformance.timing.navigationStart;} </script> <script>var yt = yt || {};yt.preload = {};yt.preload.counter_ = 0;yt.preload.start = function(src) {var img = new Image();var counter = ++yt.preload.counter_;yt.preload[counter] = img;img.onload = img.onerror = function () {delete yt.preload[counter];};img.src = src;img = null;};yt.preload.start("http:\/\/o-o---preferred---sn-xn5ucu-q0ce---v3---lscache7.c.youtube.com\/crossdomain.xml");yt.preload.start("http:\/\/o-o---preferred---sn-xn5ucu-q0ce---v3---lscache7.c.youtube.com\/generate_204?ip=95.83.224.63\u0026upn=A3aUhLYV55M\u0026sparams=algorithm%2Cburst%2Ccp%2Cfactor%2Cgcr%2Cid%2Cip%2Cipbits%2Citag%2Csource%2Cupn%2Cexpire\u0026fexp=907050%2C906359%2C927900%2C919320%2C914021%2C916611%2C922401%2C920704%2C912806%2C927201%2C925706%2C928001%2C922403%2C913546%2C913556%2C920201%2C911116%2C901451\u0026mt=1354207274\u0026key=yt1\u0026algorithm=throttle-factor\u0026burst=40\u0026ipbits=8\u0026itag=34\u0026sver=3\u0026signature=692E605215EB4D2CA407291CA26E14B844768A89.7A2930CE25FDDFC7C4FF5AA56DD02538B0020267\u0026mv=m\u0026source=youtube\u0026ms=au\u0026gcr=ie\u0026expire=1354228237\u0026factor=1.25\u0026cp=U0hUSVJNVl9IUUNONF9KR1pDOi0tSFhhRzVFRkd6\u0026id=a5588ccd1ff29578");</script><title>Die Antwoord - Fok Julle Naaiers (Mike Tyson&#39;s Words NOT DJ Hi-Teks) - YouTube</title><link rel="search" type="application/opensearchdescription+xml" href="http://www.youtube.com/opensearch?locale=en_US" title="YouTube Video Search"><link rel="icon" href="http://s.ytimg.com/yts/img/favicon-vfldLzJxy.ico" type="image/x-icon"><link rel="shortcut icon" 
href="http://s.ytimg.com/yts/img/favicon-vfldLzJxy.ico" type="image/x-icon"> <link rel="icon" href="//s.ytimg.com/yts/img/favicon_32-vflWoMFGx.png" sizes="32x32"><link rel="canonical" href="/watch?v=pViMzR_ylXg"><link rel="alternate" media="handheld" href="http://m.youtube.com/watch?v=pViMzR_ylXg"><link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.youtube.com/watch?v=pViMzR_ylXg"><link rel="shortlink" href="http://youtu.be/pViMzR_ylXg"> <meta name="title" content="Die Antwoord - Fok Julle Naaiers (Mike Tyson&#39;s Words NOT DJ Hi-Teks)"> <meta name="description" content="Some of the lyrics of &quot;Die Antwoord&quot; new single &quot;Fok Julle Naaiers&quot; have caused such controversy that Die Antwoord have split with their record label Intersc..."> <meta name="keywords" content="Die Antwoord, Fok Julle Naaiers, Mike Tyson, DJ Hi-Tek, Faggot"> <link rel="alternate" type="application/json+oembed" href="http://www.youtube.com/oembed?url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DpViMzR_ylXg&amp;format=json" title="Die Antwoord - Fok Julle Naaiers (Mike Tyson&#39;s Words NOT DJ Hi-Teks)"> <link rel="alternate" type="text/xml+oembed" href="http://www.youtube.com/oembed?url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DpViMzR_ylXg&amp;format=xml" title="Die Antwoord - Fok Julle Naaiers (Mike Tyson&#39;s Words NOT DJ Hi-Teks)"> <meta property="og:url" content="http://www.youtube.com/watch?v=pViMzR_ylXg"> <meta property="og:title" content="Die Antwoord - Fok Julle Naaiers (Mike Tyson&#39;s Words NOT DJ Hi-Teks)"> <meta property="og:description" content="Some of the lyrics of &quot;Die Antwoord&quot; new single &quot;Fok Julle Naaiers&quot; have caused such controversy that Die Antwoord have split with their record label Intersc..."> <meta property="og:type" content="video"> <meta property="og:image" content="http://i1.ytimg.com/vi/pViMzR_ylXg/mqdefault.jpg"> <meta property="og:video" content="http://www.youtube.com/v/pViMzR_ylXg?version=3&amp;autohide=1"> <meta property="og:video:type" content="application/x-shockwave-flash"> <meta property="og:video:width" content="853"> <meta property="og:video:height" content="480"> <meta property="og:site_name" content="YouTube"> <meta property="fb:app_id" content="87741124305"> <meta name="twitter:card" value="player"> <meta name="twitter:site" value="@youtube"> <meta name="twitter:player" value="https://www.youtube.com/embed/pViMzR_ylXg"> <meta property="twitter:player:width" content="853"> <meta property="twitter:player:height" content="480"> So my question is, where in this file is the url hidden which will allow me to download the wanted file?
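    The watch page returned above is only the player HTML; at the time this question was asked the direct stream URLs were buried, percent-encoded, in the player configuration under a key such as url_encoded_fmt_stream_map rather than in any plain link. The Qt sketch below shows one way of pulling that field out; the key name and page layout are assumptions and change frequently, so it is illustrative only.
        // Sketch: locate the url_encoded_fmt_stream_map field in the page that was
        // just downloaded and print the decoded stream entries. Assumes the field
        // is present and that <QUrl>/<QDebug> are included.
        void FD_core::onRequestCompleted(QNetworkReply *reply)
        {
            const QByteArray data = reply->readAll();
            const QString page = QString::fromUtf8(data.constData());
            const QString key  = "\"url_encoded_fmt_stream_map\": \"";
            int start = page.indexOf(key);
            if (start < 0) { qDebug() << "stream map not found"; return; }
            start += key.length();
            const int end = page.indexOf('"', start);
            const QString map = page.mid(start, end - start);

            // comma-separated entries, each a percent-encoded url=...&itag=... string
            foreach (const QString &entry, map.split(',')) {
                qDebug() << QUrl::fromPercentEncoding(entry.toUtf8());
            }
            reply->deleteLater();
        }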

    Read the article

  • Inventory Management concepts in XNA game

    - by user1332755
    I am trying to code the inventory system in my first real game so I have very little experience in both c# and game engine development. Basically, I need some general guidance and tips with how to structure and organize these sorts of systems. Please tell me if I am on the right track or not before I get too deep into making some badly structured system. It's fine if you don't feel like looking through my code, suggestions about general structure would also be appreciated. What I am aiming to end up with is some sort of system like Minecraft or Terraria. It must include: main inventory GUI (items can be dragged and placed in whatever slot desired Itembar outside of the main inventory which can be assigned to certain items the ability to use items from either location So far, I have 4 main classes: Inventory holds the general info and methods, inventoryslot holds info for individual slots, Itembar holds all info and methods for itself, and finally, ItemManager to manage interactions between the two and hold a master list of items. So far, my itembar works perfectly and interacts well with mousedragging items into and out of it as well as activating the item effect. Here is the code I have so far: (there is a lot but I will try to keep it relevant) This is the code for the itembar on the main screen: class Itembar { public Texture2D itembarfull, iSelected; public static Rectangle itembar = new Rectangle(5, 218, 40, 391); public Rectangle box1 = new Rectangle(itembar.X, 218, 40, 40); //up to 10 Rectangles for each slot public int Selected = 0; private ItemManager manager; public Itembar(Texture2D texture, Texture2D texture3, ItemManager mann) { itembarfull = texture; iSelected = texture3; manager = mann; } public void Update(GameTime gametime) { } public void Draw(SpriteBatch spriteBatch) { spriteBatch.Draw( itembarfull, new Vector2 (itembar.X, itembar.Y), null, Color.White, 0.0f, Vector2.Zero, 1.0f, SpriteEffects.None, 1.0f); if (Selected == 1) spriteBatch.Draw(iSelected, new Rectangle(box1.X-3, box1.Y-3, box1.Width+6, box1.Height+6), Color.White); //goes up to 10 slots } public int Box1Query() { foreach (Item item in manager.items) { if(box1.Contains(item.BoundingBox)) return manager.items.IndexOf(item); } return 999; } //10 different box queries It is working fine right now. I just put an Item in there and the box will query things like the item's effects, stack number, consumable or not etc...This one is basically almost complete. Here is the main inventory class: class Inventory { public bool isActive; public List<Rectangle> mainSlots = new List<Rectangle>(24); public List<InventorySlot> mainSlotscheck = new List<InventorySlot>(24); public static Rectangle inv = new Rectangle(841, 469, 156, 231); public Rectangle invfull = new Rectangle(inv.X, inv.Y, inv.Width, inv.Height); public Rectangle inv1 = new Rectangle(inv.X + 4, inv.Y +3, 32, 32); //goes up to inv24 resulting in a 6x4 grid of Rectangles public Inventory() { mainSlots.Add(inv1); mainSlots.Add(inv2); mainSlots.Add(inv3); mainSlots.Add(inv4); //goes up to 24 foreach (Rectangle slot in mainSlots) mainSlotscheck.Add(new InventorySlot(slot)); } //update and draw methods are empty because im not too sure what to put there public int LookforfreeSlot() { int slotnumber = 999; for (int x = 0; x < mainSlots.Count; x++) { if (mainSlotscheck[x].isFree) { slotnumber = x; break; } } return slotnumber; } } } LookforFreeSlot() method is meant to be called when I do AddtoInventory(). 
I'm kinda stumped about what other things I need to put in this class. Here is the inventorySlot class: (its main purpose is to check the bool "isFree" to see whether or not something already occupies the slot. But i guess it can also do other stuff like get item info.) class InventorySlot { public int X, Y; public int Width = 32, Height = 32; public Vector2 Position; public int slotnumber; public bool free = true; public int? content = null; public bool isFree { get { return free; } set { free = value; } } public InventorySlot(Rectangle slot) { slot = new Rectangle(X, Y, Width, Height); } } } Finally, here is the ItemManager (I am omitting the master list because it is too long) class ItemManager { public List<Item> items = new List<Item>(20); public List<Item> inventory1 = new List<Item>(24); public List<Item> inventory2 = new List<Item>(24); public List<Item> inventory3 = new List<Item>(24); public List<Item> inventory4 = new List<Item>(24); public Texture2D icon, filta; private Rectangle msRect; MouseState mouseState; public int ISelectedIndex; Inventory inventory; SpriteFont font; public void GenerateItems() { items.Add(new Item(new Rectangle(0, 0, 32, 32), icon, font)); items[0].name = "Grass Chip"; items[0].itemID = 0; items[0].consumable = true; items[0].stackable = true; items[0].maxStack = 99; items.Add(new Item(new Rectangle(32, 0, 32, 32), icon, font)); //master list continues. it will generate all items in the game; } public ItemManager(Inventory inv, Texture2D itemsheet, Rectangle mouseRectt, MouseState ms, Texture2D fil, SpriteFont f) { icon = itemsheet; msRect = mouseRectt; filta = fil; mouseState = ms; inventory = inv; font = f; } //once again, no update or draw public void mousedrag() { items[0].DestinationRect = new Rectangle (msRect.X, msRect.Y, 32, 32); items[0].dragging = true; } public void AddtoInventory(Item item) { int index = inventory.LookforfreeSlot(); if (index == 999) return; item.DestinationRect = inventory.mainSlots[index]; inventory.mainSlotscheck[index].content = item.itemID; inventory.mainSlotscheck[index].isFree = false; item.IsActive = true; } } } The mousedrag works pretty well. AddtoInventory doesn't work because LookforfreeSlot doesn't work. Relevant code from the main program: When I want to add something to the main inventory, I do something like this: foreach (Particle ether in ether1.ethers) { if (ether.isCollected) itemmanager.AddtoInventory(itemmanager.items[14]); } This turned out to be much longer than I had expected :( But I hope someone is interested enough to comment.
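    One concrete problem worth flagging in the code above: the InventorySlot constructor assigns a brand-new Rectangle to its own parameter, so the slot never records where it sits, and anything that later needs the slot's bounds (hit-testing a dragged item, pairing slots with rectangles in LookforfreeSlot) has nothing to work with. A small corrective sketch, keeping roughly the same member names:
        // Sketch: store the rectangle that describes the slot instead of discarding it.
        // Assumes the usual using Microsoft.Xna.Framework; for Rectangle.
        class InventorySlot
        {
            public Rectangle Bounds;        // where this slot is drawn / hit-tested
            public bool IsFree = true;
            public int? Content = null;     // itemID currently held, if any

            public InventorySlot(Rectangle slot)
            {
                Bounds = slot;              // keep the caller's rectangle
            }
        }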

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as a part MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves delivers 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article  Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: · one running the replication master · one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz. - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` (    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,    `k` int(10) unsigned NOT NULL DEFAULT '0',    `c` char(120) NOT NULL DEFAULT '',    `pad` char(60) NOT NULL DEFAULT '',    PRIMARY KEY (`id`),    KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 number of workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select Development Release tab). You can learn more about MySQL 5.6 from the documentation  Please don’t hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results
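    For readers wanting to repeat the worker-count sweep, the per-iteration reconfiguration described above reduces to a few statements; the binlog file name and position passed to MASTER_POS_WAIT below are placeholders for whatever the master actually reports.
        STOP SLAVE SQL_THREAD;
        SET GLOBAL slave_parallel_workers = 10;   -- match the number of schemas
        START SLAVE SQL_THREAD;
        -- block until the updates captured in the relay log have been applied
        SELECT MASTER_POS_WAIT('mysql-bin.000003', 107);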

    Read the article

  • Chrome and Firefox not able to find some sites after Ubuntu's recent update?

    - by gkr
    Loading of revision3.com in Chrome stops and status saying "waiting for static.inplay.tubemogul.com" grooveshark.com in Chrome never loads But wikipedia.org, google.com works just normal same behavior in Firefox too. I use wired DSL connection in Ubuntu 12.04. I guess this started happening after I upgraded the Chrome browser or Flash plugin few days ago from Ubuntu updates through update-manager. Thanks EDIT: Everything works normal in my WinXP. Problem is only in Ubuntu 12.04. This is what I get when I use wget gkr@gkr-desktop:~$ wget revision3.com --2012-06-12 08:58:01-- http://revision3.com/ Resolving revision3.com (revision3.com)... 173.192.117.198 Connecting to revision3.com (revision3.com)|173.192.117.198|:80... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [text/html] Saving to: `index.html' [ ] 81,046 32.0K/s in 2.5s 2012-06-12 08:58:04 (32.0 KB/s) - `index.html' saved [81046] gkr@gkr-desktop:~$ wget static.inplay.tubemogul.com --2012-06-12 08:51:25-- http://static.inplay.tubemogul.com/ Resolving static.inplay.tubemogul.com (static.inplay.tubemogul.com)... 72.21.81.253 Connecting to static.inplay.tubemogul.com (static.inplay.tubemogul.com)|72.21.81.253|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2012-06-12 08:51:25 ERROR 404: Not Found. grooveshark.com nevers responses so I have to Ctrl+C to terminate wget. gkr@gkr-desktop:~$ wget grooveshark.com --2012-06-12 08:51:33-- http://grooveshark.com/ Resolving grooveshark.com (grooveshark.com)... 8.20.213.76 Connecting to grooveshark.com (grooveshark.com)|8.20.213.76|:80... connected. HTTP request sent, awaiting response... ^C gkr@gkr-desktop:~$ This is the apt term.log of updates I mentioned Log started: 2012-06-09 04:51:45 (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 161392 files and directories currently installed.) Preparing to replace google-chrome-stable 19.0.1084.52-r138391 (using .../google-chrome-stable_19.0.1084.56-r140965_i386.deb) ... Unpacking replacement google-chrome-stable ... Preparing to replace libpulse0 1:1.1-0ubuntu15 (using .../libpulse0_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulse0 ... Preparing to replace libpulse-mainloop-glib0 1:1.1-0ubuntu15 (using .../libpulse-mainloop-glib0_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulse-mainloop-glib0 ... Preparing to replace libpulsedsp 1:1.1-0ubuntu15 (using .../libpulsedsp_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulsedsp ... Preparing to replace sudo 1.8.3p1-1ubuntu3.2 (using .../sudo_1.8.3p1-1ubuntu3.3_i386.deb) ... Unpacking replacement sudo ... Preparing to replace flashplugin-installer 11.2.202.235ubuntu0.12.04.1 (using .../flashplugin-installer_11.2.202.236ubuntu0.12.04.1_i386.deb) ... Unpacking replacement flashplugin-installer ... Preparing to replace pulseaudio-utils 1:1.1-0ubuntu15 (using .../pulseaudio-utils_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-utils ... 
Preparing to replace pulseaudio 1:1.1-0ubuntu15 (using .../pulseaudio_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio ... Preparing to replace pulseaudio-module-gconf 1:1.1-0ubuntu15 (using .../pulseaudio-module-gconf_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-gconf ... Preparing to replace pulseaudio-module-x11 1:1.1-0ubuntu15 (using .../pulseaudio-module-x11_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-x11 ... Preparing to replace shared-mime-info 1.0-0ubuntu4 (using .../shared-mime-info_1.0-0ubuntu4.1_i386.deb) ... Unpacking replacement shared-mime-info ... Preparing to replace pulseaudio-module-bluetooth 1:1.1-0ubuntu15 (using .../pulseaudio-module-bluetooth_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-bluetooth ... Processing triggers for menu ... /usr/share/menu/downverter.menu: 1: /usr/share/menu/downverter.menu: Syntax error: word unexpected (expecting ")") Processing triggers for man-db ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for update-notifier-common ... flashplugin-installer: downloading http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.2.202.236.orig.tar.gz Installing from local file /tmp/tmpNrBt4g.gz Flash Plugin installed. Processing triggers for doc-base ... Processing 1 changed doc-base file... Registering documents with scrollkeeper... Setting up google-chrome-stable (19.0.1084.56-r140965) ... Setting up libpulse0 (1:1.1-0ubuntu15.1) ... Setting up libpulse-mainloop-glib0 (1:1.1-0ubuntu15.1) ... Setting up libpulsedsp (1:1.1-0ubuntu15.1) ... Setting up sudo (1.8.3p1-1ubuntu3.3) ... Installing new version of config file /etc/pam.d/sudo ... Setting up flashplugin-installer (11.2.202.236ubuntu0.12.04.1) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Setting up pulseaudio-utils (1:1.1-0ubuntu15.1) ... Setting up pulseaudio (1:1.1-0ubuntu15.1) ... Setting up pulseaudio-module-gconf (1:1.1-0ubuntu15.1) ... Setting up pulseaudio-module-x11 (1:1.1-0ubuntu15.1) ... Setting up shared-mime-info (1.0-0ubuntu4.1) ... Setting up pulseaudio-module-bluetooth (1:1.1-0ubuntu15.1) ... Processing triggers for menu ... /usr/share/menu/downverter.menu: 1: /usr/share/menu/downverter.menu: Syntax error: word unexpected (expecting ")") Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place Log ended: 2012-06-09 04:53:32 This is from history.log Start-Date: 2012-06-09 04:51:45 Commandline: aptdaemon role='role-commit-packages' sender=':1.56' Upgrade: shared-mime-info:i386 (1.0-0ubuntu4, 1.0-0ubuntu4.1), pulseaudio:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulse-mainloop-glib0:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), sudo:i386 (1.8.3p1-1ubuntu3.2, 1.8.3p1-1ubuntu3.3), pulseaudio-module-bluetooth:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), pulseaudio-module-x11:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), flashplugin-installer:i386 (11.2.202.235ubuntu0.12.04.1, 11.2.202.236ubuntu0.12.04.1), pulseaudio-utils:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), pulseaudio-module-gconf:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulse0:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulsedsp:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), google-chrome-stable:i386 (19.0.1084.52-r138391, 19.0.1084.56-r140965) End-Date: 2012-06-09 04:53:32
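    One avenue worth checking when a TCP connection opens but the page never finishes loading (exactly what the grooveshark.com wget shows) is the interface MTU, which matters on DSL/PPPoE links. The commands below are a diagnostic sketch, not a confirmed cause for this machine.
        # Does a full-size, non-fragmentable packet make it out? (1472 + 28 bytes of headers = 1500)
        ping -c 3 -M do -s 1472 8.8.8.8
        # If not, try a PPPoE-sized MTU on the wired interface (the value is an assumption)
        sudo ip link set dev eth0 mtu 1492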

    Read the article

  • HTML shows after submitting form and is nowhere to be found in the PHP script

    - by Kelbizzle
    Upon submitting this form on my site. It send me to a page that says. "Use Back - fill in all fields Use back! ! " But this html isn't in the mail script anywhere. Where could this be coming from? I started out using this contact form (http://www.ibdhost.com/contact/) then changed it a little. Here is the mail script. <?php session_start(); ?> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Sendemail Script</title> </head> <body> <!-- Reminder: Add the link for the 'next page' (at the bottom) --> <!-- Reminder: Change 'YourEmail' to Your real email --> <?php //the 3 variables below were changed to use the SERVER variable $ip = $_SERVER['REMOTE_ADDR']; $httpref = $_SERVER['HTTP_REFERER']; $httpagent = $_SERVER['HTTP_USER_AGENT']; $visitorf = $_POST['visitorf']; $visitorl = $_POST['visitorl']; $visitormail = $_POST['visitormail']; $visitorphone = $_POST['visitorphone']; //$notes = $_POST['notes']; //$attn = $_POST['attn']; $lookup = array( 'The Election Report' => 'http://www.mydowmain.net/', '5 Resons' => 'http://www.mydomain.net/', 'Report 3' => 'http://someotherurl3.com/', 'Report 4' => 'http://someotherurl4.com/', 'Report 5' => 'http://someotherurl5.com/', // et cetera for your other values ); $attn = trim($_POST['attn']); $url = $lookup[$attn]; //echo 'attn: ' . $attn . ', url:' . $url; die; //additional headers $headers = 'From: US <[email protected]>' . "\r\n"; //$headers .= 'BCC: [email protected]' . "\r\n"; $todayis = date("l, F j, Y, g:i a") ; $subject = "your lead has downloaded a report."; $subjectdp = "Someone has downloaded a report!"; $notes = stripcslashes($notes); $message = "Dear PAl Affiliate,\n\nA prospective lead of yours has downloaded a report from our Website.\nAny contact information they have left and a link to the report they downloaded\ncan be found below. This is the perfect opportunity for you to open up a line of\ncommunication with the prospect and find out their intrests! If you have any questions\nabout this email please feel free to email us at [email protected]\n\n\nFrom: $visitorf $visitorl ($visitormail)\nTelephone Number: $visitorphone \nReport Downloaded:$url\n \n\nBest regards,\nThe Crew"; //$message = "$todayis [EST] \nAttention: \nMessage: $notes \nFrom: $visitorf $visitorl ($visitormail) \nTelephone Number: //$visitorphone \nReport Downloaded:$url\nAdditional Info : IP = $ip \nBrowser Info: $httpagent \nReferral : $httpref\n"; $messagedp = "A Visitor has just downloaded a report. You can find their contact information below.\n \n ***********************************************************************\n From: $visitorf $visitorl\n Email: $visitormail\n Telephone Number: $visitorphone \n Report Downloaded:$url\n \n \n Best regards,\n The Crew\n"; $messagelead = "Dear, $visitorf\n \n \n We appreciate your interest. Below you will find the URL to download the report you requested.\n Things are always changing in costa rica , so check back often. Also, check us out on Facebook & Twitter \n for daily updates. If there is anything we can do at anytime to enhance your experience, please do\n not hesitate to contact us.\n \n To download your report simply click on the link below. 
(You must have Adobe Reader or an alternative PDF reader installed)\n \n *** Download Link ***\n $url\n"; //check if the function even exists if(function_exists("mail")) { //send the email mail($_SESSION['email'], $subject, $message, $headers) or die("could not send email"); } else { die("mail fucntion not enabled"); } //send the email to us mail('[email protected]', $subjectdp, $messagedp); //send the email to the lead mail($visitormail, 'Thanks for downloading the report!', $messagelead, $headers); header( "Location: http://www.mydomain.com/thanks_report.php" ); ?> </body> </html>
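    "Use Back - fill in all fields" is the stock empty-field message from the original ibdhost contact script this form was based on, so the request is very likely still reaching a copy of that validation code, typically because the form's action attribute points at the old handler or an older copy of the script is still on the server. A quick way to confirm (the web-root path is an assumption):
        # Find every file on the server that still contains the stock message,
        # then compare against the URL in the form's action="" attribute.
        grep -ril "fill in all fields" /var/www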

    Read the article

  • Ubuntu 12.04 + Wifi not working

    - by user171154
    i'm having problems connecting over wireless. At the moment, I'm using wicd. It seems to get stuck on "Verifying AP association...". Without wicd I can get the connection up and ping the Net - but if I take eth0 down (ifconfig eth0 down), my wireless goes away too (same result if I unplug the wire instead). wicd is the only way I can bring eth0 back (which is the main reason I'm using it) - ifconfig eth0 and/or ifup eth0 do not re-enable the connection (I just discovered it leaves out the gateway. Adding the gateway back in re-enables the connection including wifi; I didn't want to delete the info about wicd above in case it gives someone an idea.) Doing it manually, despite the errors (which it would be nice to also resolve) - allows me to ping the outside world: ifup wlan0 ioctl[SIOCSIWENCODEEXT]: Invalid argument ioctl[SIOCSIWENCODEEXT]: Invalid argument ssh stop/waiting ssh start/running, process 17336 ping -I wlan0 -c 4 8.8.8.8 PING 8.8.8.8 (8.8.8.8) from 192.168.0.12 wlan0: 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_req=1 ttl=43 time=48.8 ms 64 bytes from 8.8.8.8: icmp_req=2 ttl=43 time=47.9 ms 64 bytes from 8.8.8.8: icmp_req=3 ttl=43 time=48.7 ms 64 bytes from 8.8.8.8: icmp_req=4 ttl=43 time=53.2 ms --- 8.8.8.8 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 47.975/49.711/53.235/2.063 ms # iwconfig lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"TPLINK" Mode:Managed Frequency:2.427 GHz Access Point: 64:66:xx:xx:xx:22 Bit Rate=108 Mb/s Tx-Power=27 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=70/70 Signal level=-39 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:3 Missed beacon:0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 01 serial: f0:7d:68:c1:b4:13 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.2.0-67-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:17 memory:dfbf0000-dfbfffff ip route default via 192.168.0.1 dev eth0 default via 192.168.0.1 dev wlan0 metric 100 169.254.0.0/16 dev wlan0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.102 192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.12 (For the record, I have no idea what the 169.254.0.0 address is doing there.) 
uname -a 3.2.0-67-generic-pae #101-Ubuntu SMP Tue Jul 15 18:04:54 UTC 2014 i686 i686 i386 GNU/Linux lshw -C network *-network description: Ethernet interface product: NetXtreme BCM5751 Gigabit Ethernet PCI Express vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 01 serial: 00:11:11:59:fc:09 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5751-v3.23a ip=192.168.0.102 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:16 memory:dfcf0000-dfcfffff *-network description: Wireless interface product: AR5418 Wireless Network Adapter [AR5008E 802.11(a)bgn] (PCI-Express) vendor: Qualcomm Atheros physical id: 0 /etc/network/interfaces # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback source /etc/network/interfaces.eth0 source /etc/network/interfaces.wlan0 /etc/network/interfaces.eth0 #Main Interface auto eth0 iface eth0 inet static address 192.168.0.102 netmask 255.255.255.0 gateway 192.168.0.1 /etc/network/interfaces.wlan0 auto wlan0 iface wlan0 inet static address 192.168.0.12 gateway 192.168.0.1 dns-nameservers 192.168.0.1 8.8.8.8 netmask 255.255.255.0 wpa-driver wext wpa-ssid TPLINK wpa-ap-scan 1 wpa-proto RSN wpa-pairwise CCMP wpa-group CCMP wpa-key-mgmt WPA-PSK wpa-psk dca1badb5fd4e9axxx4xxdaaxxfa91xx610bxx6a7d57ef67af9809dxx6af42e39 /etc/wpa_supplicant.conf ctrl_interface=/var/run/wpa_supplicant network={ ssid="TPLINK" psk="my password" key_mgmt=WPA-PSK proto=RSN pairwise=CCMP group=CCMP } ifdown eth0 ifdown: interface eth0 not configured ifconfig eth0 Link encap:Ethernet HWaddr 00:11:xx:xx:xx:09 inet addr:192.168.0.102 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::211:11ff:fe59:fc09/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:213690 errors:0 dropped:0 overruns:0 frame:0 TX packets:155266 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:220057808 (220.0 MB) TX bytes:21137696 (21.1 MB) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:196412 errors:0 dropped:0 overruns:0 frame:0 TX packets:196412 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:153270697 (153.2 MB) TX bytes:153270697 (153.2 MB) wlan0 Link encap:Ethernet HWaddr f0:7d:xx:xx:xx:13 inet addr:192.168.0.12 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::f27d:68ff:fec1:b413/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11335 errors:0 dropped:0 overruns:0 frame:0 TX packets:7287 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2563290 (2.5 MB) TX bytes:855746 (855.7 KB) ifconfig eth0 down ifconfig eth0 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:09 inet addr:192.168.0.102 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::211:11ff:fe59:fc09/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2 errors:0 dropped:0 overruns:0 frame:0 TX packets:1 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:192 (192.0 B) TX bytes:94 (94.0 B) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING 
MTU:16436 Metric:1 RX packets:196418 errors:0 dropped:0 overruns:0 frame:0 TX packets:196418 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:153270871 (153.2 MB) TX bytes:153270871 (153.2 MB) wlan0 Link encap:Ethernet HWaddr f0:7d:xx:xx:xx:13 inet addr:192.168.0.12 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::f27d:68ff:fec1:b413/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11359 errors:0 dropped:0 overruns:0 frame:0 TX packets:7293 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2565482 (2.5 MB) TX bytes:856363 (856.3 KB) ip route default via 192.168.0.1 dev wlan0 metric 100 169.254.0.0/16 dev wlan0 scope link metric 1000 192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.12 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.102 ping -I wlan0 -c 4 8.8.8.8 PING 8.8.8.8 (8.8.8.8) from 192.168.0.12 wlan0: 56(84) bytes of data. --- 8.8.8.8 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3024ms ping -I eth0 -c 3 router PING router (192.168.0.1) from 192.168.0.102 eth0: 56(84) bytes of data. --- router ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2015ms ping -I wlan0 -c 3 router PING router (192.168.0.1) from 192.168.0.12 wlan0: 56(84) bytes of data. --- router ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2014ms Let me know if you need more info. Thank you in advance.
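A minimal sketch of the workaround described above (re-adding the default gateway once eth0 has been taken down), assuming the router at 192.168.0.1 and the wlan0 address from the interfaces files - untested here, so treat it only as a starting point:

# see whether a usable default route survived taking eth0 down
ip route
# if not, point the default route at the router via the wireless interface
ip route replace default via 192.168.0.1 dev wlan0
# confirm traffic leaves through wlan0 again
ping -I wlan0 -c 3 8.8.8.8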

    Read the article

  • Linux C: "Interactive session" with separate read and write named pipes?

    - by ~sd-imi
    Hi all, I am trying to work with "Introduction to Interprocess Communication Using Named Pipes - Full-Duplex Communication Using Named Pipes", http://developers.sun.com/solaris/articles/named_pipes.html#5 - in particular fd_server.c (included below for reference). Here is my system info and compile line:

:~$ cat /etc/issue
Ubuntu 10.04 LTS \n \l
:~$ gcc --version
gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3
:~$ gcc fd_server.c -o fd_server

fd_server.c creates two named pipes, one for reading and one for writing. In one terminal, one can run the server and read (through cat) its write pipe:

:~$ ./fd_server 2>/dev/null &
[1] 11354
:~$ cat /tmp/np2

and in another, write (using echo) to the server's read pipe:

:~$ echo "heeellloooo" > /tmp/np1

Going back to the first terminal, one can see:

:~$ cat /tmp/np2
HEEELLLOOOO 0
[1]+  Exit 13    ./fd_server 2>/dev/null

What I would like to do is make a sort of "interactive" (or "shell"-like) session: the server runs as usual, but instead of using cat and echo I'd like something akin to screen. What I mean is that screen can be called like screen /dev/ttyS0 38400, and it then gives an interactive session in which whatever is typed in the terminal is passed to /dev/ttyS0 and the response is written back to the terminal. Of course, I cannot use screen here, because in my case the program has two separate nodes, and as far as I can tell screen can refer to only one. How would one go about achieving this sort of "interactive" session in this context (with two separate read/write pipes)? Thanks, cheers! Code below:

#include <stdio.h>
#include <errno.h>
#include <ctype.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>   /* read, write */
//#include <fullduplex.h> /* For name of the named-pipe */
#define NP1 "/tmp/np1"
#define NP2 "/tmp/np2"
#define MAX_BUF_SIZE 255
#include <stdlib.h> //exit
#include <string.h> //strlen

int main(int argc, char *argv[])
{
    int rdfd, wrfd, ret_val, count, numread;
    char buf[MAX_BUF_SIZE];

    /* Create the first named pipe */
    ret_val = mkfifo(NP1, 0666);
    if ((ret_val == -1) && (errno != EEXIST)) {
        perror("Error creating the named pipe");
        exit(1);
    }
    ret_val = mkfifo(NP2, 0666);
    if ((ret_val == -1) && (errno != EEXIST)) {
        perror("Error creating the named pipe");
        exit(1);
    }

    /* Open the first named pipe for reading */
    rdfd = open(NP1, O_RDONLY);
    /* Open the second named pipe for writing */
    wrfd = open(NP2, O_WRONLY);

    /* Read from the first pipe and NUL-terminate the buffer */
    numread = read(rdfd, buf, MAX_BUF_SIZE);
    buf[numread] = '\0';
    fprintf(stderr, "Full Duplex Server : Read From the pipe : %s\n", buf);

    /* Convert the string to upper case */
    count = 0;
    while (count < numread) {
        buf[count] = toupper(buf[count]);
        count++;
    }

    /* Write the converted string back to the second pipe */
    write(wrfd, buf, strlen(buf));
}

Edit: Right, just to clarify - I found a document discussing something very similar, http://en.wikibooks.org/wiki/Serial_Programming/Serial_Linux#Configuration_with_stty ("For example, the following script configures the device and starts a background process for copying all received data from the serial device to standard output..."). A modification of the script there for the above program is below:

# stty raw
#
( ./fd_server 2>/dev/null; )&
bgPidS=$!
( cat < /tmp/np2 ; )&
bgPid=$!
# Read commands from user, send them to device
echo $(kill -0 $bgPidS 2>/dev/null ; echo $?)
while [ "$(kill -0 $bgPidS 2>/dev/null ; echo $?)" -eq "0" ] && read cmd; do
  # redirect debug msgs to stderr, as here we're redirected to /tmp/np1
  echo "$? - $bgPidS - $bgPid" >&2
  echo "$cmd"
  echo -e "\nproc: $(kill -0 $bgPidS 2>/dev/null ; echo $?)" >&2
done >/tmp/np1
echo OUT
# Terminate background read processes - if they still exist
if [ "$(kill -0 $bgPid 2>/dev/null ; echo $?)" -eq "0" ] ; then
  kill $bgPid
fi
if [ "$(kill -0 $bgPidS 2>/dev/null ; echo $?)" -eq "0" ] ; then
  kill $bgPidS
fi
# stty cooked

So, saving the script as, say, starter.sh and running it results in the following session:

$ ./starter.sh
0
i'm typing here and pressing [enter] at end
0 - 13496 - 13497
I'M TYPING HERE AND PRESSING [ENTER] AT END
0~?.N=?(?~? ?????}????@??????~? [garble]
proc: 0
OUT

which is what I'd call an "interactive session" (ignoring the debug statements): the server waits for me to enter a command and prints its output once it receives one (and since in this case the server exits after the first command, so does the starter script). Except that I'd like the input not to be line-buffered but sent character by character, meaning the above session should exit after the first key press and print out a single letter only - which is what I expected stty raw to help with, but it doesn't: it just kills the reaction to both Enter and Ctrl-C :) . I was just wondering whether there is already an existing command (akin to screen for serial devices, I guess) that accepts two such named pipes as arguments and establishes a "terminal"- or "shell"-like session through them, or whether I'd have to use scripts like the above and/or write my own 'client' that behaves as a terminal.

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KENCONF. It was initially based on Igor Sysoev's (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Tunings are for FreeBSD-CURRENT. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. sysctl.conf: # No zero mapping feature # May break wine # (There are also reports about broken samba3) #security.bsd.map_at_zero=0 # If you have really busy webserver with apache13 you may run out of processes #kern.maxproc=10000 # Same for servers with apache2 / Pound #kern.threads.max_threads_per_proc=4096 # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Can cause this on older kernels: # http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=10485760 # Mbuf 2k clusters (on amd64 7.2+ 25600 is default) # For such high value vm.kmem_size must be increased to 3G kern.ipc.nmbclusters=262144 # Jumbo pagesize(_SC_PAGESIZE) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=262144 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=65536 #kern.ipc.nmbjumbo16=32768 # For lower latency you can decrease scheduler's maximum time slice # default: stathz/10 (~ 13) #kern.sched.slice=1 # Increase max command-line length showed in `ps` (e.g for Tomcat/Java) # Default is PAGE_SIZE / 16 or 256 on x86 # This avoids commands to be presented as [executable] in `ps` # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749 kern.ps_arg_cache_limit=4096 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # On some systems HPET is almost 2 times faster than default ACPI-fast # Useful on systems with lots of clock_gettime / gettimeofday calls # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222 kern.timecounter.hardware=HPET # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # This is useful on Fat-Long-Pipes #net.inet.tcp.recvbuf_max=10485760 #net.inet.tcp.recvbuf_inc=65535 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This is useful on Fat-Long-Pipes #net.inet.tcp.sendbuf_max=10485760 #net.inet.tcp.sendbuf_inc=65535 # Turn off receive autotuning # You can play with it. #net.inet.tcp.recvbuf_auto=0 #net.inet.tcp.sendbuf_auto=0 # This should be enabled if you going to use big spaces (>64k) # Also timestamp field is useful when using syncookies net.inet.tcp.rfc1323=1 # Turn this off on high-speed, lossless connections (LAN 1Gbit+) # If you set it there is no need in TCP_NODELAY sockopt (see man tcp) net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. 
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) # This sysctl was removed in 10-CURRENT: # See: http://www.mail-archive.com/[email protected]/msg06178.html #net.inet.tcp.inflight.enable=0 # TCP slowstart algorithm tunings # We assuming we have very fast clients #net.inet.tcp.slowstart_flightsize=100 #net.inet.tcp.local_slowstart_flightsize=100 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't checked it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # # stops route cache degregation during a high-bandwidth flood # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html #net.inet.ip.rtexpire=2 net.inet.ip.rtminexpire=2 net.inet.ip.rtmaxcache=1024 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # # There is also good example of sysctl.conf with comments: # http://www.thern.org/projects/sysctl.conf # # icmp may NOT rst, helpful for those pesky spoofed # icmp/udp floods that end up taking up your outgoing # bandwidth/ifqueue due to all that outgoing RST traffic. # #net.inet.tcp.icmp_may_rst=0 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # IPv6 Security # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6 # Disable Node info replies # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on unprotected node net.inet6.icmp6.nodeinfo=0 # Turn on IPv6 privacy extensions # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html net.inet6.ip6.use_tempaddr=1 net.inet6.ip6.prefer_tempaddr=1 # Disable ICMP redirect net.inet6.icmp6.rediraccept=0 # Disable acceptation of RA and auto linklocal generation if you don't use them #net.inet6.ip6.accept_rtadv=0 #net.inet6.ip6.auto_linklocal=0 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds # (default: 30000. RFC from 1979 recommends 120000) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=200000 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. # So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. 
Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becomes lower) vfs.ufs.dirhash_maxmem=67108864 # Note from commit http://svn.freebsd.org/base/head@211031 : # For systems with RAID volumes and/or virtualization envirnments, where # read performance is very important, increasing this sysctl tunable to 32 # or even more will demonstratively yield additional performance benefits. vfs.read_max=32 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 # ZFS # Enable prefetch. Useful for sequential load type i.e fileserver. # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and # on any amd64 systems with less than 4GB of avaiable memory # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html #vfs.zfs.prefetch_disable=0 # On highload servers you may notice following message in dmesg: # "Approaching the limit on PV entries, consider increasing either the # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable" vm.pmap.shpgperproc=2048 loader.conf: # Accept filters for data, http and DNS requests # Useful when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Linux specific devices in /dev # As for 8.1 it only /dev/full #lindev_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load="YES" #siis_load="YES" # FreeBSD 8.2+ # New Congestion Control for FreeBSD # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html #cc_chd_load="YES" # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! 
# # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # If your server has lots of swap (>4Gb) you should increase following value # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html # Otherwise you'll be getting errors # "kernel: swap zone exhausted, increase kern.maxswzone" # kern.maxswzone="256M" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 10% of mem) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # FreeBSD 9+ # HPET "legacy route" support. It should allow HPET to work per-CPU # See http://www.mail-archive.com/[email protected]/msg03603.html #hint.atrtc.0.clock=0 #hint.attimer.0.clock=0 #hint.hpet.0.legacy_route=1 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=512 net.inet.tcp.syncache.cachelimit=65536 # Increased hostcache # Later host cache can be viewed via net.inet.tcp.hostcache.list hidden sysctl # Very useful for it's RTT RTTVAR # Must be power of two net.inet.tcp.hostcache.hashsize=65536 # hashsize * bucketlimit (which is 30 by default) # It allocates 255Mb (1966080*136) of RAM net.inet.tcp.hostcache.cachelimit=1966080 # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Disable ipfw deny all # Should be uncommented when there is a chance that # kernel and ipfw binary may be out-of sync on next reboot #net.inet.ip.fw.default_to_accept=1 # # SIFTR (Statistical Information For TCP Research) is a kernel module that # logs a range of statistics on active TCP connections to a log file. # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4 # and man 4 sitfr #siftr_load="YES" # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em/igb drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # sysctl dev.em.0.stats=1 ; dmesg # sysctl dev.em.0.debug=1 ; dmesg # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable; # Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. 
See sysctl net.isr #net.isr.maxthreads=4 #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # # FreeBSD 9.x+ # Increase interface send queue length # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554 #net.link.ifqmaxlen=1024 # Nicer boot logo =) loader_logo="beastie" And finally here is KERNCONF: # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU >= 4k # This req. is only for receiving data. # Read more in man zero_copy_sockets # Also this epic thread on kernel trap: # http://kerneltrap.org/node/6506 # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent" #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE # There was stackoverflow found in KAME IPSec stack: # See http://secunia.com/advisories/43995/ # For quick workaround you can use `ipfw add deny proto ipcomp` options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL # On 8.1+ you can disable verbose to see blocked packets on ipfw0 interface. # Also there is no point in compiling verbose into the kernel, because # now there is net.inet.ip.fw.verbose tunable. #options IPFIREWALL_VERBOSE #options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron device amdtemp # Same for Intel processors device coretemp # man 4 cpuctl device cpuctl # CPU control pseudo-device # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. 
#options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # Debug & DTrace options KDB # Kernel debugger related code options KDB_TRACE # Print a stack trace for a panic options KDTRACE_FRAME # amd64-only(?) options KDTRACE_HOOKS # all architectures - enable general DTrace hooks #options DDB #options DDB_CTF # all architectures - kernel ELF linker loads CTF data # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (8.x+) #options TEKEN_UTF8 # FreeBSD 8.1+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html # (FYI: "resolution" is panic so use with caution) #options DEADLKRES # Increase maximum size of Raw I/O and sendfile(2) readahead #options MAXPHYS=(1024*1024) #options MAXBSIZE=(1024*1024) # For scheduler debug enable following option. # Debug will be available via `kern.sched.stats` sysctl # For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup #options SCHED_STATS If you are tuning network for maximum performance you may wish to play with ifconfig options like: # You can list all capabilities via `ifconfig -m` ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu In case you've enabled DDB in kernel config, you should edit your /etc/ddb.conf and add something like this to enable automatic reboot (and textdump as bonus): script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset And do not forget to add ddb_enable="YES" to /etc/rc.conf Since FreeBSD 9 you can select to enable/disable flowcontrol on your NIC: # See http://en.wikipedia.org/wiki/Ethernet_flow_control and # http://www.mail-archive.com/[email protected]/msg07927.html for additional info ifconfig bge0 media auto mediaopt flowcontrol PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. FreeBSD WIP * Whats cooking for FreeBSD 7? * Whats cooking for FreeBSD 8? * Whats cooking for FreeBSD 9? So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!
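For anyone trying these settings, runtime sysctls can be tested and persisted without a reboot - a minimal sketch using kern.ipc.somaxconn from the list above (loader.conf tunables and KERNCONF options still require a reboot or a kernel rebuild); the monitoring commands are the ones already mentioned in the post:

# check the description and current value first
sysctl -d kern.ipc.somaxconn
sysctl kern.ipc.somaxconn
# apply at runtime, then persist across reboots
sysctl kern.ipc.somaxconn=4096
echo 'kern.ipc.somaxconn=4096' >> /etc/sysctl.conf
# watch the effect on mbufs/clusters, kernel zones and process limits
netstat -m
vmstat -z | head
limits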

    Read the article
