Search Results

Search found 3061 results on 123 pages for 'interfaces'.

Page 64 of 123

  • iptables change destination IP without DNAT

    - by Mad_Ady
    Hello, I'm trying to work around a broken application which insists on connecting to the private (and thus unreachable) address of a server instead of its public address (even though the relevant port is open). Changing the application is not an option. I'm trying to add iptables rules on the client(s) so that packets destined for 192.168.251.3 are sent to 1.2.3.4 instead. DNAT isn't working for me, since 1.2.3.4 is not an IP on any of my client interfaces. Can anyone point me to the relevant documentation on using the mangle table to change destination IPs?
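
    For reference, a minimal sketch of the usual client-side approach, assuming the nat table is available on the client: DNAT is valid in the OUTPUT chain for locally generated traffic, and the --to-destination address does not need to be local to the client.

        # on the client: rewrite locally generated packets aimed at the private address
        iptables -t nat -A OUTPUT -d 192.168.251.3 -j DNAT --to-destination 1.2.3.4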

    Read the article

  • Cisco ASA Multiple Public IP

    - by KGDI
    I have a Cisco ASA 5510, and the articles I have found on the ASA and multiple public IPs say this can't be done. My question is how best to solve a scenario like this: I have three zones, Outside, Inside and DMZ. Outside is the Internet, Inside is the client machines, and DMZ is a zone for servers providing external and internal services. My real scenario is a bit more complex, but to keep things simple this will do: I want to place an Exchange server and a web server in the DMZ zone, both externally reachable. The web server uses TCP 80/443 and the Exchange server uses 443. So to the problem: with the ASA having only one public IP, how would you DNAT port 443 to both internal hosts behind that single public IP? When I do this kind of scenario with Linux boxes, I usually use alias interfaces like eth0:0 and eth0:1 and set one public IP on each. This must be a pretty common scenario; any ideas on how to solve it with the ASA? /KGDI
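
    For context, a single public IP can only present TCP 443 to one inside host at a time, so the common workaround is static PAT that publishes one of the services on a different outside port. A rough, unverified sketch in pre-8.3 ASA syntax (the DMZ addresses are invented for illustration, and matching access-list entries on the outside interface are still needed):

        ! web server keeps 443 on the outside interface address
        static (dmz,outside) tcp interface 443 192.168.2.10 443 netmask 255.255.255.255
        ! Exchange (OWA) is published on outside port 444 instead
        static (dmz,outside) tcp interface 444 192.168.2.20 443 netmask 255.255.255.255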

    Read the article

  • How to control routes added by RasDial

    - by Robert Dodier
    I am using the RasDial function on a Windows box (Windows Server 2008) to dial a device from which the server then reads data. It seems that some new routes are added to the routing table when the dial-up connection is made, and that interferes with the other network interfaces on the server. In particular, RasDial adds a default route pointing at the device, which makes the server unreachable until the connection is dropped. Is there a way to control which routes RasDial adds? I have been studying Microsoft's documentation for RasDial and the associated structures (RASDIALPARAMS, RASDIALEXTENSIONS) without finding anything about routing. There is an option for "Use default gateway on remote network" when configuring a VPN, but I don't see how to apply that in this case. Thanks for any light you can shed on this problem.
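
    The "Use default gateway on remote network" behaviour corresponds to the RASEO_RemoteDefaultGateway flag on the phonebook entry, so one hedged possibility is to clear that flag on the entry before dialing it. A rough sketch only; the entry name "MyDevice" is an assumption and error handling is omitted:

        #include <windows.h>
        #include <ras.h>
        #include <raserror.h>

        /* Clear the "use remote default gateway" option so RasDial should not
           install a default route for this phonebook entry. */
        void DisableRemoteDefaultGateway(void)
        {
            RASENTRY entry;
            DWORD size = sizeof(entry);

            ZeroMemory(&entry, sizeof(entry));
            entry.dwSize = sizeof(entry);

            if (RasGetEntryProperties(NULL, TEXT("MyDevice"), &entry, &size, NULL, NULL) == ERROR_SUCCESS) {
                entry.dwfOptions &= ~RASEO_RemoteDefaultGateway;
                RasSetEntryProperties(NULL, TEXT("MyDevice"), &entry, size, NULL, 0);
            }
        }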

    Read the article

  • How do I automate the handling of a problem (no network device found) in Ubuntu 10.04 w/ preseed?

    - by user61183
    I have a preseed file that automates an installation of Ubuntu 10.04. At the point where the network hardware is auto-detected, however, the installer fails to find any hardware and displays the message "No network interfaces detected". To make a long story short, I don't care whether it can detect my network interface. How do I do one of the following: skip that step altogether, or handle the error page automatically? PS: I found a suggestion somewhere to use "netcfg/no_interfaces seen true", but that didn't work. Thanks
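
    One possibility worth trying (an assumption on my part, and it depends on the netcfg version shipped with the 10.04 installer) is to disable network configuration entirely in the preseed file rather than trying to pre-seed the error dialog:

        # skip network configuration (and its "No network interfaces detected" dialog) entirely
        d-i netcfg/enable boolean false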

    Read the article

  • What is the point of connecting two routers directly together

    - by rubixibuc
    The way subnets work, wouldn't connecting two router interfaces together require their own subnet between them? And unless that subnet mask is 31 bits, wouldn't that waste address space? I'm asking because I often see this done in networking books: how can it be done without wasting IP addresses? They usually draw this when explaining subnetting, with a central router connected to several others, each one supposedly creating its own subnet. Is this really how subnetting is done? Example: [Router 1]-----wasteful subnet-----[Router 2]
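
    As a concrete illustration of the waste (using the 192.0.2.0 documentation range), a /30 point-to-point link only uses half of its addresses, which is why RFC 3021 allows /31 masks on point-to-point links:

        /30: 192.0.2.0 = network, 192.0.2.1 = router A, 192.0.2.2 = router B, 192.0.2.3 = broadcast   (2 of 4 usable)
        /31: 192.0.2.0 = router A, 192.0.2.1 = router B                                              (both usable, RFC 3021)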

    Read the article

  • How to connect 2 virtual machines(VMWare Workstation 7.0) in a separate network?

    - by goluhaque
    There are supposed to be 2 networks: i) The first one is shared by all the virtual machines and the host (host-only). This one is easily achievable for me, as an amateurish beginner. ii) The second network is one to which only 2 of the virtual machines are connected. These 2 virtual machines should also remain connected to network (i). I understand that for the 2 virtual machines to be connected to two separate networks simultaneously, they each need 2 IPs, and hence 2 (virtual) Ethernet interfaces?
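
    A hedged sketch of how each of the two VMs might be configured in VMware Workstation: ethernet0 stays on the host-only network (i), and a second virtual NIC, ethernet1, is placed on a custom VMnet used only by those two VMs (the VMnet number is an assumption, and the same settings can be made through the VM settings dialog instead of editing the .vmx file):

        ethernet0.present = "TRUE"
        ethernet0.connectionType = "hostonly"
        ethernet1.present = "TRUE"
        ethernet1.connectionType = "custom"
        ethernet1.vnet = "VMnet2"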

    Read the article

  • How can I retrieve a "remembered" (stored) Wi-Fi password from a Windows 7 device?

    - by user180880
    I have access to the PC, but only as a standard user. Everything (including the "show characters" tickbox in the wireless menu) requires admin access. The machine in question is actually like a big TV with touch; typing is handled by the Windows virtual keyboard. I can reach c:\ProgramData\Microsoft\Wlansvc\Profiles\Interfaces and can open the files there, which I assume is where the passwords are stored. The problem is that these passwords are encrypted. I'm also OK with a way of changing/resetting the admin password. Considering this device has nothing but lots of USB ports (not even a CD/DVD-RW drive), the only way in is from inside the OS or via USB, without admin rights.

    Read the article

  • Ninject 3.0 MVC kernel.bind error Auto Registration

    - by user295734
    Getting and error on kernel.Bind(scanner = ... "scanner" has the little error line under it in VS 2010. Cannot convert lambda expression to type 'System.Type[]' because it is not a delegate type Tyring to Auto Register like the old kernel.scan in 2.0. I can not figure out what i am doing wrong. Added and removed so many Ninject packages. completely lost, getting to be a big waste of time. using System; using System.Web; using Microsoft.Web.Infrastructure.DynamicModuleHelper; using Ninject; using Ninject.Web.Common; //using Ninject.Extensions.Conventions; using Ninject.Web.WebApi; using Ninject.Web.Mvc; using CommonServiceLocator.NinjectAdapter; using System.Reflection; using System.IO; using LR.Repository; using LR.Repository.Interfaces; using LR.Service.Interfaces; using System.Web.Http; public static class NinjectWebCommon { private static readonly Bootstrapper bootstrapper = new Bootstrapper(); /// <summary> /// Starts the application /// </summary> public static void Start() { DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule)); DynamicModuleUtility.RegisterModule(typeof(NinjectHttpModule)); bootstrapper.Initialize(CreateKernel); } /// <summary> /// Stops the application. /// </summary> public static void Stop() { bootstrapper.ShutDown(); } /// <summary> /// Creates the kernel that will manage your application. /// </summary> /// <returns>The created kernel.</returns> private static IKernel CreateKernel() { var kernel = new StandardKernel(); kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel); kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>(); RegisterServices(kernel); return kernel; } /// <summary> /// Load your modules or register your services here! /// </summary> /// <param name="kernel">The kernel.</param> private static void RegisterServices(IKernel kernel) { kernel.Bind(scanner => scanner.FromAssembliesInPath(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)) .Select(IsServiceType) .BindToDefaultInterface() .Configure(binding => binding.InSingletonScope()) ); } private static bool IsServiceType(Type type) { // temp return true; // .Any() is not recognized either. return true; // type.IsClass && type.GetInterfaces().Any(intface => intface.Name == "I" + type.Name); }
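
    For what it's worth, this particular compiler error usually means the convention-scanning overload is not in scope: with only the core Ninject namespaces imported, kernel.Bind(...) resolves to Ninject 3.0's Bind(params Type[]) overload, which cannot accept a lambda. The scanner-based overload is an extension method from Ninject.Extensions.Conventions, which is commented out in the usings above. A minimal guess at the fix (assuming the Ninject.Extensions.Conventions package for Ninject 3.0 is installed), leaving RegisterServices unchanged:

        // was commented out in the question; without it the lambda overload of Bind does not exist
        using Ninject.Extensions.Conventions;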

    Read the article

  • Is Multicast broken for Android 2.0.1 (currently on the DROID) or am I missing something?

    - by Gubatron
    This code works perfectly in Ubuntu, in Windows and MacOSX, it also works fine with a Nexus-One currently running firmware 2.1.1. I start sending and listening multicast datagrams, and all the computers and the Nexus-One will see each other perfectly. Then I run the same code on a Droid (Firmware 2.0.1), and everybody will get the packets sent by the Droid, but the droid will listen only to it's own packets. This is the run() method of a thread that's constantly listening on a Multicast group for incoming packets sent to that group. I'm running my tests on a local network where I have multicast support enabled in the router. My goal is to have devices meet each other as they come on line by broadcasting packages to a multicast group. public void run() { byte[] buffer = new byte[65535]; DatagramPacket dp = new DatagramPacket(buffer, buffer.length); try{ MulticastSocket ms = new MulticastSocket(_port); ms.setNetworkInterface(_ni); //non loopback network interface passed ms.joinGroup(_ia); //the multicast address, currently 224.0.1.16 Log.v(TAG,"Joined Group " + _ia); while (true) { ms.receive(dp); String s = new String(dp.getData(),0,dp.getLength()); Log.v(TAG,"Received Package on "+ _ni.getName() +": " + s); Message m = new Message(); Bundle b = new Bundle(); b.putString("event", "Listener ("+_ni.getName()+"): \"" + s + "\""); m.setData(b); dispatchMessage(m); //send to ui thread } } catch (SocketException se) { System.err.println(se); } catch (IOException ie) { System.err.println(ie); } } Over here, is the code that sends the Multicast Datagram out of every valid network interface available (that's not the loopback interface). public void sendPing() { MulticastSocket ms = null; try { ms = new MulticastSocket(_port); ms.setTimeToLive(TTL_GLOBAL); List<NetworkInterface> interfaces = getMulticastNonLoopbackNetworkInterfaces(); for (NetworkInterface iface : interfaces) { //skip loopback if (iface.getName().equals("lo")) continue; ms.setNetworkInterface(iface); _buffer = ("FW-"+ _name +" PING ("+iface.getName()+":"+iface.getInetAddresses().nextElement()+")").getBytes(); DatagramPacket dp = new DatagramPacket(_buffer, _buffer.length,_ia,_port); ms.send(dp); Log.v(TAG,"Announcer: Sent packet - " + new String(_buffer) + " from " + iface.getDisplayName()); } } catch (IOException e) { e.printStackTrace(); } catch (Exception e2) { e2.printStackTrace(); } } Update (April 2nd 2010) I found a way to have the Droid's network interface to communicate using Multicast! _wifiMulticastLock = ((WifiManager) context.getSystemService(Context.WIFI_SERVICE)).createMulticastLock("multicastLockNameHere"); _wifiMulticastLock.acquire(); Then when you're done... if (_wifiMulticastLock != null && _wifiMulticastLock.isHeld()) _wifiMulticastLock.release(); After I did this, the Droid started sending and receiving UDP Datagrams on a Multicast group. gubatron

    Read the article

  • Deploying an EAR to JBOSS times out (org.rhq.core.pc.inventory.TimeoutException:)

    - by rangalo
    Hi, I am trying to deploy an ear file to JBOSS AS (defalut server). The application is the mavenised version of examples of SeamInAction book. When I copy the file to $JBOSS_HOME/server/default/deploy, I don't get any exception but the application doesn't respond, after some time trying to access the application from the browser gives following in the log... While deploying with admin-console (http://localhost:8080/admin-console) I get following error messgae: PS: After this Jboss gets into unusable state. I cannot even access admin-console. I just have to kill it. ErrorMessage in admin-console: Failed to create Resource Open18.ear - cause: org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ApplicationServerComponent.createResource()] with args [[CreateResourceReport: ResourceType=[ResourceType[id=0, category=Service, name=Enterprise Application (EAR), plugin=JBossAS5]], ResourceKey=[null]]] timed out. Invocation thread will be interrupted at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invokeInNewThreadWithLock(ResourceContainer.java:437) at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invoke(ResourceContainer.java:406) at $Proxy266.createResource(Unknown Source) at org.rhq.core.pc.inventory.CreateResourceRunner.call(CreateResourceRunner.java:113) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) Error Logs: 4:08:58,555 INFO [TableMetadata] foreign keys: [fkaf42e01ba13c3380, fk_course_ref_facility] 14:08:58,555 INFO [TableMetadata] indexes: [course_pkey] 14:08:58,645 INFO [TableMetadata] table found: public.facility 14:08:58,645 INFO [TableMetadata] columns: [zip, phone, state, type, uri, city, country, id, price_range, address, county, description, nam e] 14:08:58,645 INFO [TableMetadata] foreign keys: [] 14:08:58,645 INFO [TableMetadata] indexes: [facility_pkey] 14:08:58,705 INFO [TableMetadata] table found: public.hole 14:08:58,705 INFO [TableMetadata] columns: [id, m_par, l_handicap, name, l_par, number, course_id, m_handicap] 14:08:58,705 INFO [TableMetadata] foreign keys: [fk_hole_ref_course, fk30f4c09c3f1200] 14:08:58,705 INFO [TableMetadata] indexes: [hole_pkey, uniq_hole_number] 14:08:58,764 INFO [TableMetadata] table found: public.tee 14:08:58,764 INFO [TableMetadata] columns: [hole_id, distance, tee_set_id] 14:08:58,764 INFO [TableMetadata] foreign keys: [fk1c014f8de7677, fk_tee_ref_hole, fk1c014c69de560, fk_tee_ref_tee_set] 14:08:58,764 INFO [TableMetadata] indexes: [tee_pkey] 14:08:58,826 INFO [TableMetadata] table found: public.tee_set 14:08:58,826 INFO [TableMetadata] columns: [id, color, m_slope_rating, l_slope_rating, name, course_id, m_course_rating, l_course_rating, p os] 14:08:58,826 INFO [TableMetadata] foreign keys: [fk_tee_set_ref_course, fkaa6881b79c3f1200] 14:08:58,826 INFO [TableMetadata] indexes: [tee_set_pkey, uniq_tee_set_pos, uniq_tee_set_color] 14:08:58,827 INFO [SchemaUpdate] schema update complete 14:08:58,829 INFO [NamingHelper] JNDI InitialContext properties:{java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory, java. 
naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces} 14:08:58,850 INFO [TomcatDeployment] deploy, ctxPath=/Open18 14:15:53,969 WARN [DiscoveryComponentProxyFactory] The discovery component for resource type [ResourceType[id=0, category=Service, name=Connector, plugin=JBossAS5]] has been blacklisted 14:15:53,970 WARN [InventoryManager] Failure during discovery for [Connector] Resources - failed after 300002 ms. org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ConnectorDiscoveryComponent.discoverResources()] with args [[org.rhq.core.pluginapi.inventory.ResourceDiscoveryContext@96db1]] timed out. Invocation thread will be interrupted at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invokeInNewThread(DiscoveryComponentProxyFactory.java:208) at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invoke(DiscoveryComponentProxyFactory.java:181) at $Proxy249.discoverResources(Unknown Source) at org.rhq.core.pc.inventory.InventoryManager.invokeDiscoveryComponent(InventoryManager.java:272) at org.rhq.core.pc.inventory.InventoryManager.executeComponentDiscovery(InventoryManager.java:1697) at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:218) at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:234) at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.runtimeDiscover(RuntimeDiscoveryExecutor.java:134) at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:94) at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:51) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:207) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) 14:15:53,981 WARN [NavigationContent] Unable to find node for deleted resource [Resource[id=-5, type=Connector, key=ajp://127.0.0.1:8009, name=ajp://127.0.0.1:8009, parent=JBoss Web]].

    Read the article

  • Problem Implementing StructureMap in VB.Net Conversion of SharpArchitecture

    - by Monkeeman69
    I work in a VB.Net environment and have recently been tasked with creating an MVC enviroment to use as a base to work from. I decided to convert the latest SharpArchitecture release (Q3 2009) into VB, which on the whole has gone fine after a bit of hair pulling. I came across a problem with Castle Windsor where my custom repository interface (lives in the core/domain project) that was reference in the constructor of my test controller was not getting injected with the concrete implementation (from the data project). I hit a brick wall with this so basically decided to switch out Castle Windsor for StructureMap. I think I have implemented this ok as everything compiles and runs and my controller ran ok when referencing a custom repository interface. It appears now that I have/or cannot now setup my generic interfaces up properly (I hope this makes sense so far as I am new to all this). When I use IRepository(Of T) (wanting it to be injected with a concrete implementation of Repository(Of Type)) in the controller constructor I am getting the following runtime error: "StructureMap Exception Code: 202 No Default Instance defined for PluginFamily SharpArch.Core.PersistenceSupport.IRepository`1[[DebtRemedy.Core.Page, DebtRemedy.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], SharpArch.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b5f559ae0ac4e006" Here are my code excerpts that I am using (my project is called DebtRemedy). My structuremap registry class Public Class DefaultRegistry Inherits Registry Public Sub New() ''//Generic Repositories AddGenericRepositories() ''//Custom Repositories AddCustomRepositories() ''//Application Services AddApplicationServices() ''//Validator [For](GetType(IValidator)).Use(GetType(Validator)) End Sub Private Sub AddGenericRepositories() ''//ForRequestedType(GetType(IRepository(Of ))).TheDefaultIsConcreteType(GetType(Repository(Of ))) [For](GetType(IEntityDuplicateChecker)).Use(GetType(EntityDuplicateChecker)) [For](GetType(IRepository(Of ))).Use(GetType(Repository(Of ))) [For](GetType(INHibernateRepository(Of ))).Use(GetType(NHibernateRepository(Of ))) [For](GetType(IRepositoryWithTypedId(Of ,))).Use(GetType(RepositoryWithTypedId(Of ,))) [For](GetType(INHibernateRepositoryWithTypedId(Of ,))).Use(GetType(NHibernateRepositoryWithTypedId(Of ,))) End Sub Private Sub AddCustomRepositories() Scan(AddressOf SetupCustomRepositories) End Sub Private Shared Sub SetupCustomRepositories(ByVal y As IAssemblyScanner) y.Assembly("DebtRemedy.Core") y.Assembly("DebtRemedy.Data") y.WithDefaultConventions() End Sub Private Sub AddApplicationServices() Scan(AddressOf SetupApplicationServices) End Sub Private Shared Sub SetupApplicationServices(ByVal y As IAssemblyScanner) y.Assembly("DebtRemedy.ApplicationServices") y.With(New FirstInterfaceConvention) End Sub End Class Public Class FirstInterfaceConvention Implements ITypeScanner Public Sub Process(ByVal type As Type, ByVal graph As PluginGraph) Implements ITypeScanner.Process If Not IsConcrete(type) Then Exit Sub End If ''//only works on concrete types Dim firstinterface = type.GetInterfaces().FirstOrDefault() ''//grabs first interface If firstinterface IsNot Nothing Then graph.AddType(firstinterface, type) Else ''//registers type ''//adds concrete types with no interfaces graph.AddType(type) End If End Sub End Class I have tried both ForRequestedType (which I think is now deprecated) and For. IRepository(Of T) lives in SharpArch.Core.PersistenceSupport. 
Repository(Of T) lives in SharpArch.Data.NHibernate. My servicelocator class Public Class StructureMapServiceLocator Inherits ServiceLocatorImplBase Private container As IContainer Public Sub New(ByVal container As IContainer) Me.container = container End Sub Protected Overloads Overrides Function DoGetInstance(ByVal serviceType As Type, ByVal key As String) As Object Return If(String.IsNullOrEmpty(key), container.GetInstance(serviceType), container.GetInstance(serviceType, key)) End Function Protected Overloads Overrides Function DoGetAllInstances(ByVal serviceType As Type) As IEnumerable(Of Object) Dim objList As New List(Of Object) For Each obj As Object In container.GetAllInstances(serviceType) objList.Add(obj) Next Return objList End Function End Class My controllerfactory class Public Class ServiceLocatorControllerFactory Inherits DefaultControllerFactory Protected Overloads Overrides Function GetControllerInstance(ByVal requestContext As RequestContext, ByVal controllerType As Type) As IController If controllerType Is Nothing Then Return Nothing End If Try Return TryCast(ObjectFactory.GetInstance(controllerType), Controller) Catch generatedExceptionName As StructureMapException System.Diagnostics.Debug.WriteLine(ObjectFactory.WhatDoIHave()) Throw End Try End Function End Class The initialise stuff in my gloabal.asax Dim container As IContainer = New Container(New DefaultRegistry) ControllerBuilder.Current.SetControllerFactory(New ServiceLocatorControllerFactory()) ServiceLocator.SetLocatorProvider(Function() New StructureMapServiceLocator(container)) My test controller Public Class DataCaptureController Inherits BaseController Private ReadOnly clientRepository As IClientRepository() Private ReadOnly pageRepository As IRepository(Of Page) Public Sub New(ByVal clientRepository As IClientRepository(), ByVal pageRepository As IRepository(Of Page)) Check.Require(clientRepository IsNot Nothing, "clientRepository may not be null") Check.Require(pageRepository IsNot Nothing, "pageRepository may not be null") Me.clientRepository = clientRepository Me.pageRepository = pageRepository End Sub Function Index() As ActionResult Return View() End Function The above works fine when I take out everything to do with the pageRepository which is IRepository(Of T). Any help with this would be greatly appreciated.

    Read the article

  • nhibernate sql Express connection issue - error: 26 - Error Locating Server/Instance Specified

    - by frosty
    I can connect fine with normal ado.net. However i get the following error when i tried to connect nHibernate. hibernate.cfg.xml <?xml version="1.0" encoding="utf-8" ?> <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property> <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property> <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property> <property name="connection.connection_string">Server=xxxxx\SQLEXPRESS; Database=xxxxx; User ID=xxxxx; Password=xxxxx; Trusted_Connection=True</property> <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property> <property name="show_sql">true</property> </session-factory> </hibernate-configuration> Server error A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) Full stack [SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)] System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4845255 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194 System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject) +4858557 System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) +90 System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) +342 System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +221 System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +189 System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +185 System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +31 System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +433 System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +66 System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +499 
System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +65 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +117 System.Data.SqlClient.SqlConnection.Open() +122 NHibernate.Connection.DriverConnectionProvider.GetConnection() +102 NHibernate.Tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.Prepare() +15 NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.GetReservedWords(Dialect dialect, IConnectionHelper connectionHelper) +65 NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.Update(ISessionFactory sessionFactory) +80 NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners) +599 NHibernate.Cfg.Configuration.BuildSessionFactory() +87 XXX.Domain.Repositories.NHibernateHelper.get_SessionFactory() in D:\dev\MyProject\XXX\XXX.Domain\Repositories\NHibernateHelper.cs:23 XXX.Domain.Repositories.NHibernateHelper.OpenSession() in D:\dev\MyProject\XXX\XXX.Domain\Repositories\NHibernateHelper.cs:31 XXX.Domain.Repositories.EntryRepository.GetCountByGmapId(Int32 gmapId) in D:\dev\MyProject\XXX\XXX.Domain\Repositories\EntryRepository.cs:152 XXX.Controls.Activity.BindRepeater(Int32 id) in D:\dev\MyProject\XXX\XXX.Controls\Activity.ascx.cs:58 XXX.Controls.Activity.DropDownListMaps_SelectedIndexChanged(Object sender, EventArgs e) in D:\dev\MyProject\XXX\XXX.Controls\Activity.ascx.cs:75 System.Web.UI.WebControls.ListControl.OnSelectedIndexChanged(EventArgs e) +111 System.Web.UI.WebControls.DropDownList.RaisePostDataChangedEvent() +134 System.Web.UI.WebControls.DropDownList.System.Web.UI.IPostBackDataHandler.RaisePostDataChangedEvent() +10 System.Web.UI.Page.RaiseChangedEvents() +165 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1485

    Read the article

  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with EBJ 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes. Our application, or our system, as I should rather say, comes in two or three parts. Part 1: An ear deployed to one cluster that contains 3rd party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of Remote Home interfaces. Part 2: An ear deployed to the same cluster as the first ear. This ear contains EBJ 3's that make calls into the EJB 2's supplied by the vendor and the custom code. These EJB 3's are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients. Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0's and web services that are deployed to the other cluster. Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services. That said, some of us are wondering about performance and optimizing communication for speed of our applications that will use our web services and EJB's. Right now most EJB's are exposed as remote. (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster node space. If two apps are installed in the same area and they call each other via remote home interface, is WAS smart enough to make it a local home interface call? Are their other optimization techniques? Should we consider them? Should we not? What are the costs/benefits? Here is the question from one of our team members as sent in their email: The question is: Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT java services via EJB3...what are our options for performance optimization when both the EJB server and client are running in the same container? As one point of reference, google has given me some oooooold websphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable Call By Reference for EJB communication when they're in the same application server JVM. It states the following: Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model. WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings. Configure "No Local Copies" by adding the following two command line parameters to the application server JVM: * -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util * -Dcom.ibm.CORBA.iiop.noLocalCopies=true CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. 
One side effect of this is that the Java object derived (non-primitive) method parameters can actually be changed by the called enterprise bean. Consider Figure 16a: Also, we will also be using Process Server 6.2 and WESB 6.2 as well in the future. Any ideas? recommendations? Thanks
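
    To make the local-versus-remote distinction concrete, EJB 3.0 lets the same bean expose both a local and a remote business interface: callers in the same JVM can inject the local view (pass-by-reference), while out-of-process clients keep using the remote view (pass-by-value over RMI-IIOP). A small illustrative sketch with made-up names:

        import javax.ejb.Local;
        import javax.ejb.Remote;
        import javax.ejb.Stateless;

        // Hypothetical service, for illustration only.
        @Local
        interface GreeterLocal  { String greet(String name); }

        @Remote
        interface GreeterRemote { String greet(String name); }

        @Stateless
        public class GreeterBean implements GreeterLocal, GreeterRemote {
            public String greet(String name) {
                // same bean, two views: local view = pass-by-reference in the same JVM,
                // remote view = pass-by-value over RMI-IIOP
                return "Hello, " + name;
            }
        }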

    Read the article

  • Architecture for Qt SIGNAL with subclass-specific, templated argument type

    - by Barry Wark
    I am developing a scientific data acquisition application using Qt. Since I'm not a deep expert in Qt, I'd like some architecture advise from the community on the following problem: The application supports several hardware acquisition interfaces but I would like to provide an common API on top of those interfaces. Each interface has a sample data type and a units for its data. So I'm representing a vector of samples from each device as a std::vector of Boost.Units quantities (i.e. std::vector<boost::units::quantity<unit,sample_type> >). I'd like to use a multi-cast style architecture, where each data source broadcasts newly received data to 1 or more interested parties. Qt's Signal/Slot mechanism is an obvious fit for this style. So, I'd like each data source to emit a signal like typedef std::vector<boost::units::quantity<unit,sample_type> > SampleVector signals: void samplesAcquired(SampleVector sampleVector); for the unit and sample_type appropriate for that device. Since tempalted QObject subclasses aren't supported by the meta-object compiler, there doesn't seem to be a way to have a (tempalted) base class for all data sources which defines the samplesAcquired Signal. In other words, the following won't work: template<T,U> //sample type and units class DataSource : public QObject { Q_OBJECT ... public: typedef std::vector<boost::units::quantity<U,T> > SampleVector signals: void samplesAcquired(SampleVector sampleVector); }; The best option I've been able to come up with is a two-layered approach: template<T,U> //sample type and units class IAcquiredSamples { public: typedef std::vector<boost::units::quantity<U,T> > SampleVector virtual shared_ptr<SampleVector> acquiredData(TimeStamp ts, unsigned long nsamples); }; class DataSource : public QObject { ... signals: void samplesAcquired(TimeStamp ts, unsigned long nsamples); }; The samplesAcquired signal now gives a timestamp and number of samples for the acquisition and clients must use the IAcquiredSamples API to retrieve those samples. Obviously data sources must subclass both DataSource and IAcquiredSamples. The disadvantage of this approach appears to be a loss of simplicity in the API... it would be much nicer if clients could get the acquired samples in the Slot connected. Being able to use Qt's queued connections would also make threading issues easier instead of having to manage them in the acquiredData method within each subclass. One other possibility, is to use a QVariant argument. This necessarily puts the onus on subclass to register their particular sample vector type with Q_REGISTER_METATYPE/qRegisterMetaType. Not really a big deal. Clients of the base class however, will have no way of knowing what type the QVariant value type is, unless a tag struct is also passed with the signal. I consider this solution at least as convoluted as the one above, as it forces clients of the abstract base class API to deal with some of the gnarlier aspects of type system. So, is there a way to achieve the templated signal parameter? Is there a better architecture than the one I've proposed?
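
    Of the options listed, the QVariant route can be sketched roughly as follows (illustrative names only; the sample type is simplified to double to keep it short). Each concrete subclass registers its vector type once, and slots that know the payload type unpack it from the QVariant:

        #include <QObject>
        #include <QVariant>
        #include <QMetaType>
        #include <vector>

        typedef std::vector<double> SampleVector;   // stand-in for std::vector<quantity<unit, sample_type>>
        Q_DECLARE_METATYPE(SampleVector)

        class DataSource : public QObject {
            Q_OBJECT
        signals:
            void samplesAcquired(QVariant samples); // works with queued connections once the type is registered
        };

        // in the concrete subclass's setup code:
        //     qRegisterMetaType<SampleVector>("SampleVector");
        // when data arrives:
        //     emit samplesAcquired(QVariant::fromValue(samples));
        // in a slot that knows the payload type:
        //     SampleVector v = samples.value<SampleVector>();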

    Read the article

  • OTN Developer Days - Calgary, Alberta March 18 & Atlanta, GA April 1

    - by dana.singleterry
    Discover a Faster Way to Develop Ajax -Enabled Application Based on Java and SOA Standards Get Hands-on with Oracle Jdeveloper, Oracle Application Developer Framework and Oracle Fusion Middleware 11g. You are invited to attend Oracle Technology Network (OTN) Developer Day, a free, hands-on workshop that will give you insight into how to create Ajax-enabled rich Web user interfaces and Java EE-based SOA services with ease. We'll introduce you to the development platform Oracle is using for its Fusion enterprise applications, and show you how to get up to speed with it. The workshop will get you started developing with the latest versions of Oracle JDeveloper and Oracle ADF 11g, including the Ajax-enabled ADF Faces rich client components. Thursday, March 18, 2010 8:00 a.m. - 5:00 p.m. Calgary Marriott hotel 110 9th Avenue, SE Calgary, Alberta T2G 5A6 Wednesday, April 1, 2010 8:00 a.m. - 5:00 p.m. Four Seasons Hotel Atlanta 75 Fourteenth Street Atlanta, Georgia 30309 This workshop is designed for developers, project managers, and architects. Whether you are currently using Java, traditional 4GL tools like Oracle Forms, PeopleTools, and Visual Basic, or just looking for a better development platform - this session is for you. Get explanation from Oracle experts, try your hands at actual development, and get a chance to win an Apple iPod Touch and Oracle prizes. Come see how Oracle can help you deliver cutting edge UIs and standard -based applications faster with the Oracle Fusion Development software stack. At this event you will: * Get to know the Oracle Fusion development architecture and strategy from Oracle's experts. * Learn the easy way to extend your existing development skill sets to incorporate new technologies and architectures that include Service-Oriented Architecture, Java EE, and Web 2.0 * Participate in hands-on labs and experience new technologies in a familiar and productive development environment with Oracle experts guidance. Click on the Register Now Calgary, Alberta to register for the Calgary event and click on the Register Now Atlanta, GA to register for the Atlanta FREE events. Don't miss your exclusive opportunity to network with your peers and discuss today's most vital application development topics with Oracle experts.

    Read the article

  • Oracle UCM Integration with WebCenter

    - by john.brunswick
    Portal deployments always contain some level of content that requires management. Like peanut butter and jelly, or yin and yang, they are inseparable. Unfortunately, unlike peanut butter and jelly, content and portals usually require an extensive amount of work to create a seamless experience for the end users served by the portal, as well as for the users contributing and managing the content. With WebCenter Suite, Oracle has understood this need and addressed it by including Universal Content Management (UCM, formerly Stellent) licensing, allowing content to be delivered into the portal from a mature, class-leading content management technology. To unlock the most value from this content technology, WebCenter portal technology can leverage a series of integration strategies available through its open standards support, as well as a series of native components for content consumption from UCM. This has been done to help IT teams reduce solution deployment time and provide quick wins to their business stakeholders. The ongoing cost of ownership for the solution is also greatly reduced through these various integrations. Within this post we will explore various ways in which content can be contributed through out-of-the-box interfaces, displayed natively within the portal (configuration), and exposed programmatically (development). The information below showcases how to quickly take advantage of WebCenter's marriage of content and portal technologies, and then leverage the various programmatic integrations available with UCM.

    Read the article

  • Java Champion Stephen Chin on New Features and Functionality in JavaFX

    - by janice.heiss(at)oracle.com
    In an Oracle Technology Network interview, Java Champion Stephen Chin, Chief Agile Methodologist for GXS, and one of the most prolific and innovative JavaFX developers, provides an update on the rapidly developing changes in JavaFX.Chin expressed enthusiasm about recent JavaFX developments:"There is a lot to be excited about -- JavaFX has a new API face. All the JavaFX 2.0 APIs will be exposed via Java classes that will make it much easier to integrate Java server and client code. This also opens up some huge possibilities for JVM language integration with JavaFX." Chin also spoke about developments in Visage, the new language project created to fill the gap left by JavaFX Script:"It's a domain-specific language for writing user interfaces, which addresses the needs of UI developers. Visage takes over where JavaFX Script left off, providing a statically typed, declarative language with lots of features to make UI development a pleasure.""My favorite language features from Visage are the object literal syntax for quickly building scene graphs and the bind keyword for connecting your UI to the backend model. However, the language is built for UI development from the top down, including subtle details like null-safe dereferencing for exception-less code."Read the entire article.

    Read the article

  • Using Completed User Stories to Estimate Future User Stories

    - by David Kaczynski
    In Scrum/Agile, the complexity of a user story can be estimated in story points. After completing some user stories, a programmer or team of programmers can use those experiences to better estimate how much time it might take to complete a future user story. Is there a methodology for breaking down the complexity of user stories into quantifiable attributes? For example, User Story X requires a rich, new view in the GUI, but can perform most of its functionality using existing business logic on the server. On a scale of 1 to 10, User Story X has a complexity of 7 on the client and a complexity of 2 on the server. After User Story X is completed, someone asks how long it would take to complete User Story Y, which has a complexity of 3 on the client and 6 on the server. Looking at how long it took to complete User Story X, we can make an educated estimate of how long it might take to complete User Story Y. I can imagine some other details: the complexity of one attribute (such as client complexity) could have sub-attributes, such as the number of steps in a sequence, function points, etc. Several other attributes could be considered as well, such as the programmer's familiarity with the system or the number of components/interfaces involved. These attributes could be accumulated into some sort of user story checklist. To reiterate: is there an existing methodology for decomposing the complexity of a user story into the complexity of attributes/sub-attributes, or is using completed user stories as indicators when estimating future user stories more of an informal process?

    Read the article

  • Getting beyond basic web programming languages. How to be awesome?

    - by user73962
    I'm a web developer that's done a bunch of projects using PHP, JQuery/JS, Mysql using PhPMyAdmin, CSS, HTML and a tiny bit of XML. Basically lots of work with CMS's and freehand coding. I'm looking to take things to the next level. I've done a lot of freelance and small contract work, but I'm dying to excel. I'm tired of acting as tech support for all these "non-tech" companies that barely know how to use their own computers..."really, you didn't think to backup your files before switching to a new server??". Think of potential employers as amazon, netflix, twitter, google, etc. I don't necessarily want to work for these guys specifically, but potentially organizations like this. I could be wrong, but I feel like a big company like this would laugh at me if I interviewed. For example, how helpful is knowing Ruby, SQL (commands without interface), C++, API's, Oracle, Java, debugging, qa, etc? (I realize this is a very random list). I use Notepad ++, but have heard that the bigger boys use IDE interfaces. I'm not really interested in building desktop apps, only web related stuff. I feel like I've reached my potential and want to really take it up a notch. I see a lot of projects on GitHub and I'm amazed at what people have created. Note - my degree is in economics but I've done web dev since high school. I definitely wish I took more comp sci/programming courses in college. I'm 27 and want to be awesome at web dev before it's too late. Not just decent. Any advice? Book suggestions? Thanks

    Read the article

  • Do abstractions have to reduce code readability?

    - by Martin Blore
    A good developer I work with told me recently about some difficulty he had in implementing a feature in some code we had inherited; he said the problem was that the code was difficult to follow. From that, I looked deeper into the product and realised how difficult it was to see the code path. It used so many interfaces and abstract layers, that trying to understand where things began and ended was quite difficult. It got me thinking about the times I had looked at past projects (before I was so aware of clean code principles) and found it extremely difficult to get around in the project, mainly because my code navigation tools would always land me at an interface. It would take a lot of extra effort to find the concrete implementation or where something was wired up in some plugin type architecture. I know some developers strictly turn down dependency injection containers for this very reason. It confuses the path of the software so much that the difficulty of code navigation is exponentially increased. My question is: when a framework or pattern introduces so much overhead like this, is it worth it? Is it a symptom of a poorly implemented pattern? I guess a developer should look to the bigger picture of what that abstractions brings to the project to help them get through the frustration. Usually though, it's difficult to make them see that big picture. I know I've failed to sell the needs of IOC and DI with TDD. For those developers, use of those tools just cramps code readability far too much.

    Read the article

  • Automated unit testing, integration testing or acceptance testing

    - by bjarkef
    TDD and unit testing seem to be the big rave at the moment. But are they really that useful compared to other forms of automated testing? Intuitively I would guess that automated integration testing is far more useful than unit testing. In my experience most bugs seem to be in the interaction between modules, and not so much in the actual (usually limited) logic of each unit. Also, regressions often happened because of changing interfaces between modules (and changed pre- and post-conditions). Am I misunderstanding something, or why does unit testing get so much focus compared to integration testing? Is it simply because it is assumed that integration testing is something you already have, and unit testing is the next thing we need to learn to apply as developers? Or maybe unit testing simply yields the highest gain compared to the complexity of automating it? What is your experience with automated unit testing, automated integration testing, and automated acceptance testing, and which has yielded the highest ROI for you, and why? If you had to pick just one form of testing to automate on your next project, which would it be? Thanks in advance.

    Read the article

  • Your fingerprints may unlock your iPhone and its digital wallets

    - by Gopinath
    The next version of the iPhone is going to have a biometric sensor which may allow your fingerprints to authenticate and authorize actions: unlock the device, sign in to an account, authorize a credit card transaction, and so on. The iOS 7 beta 4 released a couple of days ago had many traces of biometric software libraries embedded in the OS, and they make it pretty clear that Apple is preparing a new iPhone with a fingerprint sensor. Biometric sensors are nothing new in digital devices. Most of us have already been using them on our laptops to unlock the computer as well as to launch applications. Though these sensors are available in many devices, they are hardly reliable. My personal laptop has a biometric sensor, and half of the time it either does not work or does not recognize my fingerprints. When it works, it works like a charm and unlocking the device is very easy. But Apple is known for delivering great products by nailing down technical challenges and blending technology with beautiful user interfaces. They did that when Steve Jobs was leading the pack, and the hope is that his legacy will be carried forward by Tim Cook with amazing products in the coming months. I expect the iPhone fingerprint sensor to work flawlessly. Photo credit: flickr/nettsu

    Read the article

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures: A persistent data structure that is described using DDL and implemented as RDBMS tables and columns. A set of domain objects that consist primarily of data structures, usually combined with business-rule level logic, that are implemented in a programming language such as Java. A set of service layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language. UI screens that allow users to C reate, R etrieve, U pdate, and (maybe) D elete all manner of data structures and graphs of data structures, with numerous screens and with multiple UI widgets, all structured to support the same data structures. But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data tracking spreadsheet document that listed all of the data structures and walked the relationships from tier-to-tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all dependent: database table and column data structures domain object data structures service layer interface methods and parameter data structures screen & UI component data structures

    Read the article

  • Davicom Semiconductor, Inc. 21x4x DEC-Tulip not detected by Wireshark but IP operational

    - by deepsix86
    Recently flipped to Ubuntu 11.10 on a Dell 4300 (Intel). Getting IP address and no issues (ping/surf) but Wireshark unable to detect eth0 interface. I see references in forums to blacklist tulip but looks like I am running dmfe. Not sure if the blacklist is required and where to go from here. Maybe Driver update? Got a little lost looking in that area. Some h/w details below (IP/MAC/HOSTNAME removed) Linux xxxxxx 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 17:34:21 UTC 2012 i686 i686 i386 GNU/Linux network-admin (HOSTS TAB) does not list eth0, only loopback and bunch of IPv6 interfaces ifconfig eth0 Link encap:Ethernet HWaddr xxxxxxxx inet addr:192.168.x.xx Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: xxxxxxxxxxx 64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:36662 errors:0 dropped:1 overruns:0 frame:0 TX packets:24975 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:42115779 (42.1 MB) TX bytes:3056435 (3.0 MB) Interrupt:18 Base address:0xe800 lspci 02:09.0 Ethernet controller: Davicom Semiconductor, Inc. 21x4x DEC-Tulip compatible 10/100 Ethernet (rev 31) Subsystem: Device 4554:434e Flags: bus master, medium devsel, latency 64, IRQ 18 I/O ports at e800 [size=256] Memory at fe1ffc00 (32-bit, non-prefetchable) [size=256] Expansion ROM at fe200000 [disabled] [size=256K] Capabilities: [50] Power Management version 2 Kernel driver in use: dmfe Kernel modules: dmfe hwinfo --netcard 20: PCI 209.0: 0200 Ethernet controller [Created at pci.318] Unique ID: rBUF.0NgK5ZS9c0D Parent ID: 6NW+.siohrLUzzI4 SysFS ID: /devices/pci0000:00/0000:00:1e.0/0000:02:09.0 SysFS BusID: 0000:02:09.0 Hardware Class: network Model: "Davicom 21x4x DEC-Tulip compatible 10/100 Ethernet" Vendor: pci 0x1282 "Davicom Semiconductor, Inc." Device: pci 0x9102 "21x4x DEC-Tulip compatible 10/100 Ethernet" SubVendor: pci 0x4554 SubDevice: pci 0x434e Revision: 0x31 Driver: "dmfe" Driver Modules: "dmfe" Device File: eth0 I/O Ports: 0xe800-0xe8ff (rw) Memory Range: 0xfe1ffc00-0xfe1ffcff (rw,non-prefetchable) Memory Range: 0xfe200000-0xfe23ffff (ro,non-prefetchable,disabled) IRQ: 18 (61379 events) HW Address: 00:08:a1:01:35:70 Link detected: yes Module Alias: "pci:v00001282d00009102sv00004554sd0000434Ebc02sc00i00" Driver Info #0: Driver Status: dmfe is active Driver Activation Cmd: "modprobe dmfe" Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #11 (PCI bridge)
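
    One thing worth ruling out before blacklisting drivers (a guess on my part, since the interface itself clearly works for ping/surf): on Ubuntu, Wireshark lists no interfaces when the user lacks capture privileges, which is a permissions issue rather than a dmfe/tulip driver issue. The usual steps are:

        sudo dpkg-reconfigure wireshark-common   # answer "Yes" to allow non-superusers to capture
        sudo usermod -a -G wireshark $USER       # then log out and back in
        # or, as a quick test only: sudo wireshark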

    Read the article

  • jackd fails to start

    - by wickedchicken
    I'm trying to have a setup where JACK interfaces directly to ALSA and pulseaudio communicates to JACK. This setup worked OK (I had to manually start things a few times) but as I understood the Ubuntu daemon setup and perfected things jackd stopped working completely. I'm running 10.10. If I run something through ALSA I get sound no problem. However, when I run jack with realtime: /usr/bin/jackd -v -R -ch -Z -t2000 -d alsa -P I get the following error: jackd watchdog: timeout - killing jackd Conversely, if I run without realtime: /usr/bin/jackd -v -r -ch -Z -t2000 -d alsa -P I get: ALSA: poll time out, polled for 32032138 usecs DRIVER NT: could not run driver cycle Jack was working just fine before I made these changes; while I don't have an exact copy of my original configuration I recall running the bare minimum of options worked fine. I've seen some articles saying the problem is with ALSA capture. In fact, I tried enabling capture in alsamixer once and everything worked! On reboot that success was not repeated and I haven't been able to get jack working since. That shouldn't matter because specifying -P should obviate any capture issues. Short summary: I can't get jackd to work under any circumstances (unless I specify -d dummy). Sound works with other programs with ALSA, but when I run JACK the daemon opens the card but times out and dies. JACK worked fine before but I can't figure out what changed (or where to even look). I should mention I am running with CPU speed throttling on, but I'm using HPET to mitigate this (and I've run jack with no issue before). Thanks!
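
    Not a definite answer, but one prerequisite worth checking for the -R (realtime) case on Ubuntu 10.10: jackd needs realtime scheduling and memory-locking privileges, which are normally granted to the audio group via PAM limits. A sketch of the usual entries (for example in /etc/security/limits.d/audio.conf, with your user in the audio group and a re-login afterwards):

        @audio   -   rtprio    95
        @audio   -   memlock   unlimited
        @audio   -   nice      -19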

    Read the article
