Search Results

Search found 87875 results on 3515 pages for 'server pool'.


  • Alternative to CURL due to long waiting

    - by aalan
    Hey guys, I currently run a PHP script that uses cURL to send data to another server, where it kicks off a PHP script that can take up to a minute to run. That server doesn't send any data back, but the cURL request still has to wait for the remote script to complete, and only then does the rest of the original page load. I would like my PHP script to just send the data to the other server and not wait for an answer. How should I solve this? I have read that cURL always has to wait. What are your suggestions? Thank you!
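    One common workaround is to skip cURL entirely and write the HTTP request to a raw socket, closing the connection without reading the response. A minimal sketch follows; the host, path, and payload are placeholders, and the remote script may need ignore_user_abort(true) to keep running after the disconnect:

        <?php
        // Fire-and-forget POST: open a socket, write the request, close
        // without waiting for (or reading) the response.
        function postAndForget($host, $path, array $data)
        {
            $payload = http_build_query($data);
            $fp = fsockopen($host, 80, $errno, $errstr, 5); // 5 s connect timeout
            if (!$fp) {
                return false;
            }
            $request  = "POST $path HTTP/1.1\r\n";
            $request .= "Host: $host\r\n";
            $request .= "Content-Type: application/x-www-form-urlencoded\r\n";
            $request .= "Content-Length: " . strlen($payload) . "\r\n";
            $request .= "Connection: Close\r\n\r\n";
            $request .= $payload;
            fwrite($fp, $request);
            fclose($fp); // return immediately; the remote script keeps running
            return true;
        }

        postAndForget('example.com', '/slow-script.php', array('key' => 'value'));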


  • Expanded securityadmin

    - by user80652
    I'm aware that sysadmin is documented as the server role necessary for creating logins (SQL/Windows-integrated); nevertheless, I'm tasked with finding out whether any other server role (built-in or otherwise) can be used. To be specific, I'm looking to set up one or two logins that can create logins, create [database] users, and assign users to [database] roles. Potentially reset passwords too, but most of the logins are Windows-integrated, so that part isn't essential. These logins cannot have access to data at all, nor can they have rights to update tables or create/update roles. So far my only option seems to be giving these two logins the securityadmin server role and, for the specific databases, db_securityadmin and db_accessadmin... but this configuration doesn't allow for creating logins.
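    One avenue worth testing (a sketch, not verified for your environment; the login, user, and database names are placeholders) is to skip the fixed roles and grant the narrow permissions directly, since ALTER ANY LOGIN covers creating logins without conferring any data access:

        USE [master];
        GRANT ALTER ANY LOGIN TO [DOMAIN\SecurityOps];  -- create/alter/drop logins

        -- [SecurityOps] must also exist as a user in each target database:
        USE [SomeDatabase];
        GRANT ALTER ANY USER TO [SecurityOps];          -- create/alter/drop database users
        GRANT ALTER ANY ROLE TO [SecurityOps];          -- manage role membership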


  • Cisco ASA: Allowing and Denying VPN Access based on membership to an AD group

    - by milkandtang
    I have a Cisco ASA 5505 connecting to an Active Directory server for VPN authentication. Usually we'd restrict this to a particular OU, but in this case the users who need access are spread across multiple OUs, so I'd like to use a group to specify which users have remote access. I've created the group and added the users, but I'm having trouble figuring out how to deny the users who aren't in that group. Right now, when someone in the group connects they are assigned the correct group policy "companynamera", so the LDAP mapping is working. However, users who are not in that group still authenticate fine; their group policy becomes the LDAP path of their first group, i.e. CN=Domain Users,CN=Users,DC=example,DC=com, and they are still allowed access. How do I add a filter so that I can map everything that isn't "companynamera" to no access? Here is the config I'm using (with some stuff such as ACLs and mappings removed, since they are just noise here):

        gateway# show run
        : Saved
        :
        ASA Version 8.2(1)
        !
        hostname gateway
        domain-name corp.company-name.com
        enable password gDZcqZ.aUC9ML0jK encrypted
        passwd gDZcqZ.aUC9ML0jK encrypted
        names
        name 192.168.0.2 dc5 description FTP Server
        name 192.168.0.5 dc2 description Everything server
        name 192.168.0.6 dc4 description File Server
        name 192.168.0.7 ts1 description Light Use Terminal Server
        name 192.168.0.8 ts2 description Heavy Use Terminal Server
        name 4.4.4.82 primary-frontier
        name 5.5.5.26 primary-eschelon
        name 172.21.18.5 dmz1 description Kerio Mail Server and FTP Server
        name 4.4.4.84 ts-frontier
        name 4.4.4.85 vpn-frontier
        name 5.5.5.28 ts-eschelon
        name 5.5.5.29 vpn-eschelon
        name 5.5.5.27 email-eschelon
        name 4.4.4.83 guest-frontier
        name 4.4.4.86 email-frontier
        dns-guard
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.0.254 255.255.255.0
        !
        interface Vlan2
         description Frontier FiOS
         nameif outside
         security-level 0
         ip address primary-frontier 255.255.255.0
        !
        interface Vlan3
         description Eschelon T1
         nameif backup
         security-level 0
         ip address primary-eschelon 255.255.255.248
        !
        interface Vlan4
         nameif dmz
         security-level 50
         ip address 172.21.18.254 255.255.255.0
        !
        interface Vlan5
         nameif guest
         security-level 25
         ip address 172.21.19.254 255.255.255.0
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
         switchport access vlan 3
        !
        interface Ethernet0/2
         switchport access vlan 4
        !
        interface Ethernet0/3
         switchport access vlan 5
        !
        interface Ethernet0/4
        !
        interface Ethernet0/5
        !
        interface Ethernet0/6
        !
        interface Ethernet0/7
        !
        ftp mode passive
        clock timezone PST -8
        clock summer-time PDT recurring
        dns domain-lookup inside
        dns server-group DefaultDNS
         name-server dc2
         domain-name corp.company-name.com
        same-security-traffic permit intra-interface
        access-list companyname_splitTunnelAcl standard permit 192.168.0.0 255.255.255.0
        access-list companyname_splitTunnelAcl standard permit 172.21.18.0 255.255.255.0
        access-list inside_nat0_outbound extended permit ip any 172.21.20.0 255.255.255.0
        access-list inside_nat0_outbound extended permit ip any 172.21.18.0 255.255.255.0
        access-list bypassingnat_dmz extended permit ip 172.21.18.0 255.255.255.0 192.168.0.0 255.255.255.0
        pager lines 24
        logging enable
        logging buffer-size 12288
        logging buffered warnings
        logging asdm notifications
        mtu inside 1500
        mtu outside 1500
        mtu backup 1500
        mtu dmz 1500
        mtu guest 1500
        ip local pool VPNpool 172.21.20.50-172.21.20.59 mask 255.255.255.0
        no failover
        icmp unreachable rate-limit 1 burst-size 1
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        global (outside) 2 email-frontier
        global (outside) 3 guest-frontier
        global (backup) 1 interface
        global (dmz) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 2 dc5 255.255.255.255
        nat (inside) 1 192.168.0.0 255.255.255.0
        nat (dmz) 0 access-list bypassingnat_dmz
        nat (dmz) 2 dmz1 255.255.255.255
        nat (dmz) 1 172.21.18.0 255.255.255.0
        access-group outside_access_in in interface outside
        access-group dmz_access_in in interface dmz
        route outside 0.0.0.0 0.0.0.0 4.4.4.1 1 track 1
        route backup 0.0.0.0 0.0.0.0 5.5.5.25 254
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        ldap attribute-map RemoteAccessMap
         map-name memberOf IETF-Radius-Class
         map-value memberOf CN=RemoteAccess,CN=Users,DC=corp,DC=company-name,DC=com companynamera
        dynamic-access-policy-record DfltAccessPolicy
        aaa-server ActiveDirectory protocol ldap
        aaa-server ActiveDirectory (inside) host dc2
         ldap-base-dn dc=corp,dc=company-name,dc=com
         ldap-scope subtree
         ldap-login-password *
         ldap-login-dn cn=administrator,ou=Admins,dc=corp,dc=company-name,dc=com
         server-type microsoft
        aaa-server ADRemoteAccess protocol ldap
        aaa-server ADRemoteAccess (inside) host dc2
         ldap-base-dn dc=corp,dc=company-name,dc=com
         ldap-scope subtree
         ldap-login-password *
         ldap-login-dn cn=administrator,ou=Admins,dc=corp,dc=company-name,dc=com
         server-type microsoft
         ldap-attribute-map RemoteAccessMap
        aaa authentication enable console LOCAL
        aaa authentication ssh console LOCAL
        http server enable
        http 192.168.0.0 255.255.255.0 inside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        sla monitor 123
         type echo protocol ipIcmpEcho 4.4.4.1 interface outside
         num-packets 3
         frequency 10
        sla monitor schedule 123 life forever start-time now
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto dynamic-map outside_dyn_map 20 set pfs
        crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA
        crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        !
        track 1 rtr 123 reachability
        telnet timeout 5
        ssh 192.168.0.0 255.255.255.0 inside
        ssh timeout 5
        ssh version 2
        console timeout 0
        management-access inside
        dhcpd auto_config outside
        !
        threat-detection basic-threat
        threat-detection statistics access-list
        no threat-detection statistics tcp-intercept
        webvpn
        group-policy companynamera internal
        group-policy companynamera attributes
         wins-server value 192.168.0.5
         dns-server value 192.168.0.5
         vpn-tunnel-protocol IPSec
         password-storage enable
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value companyname_splitTunnelAcl
         default-domain value corp.company-name.com
         split-dns value corp.company-name.com
        group-policy companyname internal
        group-policy companyname attributes
         wins-server value 192.168.0.5
         dns-server value 192.168.0.5
         vpn-tunnel-protocol IPSec
         password-storage enable
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value companyname_splitTunnelAcl
         default-domain value corp.company-name.com
         split-dns value corp.company-name.com
        username admin password IhpSqtN210ZsNaH. encrypted privilege 15
        tunnel-group companyname type remote-access
        tunnel-group companyname general-attributes
         address-pool VPNpool
         authentication-server-group ActiveDirectory LOCAL
         default-group-policy companyname
        tunnel-group companyname ipsec-attributes
         pre-shared-key *
        tunnel-group companynamera type remote-access
        tunnel-group companynamera general-attributes
         address-pool VPNpool
         authentication-server-group ADRemoteAccess LOCAL
         default-group-policy companynamera
        tunnel-group companynamera ipsec-attributes
         pre-shared-key *
        !
        class-map type inspect ftp match-all ftp-inspection-map
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map type inspect ftp ftp-inspection-map
         parameters
         class ftp-inspection-map
        policy-map type inspect dns migrated_dns_map_1
         parameters
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns migrated_dns_map_1
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect http
          inspect ils
          inspect netbios
          inspect rsh
          inspect rtsp
          inspect skinny
          inspect sqlnet
          inspect sunrpc
          inspect tftp
          inspect sip
          inspect xdmcp
          inspect icmp
          inspect icmp error
          inspect esmtp
          inspect pptp
        !
        service-policy global_policy global
        prompt hostname context
        Cryptochecksum:487525494a81c8176046fec475d17efe
        : end
        gateway#

    Thanks so much!
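    For what it's worth, the usual pattern for this (a sketch, untested against this exact config) is to create a no-access group policy and make it the tunnel group's default, so only users the LDAP attribute map places into "companynamera" get a policy that permits logins:

        ! Deny-by-default: unmatched users land in NOACCESS (0 simultaneous logins)
        group-policy NOACCESS internal
        group-policy NOACCESS attributes
         vpn-simultaneous-logins 0
        !
        tunnel-group companynamera general-attributes
         default-group-policy NOACCESS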


  • ActiveX component can't create Object Error? Check 64 bit Status

    - by Rick Strahl
    If you're running on IIS 7 and a 64-bit operating system, you might run into the following error using ASP classic or ASP.NET with COM interop. In classic ASP applications the error will show up as:

        ActiveX component can't create object (Error 429)

    (Actually, without error handling the error just shows up as a 500 error page.) In my case the code that's been giving me problems is a FoxPro COM object I'd been using to serve banner ads on some of my pages. The code looks up banners from a database table and displays them at random. The ASP classic code that uses it looks like this:

        <%
        Set banner = Server.CreateObject("wwBanner.aspBanner")
        banner.BannerFile = "wwsitebanners"
        Response.Write(banner.GetBanner(-1))
        %>

    Originally this code had no specific error checking, as above, so the ASP pages just failed with 500 error pages from the web server. To find out what the problem is, this version is more useful, at least for debugging:

        <%
        ON ERROR RESUME NEXT
        Set banner = Server.CreateObject("wwBanner.aspBanner")
        Response.Write(err.Number & " - " & err.Description)
        banner.BannerFile = "wwsitebanners"
        Response.Write(banner.GetBanner(-1))
        %>

    which results in:

        429 - ActiveX component can't create object

    which at least gives you a slight clue. In ASP.NET, invoking the same COM object with code like this:

        <%
        dynamic banner = wwUtils.CreateComInstance("wwBanner.aspBanner") as dynamic;
        banner.cBANNERFILE = "wwsitebanners";
        Response.Write(banner.getBanner(-1));
        %>

    results in:

        Retrieving the COM class factory for component with CLSID {B5DCBB81-D5F5-11D2-B85E-00600889F23B} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

    The class is in fact registered, though, and the COM server loads fine from a command prompt or other COM client. It looks like a COM registration error, and there are a number of traditional reasons why this error can crop up:

    - The server isn't registered (run regsvr32 to register a DLL server, or /regserver on an EXE server)
    - Access permissions aren't set on the COM server (the web account has to be able to read the DLL, i.e. Network Service)
    - The COM server fails to load during initialization, i.e. it fails during startup

    One thing I always do to check for COM errors is fire up the server in a COM client outside of IIS and ensure that it works there first - it's almost always easier to debug a server outside of the web environment. In my case I tried the server in Visual FoxPro on the server with:

        loBanners = CREATEOBJECT("wwBanner.aspBanner")
        loBanners.cBannerFile = "wwsitebanners"
        ? loBanners.GetBanner(-1)

    and it worked just fine. If you don't have a full dev environment on the server, you can also use VBScript to do the same thing and run the .vbs file from the command prompt:

        Set banner = CreateObject("wwBanner.aspBanner")
        banner.BannerFile = "wwsitebanners"
        MsgBox(banner.getBanner(-1))

    Since both of these work, the server is registered and working properly. This leaves startup failures or permissions as the problem. I double-checked permissions for the Application Pool and the permissions of the folder where the DLL lives, and both are properly set to allow access by the Application Pool's impersonated user. Just to be sure, I assigned an admin user to the Application Pool, but still no go. So now what?

    64-bit Servers Ahoy

    A couple of weeks back I had set up a few of my Application Pools to 64-bit mode. My server is Server 2008 64-bit, and by default Application Pools run 64-bit. Originally, when I installed the server, I set up most of my Application Pools to 32-bit, mainly for backwards compatibility. But as more of my code migrates to 64-bit OSs, I figured it'd be a good idea to see how well code runs under 64 bits. The transition has been mostly painless - until today, when I noticed the problem with the code above while scrolling through my IIS logs and seeing a lot of 500 errors on many of my ASP classic pages. The code in question in most of these pages deals with this single simple COM object.

    It took a while to figure out that the problem is caused by the Application Pool running in 64-bit mode: 32-bit COM objects (i.e., my old Visual FoxPro COM component) cannot be loaded in a 64-bit Application Pool. The ASP pages using this COM component broke on the day I switched my main Application Pool into 64-bit mode, but I didn't find the problem until I searched my logs for errors by pure chance. The fix is easy enough once you know what the problem is: switch the Application Pool's "Enable 32-bit Applications" setting to True (a command-line sketch appears at the end of this post). Once this is done, the COM objects start working correctly again.

    64-bit ASP and ASP.NET with DCOM Servers

    This is kind of off topic, but incidentally it's possible to load 32-bit DCOM (out-of-process) servers from ASP.NET and ASP classic even if those applications run in 64-bit Application Pools. In fact, in West Wind Web Connection I use this capability to run a 64-bit ASP.NET handler that talks to a 32-bit FoxPro COM server, which allows West Wind Web Connection to run in native 64-bit mode without custom configuration (which is actually quite useful). It's probably not a common usage scenario, but it's good to know that you can actually access 32-bit COM objects this way from ASP.NET. For West Wind Web Connection this works out well, as the DCOM interface only makes one non-chatty call to the backend server, which handles all the rest of the request processing.

    Application Pool Isolation is your Friend

    For me, the recent failure in the classic ASP pages has been another reminder to be very careful when moving applications to 64-bit operation. There are many little traps when switching to 64 bits that are very difficult to track and test for. I described one issue I had a couple of months ago where one of the default ASP.NET filters was loading the wrong version (32-bit instead of 64-bit), which was extremely difficult to track down and was caused by a very sneaky configuration switch error (basically three different entries for the same ISAPI filter, all with different bitness settings). It took me almost a full day to sort that out.

    Recently I've taken to isolating individual applications into separate Application Pools, rather than my past practice of combining many apps into shared AppPools. This is a good practice, assuming you have enough memory to make it work. Application Pool isolation provides more modularity and allows me to selectively move applications to 64 bits. The error above came about precisely because I moved one of my busiest app pools to 64 bits and forgot about the minimal COM object use in some of my old pages. It's easy to forget.

    To 64-bit or Not

    Is it worth it to move to 64 bits? Currently I'd say: not really. In my - admittedly limited - testing, I don't see any significant performance increases. In fact, 64-bit apps just seem to consume considerably more memory (30-50% more in my pools on average), and performance is minimally improved (less than 5% at the very best) in the load testing I've performed on a couple of sites in both modes. The only real incentive for 64 bits would be applications that require huge data spaces exceeding the 32-bit 4-gigabyte memory limit. However, I have a hard time imagining an application that needs 4 gigs of memory in a single Application Pool :-). Curious to hear other opinions on the benefits of 64-bit operation.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in COM, ASP.NET, FoxPro
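    For reference, the "Enable 32-bit Applications" switch mentioned above can also be flipped from the command line with appcmd; a sketch, with the pool name as a placeholder (run from an elevated prompt):

        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /enable32BitAppOnWin64:true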


  • J2EE Applications, SPARC T4, Solaris Containers, and Resource Pools

    - by user12620111
    I've obtained a substantial performance improvement on a SPARC T4-2 server running a J2EE application server cluster by deploying the cluster members into Oracle Solaris Containers and binding those containers to cores of the SPARC T4 processor. This is not a surprising result; in fact, it is consistent with other results that are available on the Internet. See the references below for some examples. Nonetheless, here is a summary of my configuration and results.

    (1.0) Before deploying a J2EE application server cluster into a virtualized environment, many decisions need to be made. I'm not claiming that all of the decisions I have made will work well for every environment. In fact, I'm not even claiming that all of the decisions are the best possible for my environment. I'm only claiming that, of the small sample of configurations I've tested, this is the one that is working best for me. Here are some of the decisions that needed to be made:

    (1.1) Which virtualization option? There are several virtualization options and isolation levels available, including:
    - Hard partitions: Dynamic Domains on Sun SPARC Enterprise M-Series servers
    - Hypervisor-based virtualization such as Oracle VM Server for SPARC (LDOMs) on SPARC T-Series servers
    - OS virtualization using Oracle Solaris Containers
    - Resource management tools in the Oracle Solaris OS to control the amount of resources an application receives, such as CPU cycles, physical memory, and network bandwidth
    Oracle Solaris Containers provide the right level of isolation and flexibility for my environment. To borrow some words from my friends in marketing, "The SPARC T4 processor leverages the unique, no-cost virtualization capabilities of Oracle Solaris Zones".

    (1.2) How to associate Oracle Solaris Containers with resources? There are several options available to associate containers with resources, including (a) resource pool association, (b) dedicated-cpu resources, and (c) capped-cpu resources. I chose to create resource pools and associate them with the containers because I wanted explicit control over the cores and virtual processors.

    (1.3) Cluster topology? Is it best to deploy (a) multiple application servers on one node, (b) one application server on multiple nodes, or (c) multiple application servers on multiple nodes? After a few quick tests, it appears that one application server per Oracle Solaris Container is a good solution.

    (1.4) Number of cluster members to deploy? I chose to deploy four big 64-bit application servers. I would like to go back and test many 32-bit application servers, but that is left for another day.

    (2.0) Configuration tested.

    (2.1) I was using a SPARC T4-2 server, which has 2 CPUs and 128 virtual processors. To understand the physical layout of the hardware on Solaris 10, I used the OpenSolaris psrinfo Perl script available at http://hub.opensolaris.org/bin/download/Community+Group+performance/files/psrinfo.pl:

        test# ./psrinfo.pl -pv
        The physical processor has 8 cores and 64 virtual processors (0-63)
          The core has 8 virtual processors (0-7)
          The core has 8 virtual processors (8-15)
          The core has 8 virtual processors (16-23)
          The core has 8 virtual processors (24-31)
          The core has 8 virtual processors (32-39)
          The core has 8 virtual processors (40-47)
          The core has 8 virtual processors (48-55)
          The core has 8 virtual processors (56-63)
            SPARC-T4 (chipid 0, clock 2848 MHz)
        The physical processor has 8 cores and 64 virtual processors (64-127)
          The core has 8 virtual processors (64-71)
          The core has 8 virtual processors (72-79)
          The core has 8 virtual processors (80-87)
          The core has 8 virtual processors (88-95)
          The core has 8 virtual processors (96-103)
          The core has 8 virtual processors (104-111)
          The core has 8 virtual processors (112-119)
          The core has 8 virtual processors (120-127)
            SPARC-T4 (chipid 1, clock 2848 MHz)

    (2.2) The "before" test: without processor binding. I started with a 4-member cluster deployed into 4 Oracle Solaris Containers. Each container used a unique gigabit Ethernet port for HTTP traffic. The containers shared a 10 gigabit Ethernet port for JDBC traffic.

    (2.3) The "after" test: with processor binding. I ran one application server in the global zone and another application server in each of the three non-global zones (NGZ).

    (3.0) Configuration steps. The following steps need to be repeated for all three Oracle Solaris Containers.

    (3.1) Stop the AppServers from the BUI.
    (3.2) Stop the NGZ:
        test# ssh test-z2 init 5
    (3.3) Enable resource pools:
        test# svcadm enable pools
    (3.4) Create the resource pool:
        test# poolcfg -dc 'create pool pool-test-z2'
    (3.5) Create the processor set:
        test# poolcfg -dc 'create pset pset-test-z2'
    (3.6) Specify the maximum number of CPUs that may be added to the processor set:
        test# poolcfg -dc 'modify pset pset-test-z2 (uint pset.max=32)'
    (3.7) bash syntax to add virtual CPUs to the processor set:
        test# (( i = 64 )); while (( i < 96 )); do poolcfg -dc "transfer to pset pset-test-z2 (cpu $i)"; (( i = i + 1 )) ; done
    (3.8) Associate the resource pool with the processor set:
        test# poolcfg -dc 'associate pool pool-test-z2 (pset pset-test-z2)'
    (3.9) Tell the zone to use the resource pool that has been created:
        test# zonecfg -z test-z2 set pool=pool-test-z2
    (3.10) Boot the Oracle Solaris Container:
        test# zoneadm -z test-z2 boot
    (3.11) Save the configuration to /etc/pooladm.conf:
        test# pooladm -s

    (4.0) Results. Using the resource pools improved both throughput and response time.

    (5.0) References:
    - System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones
    - Capitalizing on large numbers of processors with WebSphere Portal on Solaris
    - WebSphere Application Server and T5440 (Dileep Kumar's Weblog)
    - http://www.brendangregg.com/zones.html
    - Reuters Market Data System, RMDS 6 Multiple Instances (Consolidated), Performance Test Results in Solaris, Containers/Zones Environment on Sun Blade X6270, by Amjad Khan, 2009.
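    A quick way to sanity-check the binding after step (3.10), sketched here under the assumption that the zone runs a Java application server (command flags as I remember them on Solaris 10; verify against your man pages):

        test# pooladm                                        # display the active pools and processor sets
        test# poolbind -q $(pgrep -z test-z2 java | head -1) # report which pool the zone's JVM is bound to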


  • Microsoft SSIS Service: Registry setting specifying configuration file does not exist.

    - by mbrc
    Microsoft SSIS Service: "Registry setting specifying configuration file does not exist. Attempting to load default config file. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp."

    This is my MsDtsSrvr.ini.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
          <TopLevelFolders>
            <Folder xsi:type="SqlServerFolder">
              <Name>MSDB</Name>
              <ServerName>.\SQL2008</ServerName>
            </Folder>
            <Folder xsi:type="FileSystemFolder">
              <Name>File System</Name>
              <StorePath>..\Packages</StorePath>
            </Folder>
          </TopLevelFolders>
        </DtsServiceConfiguration>

    I found at http://msdn.microsoft.com/en-us/library/ms137789.aspx that I need to update my registry. The only entry at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS\ServiceConfigFile is (Default), with no value. What must I add to the registry so that I no longer get this error?
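    If the MSDN article applies here, the likely fix is to put the full path of MsDtsSrvr.ini.xml into that key's (Default) value and restart the SSIS service. A sketch, assuming a default SQL Server 2008 installation path and service name (adjust both to your install):

        reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS\ServiceConfigFile" /ve /t REG_EXPAND_SZ /d "C:\Program Files\Microsoft SQL Server\100\DTS\Binn\MsDtsSrvr.ini.xml" /f
        net stop MsDtsServer100 && net start MsDtsServer100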


  • How does a web server/the http protocol handle version control and compression?

    - by Sune Rasmussen
    When a client browser requests a file from the web server, I know that some kind of check is performed, because the files needed to serve the web page may already be cached by the web browser. So, if a file exists in the cache, no files are sent. But if the file on the server has changed since it was cached by the browser, the file is sent and updated anyhow.

    Then, if you have compression like gzipping enabled on the server, the files to be delivered to the client must be gzipped on the way, requiring some amount of server-side processing. But how is this managed? The logical approach, it seems to me, is that the web server should have a cache as well, containing the newest version of all files requested within a certain time span, and thus a compressed version of those files, so that compression would not have to be done each time a file is requested.

    And also, how are files actually requested? Does the browser ask for files one at a time, each time it encounters one in the HTML code that is not stored in the local cache, or does it sum up all the files that are needed and ask for the whole bunch at once?

    But that's only guessing from a programming point of view, and I don't really know. If the answers are very different among web server systems, I'm primarily interested in Apache, but other answers are appreciated, too.
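    For reference, the freshness check described above is a "conditional request": the browser revalidates its cached copy with If-Modified-Since (or If-None-Match against an ETag), and the server answers 304 Not Modified with no body when the file is unchanged. A sketch of the exchange (URL and dates are made up):

        GET /css/style.css HTTP/1.1
        Host: www.example.com
        If-Modified-Since: Tue, 01 May 2012 10:00:00 GMT
        Accept-Encoding: gzip, deflate

        HTTP/1.1 304 Not Modified

    Only when the file has changed does the server respond 200 OK, with Content-Encoding: gzip if the client advertised support for it. As for the second question, browsers request each referenced file separately as they parse the HTML, reusing a small number of persistent connections, rather than bundling everything into one request.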


  • DCOM Authentication Fails to use Kerberos, Falls back to NTLM

    - by Asa Yeamans
    I have a webservice that is written in classic ASP. This web service attempts to create a VirtualServer.Application object on another server via DCOM, which fails with Permission Denied. However, I have another component instantiated in this same webservice on the same remote server that is created without problems. That component is a custom in-house component.

    The webservice is called from a standalone EXE program via WinHTTP. It has been verified that WinHTTP is authenticating with Kerberos to the webservice successfully. The user authenticated to the webservice is the Administrator user; the EXE-to-webservice authentication step is successful and uses Kerberos.

    I have verified the DCOM permissions on the remote computer with DCOMCNFG. The default limits allow administrators both local and remote activation, both local and remote access, and both local and remote launch. The default component permissions allow the same. This has been verified. The individual component permissions for the working component are set to defaults, and the individual component permissions for the VirtualServer.Application component are also set to defaults. Based upon these settings, the webservice should be able to instantiate and access the components on the remote computer.

    Setting up a Wireshark trace while running both tests, one with the working component and one with the VirtualServer.Application component, reveals an interesting behavior. When the webservice instantiates the working custom component, I can see the request on the wire to the RPCSS endpoint mapper first perform the TCP connect sequence, then perform the bind request with the appropriate security package, in this case Kerberos. After it obtains the endpoint for the working DCOM component, it connects to the DCOM endpoint, authenticating again via Kerberos, and it successfully instantiates and communicates.

    With the failing VirtualServer.Application component, I again see the bind request with Kerberos go to the RPCSS endpoint mapper successfully. However, when it then attempts to connect to the endpoint in the Virtual Server process, it fails to connect because it only attempts to authenticate with NTLM, which ultimately fails because the webservice does not have access to the credentials to perform the NTLM hash. Why is it attempting to authenticate via NTLM?

    Additional information:
    - Both components run on the same server via DCOM
    - Both components run as Local System on the server
    - Both components are Win32 service components
    - Both components have the exact same launch/access/activation DCOM permissions
    - Both Win32 services are set to run as Local System
    - The permission denied is not a permissions issue as far as I can tell; it is an authentication issue. Permission is denied because NTLM authentication is used with a NULL username instead of Kerberos delegation
    - Constrained delegation is set up on the server hosting the webservice
    - The server hosting the webservice is allowed to delegate to rpcss/dcom-server-name
    - The server hosting the webservice is allowed to delegate to vssvc/dcom-server-name
    - The DCOM server is allowed to delegate to rpcss/webservice-server
    - The SPNs registered on the DCOM server include rpcss/dcom-server-name and vssvc/dcom-server-name, as well as the HOST/dcom-server-name related SPNs
    - The SPNs registered on the webservice server include rpcss/webservice-server and the HOST/webservice-server related SPNs

    Anybody have any ideas why the attempt to create a VirtualServer.Application object on a remote server falls back to NTLM authentication, causing it to fail with permission denied?

    Additional information: when the following code is run in the context of the webservice, directly via a testing-only, just-developed COM component, it fails on the indicated line with Access Denied:

        COSERVERINFO csi;
        csi.dwReserved1 = 0;
        csi.pwszName = L"terahnee.rivin.net";
        csi.pAuthInfo = NULL;
        csi.dwReserved2 = NULL;

        hr = CoGetClassObject(CLSID_VirtualServer, CLSCTX_ALL, &csi, IID_IClassFactory, (void **) &pClsFact);
        if (FAILED(hr)) goto error1;

        // Fails here with HRESULT_FROM_WIN32(ERROR_ACCESS_DENIED)
        hr = pClsFact->CreateInstance(NULL, IID_IUnknown, (void **) &pUnk);
        if (FAILED(hr)) goto error2;

    I've also noticed in the Wireshark traces that the attempt to connect to the service process component only requests NTLMSSP authentication; it doesn't even attempt to use Kerberos. This suggests that for some reason the webservice thinks it can't use Kerberos...
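    One experiment that might isolate the problem (a hypothetical sketch, not from the original post; all names are placeholders): fill in COSERVERINFO.pAuthInfo so the proxy is told explicitly to use Kerberos against a specific SPN, rather than letting the security negotiation fall back to NTLM:

        // Force the Kerberos package; pwszServerPrincName must match a registered SPN.
        COAUTHINFO cai = {0};
        cai.dwAuthnSvc           = RPC_C_AUTHN_GSS_KERBEROS;
        cai.dwAuthzSvc           = RPC_C_AUTHZ_NONE;
        cai.pwszServerPrincName  = L"RPCSS/terahnee.rivin.net";
        cai.dwAuthnLevel         = RPC_C_AUTHN_LEVEL_PKT_PRIVACY;
        cai.dwImpersonationLevel = RPC_C_IMP_LEVEL_IMPERSONATE;
        cai.dwCapabilities       = EOAC_NONE;

        csi.pAuthInfo = &cai;   // instead of NULL in the snippet above

    If CoGetClassObject then succeeds, the fallback was a negotiation problem; if it fails with a Kerberos-specific error instead, that error is usually more diagnostic than the NTLM NULL-session denial.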


  • Can a wifi AP act as a client, and a server at the same time?

    - by nbolton
    I feel this is SF-worthy (as opposed to SU) as I go into a bit of detail on gateways/routing. Here's my ideal setup (if possible): there is a wifi network (let's call it Bob's) which I want access to, but I have a few other computers on my network which I want to keep behind a firewall. So I was thinking of buying a wireless access point, so that I could set it up to connect to Bob's network from the AP, and then connect from my server to the AP via ethernet. So that's the first bit.

    The second part is that I want to have my own private wifi network off the back of this; can I then tell the AP to serve a new network called foobar? When I say private network, I mean that my server is actually a Debian Linux install with routing configured (on which I also do some QoS stuff, etc.). So ideally, I'd like all the clients on the private network to sit behind the server in terms of routing. However, if the private clients connect to the server via wifi, then aren't they exposed to the "public" network? That is, if someone is savvy enough to scan for my IP range.

    Also, to do routing I'd need to connect two ethernet cables between the server and the AP (because you can't do routing/QoS on virtual devices), which isn't a problem really, but I'm not sure whether the AP will allow me to separate the public and private LANs. Or, as well as the AP, am I better off getting a wifi-to-ethernet adapter for the server? I could use a wifi USB adapter, but those can be tricky to set up on headless Linux, plus the signal strength is a bit lousy.

    If this question is a bit vague/spurious in places, please comment and I will explain in more detail.


  • 64 bit vs 32 bit

    - by user51737
    During my MCSA course, I was taught the following:

    - With a 32-bit processor, only a 32-bit operating system can be installed.
    - With a 64-bit processor, both 32-bit and 64-bit operating systems can be installed.

    That is, a 64-bit OS cannot be installed on a 32-bit processor. I just want to verify those points, because recently I was asked to install Windows Server 2008 R2 Enterprise, and the installer showed only x64 -- yet it simply installed. I had been thinking all the computers in my office had 32-bit processors. If so, how could it be possible to install an x64 OS on a 32-bit processor? Either I'm wrong about the first point, or the processor must be 64-bit (I don't know how to check). I'm confused... One thing I know is that the benefit of 64-bit over 32-bit is faster operation. If anyone could tell me other benefits, that would be helpful. Thanks!
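    A quick way to check what the silicon actually supports (a sketch using WMI; if I remember the semantics right, Win32_Processor's DataWidth reflects the processor's native width while AddressWidth follows the running OS, so a 64-bit CPU under 32-bit Windows shows 64/32):

        wmic cpu get Name,DataWidth,AddressWidth

    And since Windows Server 2008 R2 ships only as x64, the fact that it installed at all means that machine's processor is 64-bit.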


  • Automatically install driver on headless WHSv1 system

    - by Dan Neely
    I have one of the HP MediaSmart Windows Home Server v1 boxes. Its network port appears to have died a few days ago, but the system is giving no other sign of failure: no activity lights come on on either side of the cable when it's connected to my gigabit switch; when it's connected to one of my router's 100-megabit ports the lights turn on, but the box remains unreachable over the network and my router never lists it among its DHCP clients. I bought a USB-Ethernet adapter to temporarily get it back online, but the adapter needs a driver, which I can't install because the system is headless by design (no video out, no PCI/PCIe slots), with admin access only available via the WHS client or Remote Desktop. Both of those options require network connectivity and are consequently unavailable. I tried copying the drivers to a flash drive, but Windows either didn't look there or none of the drivers provided were suitable (Win8, Win7, or combined XP and Vista). I've been told that a USB WiFi adapter would have the same driver problem.


  • javax.naming.InvalidNameException using Oracle BPM and weblogic when accessing directory

    - by alfredozn
    We are getting this exception when we start our cluster (2 managed servers, 1 admin); we have deployed only the EARs corresponding to OBPM 10.3.1 SP1 on WebLogic 10.3. When the server cluster starts, one of the managed servers (the first to start) gets overloaded and runs out of connections to the directory DB because of this repeated error. It looks like the engine is trying to get the info from the LDAP server, but I don't know why it is building a wrong query.

        fuego.directory.DirectoryRuntimeException: Exception [javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp'].
            at fuego.directory.DirectoryRuntimeException.wrapException(DirectoryRuntimeException.java:85)
            at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:163)
            at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:110)
            at fuego.directory.hybrid.ldap.Repository.selectById(Repository.java:38)
            at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipantsInternal(MSADGroupValueProvider.java:124)
            at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipants(MSADGroupValueProvider.java:70)
            at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:149)
            at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:152)
            at fuego.directory.hybrid.ldap.LDAPResult.getValue(LDAPResult.java:76)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.setInfo(LDAPOrganizationGroupAccessor.java:352)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:121)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:114)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.fetchGroup(LDAPOrganizationGroupAccessor.java:94)
            at fuego.directory.hybrid.HybridGroupAccessor.fetchGroup(HybridGroupAccessor.java:146)
            at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at fuego.directory.provider.DirectorySessionImpl$AccessorProxy.invoke(DirectorySessionImpl.java:756)
            at $Proxy66.fetchGroup(Unknown Source)
            at fuego.directory.DirOrganizationalGroup.fetch(DirOrganizationalGroup.java:275)
            at fuego.metadata.GroupManager.loadGroup(GroupManager.java:225)
            at fuego.metadata.GroupManager.find(GroupManager.java:57)
            at fuego.metadata.ParticipantManager.addNestedGroups(ParticipantManager.java:621)
            at fuego.metadata.ParticipantManager.buildCompleteRoleAssignments(ParticipantManager.java:527)
            at fuego.metadata.Participant$RoleTransitiveClousure.build(Participant.java:760)
            at fuego.metadata.Participant$RoleTransitiveClousure.access$100(Participant.java:692)
            at fuego.metadata.Participant.buildRoles(Participant.java:401)
            at fuego.metadata.Participant.updateMembers(Participant.java:372)
            at fuego.metadata.Participant.<init>(Participant.java:64)
            at fuego.metadata.Participant.createUncacheParticipant(Participant.java:84)
            at fuego.server.persistence.jdbc.JdbcProcessInstancePersMgr.loadItems(JdbcProcessInstancePersMgr.java:1706)
            at fuego.server.persistence.Persistence.loadInstanceItems(Persistence.java:838)
            at fuego.server.AbstractInstanceService.readInstance(AbstractInstanceService.java:791)
            at fuego.ejbengine.EJBInstanceService.getLockedROImpl(EJBInstanceService.java:218)
            at fuego.server.AbstractInstanceService.getLockedROImpl(AbstractInstanceService.java:892)
            at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:743)
            at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:730)
            at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:144)
            at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:162)
            at fuego.server.AbstractInstanceService.unselectAllItems(AbstractInstanceService.java:454)
            at fuego.server.execution.ToDoItemUnselect.execute(ToDoItemUnselect.java:105)
            at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
            at fuego.transaction.TransactionAction.startNestedTransaction(TransactionAction.java:527)
            at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:548)
            at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
            at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
            at fuego.server.execution.DefaultEngineExecution.executeAutomaticWork(DefaultEngineExecution.java:62)
            at fuego.server.execution.EngineExecution.executeAutomaticWork(EngineExecution.java:42)
            at fuego.server.execution.ToDoItem.executeAutomaticWork(ToDoItem.java:261)
            at fuego.ejbengine.ItemExecutionBean$1.execute(ItemExecutionBean.java:223)
            at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
            at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470)
            at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551)
            at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
            at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
            at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66)
            at fuego.ejbengine.ItemExecutionBean.processMessage(ItemExecutionBean.java:209)
            at fuego.ejbengine.ItemExecutionBean.onMessage(ItemExecutionBean.java:120)
            at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:466)
            at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:371)
            at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:327)
            at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4547)
            at weblogic.jms.client.JMSSession.execute(JMSSession.java:4233)
            at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3709)
            at weblogic.jms.client.JMSSession.access$000(JMSSession.java:114)
            at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5058)
            at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
        Caused by: javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp'
            at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2979)
            at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2794)
            at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1826)
            at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1749)
            at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:368)
            at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:338)
            at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:321)
            at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:248)
            at fuego.jndi.FaultTolerantLdapContext.search(FaultTolerantLdapContext.java:612)
            at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:136)
            ... 67 more


  • Error installing mysql, any way to fix it?

    - by user156355
    I was installing MySQL with apt-get install mysql-server (as I always have); before that I had done an apt-get update (I am using Debian 6). During the install I hit the problem below, which seems pretty common, but I've followed all the suggested steps and nothing has worked. I've tried apt-get install -f, apt-get remove mysql-server (and mysql-common, and mysql-server-5.1), and apt-get purge of every package followed by a reinstall, but nothing... I also tried (all run as root):

        dpkg-reconfigure mysql-server-5.1
        apt-get install --reinstall mysql-server

    Still nothing worked. Any ideas?

        130130 10:11:48 InnoDB: Shutdown completed; log sequence number 0 44233
        Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.1 (--configure):
         subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.1; however:
          Package mysql-server-5.1 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        configured to not write apport reports
        Errors were encountered while processing:
         mysql-server-5.1
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I tried dpkg-reconfigure mysql-server-5.1:

        /usr/sbin/dpkg-reconfigure: mysql-server-5.1 is broken or not fully installed

    The "start" case in /etc/init.d/mysql is:

        'start')
            sanity_checks;
            # Start daemon
            log_daemon_msg "Starting MySQL database server" "mysqld"
            if mysqld_status check_alive nowarn; then
                log_progress_msg "already running"
                log_end_msg 0
            else
                # Could be removed during boot
                test -e /var/run/mysqld || install -m 755 -o mysql -g root -d /var/run/mysqld
                # Start MySQL!
                /usr/bin/mysqld_safe > /dev/null 2>&1 &
                # 6s was reported in #352070 to be too few when using ndbcluster
                for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14; do
                    sleep 1
                    if mysqld_status check_alive nowarn ; then break; fi
                    log_progress_msg "."
                done
                if mysqld_status check_alive warn; then
                    log_end_msg 0
                    # Now start mysqlcheck or whatever the admin wants.
                    output=$(/etc/mysql/debian-start)
                    [ -n "$output" ] && log_action_msg "$output"
                else
                    log_end_msg 1
                    log_failure_msg "Please take a look at the syslog"
                fi
            fi

    And when I run mysql force-reload:

        Reloading MySQL database server: mysqld/usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
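    Since the init script swallows mysqld's actual complaint, a first step worth trying (a generic sketch for Debian 6 / MySQL 5.1, default paths assumed) is to surface the real startup error:

        # the init script and mysqld_safe log the underlying error here
        grep -i mysql /var/log/syslog | tail -n 50

        # or run the daemon in the foreground so errors land on the terminal
        /usr/sbin/mysqld --user=mysql

        # common culprits on a reinstall: leftover data files, ownership, disk space
        ls -l /var/lib/mysql
        df -h /var /tmp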


  • High Load mysql on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory running apache2, memcached, nginx, and MySQL. Memory usage is always at maximum, with only about 500 MB free, and most of the memory goes to MySQL; Apache is configured for only 70 clients, and the other services have small footprints. When MySQL uses all the memory, it stops, nothing works, and MySQL needs a restart. MySQL is configured to use a maximum of 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why they are InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        language        = /usr/share/mysql/english
        skip-external-locking
        default-time-zone='Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit = 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory usage:

        /etc/mysql# free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory, but MySQL still stops every day. As you can see, 24 GB of that sits in cache (thanks to Michael Hampton for the correction). Load average on the server is 3.5. Maybe an HDD or other problem? Maybe my config is not optimal for 30 GB of InnoDB? I already tried mysqltuner and tuning-primer.sh, but they mark everything green.

    mysqltuner output:

        >>  MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.24-9-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 112G (Tables: 1528)
        [--] Data in InnoDB tables: 39G (Tables: 340)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 344

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
        [--] Reads / Writes: 84% / 16%
        [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
        [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
        [OK] Slow queries: 1% (259K/14M)
        [!!] Highest connection usage: 100% (201/200)
        [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
        [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
        [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
        [!!] Joins performed without indexes: 106025
        [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
        [OK] Thread cache hit rate: 99% (249 created / 259K connections)
        [!!] Table cache hit rate: 15% (2K open / 13K opened)
        [OK] Open file limit used: 15% (3K/20K)
        [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
        [!!] InnoDB data size / buffer pool: 39.4G/5.9G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce or eliminate persistent connections to reduce connection usage
            Adjust your join queries to always utilize indexes
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            max_connections (> 200)
            wait_timeout (< 600)
            interactive_timeout (< 600)
            join_buffer_size (> 5.0M, or always use indexes with joins)
            table_cache (> 10000)
            innodb_buffer_pool_size (>= 39G)

    tuning-primer.sh output:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.5.24-9-log x86_64

        Uptime = 0 days 8 hrs 20 min 50 sec
        Avg. qps = 478
        Total Questions = 14369568
        Threads Connected = 16

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1.000000 sec.
        You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is enabled
        Binlog sync is not enabled, you could loose binlog records during a server crash

        WORKER THREADS
        Current thread_cache_size = 50
        Current threads_cached = 45
        Current threads_per_sec = 0
        Historic threads_per_sec = 0
        Your thread_cache_size is fine

        MAX CONNECTIONS
        Current max_connections = 200
        Current threads_connected = 11
        Historic max_used_connections = 201
        The number of used connections is 100% of the configured maximum.
        You should raise max_connections

        INNODB STATUS
        Current InnoDB index space = 214 M
        Current InnoDB data space = 39.40 G
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 5.85 G
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 23.46 G
        Configured Max Per-thread Buffers : 15.84 G
        Configured Max Global Buffers : 7.54 G
        Configured Max Memory Limit : 23.39 G
        Physical Memory : 31.40 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 5.61 G
        Current key_buffer_size = 1.47 G
        Key cache miss rate is 1 : 5578
        Key buffer free ratio = 77 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 200 M
        Current query_cache_used = 101 M
        Current query_cache_limit = 50 M
        Current Query cache Memory fill ratio = 50.59 %
        Current query_cache_min_res_unit = 4 K
        MySQL won't cache query results that are larger than query_cache_limit in size

        SORT OPERATIONS
        Current sort_buffer_size = 64 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 5.00 M
        You have had 106606 queries where a join could not use an index properly
        You have had 8 joins without keys that check for key usage after each row
        join_buffer_size >= 4 M
        This is not advised
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.

        OPEN FILES LIMIT
        Current open_files_limit = 20210 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_open_cache = 10000 tables
        Current table_definition_cache = 2000 tables
        You have a total of 1910 tables
        You have 2151 open tables.
        The table_cache value seems to be fine

        TEMP TABLES
        Current max_heap_table_size = 2.92 G
        Current tmp_table_size = 2.92 G
        Of 366426 temp tables, 49% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your
        ratio of on disk temp tables.

        TABLE SCANS
        Current read_buffer_size = 3 M
        Current table scan ratio = 2846 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 185
        You may benefit from selective use of InnoDB.
        If you have long running SELECT's against MyISAM tables and perform
        frequent updates consider setting 'low_priority_updates=1'
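    Based on the numbers above, one direction worth experimenting with (a sketch, not a guaranteed fix; benchmark each change separately): the per-thread and in-memory-temp-table buffers are large enough that 200 connections can exhaust physical RAM, while the InnoDB pool is small relative to 39 GB of InnoDB data:

        # candidate my.cnf adjustments (assumptions: 32 GB box, mixed MyISAM/InnoDB load)
        innodb_buffer_pool_size = 12G    # up from ~5.9G; leave headroom for the 112G of MyISAM
        sort_buffer_size        = 2M     # 64M x 200 threads can alone exceed physical RAM
        join_buffer_size        = 1M
        myisam_sort_buffer_size = 64M
        tmp_table_size          = 256M   # 3G in-memory temp tables x many threads is dangerous
        max_heap_table_size     = 256M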

    Read the article

  • DPM server 2010 Attach agent error : administrator privileges missing?

    - by Michael
    Hello, I'm hoping you can help me out with a little problem I'm having. I installed DPM 2010 in our test environment to test backups of Exchange 2010 servers. The environment includes 1x DC, 2x Exchange Server 2010, and 1x DPM 2010 server, all running as Windows Server 2008 R2 virtual machines on Hyper-V hosts. The problem goes like this:
    1. I tried to install the agents from the DPM server GUI, which failed, saying I didn't have the correct permissions.
    2. I then tried the manual installation using the commands from the Microsoft site: http://technet.microsoft.com/en-us/library/bb870935.aspx (see the sketch after this post).
    3. The agent installation worked, but when I get to attaching the agents to the DPM server, it still gives me an error saying that the specified account does not have administrator rights.
    4. I tried the domain admin, users who are domain admin plus local admin, and plain local admins.
    5. I have turned off the Windows firewall and made sure all the services are running.
    So now I'm out of ideas and really need help; the agent attach to the DPM server is the last thing holding me back from deploying everything to the production site. Any help would be really appreciated.
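
    For context, the manual install-and-attach sequence from the TechNet page above looks roughly like this. Server names, paths, and credentials are placeholders, so treat it as a sketch rather than the exact commands from the original post:

        # On each protected server: install the agent, pointing it at the DPM server
        .\DpmAgentInstaller_x64.exe DPMSERVER.domain.local

        # Or, if the agent is already installed, just (re)bind it to the DPM server
        cd "$env:ProgramFiles\Microsoft Data Protection Manager\DPM\bin"
        .\SetDpmServer.exe -dpmServerName DPMSERVER.domain.local

        # Then, on the DPM server (DPM Management Shell): attach the production server
        .\Attach-ProductionServer.ps1 DPMSERVER EXCH01.domain.local domain\dpmadmin P@ssw0rd domain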

    Read the article

  • Mapped networkdrive on logout

    - by Robuust
    I'm using a script to keep a mapped network connection alive, but of course the mapped connection is gone when I log out. The point is that I'm running this on Windows Server 2008 R2, where I use Remote Desktop to log in to the administrator account. However, it should stay usable without me being logged in, as this script takes care of not being logged out of MS Office 365 SharePoint. Is there a way to keep the mapped network location (L:) available after logout, so the script can keep the connection alive?

        # Create an IE object and navigate to my SharePoint site
        $ie = New-Object -ComObject InternetExplorer.Application
        $ie.navigate('https://xxx.sharepoint.com/')

        # Don't need the object anymore, so let's close it to free up some memory
        $ie.Quit()

        # Just in case there was a problem with the web client service,
        # I am going to stop and start it; you could potentially remove this
        # part if you want. I like it just because it takes out a step of
        # troubleshooting if I'm having problems.
        Stop-Service WebClient
        Start-Service WebClient

        # We are going to set the $Drive variable here; this just tells
        # the command what drive letter to map. You can change this to
        # whatever you want (if you change it to a drive that is already
        # mapped it will overwrite it, so be careful).
        $Drive = "L:"

        # You can change the drive destination to whatever you want;
        # it has to be a document library or folder, of course.
        $DrvDest = "https://xxx.sharepoint.com/files/"

        # Here is where we create the object to map the network drive and
        # then map the network drive
        $net = New-Object -ComObject WScript.Network
        $net.mapnetworkdrive($Drive,$DrvDest)

        # That is the end of the script; now schedule this with Task
        # Scheduler to run every so often and you should be set.
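
    As an aside, WScript.Network's MapNetworkDrive method takes an optional third argument (bUpdateProfile) that records the mapping in the user profile so it is restored at the next logon. A minimal sketch; whether a persisted WebDAV mapping survives a full logoff on Server 2008 R2 is an assumption worth testing, and the optional credential arguments are left out here:

        # Map L: persistently; $true = store the mapping in the user profile
        # (username/password arguments can follow if the location needs them)
        $net = New-Object -ComObject WScript.Network
        $net.MapNetworkDrive($Drive, $DrvDest, $true)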

    Read the article

  • Google chrome not accepting any security certificates

    - by Jerry
    I've recently developed a problem with Google Chrome that's really annoying. I'm using Firefox at the moment with no problems whatsoever, and it's the same with IE, so it's safe to say this problem is specific to Chrome. The problem is that it's not accepting security certificates from certain sites. I suppose the best place to start would be Google itself: I can't search. The Google search page will load, but when I type some search term into the search box and hit 'search' I get the message: "You attempted to reach www.google.com, but the server presented an invalid certificate. You cannot proceed because the website operator has requested heightened security for this domain." No matter what the search term is, this is the result. Also, when I try to log in to Facebook, I get the same message. YouTube works, as do many other sites that I know present security certs, so I'm baffled. I've searched, and other people have had similar issues, but I can't find a solution anywhere. The most common answer I'm picking up for this is to "check your system time", but I can safely say that it's not my system time. If anyone knows what is going on, I'd very much appreciate being informed. It's not super urgent, as I can use Firefox to access the places Chrome won't, but it IS super annoying, because I can usually sort out issues like this in no time.

    Read the article

  • Sendmail : Mail delivery of same domain to internal or external mail server.

    - by Silkograph
    It's a bit difficult to explain, but it's a very simple problem. We have an internal sendmail server and a hosted server, both set to the same domain name, and we have mixed mail accounts. For example, we have two users in one office: [email protected] is local only, while [email protected] is internal plus external. "Internal" means we create the user on the local Linux box where sendmail is set up; "external" means we create the user on both the local and the hosted server. [email protected] can send mail to any internal user created on the Linux box where sendmail is installed, but he cannot send mail to outside domains, and no mail can be sent to him from outside, as there is no account for him on the external hosted server. [email protected] can send mail to internal users as well as all other domains through sendmail's smart_host feature, which uses the hosted server's SMTP, and he gets all his external email delivered internally through Fetchmail on the Linux box. Now we have a third user, [email protected], who will always be out of the office and can use the external server only. I cannot create an account for [email protected] on the local Linux box, because his mail would then only ever be delivered locally, and I don't want to create an alias that sends his mail to a Gmail or Yahoo account. I want any internal user to be able to send mail to his external account. How can this be done? Thanks in advance.
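
    One pattern sometimes used for this is to rewrite just that one user to a routing-only pseudo-domain and hand the pseudo-domain to the hosted SMTP server. This is a sketch under stated assumptions, not a tested config for this site: it assumes FEATURE(`virtusertable') and FEATURE(`mailertable') are enabled in sendmail.mc, the host names below are placeholders, and the hosted server would also need to accept (or alias) the pseudo-domain:

        # /etc/mail/virtusertable -- divert just user3 to a pseudo-domain
        [email protected]    [email protected]

        # /etc/mail/mailertable -- route the pseudo-domain to the hosted SMTP server
        hosted.abc.com        smtp:[mail.hostedprovider.example]

        # rebuild the maps and restart sendmail
        makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
        makemap hash /etc/mail/mailertable  < /etc/mail/mailertable
        /etc/init.d/sendmail restart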

    Read the article

  • Malware Defense Shows Up in PlayOn Settings/Logs Although System Has Been Thoroughly Cleaned

    - by nicorellius
    I was hit really hard by some nasty malware: Malware Defense. I was doing something I should not have been doing when I got it (surfing Pirate Bay for TV shows). It locked up my system and I had to reboot in safe mode. I was able to shut down the process and remove it using a malware-killer tool. After my machine was cleaned up a bit, I installed Clamwin, Malwarebytes, and another AV tool, and cleaned the heck out of my system. Simultaneously, I was having trouble with my media server, PlayOn. This tool is great, but it has some bugs; one in particular is that it will not function well with AV software running. I found a way to let the new AV software run while using PlayOn, but PlayOn still says I have Malware Defense on. Firstly, Malware Defense is long gone: I cleaned all remnants from my registry and scoured my system with the above tools multiple times. PlayOn is getting some indication that I have this crap installed on my system, but it's not there. The system runs OK, but not optimally, and I have a feeling it is causing my streaming to be interrupted sometimes. How is it that I can't find Malware Defense on my system even when I try, yet somehow PlayOn is getting a fingerprint of it somewhere? I have gone back and forth with MediaMall to no avail, and I kind of just gave up, because the streaming works OK. BTW, I also uninstalled/reinstalled PlayOn several times, reverted to previous versions, etc. The only thing I haven't done is reformat my disk and reinstall Windows, and I really don't want to do that if there is another way to remove this little fingerprint. Any ideas?
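
    For what it's worth, a registry sweep along these lines can confirm whether any "Malware Defense" keys or values really do survive. This is a hypothetical check to run from an elevated prompt, not something from the original post:

        # Search both hives for leftover "Malware Defense" traces
        reg query HKLM /f "Malware Defense" /s
        reg query HKCU /f "Malware Defense" /s

        # And check the uninstall entries a third-party app might be reading
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall" /s | findstr /i "malware"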

    Read the article

  • When would a persistent route not be an active route?

    - by alnorth29
    I've added a persistent route to our Windows Server 2003 box using "route -p add". After a reboot, "route print" gave this:

        Active Routes:
        Network Destination          Netmask          Gateway      Interface  Metric
                  0.0.0.0          0.0.0.0      10.91.131.1    10.91.131.9      20
                10.88.0.0  255.255.255.252        10.88.0.1      10.88.0.1      30
                10.88.0.1  255.255.255.255        127.0.0.1      127.0.0.1      30
              10.91.131.0    255.255.255.0      10.91.131.9    10.91.131.9      20
              10.91.131.9  255.255.255.255        127.0.0.1      127.0.0.1      20
           10.255.255.255  255.255.255.255        10.88.0.1      10.88.0.1      30
           10.255.255.255  255.255.255.255      10.91.131.9    10.91.131.9      20
                127.0.0.0        255.0.0.0        127.0.0.1      127.0.0.1       1
                224.0.0.0        240.0.0.0        10.88.0.1      10.88.0.1      30
                224.0.0.0        240.0.0.0      10.91.131.9    10.91.131.9      20
          255.255.255.255  255.255.255.255        10.88.0.1      10.88.0.1       1
          255.255.255.255  255.255.255.255      10.91.131.9    10.91.131.9       1
        Default Gateway:  10.91.131.1
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
                10.88.0.0    255.255.255.0        10.88.0.2       1

    The route I added is listed as a persistent route, but not as an active one. Why might this be the case? The route in question is for an OpenVPN connection; would that have anything to do with it?
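
    For reference, the command that would produce that persistent entry looks like this. It is a reconstruction from the table above, not copied from the original post:

        # Re-add the persistent route with an explicit mask, then inspect it
        route -p add 10.88.0.0 mask 255.255.255.0 10.88.0.2
        route print 10.88*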

    Read the article

  • "Security Warning" comes up when I run via another program

    - by Alexander Bird
    If I execute vmmap from the command line, it works fine. However, if I call some other program and pass vmmap as a parameter for that program to launch, then I get this "security error" popup, which makes it hard to automate scripts. In other words, I want to wrap vmmap via another program. In my case, I want to do this because whenever vmmap runs, it brings up a window momentarily and then disappears, so I try passing vmmap as an argument to another program that will start it "headlessly". I tried this program and this program, and in both cases I get the same popup, which defeats the purpose of automation. Why does this happen when the program isn't run directly? Does anyone know the internals of what this warning is? And, ultimately, is there a way to stop this from happening, but only for this instance? I don't want to disable this warning system on my whole computer. EDIT: I am using Windows Server 2003; I don't necessarily need solutions for other platforms, but I would like to know what they are if they are platform-dependent.
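
    One common culprit worth checking is the Zone.Identifier alternate data stream that Windows attaches to downloaded executables, which triggers the Attachment Manager's "open file" security warning. Whether that is the cause here is an educated guess, not something from the original post; the file name below is a placeholder:

        # Show the zone marker, if one is present (cmd.exe syntax)
        more < vmmap.exe:Zone.Identifier

        # Remove it with Sysinternals streams.exe (-d deletes alternate data streams)
        streams.exe -d vmmap.exe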

    Read the article

  • Winamp playing sound but no video

    - by Greg Sansom
    I am having problems playing video in Winamp (the movie I am trying to play is an AVI - not sure if other formats work). I have installed the K-Lite Codec Pack, and the video does work in Winamp Classic. I can also play the video in Winamp on another machine (although I can't remember the exact configuration details of that machine - and I don't think they're relevant). There are a few symptoms:
    - The content of the Video view is either empty, transparent, or displays rendering from other programs.
    - Opening the Visualization view shows the following error: "MILKDROP ERROR: DirectX initialization failed (GetDeviceCaps). This means that no valid 3D-accelerated display adapter could be found on your computer. If you know this is not the case, it is possible that your graphics subsystem is unstable; please try rebooting your computer and then try to run the plugin again. Otherwise, please install a 3D-accelerated display adapter."
    - Trying to open streams via the SHOUTcast TV plugin shows "Error opening video output", and the video does not open.
    - Opening the file with WMC causes the following error (although the movie still plays): "Error creating DX9 allocation presenter: CreateDevice failed D3DERR_NOTAVAILABLE"
    There are no warnings displayed in Device Manager, although the display adapter is the standard Windows one. Running DxDiag shows no problems (codec for Video listed as XviD 1.1.2 Final). GSpot reports that codecs are installed. System specs:
    - Windows Server 2008 R2 Standard 64-bit, with latest updates
    - .NET 3.5.1 installed
    - Winamp v5.6.01 (latest version)
    - DirectX 11 (latest version)
    - K-Lite Codec Pack 7.0.0 (Full)
    - Machine is HP DC7600 - full specs here.
    Please comment if there is any more information which will help to diagnose the problem.

    Read the article

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    OK, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site is still resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was that it's an errant redirect rule or some other sort of server-side issue. We also thought it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sort of rules that out). We're really not sure what is causing all of these errors, or even whether they have one single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down in a TL;DR:
    - Site resolving in browser, by both IP and domain name. No problems here.
    - Site not being crawled by Google (gets a 302 it doesn't seem to follow); it is not due to robots.txt or meta tags.
    - Ping is not working for the IP address. This is very odd, because, again, the IP address seems to work fine in the browser.
    - Our thoughts: either a redirect-rule issue, a domain-pointing issue, or possibly some errant code - or some combination of the three.
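
    One quick way to see exactly what a crawler sees is to fetch the headers with curl. This is a hypothetical diagnostic (the real hostname is the client's, so example.com stands in), not something from the original post:

        # -I: headers only, -L: follow redirects, -A: present a Googlebot user agent
        curl -IL -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://www.example.com/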

    Read the article

  • Apache URL rewriting in reverse proxy

    - by Jeremy Gooch
    I'm deploying Apache in front of a Karaf-hosted application (Apache and Karaf are on separate servers). I want Apache to operate as a reverse proxy and also to hide part of the URL. The URL to get the log-in page of the application directly from the app server is http://app-server:8181/jellyfish. Pages are served by the Jetty instance running within Karaf. Of course, this behaviour would usually be blocked by the firewall for everything except the reverse proxy server. With the firewall off, if you hit this URL then Jetty loads the log-in page; the browser's address bar correctly changes to http://app-server:8181/jellyfish/login?0 and everything works.
    What I want is for http://web-server (i.e. from the root) to map to Jetty on the app server with the name of the app (jellyfish) suppressed. E.g., the browser would change to show http://web-server/login?0 in the address bar, and all subsequent URLs and content would be served with the web server's domain and without the jellyfish clutter.
    I can get Apache to operate as a simple reverse proxy, using the following config (snippet):

        ProxyPass /jellyfish http://app-server:8181/jellyfish
        ProxyPassReverse / http://app-server:8181/

    ...but this requires the browser's URL to contain jellyfish, and going to the root URL (http://web-server) gives a 404 Not Found. I've spent a lot of time trying to use mod_rewrite with and without its [P] flag to get around this, but without success. I then tried the ProxyPassMatch directive, but I can't seem to get that quite correct either.
    Here's the current config, as loaded into /etc/apache2/sites-available/ on the web server. Note that there is a locally-hosted images directory. I've also kept the mod_rewrite proxy exploit protection and am suppressing a couple of mod_security rules that were giving false positives.

        <VirtualHost *:80>
            ServerAdmin admin@drummer-server
            ServerName drummer-server

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /images/ "/var/www/images/"

            RewriteEngine On
            RewriteCond %{REQUEST_URI} !^$
            RewriteCond %{REQUEST_URI} !^/
            RewriteRule .* - [R=400,L]

            ProxyPass /images !
            ProxyPassMatch ^/(.*) http://granny-server:8181/jellyfish/$1
            ProxyPassReverse / http://granny-server:8181/jellyfish
            ProxyPreserveHost On

            SecRuleRemoveById 981059 981060

            <Directory "/var/www/images">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    If I go to http://web-server, I get redirected to http://web-server/jellyfish/home, but this gives a 404 with a complaint about trying to access /jellyfish/jellyfish/home - NB the browser's address bar does not contain the double /jellyfish:

        HTTP ERROR 404
        Problem accessing /jellyfish/jellyfish/home. Reason: Not Found

    And if I go to http://web-server/login, I get redirected to http://web-server/jellyfish/login?0, but this gives a 404 with a complaint about trying to access /jellyfish/jellyfish/login:

        HTTP ERROR 404
        Problem accessing /jellyfish/jellyfish/login. Reason: Not Found

    So, I'm guessing I'm somehow passing through the rules twice. I am also slightly bemused as to where the home bit of the URL comes from in the first example. Can someone point me in the right direction, please? Thanks, J.
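
    One direction worth testing (a sketch, not a verified fix for this setup): make the trailing slashes consistent on both directives so that backend redirects such as /jellyfish/login get rewritten on the way back out, and rewrite the cookie path to match. The ProxyPassReverseCookiePath line is only needed if the application scopes its cookies to /jellyfish:

        ProxyPass /images !
        ProxyPassMatch ^/(.*)$ http://granny-server:8181/jellyfish/$1
        ProxyPassReverse / http://granny-server:8181/jellyfish/
        ProxyPassReverseCookiePath /jellyfish/ /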

    Read the article

  • Roundcube "Server Error (OK!)": Lists no messages but can get messages according to the log file

    - by thonixx
    In my server setup there are three virtual machines: a Windows machine, an Ubuntu Server 11.10 machine, and a Debian Squeeze mail server. On the Ubuntu system I have Roundcube installed, and I want it to connect to the virtual mail server.
    What's the problem: after logging in to Roundcube, it says "Server Error (OK!)" and lists no messages.
    More information: on the Ubuntu server there is no error in any log file (not even Roundcube's). In the IMAP log file you can see that Roundcube is able to fetch all IMAP messages (I can see them in the IMAP log file created by Roundcube), and on the mail server side there are no error messages either. The test connection at the end of the Roundcube configuration works too - there is a "success" notification - and the basic login at the Roundcube login dialog works without any error message.
    Roundcube log file: you can look here for the log file: http://fixee.org/paste/wxg36eh/
    So, does anyone know what's wrong with Roundcube?
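
    If the standard logs stay quiet, Roundcube's own debug switches usually surface the failing IMAP command. A hypothetical excerpt for Roundcube releases of that era (the option names assume the old main.inc.php config layout):

        // config/main.inc.php -- hypothetical debug settings; remove after diagnosing
        $rcmail_config['debug_level'] = 1;     // log errors to logs/errors
        $rcmail_config['imap_debug']  = true;  // full IMAP conversation to logs/imap
        $rcmail_config['smtp_debug']  = true;  // SMTP conversation to logs/smtp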

    Read the article
