Search Results

Search found 21640 results on 866 pages for 'local storage'.


  • Windows Azure from a Data Perspective

    Before creating a data application in Windows Azure, it is important to make choices based on the type of data you have, as well as your security and business requirements. There is a wide range of options, because Windows Azure has intrinsic data storage, completely separate from SQL Azure, that is highly available and replicated. Your data requirements are likely to dictate the type of data storage options you choose.

    Read the article

  • LOB Pointer Indexing Proposal

    - by jchang
    My observation is that I/O to LOB pages (and row-overflow pages as well?) is restricted to synchronous I/O, which can result in serious problems when these pages reside on disk-drive storage. Even if the storage system comprises hundreds of HDDs, the realizable I/O performance to LOB pages is that of a single disk, with some improvement in parallel execution plans. The reason for this appears to be that each thread must work its way through a page to find the LOB pointer information, and then generates...(read more)
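
    As a rough illustration of where those LOB pages live, this hedged T-SQL sketch (standard catalog views; the table name is a placeholder) reports how many LOB_DATA pages a table has allocated, which is the storage the post is talking about:

        SELECT o.name AS table_name,
               au.type_desc,
               au.total_pages
        FROM sys.allocation_units AS au
        JOIN sys.partitions AS p ON au.container_id = p.partition_id
        JOIN sys.objects AS o ON p.object_id = o.object_id
        WHERE o.name = 'MyLobTable'            -- placeholder table name
          AND au.type_desc = 'LOB_DATA';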

    Read the article

  • Multiple database with Spring+Hibernate+JPA

    - by ziftech
    Hi everybody! I'm trying to configure Spring + Hibernate + JPA to work with two databases (MySQL and MSSQL). My datasource-context.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xmlns:util="http://www.springframework.org/schema/util"
               xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
                                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
                                   http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd">

          <!-- Data Source config -->
          <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
                p:driverClassName="${local.jdbc.driver}" p:url="${local.jdbc.url}"
                p:username="${local.jdbc.username}" p:password="${local.jdbc.password}">
          </bean>

          <bean id="dataSourceRemote" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
                p:driverClassName="${remote.jdbc.driver}" p:url="${remote.jdbc.url}"
                p:username="${remote.jdbc.username}" p:password="${remote.jdbc.password}" />

          <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"
                p:entity-manager-factory-ref="entityManagerFactory" />

          <!-- JPA config -->
          <tx:annotation-driven transaction-manager="transactionManager" />

          <bean id="persistenceUnitManager" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
            <property name="persistenceXmlLocations">
              <list value-type="java.lang.String">
                <value>classpath*:config/persistence.local.xml</value>
                <value>classpath*:config/persistence.remote.xml</value>
              </list>
            </property>
            <property name="dataSources">
              <map>
                <entry key="localDataSource" value-ref="dataSource" />
                <entry key="remoteDataSource" value-ref="dataSourceRemote" />
              </map>
            </property>
            <property name="defaultDataSource" ref="dataSource" />
          </bean>

          <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <property name="jpaVendorAdapter">
              <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
                    p:showSql="true" p:generateDdl="true">
              </bean>
            </property>
            <property name="persistenceUnitManager" ref="persistenceUnitManager" />
            <property name="persistenceUnitName" value="localjpa"/>
          </bean>

          <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />
        </beans>

    Each persistence.xml contains one unit, like this:

        <persistence-unit name="remote" transaction-type="RESOURCE_LOCAL">
          <properties>
            <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" />
            <property name="hibernate.dialect" value="${remote.hibernate.dialect}" />
            <property name="hibernate.hbm2ddl.auto" value="${remote.hibernate.hbm2ddl.auto}" />
          </properties>
        </persistence-unit>

    The PersistenceUnitManager causes the following exception:

        Cannot resolve reference to bean 'persistenceUnitManager' while setting bean property 'persistenceUnitManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'persistenceUnitManager' defined in class path resource [config/datasource-context.xml]: Initialization of bean failed; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert property value of type [java.util.ArrayList] to required type [java.lang.String] for property 'persistenceXmlLocation'; nested exception is java.lang.IllegalArgumentException: Cannot convert value of type [java.util.ArrayList] to required type [java.lang.String] for property 'persistenceXmlLocation': no matching editors or conversion strategy found

    If I leave only one persistence.xml, without the list, everything works fine, but I need two units... I am also looking for any alternative solution for working with two databases in a Spring + Hibernate context, so I would appreciate any suggestion.

    New error after changing to persistenceXmlLocations:

        No single default persistence unit defined in {classpath:config/persistence.local.xml, classpath:config/persistence.remote.xml}

    UPDATE: I added persistenceUnitName; it works, but only with one unit. Still need help.

    UPDATE: thanks, ChssPly76. I changed the config files. datasource-context.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xmlns:util="http://www.springframework.org/schema/util"
               xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
                                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
                                   http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd">

          <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
                p:driverClassName="${local.jdbc.driver}" p:url="${local.jdbc.url}"
                p:username="${local.jdbc.username}" p:password="${local.jdbc.password}">
          </bean>

          <bean id="dataSourceRemote" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
                p:driverClassName="${remote.jdbc.driver}" p:url="${remote.jdbc.url}"
                p:username="${remote.jdbc.username}" p:password="${remote.jdbc.password}">
          </bean>

          <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor">
            <property name="defaultPersistenceUnitName" value="pu1" />
          </bean>

          <bean id="persistenceUnitManager" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
            <property name="persistenceXmlLocation" value="${persistence.xml.location}" />
            <property name="defaultDataSource" ref="dataSource" /> <!-- problem -->
            <property name="dataSources">
              <map>
                <entry key="local" value-ref="dataSource" />
                <entry key="remote" value-ref="dataSourceRemote" />
              </map>
            </property>
          </bean>

          <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <property name="jpaVendorAdapter">
              <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
                    p:showSql="true" p:generateDdl="true">
              </bean>
            </property>
            <property name="persistenceUnitManager" ref="persistenceUnitManager" />
            <property name="persistenceUnitName" value="pu1" />
            <property name="dataSource" ref="dataSource" />
          </bean>

          <bean id="entityManagerFactoryRemote" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <property name="jpaVendorAdapter">
              <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
                    p:showSql="true" p:generateDdl="true">
              </bean>
            </property>
            <property name="persistenceUnitManager" ref="persistenceUnitManager" />
            <property name="persistenceUnitName" value="pu2" />
            <property name="dataSource" ref="dataSourceRemote" />
          </bean>

          <tx:annotation-driven />

          <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"
                p:entity-manager-factory-ref="entityManagerFactory" />
          <bean id="transactionManagerRemote" class="org.springframework.orm.jpa.JpaTransactionManager"
                p:entity-manager-factory-ref="entityManagerFactoryRemote" />
        </beans>

    persistence.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence xmlns="http://java.sun.com/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
                     version="1.0">
          <persistence-unit name="pu1" transaction-type="RESOURCE_LOCAL">
            <properties>
              <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" />
              <property name="hibernate.dialect" value="${local.hibernate.dialect}" />
              <property name="hibernate.hbm2ddl.auto" value="${local.hibernate.hbm2ddl.auto}" />
            </properties>
          </persistence-unit>
          <persistence-unit name="pu2" transaction-type="RESOURCE_LOCAL">
            <properties>
              <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" />
              <property name="hibernate.dialect" value="${remote.hibernate.dialect}" />
              <property name="hibernate.hbm2ddl.auto" value="${remote.hibernate.hbm2ddl.auto}" />
            </properties>
          </persistence-unit>
        </persistence>

    Now it builds two entityManagerFactory beans, but both are for Microsoft SQL Server:

        [main] INFO org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: pu1 ...]
        [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: Microsoft SQL Server
        [main] INFO org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: pu2 ...]
        [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: Microsoft SQL Server (but it must be MySQL)

    I suspect that only dataSource is being used, and that dataSourceRemote (and the ${...} substitution) is not working. That's my last problem.

    Read the article

  • Cannot join Win7 workstations to Win2k8 domain

    - by wfaulk
    I am trying to connect a Windows 7 Ultimate machine to a Windows 2k8 domain and it's not working. I get this error:

        Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

        DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "example.local":
        The query was for the SRV record for _ldap._tcp.dc._msdcs.example.local
        The following domain controllers were identified by the query:
        dc1.example.local
        dc2.example.local
        However no domain controllers could be contacted.
        Common causes of this error include:
        - Host (A) or (AAAA) records that map the names of the domain controllers to their IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.

    The client is in an office connected remotely via MPLS to the data center where our domain controllers exist. I don't seem to have anything blocking connectivity to the DCs, but I don't have total control over the MPLS circuit, so it's possible that there's something blocking connectivity. I have tried multiple clients (Win7 Ultimate and WinXP SP3) in the one office and get the same symptoms on all of them. I have no trouble connecting to either of the domain controllers, though I have, admittedly, not tried every possible port. ICMP, LDAP, DNS, and SMB connections all work fine. Client DNS is pointing to the DCs, and "example.local" resolves to the two IP addresses of the DCs. I get this output from the NetLogon Test command-line utility:

        C:\Windows\System32>nltest /dsgetdc:example.local
        Getting DC name failed: Status = 1355 0x54b ERROR_NO_SUCH_DOMAIN

    I have also created a separate network to emulate that office's configuration that's connected to the DC network via LAN-to-LAN VPN instead of MPLS. Joining Windows 7 computers from that remote network works fine. The only difference I can find between the two environments is the intermediate connectivity, but I'm out of ideas as to what to test or how to do it. What further steps should I take? (Note that this isn't actually my client workstation and I have no direct access to it; I'm forced to do remote-hands access to it, which makes some of the obvious troubleshooting methods, like packet sniffing, more difficult. If I could just set up a system there that I could remote into, I would, but requests to that effect have gone unanswered.)

    2011-08-25 update: I had DCDIAG.EXE run on a client attempting to join the domain:

        C:\Windows\System32>dcdiag /u:example\adminuser /p:********* /s:dc2.example.local
        Directory Server Diagnosis
        Performing initial setup:
        Ldap search capabality attribute search failed on server dc2.example.local, return value = 81

    This sounds like it was able to connect via LDAP, but the thing that it was trying to do failed. But I don't quite follow what it was trying to do, much less how to reproduce it or resolve it.

    2011-08-26 update: Using LDP.EXE to try and make an LDAP connection directly to the DCs results in these errors:

        ld = ldap_open("10.0.0.1", 389);
        Error <0x51: Fail to connect to 10.0.0.1.
        ld = ldap_open("10.0.0.2", 389);
        Error <0x51: Fail to connect to 10.0.0.2.
        ld = ldap_open("10.0.0.1", 3268);
        Error <0x51: Fail to connect to 10.0.0.1.
        ld = ldap_open("10.0.0.2", 3268);
        Error <0x51: Fail to connect to 10.0.0.2.

    This would seem to point fingers at LDAP connections being blocked somewhere. (And 0x51 == 81, which was the error from DCDIAG.EXE from yesterday's update.) I could swear I tested this using TELNET.EXE weeks ago, but now I'm thinking that I may have assumed that its clearing of the screen was telling me that it was waiting and not that it had connected. I'm tracking down LDAP connectivity problems now. This update may become an answer.
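
    A hedged suggestion for that connectivity check (PortQry is a standard Microsoft download; the hostnames match the example names above): unlike TELNET.EXE's blank screen, it reports explicitly whether a TCP port is LISTENING, NOT LISTENING, or FILTERED, which distinguishes "blocked" from "refused":

        C:\> portqry -n dc1.example.local -p tcp -e 389
        C:\> portqry -n dc1.example.local -p tcp -e 3268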

    Read the article

  • optional local variables in rails partial templates: how do I get out of the (defined? foo) mess?

    - by brahn
    I've been a bad kid and used the following syntax in my partial templates to set default values for local variables if a value wasn't explicitly defined in the :locals hash when rendering the partial -- <% foo = default_value unless (defined? foo) %> This seemed to work fine until recently, when (for no reason I could discern) non-passed variables started behaving as if they had been defined to nil (rather than undefined). As has been pointed out by various helpful people on SO, http://api.rubyonrails.org/classes/ActionView/Base.html says not to use defined? foo and instead to use local_assigns.has_key? :foo I'm trying to amend my ways, but that means changing a lot of templates. Can/should I just charge ahead and make this change in all the templates? Is there any trickiness I need to watch for? How diligently do I need to test each one?
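
    For reference, a minimal sketch of the documented replacement in a partial (foo and default_value are placeholders): the default is keyed off the :locals hash that was actually passed in, rather than off Ruby's defined? check:

        <%# before: breaks when non-passed locals become defined-as-nil %>
        <% foo = default_value unless (defined? foo) %>

        <%# after: explicit check against the :locals hash %>
        <% foo = default_value unless local_assigns.has_key?(:foo) %>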

    Read the article

  • Site-to-site VPN using MD5 instead of SHA and getting regular disconnection

    - by Steven
    We are experiencing some strange behavior with a site-to-site IPsec VPN that goes down about every week for 30 minutes (I am told 30 minutes exactly). I don't have access to the logs, so it's difficult to troubleshoot. What is also strange is that the two VPN devices are set to use the SHA hash algorithm but apparently end up agreeing to use MD5. Does anybody have a clue? Or is this just insufficient information?

    Edit: Here is an extract of the log of one of the two VPN devices, which is a Cisco 3000 series VPN concentrator.

        27981 03/08/2010 10:02:16.290 SEV=4 IKE/41 RPT=16120 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        27983 03/08/2010 10:02:56.930 SEV=4 IKE/41 RPT=16121 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        27986 03/08/2010 10:03:35.370 SEV=4 IKE/41 RPT=16122 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        [... same continues for another 15 minutes ...]
        28093 03/08/2010 10:19:46.710 SEV=4 IKE/41 RPT=16140 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        28096 03/08/2010 10:20:17.720 SEV=5 IKE/172 RPT=1291 xxxxxxxx Group [xxxxxxxx] Automatic NAT Detection Status: Remote end is NOT behind a NAT device. This end IS behind a NAT device
        28100 03/08/2010 10:20:17.820 SEV=3 IKE/134 RPT=79 xxxxxxxx Group [xxxxxxxx] Mismatch: Configured LAN-to-LAN proposal differs from negotiated proposal. Verify local and remote LAN-to-LAN connection lists.
        28103 03/08/2010 10:20:17.820 SEV=4 IKE/119 RPT=1197 xxxxxxxx Group [xxxxxxxx] PHASE 1 COMPLETED
        28104 03/08/2010 10:20:17.820 SEV=4 AUTH/22 RPT=1031 xxxxxxxx User [xxxxxxxx] Group [xxxxxxxx] connected, Session Type: IPSec/LAN-to-LAN
        28106 03/08/2010 10:20:17.820 SEV=4 AUTH/84 RPT=39 LAN-to-LAN tunnel to headend device xxxxxxxx connected
        28110 03/08/2010 10:20:17.920 SEV=5 IKE/25 RPT=1291 xxxxxxxx Group [xxxxxxxx] Received remote Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28113 03/08/2010 10:20:17.920 SEV=5 IKE/24 RPT=88 xxxxxxxx Group [xxxxxxxx] Received local Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28116 03/08/2010 10:20:17.920 SEV=5 IKE/66 RPT=1290 xxxxxxxx Group [xxxxxxxx] IKE Remote Peer configured for SA: L2L: 1A
        28117 03/08/2010 10:20:17.930 SEV=5 IKE/25 RPT=1292 xxxxxxxx Group [xxxxxxxx] Received remote Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28120 03/08/2010 10:20:17.930 SEV=5 IKE/24 RPT=89 xxxxxxxx Group [xxxxxxxx] Received local Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28123 03/08/2010 10:20:17.930 SEV=5 IKE/66 RPT=1291 xxxxxxxx Group [xxxxxxxx] IKE Remote Peer configured for SA: L2L: 1A
        28124 03/08/2010 10:20:18.070 SEV=4 IKE/173 RPT=17330 xxxxxxxx Group [xxxxxxxx] NAT-Traversal successfully negotiated! IPSec traffic will be encapsulated to pass through NAT devices.
        28127 03/08/2010 10:20:18.070 SEV=4 IKE/49 RPT=17332 xxxxxxxx Group [xxxxxxxx] Security negotiation complete for LAN-to-LAN Group (xxxxxxxx) Responder, Inbound SPI = 0x56a4fe5c, Outbound SPI = 0xcdfc3892
        28130 03/08/2010 10:20:18.070 SEV=4 IKE/120 RPT=17332 xxxxxxxx Group [xxxxxxxx] PHASE 2 COMPLETED (msgid=37b3b298)
        28131 03/08/2010 10:20:18.750 SEV=4 IKE/41 RPT=16141 xxxxxxxx Group [xxxxxxxx] IKE Initiator: New Phase 2, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        28135 03/08/2010 10:20:18.870 SEV=4 IKE/173 RPT=17331 xxxxxxxx Group [xxxxxxxx] NAT-Traversal successfully negotiated! IPSec traffic will be encapsulated to pass through NAT devices.

    Read the article

  • squid3 auth thru samba using ntlm to AD doesn't work

    - by derty
    Some users here are spending too much time exploring the WWW, so the big boss wants to get this under control. We use squid3 for security reasons and for the cache benefits, and now I'm trying to set up a new proxy on a different server (Debian 6). Permissions are defined in AD, and squid3 should get its auth through samba/winbind using the NTLM protocol. But all I get is Access Denied. It only works using LDAP, but that's not the way I need it. Here are some logs and confs.

    squid access.log:

        1326878095.784 1 192.168.15.27 TCP_DENIED/407 4049 GET http://at.msn.com/? - NONE/- text/html
        1326878095.791 1 192.168.15.27 TCP_DENIED/407 4294 GET http://at.msn.com/? - NONE/- text/html
        1326878095.803 9 192.168.15.27 TCP_DENIED/403 4028 GET http://at.msn.com/? kavan NONE/- text/html
        1326878095.848 0 192.168.15.27 TCP_DENIED/403 3881 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html
        1326878100.279 0 192.168.15.27 TCP_DENIED/403 3735 GET http://www.google.at/ kavan NONE/- text/html
        1326878100.296 0 192.168.15.27 TCP_DENIED/403 3870 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html
        1326878155.700 0 192.168.15.27 TCP_DENIED/407 4072 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html
        1326878155.705 2 192.168.15.27 TCP_DENIED/407 4317 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html
        1326878155.709 3 192.168.15.27 TCP_DENIED/403 4026 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml kavan NONE/- text/html

    squid cache log:

        2012/01/18 10:12:49| Creating Swap Directories
        2012/01/18 10:12:49| Starting Squid Cache version 3.1.6 for x86_64-pc-linux-gnu...
        2012/01/18 10:12:49| Process ID 17236
        2012/01/18 10:12:49| With 65535 file descriptors available
        2012/01/18 10:12:49| Initializing IP Cache...
        2012/01/18 10:12:49| DNS Socket created at [::], FD 7
        2012/01/18 10:12:49| DNS Socket created at 0.0.0.0, FD 8
        2012/01/18 10:12:49| Adding nameserver 192.168.15.2 from /etc/resolv.conf
        2012/01/18 10:12:49| Adding nameserver 192.168.15.19 from /etc/resolv.conf
        2012/01/18 10:12:49| Adding nameserver 192.168.15.1 from /etc/resolv.conf
        2012/01/18 10:12:49| Adding domain schoenbrunn.local from /etc/resolv.conf
        2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_auth' processes
        2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'ntlm_auth' processes
        2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'squid_kerb_auth' processes
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_group' processes
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| Unlinkd pipe opened on FD 73
        2012/01/18 10:12:49| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
        2012/01/18 10:12:49| Store logging disabled
        2012/01/18 10:12:49| Swap maxSize 0 + 262144 KB, estimated 20164 objects
        2012/01/18 10:12:49| Target number of buckets: 1008
        2012/01/18 10:12:49| Using 8192 Store buckets
        2012/01/18 10:12:49| Max Mem size: 262144 KB
        2012/01/18 10:12:49| Max Swap size: 0 KB
        2012/01/18 10:12:49| Using Least Load store dir selection
        2012/01/18 10:12:49| Set Current Directory to /var/spool/squid3
        2012/01/18 10:12:49| Loaded Icons.
        2012/01/18 10:12:49| Accepting HTTP connections at [::]:3128, FD 74.
        2012/01/18 10:12:49| HTCP Disabled.
        2012/01/18 10:12:49| Squid modules loaded: 0
        2012/01/18 10:12:49| Adaptation support is off.
        2012/01/18 10:12:49| Ready to serve requests.
        2012/01/18 10:12:50| storeLateRelease: released 0 objects

    smb.conf:

        # Domain Authentication Settings
        workgroup = <WORKGROUP>
        security = ads
        password server = <DOMAINNAME>.LOCAL
        realm = <DOMAINNAME>.LOCAL
        ldap ssl = no
        # logging
        log level = 5
        max log size = 50
        # logs split per machine
        log file = /var/log/samba/%m.log
        # max 50KB per log file, then rotate
        ; max log size = 50
        # User settings
        username map = /etc/samba/smbusers
        idmap uid = 10000-20000000
        idmap gid = 10000-20000000
        idmap backend = ad
        ; template primary group = <ad group>
        template shell = /sbin/nologin
        # Winbind Settings
        winbind separator = +
        winbind enum users = Yes
        winbind enum groups = Yes
        winbind netsted groups = Yes
        winbind nested groups = Yes
        winbind cache time = 10
        winbind use default domain = Yes
        # Other Globals
        unix charset = LOCALE
        server string = <SERVERNAME>
        load printers = no
        printing = cups
        cups options = raw
        ; printcap name = /etc/printcap
        # obtain list of printers automatically on SystemV
        ; printcap name = lpstat
        ; printing = cups

    squid.conf:

        auth_param ntlm program /usr/bin/ntlm_auth --require-membership-of=<DOMAINNAME>\\INTERNETZ --helper-protocol=squid-2.5-ntlmssp
        auth_param ntlm children 10
        auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b "dc=<dcname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f sAMAccountName=%s -h 192.168.15.19:3268
        auth_param basic realm "Proxy Authentifizierung. Bitte geben Sie Ihren Benutzername und Ihr Passwort ein!"
        # (the realm string just asks for username and password, in German)
        external_acl_type InetGroup %LOGIN /usr/lib/squid3/squid_ldap_group -R -b "dc=<domainname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f "(&(objectclass=person)(sAMAccountName=%v) (memberof=cn=%a,cn=internetz,dc=<domainname>,dc=local))" -h 192.168.15.19:3268
        auth_param negotiate program /usr/lib/squid3/squid_kerb_auth -d
        auth_param negotiate children 10
        auth_param negotiate keep_alive on
        acl localnet proxy_auth REQUIRED
        acl InetAccess external InetGroup Internetz
        http_access allow InetAccess
        http_access deny all
        acl auth proxy_auth REQUIRED
        http_access allow auth

    And something very suspicious: after adding the proxy server to the domain, I see two new computer entries in AD, one with the original computer name, leopoldine, and one named "leopoldine CNF:f8efa4c4-ff0e-4217-939d-f1523b43464d"?!

    I have tried a lot, really... but I'm stuck on this problem. I even reinstalled all the dependent programs and reconfigured them from defaults. The group exists and has me in it. Firefox runs against the old proxy, and I use IE for testing the new one, but I get Access Denied every time. To be honest, I'm quite a beginner, so please don't be too harsh. I'm interested in improving, and I'll gather whatever information we need to fix this, but I only started working 2 months ago, with 1 1/2 years' training and not a single second of it in Linux ;)
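
    A hedged debugging sketch for setups like this (standard Samba/winbind tools; the user name is a placeholder, and the privileged-pipe path varies by distro): verify that winbind can authenticate outside of squid first, and check the well-known pitfall that squid's effective user needs access to winbind's privileged pipe:

        # does winbind trust the domain at all?
        wbinfo -t
        wbinfo -u | head

        # test the exact helper squid runs, outside of squid
        /usr/bin/ntlm_auth --username=someuser --domain=<DOMAINNAME>
        # type the password; NT_STATUS_OK means winbind auth works

        # squid's user must be able to read the privileged pipe directory
        # (often /var/run/samba/winbindd_privileged on Debian)
        ls -ld /var/run/samba/winbindd_privileged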

    Read the article

  • LXC Container Networking

    - by digitaladdictions
    I just started to experiment with LXC containers. I was able to create a container and start it up, but I cannot get DHCP to assign the container an IP address. If I assign a static address, the container can ping the host IP but nothing beyond the host. The host is CentOS 6.5 and the guest is Ubuntu 14.04 LTS. I used the template downloaded by the lxc-create -t download -n cn-01 command. If I am trying to get an IP address on the same subnet as the host, I don't believe I should need the iptables rule for masquerading, but I added it anyway. Same with IP forwarding. I compiled LXC by hand from the following source: https://linuxcontainers.org/downloads/lxc-1.0.4.tar.gz

    Host operating system version:

        #> cat /etc/redhat-release
        CentOS release 6.5 (Final)
        #> uname -a
        Linux localhost.localdomain 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    Container config:

        #> cat /usr/local/var/lib/lxc/cn-01/config
        # Template used to create this container: /usr/local/share/lxc/templates/lxc-download
        # Parameters passed to the template:
        # For additional config options, please look at lxc.container.conf(5)
        # Distribution configuration
        lxc.include = /usr/local/share/lxc/config/ubuntu.common.conf
        lxc.arch = x86_64
        # Container specific configuration
        lxc.rootfs = /usr/local/var/lib/lxc/cn-01/rootfs
        lxc.utsname = cn-01
        # Network configuration
        lxc.network.type = veth
        lxc.network.flags = up
        lxc.network.link = br0

    LXC default.conf:

        #> cat /usr/local/etc/lxc/default.conf
        lxc.network.type = veth
        lxc.network.link = br0
        lxc.network.flags = up

        #> lxc-checkconfig
        Kernel configuration not found at /proc/config.gz; searching...
        Kernel configuration found at /boot/config-2.6.32-431.20.3.el6.x86_64
        --- Namespaces ---
        Namespaces: enabled
        Utsname namespace: enabled
        Ipc namespace: enabled
        Pid namespace: enabled
        User namespace: enabled
        Network namespace: enabled
        Multiple /dev/pts instances: enabled
        --- Control groups ---
        Cgroup: enabled
        Cgroup namespace: enabled
        Cgroup device: enabled
        Cgroup sched: enabled
        Cgroup cpu account: enabled
        Cgroup memory controller: /usr/local/bin/lxc-checkconfig: line 103: [: too many arguments
        enabled
        Cgroup cpuset: enabled
        --- Misc ---
        Veth pair device: enabled
        Macvlan: enabled
        Vlan: enabled
        File capabilities: /usr/local/bin/lxc-checkconfig: line 118: [: -gt: unary operator expected
        Note : Before booting a new kernel, you can check its configuration
        usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig

    Network config (host):

        #> cat /etc/sysconfig/network-scripts/ifcfg-br0
        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=dhcp
        ONBOOT=yes

        #> cat /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        ONBOOT=yes
        TYPE=Ethernet
        IPV6INIT=no
        USERCTL=no
        BRIDGE=br0

        #> cat /etc/networks
        default 0.0.0.0
        loopback 127.0.0.0
        link-local 169.254.0.0

        #> ip a s
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever
        3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 42:7e:43:b3:61:c5 brd ff:ff:ff:ff:ff:ff
        4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff
            inet 10.60.70.121/24 brd 10.60.70.255 scope global br0
            inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever
        12: vethT6BGL2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether fe:a1:69:af:50:17 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::fca1:69ff:feaf:5017/64 scope link valid_lft forever preferred_lft forever

        #> brctl show
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.000c291230f2       no              eth0
                                                                vethT6BGL2
        pan0            8000.000000000000       no

        #> cat /proc/sys/net/ipv4/ip_forward
        1

        # Generated by iptables-save v1.4.7 on Fri Jul 11 15:11:36 2014
        *nat
        :PREROUTING ACCEPT [34:6287]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT
        # Completed on Fri Jul 11 15:11:36 2014

    Network config (container):

        #> cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).
        # The loopback network interface
        auto lo
        iface lo inet loopback
        auto eth0
        iface eth0 inet dhcp

        #> ip a s
        11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 02:69:fb:42:ee:d7 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::69:fbff:fe42:eed7/64 scope link valid_lft forever preferred_lft forever
        13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host valid_lft forever preferred_lft forever
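
    A hedged diagnostic sketch for the DHCP failure (standard tools; the interface names match the output above): watch the bridge for the container's DHCP broadcasts, and check whether bridged frames are being passed through iptables, which can silently drop them:

        # on the host: do the container's DHCPDISCOVER packets reach br0?
        tcpdump -ni br0 port 67 or port 68

        # bridged traffic is run through iptables when this is 1
        sysctl net.bridge.bridge-nf-call-iptables

        # in the container: retry the lease while tcpdump is running
        dhclient -v eth0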

    Read the article

  • Exchange ActiveSync Exception

    - by Dmeglio
    One of the users on my network is having an issue with his iPhone syncing via ActiveSync. Overall it's working, but every now and then he gets a "Synchronization with your iPhone failed for 3 items." I asked him to go into OWA and turn on the Mobile Phone logging. I looked through the logs and this is what stood out to me:

        SyncCommand_GenerateResponsesXmlNode_AddChange_Exception : Microsoft.Exchange.Data.Storage.PropertyErrorException: Property: [{00062008-0000-0000-c000-000000000046}:0x8501] ReminderMinutesBeforeStartInternal, PropertyErrorCode: NotFound, PropertyErrorDescription: .
        at Microsoft.Exchange.Data.Storage.PropertyBag.ThrowIfPropertyError(StorePropertyDefinition propertyDefinition, Object propertyValue)
        at Microsoft.Exchange.Data.Storage.StoreObject.GetProperty(PropertyDefinition propertyDefinition)
        at Microsoft.Exchange.Data.Storage.MeetingMessage.get_Item(PropertyDefinition propertyDefinition)
        at Microsoft.Exchange.AirSync.SchemaConverter.XSO.XsoMeetingRequestProperty.get_NestedData()
        at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncMeetingRequestProperty.InternalCopyFrom(IProperty srcProperty)
        at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncProperty.CopyFrom(IProperty srcProperty)
        at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncDataObject.CopyFrom(IProperty srcRootProperty)
        at Microsoft.Exchange.AirSync.SyncCollection.ConvertServerToClientObject(ISyncItem syncItem, XmlNode airSyncParentNode, SyncOperation changeObject)
        at Microsoft.Exchange.AirSync.SyncCollection.GenerateCommandsXmlNode(XmlDocument xmlResponse, IAirSyncVersionFactory versionFactory, String deviceType, ProtocolLogger protocolLogger, MailboxLogger mailboxLogger)

    Does anyone have any idea what might cause this? We have 4 iPhone users connected to our Exchange via ActiveSync. Right now, this seems to be the only user experiencing this issue. I'd appreciate any help anyone can provide. Thanks.

    Read the article

  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get these error messages on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message is the kernel's way of informing me that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state? I certainly expect the former to be the case: if the disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption, then any of these errors is really bad news. Please advise what to do here.
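
    For what it's worth, a hedged aside: the message itself is informational (the task is sleeping uninterruptibly, not being killed), and one commonly suggested mitigation, assuming the root cause is slow storage rather than a bug, is to shrink the page-cache writeback thresholds so flushing starts earlier and each stall is shorter. The values below are examples only:

        # inspect the current writeback thresholds
        sysctl vm.dirty_background_ratio vm.dirty_ratio

        # start background flushing earlier, and throttle writers sooner
        sysctl -w vm.dirty_background_ratio=5
        sysctl -w vm.dirty_ratio=10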

    Read the article

  • sql 2008 disk layout on a budget this is for database mirroring

    - by user22215
    Guys, I'm rolling out a SQL database server that will be used to back SharePoint 2007, and right now I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will be using a combination of local storage and SAN storage. I have to work with what I have: the organization is currently all allocated on SAN storage, and it was like pulling teeth to even get what I have to work with now. My disk setup on the principal is as follows:

        RAID 1 for the OS
        RAID 10 for logs
        RAID 10 (fiber, on the SAN) for high-I/O databases
        RAID 10 (SATA, on the SAN) for content databases

    My question in regards to the principal server is: where should I place tempdb? I thought about placing it on the fiber RAID 10, which will be hosting my high-I/O SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I'm sure you guys will be against. Now let's talk about the mirror server: it is not connected to the SAN; it is all local, six 15k SAS drives. My question is the same: do I put tempdb on the OS partition, or do I leave the OS partition alone and use a single RAID 10 for everything? Any help you can provide is much appreciated.
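
    As an aside, whichever volume wins, a hedged sketch of how tempdb gets relocated (standard ALTER DATABASE syntax; the drive letter and paths are placeholders, and the logical names tempdev/templog are the defaults but should be confirmed in sys.master_files):

        ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');
        -- the new paths take effect after the SQL Server service restarts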

    Read the article

  • Windows 7 home backup solution, with offsite provision

    - by Richard E
    I am looking for a home backup solution for my single Windows 7 (Home Premium) PC. I have about 500GB of data to backup. I would like to spend less than GBP 300 on the solution. I don't see the need to backup the whole PC, rather specific folder branches (iTunes, photos, documents, Outlook files, user folders such as desktop, favorites etc). I would like a solution that enables me to maintain backups in two separate physical locations (e.g. home and work). To facilitate this I am imagining a storage unit with slots for two removable drives, along with three separate drives. At any one time two of the drives will be being backed up to in the storage unit. The third will be located at my work. Periodically I will take one of the drives into work and leave it there, then bring the drive that was there back home, and plug it into the storage unit. It will then be backed up along with the other drive that was left in the storage unit. This approach should cover scenarios such as virus attack and fire or theft from one location. Thoughts and comments on the sanity of this approach please...

    Read the article

  • Can't mount FAT32 drive under Ubuntu Linux

    - by Josh
    I have a 320GB USB drive with a single large FAT32 partition. The volume mounts perfectly fine on my Mac OS X 10.5.8 machine, and Disk Utility on the Mac reports no issues with the volume. I can read/write all data on the drive. However, when I connect the drive to my Ubuntu 9.10 Karmic system, the partition does not mount. dmesg|tail says:

        [ 2752.334822] scsi3 : SCSI emulation for USB Mass Storage devices
        [ 2752.335040] usb-storage: device found at 3
        [ 2752.335044] usb-storage: waiting for device to settle before scanning
        [ 2757.330301] usb-storage: device scan complete
        [ 2757.331005] scsi 3:0:0:0: Direct-Access WD 3200AAK External 1.65 PQ: 0 ANSI: 0
        [ 2757.331772] sd 3:0:0:0: Attached scsi generic sg2 type 0
        [ 2757.355647] sd 3:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB)
        [ 2757.360737] sd 3:0:0:0: [sdb] Write Protect is off
        [ 2757.360749] sd 3:0:0:0: [sdb] Mode Sense: 00 00 00 00
        [ 2757.360755] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367618] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367631] sdb: sdb1
        [ 2762.797622] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2762.797636] sd 3:0:0:0: [sdb] Attached SCSI disk
        [ 2822.866228] FAT: bogus number of reserved sectors
        [ 2822.866237] VFS: Can't find a valid FAT filesystem on dev sdb1.

    When I run fsck.vfat -a /dev/sdb1 I get:

        root@cartman:~# fsck.vfat -a /dev/sdb1
        dosfsck 3.0.3, 18 May 2009, FAT32, LFN
        Logical sector size is zero.

    Googling "vfat Logical sector size is zero" produced no consensus as to the solution. I would prefer not to have to completely reformat the disk if possible, because it contains about 280GB of data I would rather not have to find a temporary home for. Any suggestions?
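
    A hedged first diagnostic, assuming the standard FAT32 layout: the logical sector size lives in the two little-endian bytes at offset 0x0B of the partition's boot sector, so dumping that sector shows whether Linux and dosfsck are seeing a sane BPB at all. This is read-only, so it is safe on the live data:

        # dump the start of the partition's boot sector
        sudo dd if=/dev/sdb1 bs=512 count=1 2>/dev/null | hexdump -C | head -4
        # bytes 0x0B-0x0C should read 00 02 (512); 00 00 would match the
        # "Logical sector size is zero" complaint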

    Read the article

  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue. I have the following server:

        Intel(R) Xeon(TM) MP CPU 3.16GHz

        cat /proc/cpuinfo | grep proce | wc -l
        8

        free -m
                     total       used       free     shared    buffers     cached
        Mem:         28203      27606        596          0      10789       9714
        -/+ buffers/cache:       7103      21100
        Swap:        24695          0      24695

    RAID card:

        *-storage
        description: RAID bus controller
        product: MegaRAID
        vendor: LSI Logic / Symbios Logic
        physical id: 7
        bus info: pci@0000:13:07.0
        logical name: scsi2
        version: 01
        width: 32 bits
        clock: 66MHz
        capabilities: storage pm bus_master cap_list rom
        configuration: driver=megaraid latency=32
        resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

    HDD: 10x148GB SCSI U320 15k in RAID 5:

        /dev/sdb1 807G 674G 93G 88% /storage
        /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

    Network cards:

        ethtool -i eth0
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ethtool -i eth1
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ifconfig bond0
        bond0   Link encap:Ethernet HWaddr 00:0f:1f:ff:d6:4d
                inet addr:192.168.15.71 Bcast:192.168.15.255 Mask:255.255.255.0
                inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
                UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
                RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
                TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:10000
                RX bytes:258867684559 (241.0 GiB) TX bytes:396569192650 (369.3 GiB)

    This server runs only nfs-kernel-server, on Debian 6:

        uname -a
        Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    Here is what happens: once every day or two, the load average goes up; it can reach around LA: 40. But if I do nfs-kernel-server restart, everything is OK. Then the next day, or a little bit later, the load average goes up again. The servers are connected to a D-Link DGS-1016D with 24 Gbit ports. I have tried everything to find out what the problem is and why it's happening, but I still cannot resolve this issue. Any ideas on what is happening here?
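
    A hedged place to start when the load spikes (standard nfs-utils and procfs interfaces): check whether all nfsd threads are saturated, since a too-small thread pool is a classic cause of periodic NFS server load spikes that a restart temporarily clears:

        # per-op server statistics; capture during a spike and compare
        nfsstat -s

        # the "th" line includes a busy-time histogram; heavy counts in
        # the last buckets mean all nfsd threads are busy
        cat /proc/net/rpc/nfsd

        # on Debian, the thread count is RPCNFSDCOUNT in
        # /etc/default/nfs-kernel-server (default 8)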

    Read the article

  • Exchange DiskShadow/Robocopy backup does not purge log files

    - by Robert Allan Hennigan Leahy
    I have a series of scripts set up to back up my Exchange. The following command is executed to start the process:

        diskshadow /s C:\Backup_Scripts\exchangeserverbackupscript1.dsh

    This is exchangeserverbackupscript1.dsh:

        #DiskShadow script file
        set verbose on
        #delete shadows all
        set context persistent
        writer verify {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
        set metadata C:\Backup_Scripts\shadowmetadata.cab
        begin backup
        add volume C: alias SH1
        create
        expose %SH1% P:
        exec C:\Backup_Scripts\exchangeserverbackupscript1.cmd
        end backup
        delete shadows exposed P:
        exit
        #End of script

    And this is exchangeserverbackupscript1.cmd:

        robocopy "P:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" "\\leahyfs\J$\E-Mail Backups\Day 1" /MIR /R:0 /W:0 /COPY:DT /B

    This is not causing Exchange to purge its log files. The edb file is 4.7 gigabytes, but the First Storage Group folder itself is 50+ gigabytes due to many, many log files for each day going back to 2009. Is there any way -- I've Googled and haven't found anything -- to notify Exchange when I've completed a full backup, and have it purge its log files? According to this and this, end backup should cause Exchange to "flush the transaction logs for that storage group" but only "if a successful backup of a storage group occurred", which leaves my question as: What constitutes a "successful backup", and why is what I'm doing not it?

    Read the article

  • What does this mean: "SATP VMW_SATP_LOCAL does not support device configuration"?

    - by Jason Tan
    Can anyone tell me what this means in ESXi 5.1?

        SATP VMW_SATP_LOCAL does not support device configuration

    I've googled it and I get a lot of results, but as yet all the pages that contain the string are discussing other matters. The storage array is an HDS HUS-VM and the hosts are HP BL460c G8 blades with Flex Fabric and Flex Fabric VCs, which I am in the process of commissioning and would like to get started on the right foot - i.e. error and warning free!

        naa.600508b1001c56ee3d70da65f071da23
           Device Display Name: HP Serial Attached SCSI Disk (naa.600508b1001c56ee3d70da65f071da23)
           Storage Array Type: VMW_SATP_LOCAL
           Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
           Path Selection Policy: VMW_PSP_FIXED
           Path Selection Policy Device Config: {preferred=vmhba0:C0:T0:L1;current=vmhba0:C0:T0:L1}
           Path Selection Policy Device Custom Config:
           Working Paths: vmhba0:C0:T0:L1
           Is Local SAS Device: true
           Is Boot USB Device: false

    This is the same LUN:

        ~ # esxcli storage core device list -d naa.60060e80132757005020275700000016
        naa.60060e80132757005020275700000016
           Display Name: HITACHI Fibre Channel Disk (naa.60060e80132757005020275700000016)
           Has Settable Display Name: true
           Size: 204800
           Device Type: Direct-Access
           Multipath Plugin: NMP
           Devfs Path: /vmfs/devices/disks/naa.60060e80132757005020275700000016
           Vendor: HITACHI
           Model: OPEN-V
           Revision: 5001
           SCSI Level: 2
           Is Pseudo: false
           Status: degraded
           Is RDM Capable: true
           Is Local: false
           Is Removable: false
           Is SSD: false
           Is Offline: false
           Is Perennially Reserved: false
           Queue Full Sample Size: 0
           Queue Full Threshold: 0
           Thin Provisioning Status: unknown
           Attached Filters: VAAI_FILTER
           VAAI Status: supported
           Other UIDs: vml.020001000060060e801327570050202757000000164f50454e2d56
           Is Local SAS Device: false
           Is Boot USB Device: false
        ~ #
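
    For context, a hedged sketch of the usual next commands here (standard esxcli namespaces in ESXi 5.x; the device ID is copied from the output above). The point of listing the claim rules is to see why the device was claimed by VMW_SATP_LOCAL; for a local SAS boot disk that claim is expected, and the "does not support device configuration" line appears to be informational rather than an error:

        # which SATP claim rules exist, and which one matched this device
        esxcli storage nmp satp rule list

        # the full NMP view of the device in question
        esxcli storage nmp device list -d naa.600508b1001c56ee3d70da65f071da23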

    Read the article

  • Cluster FIle System

    - by Ben
    We are looking to choose a clustered file system for our in-house application. Let me first highlight my requirements. We have a storage array and 2 servers at present. We get data files from remote servers onto our servers, and our application runs on both servers to access that data and produce a final result as per our requirements. In maybe 3-4 months we may add more servers to the current cluster pool to handle more data load from the remote senders. So my requirement is to share the same storage partition across 2-3 servers, possibly 4-5 more in future; my application reads data from the storage partition and writes back to it. Is there any bottleneck or limitation in RHCS, GFS2, or anything else? We are new to RHCS + GFS and all of this. Is there a better approach, or some lighter-weight way to deal with our requirement (see the sketch below)? What is the best OS version for this? How about RHEL 6.4 64-bit? Please share case studies or guide references from past experience with such environments. Regards, Ben
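
    For scale, a hedged sketch of what a GFS2 filesystem looks like once the RHCS cluster itself is up (real mkfs.gfs2/gfs2_jadd syntax; the cluster, volume group, and filesystem names are placeholders). The detail that matters for growing from 2-3 nodes to more: GFS2 needs one journal per node that mounts the filesystem, and journals can be added later without reformatting:

        # one journal per mounting node: create 4 for headroom
        mkfs.gfs2 -p lock_dlm -t mycluster:gfs01 -j 4 /dev/clustervg/gfslv

        # later, when adding servers, grow the journal count
        gfs2_jadd -j 2 /mnt/gfs01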

    Read the article

  • Exchange 2007 - Mailbox Database Recovery

    - by Phrontiste
    Exchange 2007 edb: Can we restore an Exchange edb (First Storage Group\mailbox database.edb) to another Exchange server? Do I just copy the edb to the new Exchange server, delete its First Storage Group\mailbox database.edb, and replace it with this one? How can I get all the mailboxes out of that (old) mailbox database.edb? I had an Exchange 2007 server with 10 mailboxes; I have installed Exchange on another machine and was wondering if I can do the above, or whether there is any way I can get all the mailboxes from that edb (old mailbox database) and import them into the new one. I have deleted the old Exchange install I had (these are test machines). What are the steps required to get the DB working on the new machine? Also, I am confused about the Recovery Storage Group: I can mount a mailbox database in a Recovery Storage Group, but when I try to get a mailbox out of it, it won't match anything. Can someone please assist in understanding RSG and how to restore the old mailbox database? Thanks and regards, Phrontiste
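
    A hedged outline of the usual path for this (eseutil and the Exchange 2007 SP1 Restore-Mailbox cmdlet are real; the file, log-prefix, and user names are placeholders): the copied database must be in a clean-shutdown state before it can be mounted anywhere, and RSG contents are matched back to existing AD mailboxes, which would explain why an RSG on a server where the original users' mailboxes no longer exist matches nothing:

        # check the shutdown state of the copied edb
        eseutil /mh "mailbox database.edb"   # look for: State: Clean Shutdown
        # if dirty, run soft recovery from the folder holding the logs
        eseutil /r E00

        # once mounted in an RSG on a server that still has the
        # matching AD mailboxes:
        Restore-Mailbox -Identity someuser -RSGDatabase "RSG\Mailbox Database"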

    Read the article

  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (PDF files). From these PDF files' pages, a preview image is made on the fly and presented to the web app's users. Some PDFs might be on the large side; most will be under 50 MB, but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image for large PDF files is acceptable, but no more than a minute, let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power. My idea to solve the problem: to deal with this situation, I had the idea of having one or more PDF processing servers as needed, and one or more file storage servers. These two types of servers are mounted on the server on which the actual app runs using NFS. The app could then use Gearman to delegate PDF processing tasks to these processing servers. A processing server can mount the storage server, read the file stored there, process it, and write its output back to that server. The servers I'm talking about will be Amazon EC2 instances. The web app returns a link to the resulting PDF preview image on the storage server that was used, which can then be used on the front end to show the image to the user. My question: I have zero experience with apps that use multiple servers. Is this idea viable, or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
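
    To make the topology concrete, a hedged sketch of the mounts involved (standard NFS fstab syntax; the hostnames and paths are invented for illustration): each app and processing server mounts the storage server's exports, so every node sees the same file paths:

        # on the app server and each processing server, in /etc/fstab:
        storage01:/export/uploads   /mnt/uploads   nfs  rw,hard,intr  0 0
        storage01:/export/previews  /mnt/previews  nfs  rw,hard,intr  0 0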

    Read the article

  • What is the best cloud technology to use for MongoDB/GridFS database servers

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB for file storage per paid user. I am going to use GridFS for storing files. GridFS is a mechanism in MongoDB that allows storing large files in the database. I am pondering the different options for hosting the database, but since I am inexperienced at deployment and it is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer. I do not like to mess with server configuration; hence, I would like a fully managed hosting solution. But I would like to know about any other option, if you think it is worth it. It should be able to scale, cloud style: pay as you go. The lower the price, the better. So far I know of these services:

    https://mongohq.com/pricing
    https://mongomachine.com/pricing
    https://mongolab.com/about/pricing/
    http://cloudcontrol.com/add-ons/mongodb/

    And they seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so the size matters. These services seem to scale, in price, quite poorly:

    MongoHQ: the largest plan's max storage is 20 GB. Seems like very little storage for GridFS.
    MongoMachine: flat price, $2.5 per GB. I didn't find the limit. Seems like a good price compared to the others.
    MongoLab: 3.984 GB max, which I don't think I will hit, so perfect. $8 per GB, quite costly.
    CloudControl: the largest plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB.

    What is your experience with these services? Any downtimes? Other possibilities?

    Edit: Added meaning of GridFS
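
    As a sanity check on those per-GB numbers, it may help to remember what GridFS actually stores. A hedged sketch with the stock mongofiles tool (ships with MongoDB; the database and file names are placeholders) shows files landing in the ordinary fs.files/fs.chunks collections, which count toward a plan's storage quota like any other collection:

        # store and list a file via GridFS
        mongofiles -d userfiles put report.pdf
        mongofiles -d userfiles list

        # the data ends up in regular collections:
        #   fs.files  (metadata),  fs.chunks (~256 KB chunks)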

    Read the article

  • external drive and CentOS - Reset high speed USB device number

    - by Phil
    I have 2 external drives (3TB) and both will not work with my CentOS box, though both tested fine in Windows on a different machine. Kernel: 2.6.32-279.9.1.el6.i686. dmesg reports:

        usb 2-2: new high speed USB device number 3 using ehci_hcd
        usb 2-2: New USB device found, idVendor=2109, idProduct=0700
        usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        usb 2-2: Product: USB 3.0 SATA Bridge
        usb 2-2: Manufacturer: VIA Labs, Inc.
        usb 2-2: SerialNumber: 0000000000006121
        usb 2-2: configuration #1 chosen from 1 choice
        scsi6 : SCSI emulation for USB Mass Storage devices
        usb-storage: device found at 3
        usb-storage: waiting for device to settle before scanning
        usb-storage: device scan complete
        scsi 6:0:0:0: Direct-Access ST3000DM 001-9YN166 CC4B PQ: 0 ANSI: 2
        sd 6:0:0:0: Attached scsi generic sg3 type 0
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] 5860533165 512-byte logical blocks: (3.00 TB/2.72 TiB)
        sd 6:0:0:0: [sdd] Write Protect is off
        sd 6:0:0:0: [sdd] Mode Sense: 00 06 00 00
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sdd: sdd1
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sd 6:0:0:0: [sdd] Attached SCSI disk

    Trying to use cfdisk / fdisk / gdisk, or even fdisk -l, results in the program hanging, and dmesg reports:

        usb 2-2: reset high speed USB device number 3 using ehci_hcd
        usb 2-2: reset high speed USB device number 3 using ehci_hcd
        usb 2-2: reset high speed USB device number 3 using ehci_hcd

    I have the same 2 drives physically installed in the computer via SATA. Any ideas?

    Read the article

  • Best format for hard drive for Windows and Mac?

    - by Neil
    I have a 500 GB USB external hard drive. I need four partitions on it, for the following purposes:

    160 GB for a bootable backup of my Mac.
    160 GB for a bootable backup of my Windows.
    11 GB for a bootable Snow Leopard install disk.
    The rest for file storage.

    Now I need a partition table that will be recognised on both Windows and Mac, without needing extra software on Windows, which will let me keep bootable copies of both OSes but let me access the file storage from both OSes. Currently, I have a GUID Partition Table (GPT), with Mac OS Extended (Journaled) partitions for the two backups, Mac OS Extended for the install disk, and NTFS for the file storage. While this gets recognised perfectly on my Mac, thanks to an NTFS-for-Mac driver from Paragon, when connected to Windows the drive is detected by the machine (listed in Safely Remove USB) but not recognised in Windows Explorer unless I install MacDrive, which is not feasible for me to install on public Windows machines I might want to access my storage area on. Can someone recommend the best combination of formats and software/drivers to get this done seamlessly?

    Read the article

  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
    When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection.  I’ve demonstrated how this can be parallelized using the Task Parallel Library and imperative programming using imperative data parallelism via the Parallel class.  While this provides a huge step forward in terms of power and capabilities, in many cases, special care must still be given for relative common scenarios. C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project.  When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task.  By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain.  Version 4.0 of the .NET framework extends this concept into the parallel computation space by introducing Parallel LINQ. Before we delve into PLINQ, let’s begin with a short discussion of LINQ.  LINQ, the extensions to the .NET Framework which implement language integrated query, set, and transform operations, is implemented in many flavors.  For our purposes, we are interested in LINQ to Objects.  When dealing with parallelizing a routine, we typically are dealing with in-memory data storage.  More data-access oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework fall outside of our concern, since the parallelism there is the concern of the data base engine processing the query itself. LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>.  The language enhancements use these extension methods to create a very concise, readable alternative to using traditional foreach statement.  For example, let’s revisit our minimum aggregation routine we wrote in Part 4: double min = double.MaxValue; foreach(var item in collection) { double value = item.PerformComputation(); min = System.Math.Min(min, value); } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } Here, we’re doing a very simple computation, but writing this in an imperative style.  This can be loosely translated to English as: Create a very large number, and save it in min Loop through each item in the collection. For every item: Perform some computation, and save the result If the computation is less than min, set min to the computation Although this is fairly easy to follow, it’s quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer. We can rework this same statement, using LINQ: double min = collection.Min(item => item.PerformComputation()); Here, we’re after the same information.  However, this is written using a declarative programming style.  
    When we see this code, we'd naturally translate it to English as:

      - Save the Min value of collection, determined via calling item.PerformComputation()

    That's it: instead of multiple logical steps, we have one single, declarative request.  This makes the developer's intentions very clear, and very easy to follow.  The system is free to implement this using whatever method is required.

    Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations.  This is a perfect fit in many cases when you have a problem that can be decomposed by data.  To show this, let's again refer to our minimum aggregation routine from Part 4, but this time, let's review our final, parallelized version:

        // Safe, and fast!
        double min = double.MaxValue;

        // Make a "lock" object
        object syncObject = new object();

        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock (syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Here, we're doing the same computation as above, but fully parallelized.  Describing this in English becomes quite a feat:

      - Create a very large number, and save it in min
      - Create a temporary object we can use for locking
      - Call Parallel.ForEach, specifying three delegates
        - For the first delegate: initialize a local variable to hold the local state to a very large number
        - For the second delegate: for each item in the collection, perform some computation and save the result; if the result is less than our local state, save the result in local state
        - For the final delegate: take a lock on our temporary object to protect our min variable, then save the min of our min and local state variables

    Although this solves our problem, and does it in a very efficient way, we've created a set of code that is quite a bit more difficult to understand and maintain.

    PLINQ provides us with a very nice alternative.  In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T>: ParallelEnumerable.AsParallel().  That's all we need to learn in order to use PLINQ: one single method.  We can write our minimum aggregation in PLINQ very simply:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    By simply adding ".AsParallel()" to our LINQ to Objects query, we've converted it to PLINQ, and the computation now runs in parallel!  This can be loosely translated into English easily, as well:

      - Process the collection in parallel
      - Get the Minimum value, determined by calling PerformComputation on each item

    Here, our intention is very clear and easy to understand.  We just want to perform the same operation we did in serial, but run it "as parallel".  PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available.  By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel.  This is simple, safe, and incredibly useful.
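    As a rough illustration of the difference, here is a minimal, self-contained sketch that runs the serial LINQ query and its PLINQ counterpart over the same data.  Note that the Item class, its PerformComputation implementation, and the collection below are hypothetical stand-ins invented for this sketch; the series' actual types differ:

        using System;
        using System.Linq;

        class Item
        {
            private readonly double _seed;

            public Item(double seed)
            {
                _seed = seed;
            }

            // Hypothetical stand-in for the expensive, CPU-bound
            // computation used throughout this series.
            public double PerformComputation()
            {
                double result = _seed;
                for (int i = 1; i <= 1000; i++)
                    result = Math.Sqrt(result * i + 1.0);
                return result;
            }
        }

        class Program
        {
            static void Main()
            {
                // An in-memory collection, since PLINQ targets LINQ to Objects.
                Item[] collection = Enumerable.Range(1, 100000)
                                              .Select(i => new Item(i))
                                              .ToArray();

                // Serial LINQ to Objects: declarative, but single-threaded.
                double serialMin = collection.Min(item => item.PerformComputation());

                // PLINQ: the identical query, parallelized by adding AsParallel().
                double parallelMin = collection.AsParallel()
                                               .Min(item => item.PerformComputation());

                Console.WriteLine("Serial min:   " + serialMin);
                Console.WriteLine("Parallel min: " + parallelMin);
            }
        }

    Because Min compares values regardless of processing order, both queries return the same result; on a multi-core machine, the AsParallel() version will typically complete faster, with the actual speedup depending on the per-item cost and the number of cores.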

    Read the article

  • Can't install any drivers at all on Windows 8. Error 0x000003F9

    - by ABarney
    I suddenly can't install any drivers at all on my Windows 8 Pro x64 install.  It doesn't matter what kind of driver it is; nothing will install.  Everything ends with error 0x000003F9: "The system has attempted to load or restore a file into the registry, but the specified file is not in a registry file format."  When Windows Update tries to install a driver, it just gives error code 800703F9 and says that "Windows Update ran into a problem."  I've already run a system file scan with sfc, tried another user account, run chkdsk, and tried a few more things, but nothing works.

    The problem started when I tried to install drivers for my printer earlier today and suddenly started getting messages saying that "Windows Modules Installer has stopped working."  I decided to restart and was greeted with the recovery boot options.  I shut the computer down, but when I booted it back up the same thing happened, so I ran the "Repair your PC" option and was able to boot into the OS properly.  Then I booted from my external drive and ran chkdsk on the Windows 8 install that had started acting funny.  When I booted back into Windows 8, I wasn't able to install any drivers; they all fail with the same error.  I can't seem to find anything at all on this issue.  Any help would be much appreciated.

    Here's an install log from a failed driver install:

    >>> [Device Install (DiInstallDriver) - F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf]
    >>> Section start 2012/12/06 20:15:20.714
    cmd: "F:\Windows\System32\InfDefaultInstall.exe" "F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf"
    inf: {SetupCopyOEMInf: F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf} 20:15:20.716
    sto: {Import Driver Package: F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf} 20:15:20.719
    sto: Driver Store = F:\Windows\System32\DriverStore [Online] (6.2.9200)
    sto: Driver Package = F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf
    sto: Architecture = amd64
    sto: Flags = 0x00000000
    inf: Provider = Google, Inc.
    inf: Class GUID = {3f966bd9-fa04-4ec5-991c-d326973b5128}
    inf: Driver Version = 08/27/2012,7.0.0.1
    inf: Catalog File = androidwinusba64.cat
    inf: Version Flags = 0x00000011
    ! sto: Unable to determine presence of driver package 'android_winusb.inf'. Error = 0x000003F9
    flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\amd64\WdfCoInstaller01009.dll' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WdfCoInstaller01009.dll'.
    flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\amd64\WinUSBCoInstaller2.dll' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WinUSBCoInstaller2.dll'.
    flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf'.
    flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\androidwinusba64.cat' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\androidwinusba64.cat'.
    pol: {Driver package policy check} 20:15:20.814
    pol: {Driver package policy check - exit(0x00000000)} 20:15:20.814
    sto: {Stage Driver Package: F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf} 20:15:20.815
    ! sto: Unable to determine presence of driver package 'android_winusb.inf'. Error = 0x000003F9
    inf: {Query Configurability: F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf} 20:15:20.820
    inf: Driver package uses WDF.
    inf: Driver package 'android_winusb.inf' is configurable.
    inf: {Query Configurability: exit(0x00000000)} 20:15:20.823
    flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WdfCoInstaller01009.dll' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\amd64\WdfCoInstaller01009.dll'.
    flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WinUSBCoInstaller2.dll' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\amd64\WinUSBCoInstaller2.dll'.
    flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\android_winusb.inf'.
    flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\androidwinusba64.cat' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\androidwinusba64.cat'.
    sto: {DRIVERSTORE IMPORT VALIDATE} 20:15:20.875
    sig: {_VERIFY_FILE_SIGNATURE} 20:15:20.881
    sig: Key = android_winusb.inf
    sig: FilePath = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\android_winusb.inf
    sig: Catalog = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\androidwinusba64.cat
    ! sig: Verifying file against specific (valid) catalog failed! (0x800b0109)
    ! sig: Error 0x800b0109: A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.
    sig: {_VERIFY_FILE_SIGNATURE exit(0x800b0109)} 20:15:20.893
    sig: {_VERIFY_FILE_SIGNATURE} 20:15:20.893
    sig: Key = android_winusb.inf
    sig: FilePath = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\android_winusb.inf
    sig: Catalog = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\androidwinusba64.cat
    sig: Success: File is signed in Authenticode(tm) catalog.
    sig: Error 0xe0000242: The publisher of an Authenticode(tm) signed catalog has not yet been established as trusted.
    sig: {_VERIFY_FILE_SIGNATURE exit(0xe0000242)} 20:15:20.907
    ! sig: Driver package signer is unknown, but user trusts signer.
    sto: {DRIVERSTORE IMPORT VALIDATE: exit(0x00000000)} 20:15:22.701
    sig: Signer Score = 0x0F000000
    sig: Signer Name = Google Inc
    sto: {DRIVERSTORE IMPORT BEGIN} 20:15:22.702
    sto: {DRIVERSTORE IMPORT BEGIN: exit(0x00000000)} 20:15:22.702
    cpy: {Copy Directory: F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}} 20:15:22.703
    cpy: Target Path = F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3
    cpy: {Copy Directory: F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\amd64} 20:15:22.704
    cpy: Target Path = F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3\amd64
    cpy: {Copy Directory: exit(0x00000000)} 20:15:22.705
    cpy: {Copy Directory: exit(0x00000000)} 20:15:22.706
    ! sto: Unable to determine if driver package 'android_winusb.inf' is already registered. Error = 0x000003F9
    idb: {Register Driver Package: F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3\android_winusb.inf} 20:15:22.707
    !!! idb: Failed to create driver package object 'android_winusb.inf_amd64_f7c4b212c9d862a3' in DRIVERS database node. Error = 0x000003F9
    !!! idb: Failed to register driver package 'F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3\android_winusb.inf'. Error = 0x000003F9
    idb: {Register Driver Package: exit(0x000003f9)} 20:15:22.709
    sto: {DRIVERSTORE IMPORT END} 20:15:22.710
    sto: {DRIVERSTORE IMPORT END: exit(0x000003f9)} 20:15:22.710
    sto: Rolled back driver package import.
    !!! sto: Failed to import driver package into Driver Store. Error = 0x000003F9
    sto: {Stage Driver Package: exit(0x000003f9)} 20:15:22.736
    sto: {Import Driver Package: exit(0x000003f9)} 20:15:22.766

    Read the article
