Search Results

Search found 2041 results on 82 pages for 'distribution'.


  • Can't add client machine to Windows Server 2008 domain controller

    - by Patrick J Collins
    A bit of background before I dive into the gritty details: I have a single server running Windows 2003 Server where I host my ASP.NET website and SQL Server + Reports. I've been creating ordinary Windows user accounts to authenticate my users, and I enabled integrated Windows authentication with impersonation. I've set up a bunch of user groups which correspond to certain roles (admin, power user, normal user, etc.) and I test membership to enable or disable certain features. Overall, I'm pretty happy with the solution: it was quick to set up and I don't have to worry about messing around with storing passwords and whatnot.

    Well, what I'm trying to do now is set up a new environment with 3 servers (Web, SQL, Reports) and I'd like these three servers to share common user accounts. I understand that I could add these three machines to a domain, which means installing Active Directory on one of the machines. Am I barking up the wrong tree here? Would you suggest an alternative configuration?

    Assuming that I stick with AD, I have a couple of questions regarding DNS. To be honest, I'd rather not fiddle around with the DNS settings because my ISP already has their own DNS server which works just fine. It would appear, however, that DNS and AD are intertwined. Firstly, if I am to create a new domain called mycompany.net, do I actually need to be the registered owner of that domain name and ensure the DNS entry points to the IP address of the machine hosting AD? Secondly, for the two other machines that I am trying to add to the domain, do I need to fiddle with their DNS settings? I've tried setting the preferred DNS server IP address to that of my newly installed AD, but no luck. At this point, I can't add the two other machines to the domain.

    Here are some diagnostics that I have run based on a few suggestions I read on forums (sorry they're in French, although I could translate if needed). I ran nltest, which seems to indicate that the client can discover the domain controller. When I run dcdiag, the call to DsGetDcName fails with error 1722; I'm not really sure what that means. Any suggestions? Thanks!

        C:\Users\Administrator>nltest /dsgetdc:mycompany.net
        Contrôleur de domaine : \\REPORTS.mycompany.net
        Adresse : \\111.111.111.111
        GUID dom : 3333a4ec-ca56-4f02-bb9e-76c29c6c3832
        Nom dom : mycompany.net
        Nom de la forêt : mycompany.net
        Nom de site du contrôleur de domaine : Default-First-Site-Name
        Nom de notre site : Default-First-Site-Name
        Indicateurs : PDC GC DS LDAP KDC TIMESERV WRITABLE DNS_DC DNS_DOMAIN DNS_FOREST CLOSE_SITE FULL_SECRET
        La commande a été correctement exécutée

    C:\Users\Administrator>dcdiag /s:mycompany.net /u: mycompany.net \pcollins /p:somepass Diagnostic du serveur d'annuaire Exécution de l'installation initiale : * Forêt AD identifiée. Collecte des informations initiales terminée. Exécution des tests initiaux nécessaires Test du serveur : Default-First-Site-Name\REPORTS Démarrage du test : Connectivity ......................... Le test Connectivity de REPORTS a réussi Exécution des tests principaux Test du serveur : Default-First-Site-Name\REPORTS Démarrage du test : Advertising Erreur irrécupérable : l'appel DsGetDcName (REPORTS) a échoué ; erreur 1722 Le localisateur n'a pas pu trouver le serveur. ......................... Le test Advertising de REPORTS a échoué Démarrage du test : FrsEvent Impossible d'interroger le journal des événements File Replication Service sur le serveur REPORTS.mycompany.net. Erreur 0x6ba « Le serveur RPC n'est pas disponible. 
» ......................... Le test FrsEvent de REPORTS a échoué Démarrage du test : DFSREvent Impossible d'interroger le journal des événements DFS Replication sur le serveur REPORTS.mycompany.net. Erreur 0x6ba « Le serveur RPC n'est pas disponible. » ......................... Le test DFSREvent de REPORTS a échoué Démarrage du test : SysVolCheck [REPORTS] Une opération net use ou LsaPolicy a échoué avec l'erreur 53, Le chemin réseau n'a pas été trouvé.. ......................... Le test SysVolCheck de REPORTS a échoué Démarrage du test : KccEvent Impossible d'interroger le journal des événements Directory Service sur le serveur REPORTS.mycompany.net. Erreur 0x6ba « Le serveur RPC n'est pas disponible. » ......................... Le test KccEvent de REPORTS a échoué Démarrage du test : KnowsOfRoleHolders ......................... Le test KnowsOfRoleHolders de REPORTS a réussi Démarrage du test : MachineAccount Impossible d'ouvrir le canal avec [REPORTS] : échec avec l'erreur 53 : Le chemin réseau n'a pas été trouvé. Impossible d'obtenir le nom de domaine NetBIOS Échec : impossible de tester le nom principal de service (SPN) HOST Échec : impossible de tester le nom principal de service (SPN) HOST ......................... Le test MachineAccount de REPORTS a réussi Démarrage du test : NCSecDesc ......................... Le test NCSecDesc de REPORTS a réussi Démarrage du test : NetLogons [REPORTS] Une opération net use ou LsaPolicy a échoué avec l'erreur 53, Le chemin réseau n'a pas été trouvé.. ......................... Le test NetLogons de REPORTS a échoué Démarrage du test : ObjectsReplicated ......................... Le test ObjectsReplicated de REPORTS a réussi Démarrage du test : Replications ......................... Le test Replications de REPORTS a réussi Démarrage du test : RidManager ......................... Le test RidManager de REPORTS a réussi Démarrage du test : Services Impossible d'ouvrir IPC distant à [REPORTS.mycompany.net] : erreur 0x35 « Le chemin réseau n'a pas été trouvé. » ......................... Le test Services de REPORTS a échoué Démarrage du test : SystemLog Impossible d'interroger le journal des événements System sur le serveur REPORTS.mycompany.net. Erreur 0x6ba « Le serveur RPC n'est pas disponible. » ......................... Le test SystemLog de REPORTS a échoué Démarrage du test : VerifyReferences ......................... Le test VerifyReferences de REPORTS a réussi Exécution de tests de partitions sur ForestDnsZones Démarrage du test : CheckSDRefDom ......................... Le test CheckSDRefDom de ForestDnsZones a réussi Démarrage du test : CrossRefValidation ......................... Le test CrossRefValidation de ForestDnsZones a réussi Exécution de tests de partitions sur DomainDnsZones Démarrage du test : CheckSDRefDom ......................... Le test CheckSDRefDom de DomainDnsZones a réussi Démarrage du test : CrossRefValidation ......................... Le test CrossRefValidation de DomainDnsZones a réussi Exécution de tests de partitions sur Schema Démarrage du test : CheckSDRefDom ......................... Le test CheckSDRefDom de Schema a réussi Démarrage du test : CrossRefValidation ......................... Le test CrossRefValidation de Schema a réussi Exécution de tests de partitions sur Configuration Démarrage du test : CheckSDRefDom ......................... Le test CheckSDRefDom de Configuration a réussi Démarrage du test : CrossRefValidation ......................... 
Le test CrossRefValidation de Configuration a réussi Exécution de tests de partitions sur mycompany Démarrage du test : CheckSDRefDom ......................... Le test CheckSDRefDom de mycompany a réussi Démarrage du test : CrossRefValidation ......................... Le test CrossRefValidation de mycompany a réussi Exécution de tests d'entreprise sur mycompany.net Démarrage du test : LocatorCheck Avertissement : l'appel DcGetDcName(GC_SERVER_REQUIRED) a échoué ; erreur 1722 Serveur de catalogue global introuvable - Les catalogues globaux ne fonctionnent pas. Avertissement : l'appel DcGetDcName(PDC_REQUIRED) a échoué ; erreur 1722 Contrôleur principal de domaine introuvable. Le serveur contenant le rôle PDC ne fonctionne pas. Avertissement : l'appel DcGetDcName(TIME_SERVER) a échoué ; erreur 1722 Serveur de temps introuvable. Le serveur contenant le rôle PDC ne fonctionne pas. Avertissement : l'appel DcGetDcName(GOOD_TIME_SERVER_PREFERRED) a échoué ; erreur 1722 Serveur de temps introuvable. Avertissement : l'appel DcGetDcName(KDC_REQUIRED) a échoué ; erreur 1722 Centre de distribution de clés introuvable : les centres de distribution de clés ne fonctionnent pas. ......................... Le test LocatorCheck de mycompany.net a échoué Démarrage du test : Intersite ......................... Le test Intersite de mycompany.net a réussi Update 1 : I am under the distinct impression that the problem is caused by some security settings. I have read elsewhere that the client needs to be able to access the fileshare sysvol. I had to enable Client for Microsoft Windows and File and Printer Sharing which were previously disabled. When I now run dcdiag the Advertising test works, which I suppose is forward progress. It currently chokes on the Services step (unable to open remote IPC). Démarrage du test : Services Impossible d'ouvrir IPC distant à [REPORTS.locbus.net] : erreur 0x35 « Le chemin réseau n'a pas été trouvé. » ......................... 
Le test Services de REPORTS a échoué The original English version of that error message : Could not open Remote ipc to [server] Update 2 : I attach some more diagnostics : Netsetup.log (client): 09/24/2009 13:27:09:773 ----------------------------------------------------------------- 09/24/2009 13:27:09:773 NetpValidateName: checking to see if 'WEB' is valid as type 1 name 09/24/2009 13:27:12:773 NetpCheckNetBiosNameNotInUse for 'WEB' [MACHINE] returned 0x0 09/24/2009 13:27:12:773 NetpValidateName: name 'WEB' is valid for type 1 09/24/2009 13:27:12:805 ----------------------------------------------------------------- 09/24/2009 13:27:12:805 NetpValidateName: checking to see if 'WEB' is valid as type 5 name 09/24/2009 13:27:12:805 NetpValidateName: name 'WEB' is valid for type 5 09/24/2009 13:27:12:852 ----------------------------------------------------------------- 09/24/2009 13:27:12:852 NetpValidateName: checking to see if 'MYCOMPANY.NET' is valid as type 3 name 09/24/2009 13:27:12:992 NetpCheckDomainNameIsValid [ Exists ] for 'MYCOMPANY.NET' returned 0x0 09/24/2009 13:27:12:992 NetpValidateName: name 'MYCOMPANY.NET' is valid for type 3 09/24/2009 13:27:21:320 ----------------------------------------------------------------- 09/24/2009 13:27:21:320 NetpDoDomainJoin 09/24/2009 13:27:21:320 NetpMachineValidToJoin: 'WEB' 09/24/2009 13:27:21:320 OS Version: 6.0 09/24/2009 13:27:21:320 Build number: 6002 09/24/2009 13:27:21:320 ServicePack: Service Pack 2 09/24/2009 13:27:21:414 SKU: Windows Server® 2008 Standard 09/24/2009 13:27:21:414 NetpDomainJoinLicensingCheck: ulLicenseValue=1, Status: 0x0 09/24/2009 13:27:21:414 NetpGetLsaPrimaryDomain: status: 0x0 09/24/2009 13:27:21:414 NetpMachineValidToJoin: status: 0x0 09/24/2009 13:27:21:414 NetpJoinDomain 09/24/2009 13:27:21:414 Machine: WEB 09/24/2009 13:27:21:414 Domain: MYCOMPANY.NET 09/24/2009 13:27:21:414 MachineAccountOU: (NULL) 09/24/2009 13:27:21:414 Account: MYCOMPANY.NET\pcollins 09/24/2009 13:27:21:414 Options: 0x25 09/24/2009 13:27:21:414 NetpLoadParameters: loading registry parameters... 09/24/2009 13:27:21:414 NetpLoadParameters: DNSNameResolutionRequired not found, defaulting to '1' 0x2 09/24/2009 13:27:21:414 NetpLoadParameters: status: 0x2 09/24/2009 13:27:21:414 NetpValidateName: checking to see if 'MYCOMPANY.NET' is valid as type 3 name 09/24/2009 13:27:21:523 NetpCheckDomainNameIsValid [ Exists ] for 'MYCOMPANY.NET' returned 0x0 09/24/2009 13:27:21:523 NetpValidateName: name 'MYCOMPANY.NET' is valid for type 3 09/24/2009 13:27:21:523 NetpDsGetDcName: trying to find DC in domain 'MYCOMPANY.NET', flags: 0x40001010 09/24/2009 13:27:22:039 NetpDsGetDcName: failed to find a DC having account 'WEB$': 0x525, last error is 0x79 09/24/2009 13:27:22:039 NetpDsGetDcName: status of verifying DNS A record name resolution for 'KING.MYCOMPANY.NET': 0x0 09/24/2009 13:27:22:039 NetpDsGetDcName: found DC '\\KING.MYCOMPANY.NET' in the specified domain 09/24/2009 13:27:30:039 NetUseAdd to \\KING.MYCOMPANY.NET\IPC$ returned 53 09/24/2009 13:27:30:039 NetpJoinDomain: status of connecting to dc '\\KING.MYCOMPANY.NET': 0x35 09/24/2009 13:27:30:039 NetpDoDomainJoin: status: 0x35 09/24/2009 13:27:30:148 ----------------------------------------------------------------- ipconfig /all (on client): Configuration IP de Windows Nom de l'hôte . . . . . . . . . . : WEB Suffixe DNS principal . . . . . . : Type de noeud. . . . . . . . . . : Hybride Routage IP activé . . . . . . . . : Non Proxy WINS activé . . . . . . . . 
: Non Carte Ethernet Connexion au réseau local : Suffixe DNS propre à la connexion. . . : Description. . . . . . . . . . . . . . : Intel 21140-Based PCI Fast Ethernet Adapter (Emulated) Adresse physique . . . . . . . . . . . : **-15-5D-A1-17-** DHCP activé. . . . . . . . . . . . . . : Non Configuration automatique activée. . . : Oui Adresse IPv4. . . . . . . . . . . : **.***.163.122(préféré) Masque de sous-réseau. . . . . . . . . : 255.255.255.0 Passerelle par défaut. . . . . . . . . : **.***.163.2 Serveurs DNS. . . . . . . . . . . . . : **.***.163.123 NetBIOS sur Tcpip. . . . . . . . . . . : Activé ipconfig /all (on server): Configuration IP de Windows Nom de l'hôte . . . . . . . . . . : KING Suffixe DNS principal . . . . . . : mycompany.net Type de noeud. . . . . . . . . . : Hybride Routage IP activé . . . . . . . . : Non Proxy WINS activé . . . . . . . . : Non Liste de recherche du suffixe DNS.: locbus.net Carte Ethernet Connexion au réseau local : Suffixe DNS propre à la connexion. . . : Description. . . . . . . . . . . . . . : Intel 21140-Based PCI Fast Ethernet Adapter (Emulated) Adresse physique . . . . . . . . . . . : **-15-5D-A1-1E-** DHCP activé. . . . . . . . . . . . . . : Non Configuration automatique activée. . . : Oui Adresse IPv4. . . . . . . . . . . : **.***.163.123(préféré) Masque de sous-réseau. . . . . . . . . : 255.255.255.0 Passerelle par défaut. . . . . . . . . : **.***.163.2 Serveurs DNS. . . . . . . . . . . . . : 127.0.0.1 NetBIOS sur Tcpip. . . . . . . . . . . : Activé nslookup (on client): Serveur : *******.***.com Address: **.***.163.123 Nom : mycompany.net Addresses: ****:****:a37b::****:a37b **.****.163.123


  • What is the difference between Anycast and GeoDNS / GeoIP wrt HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it includes both the distribution of a domain-name-to-many-IP mapping across many DNS servers as well as replying to clients with the most geographically close (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations) these sound like the two key features one would need.

    DNS services like Amazon's Route53, EasyDNS and DNSMadeEasy all advertise themselves as Anycast-enabled networks. Therefore my assumption is that each of these DNS services transparently offers me those two killer features: multi-IP-to-domain mapping AND routing clients to the closest node. However, each of these services seems to separate out these two functionalities, referring to the second one (routing clients to the closest node) as "GeoDNS", "GeoIP" or "Global Traffic Director", and charges extra for the service. If a core tenet of an Anycast-capable system is to already do this, why is this functionality being earmarked as an extra feature? What is this "GeoDNS" feature doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia -- I understand what is being advertised, just not why it isn't implied already)?

    I get extra confused when a DNS service like Route53 that doesn't support this nebulous "GeoDNS" feature lists functionality like:

        Fast – Using a global anycast network of DNS servers around the world, Route 53 is designed to automatically route your users to the optimal location depending on network conditions. As a result, the service offers low query latency for your end users, as well as low update latency for your DNS record management needs.

    ... which sounds exactly like what GeoDNS is intended to do, but geographically directing clients is something they explicitly don't support yet. Ultimately I am looking for the two following features from a DNS provider:

    1. Map multiple IP addresses to a single domain name (like google.com, amazon.com, etc. do).
    2. Utilize a DNS service that will respond to client requests for that domain with the IP address of the server nearest to the requester.

    As mentioned, it seems like this is all part of an "Anycast" DNS service (which all of these services are), but the features and marketing I see from them suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.


  • How to setup Hadoop cluster so that it accepts mapreduce jobs from remote computers?

    - by drasto
    There is a computer I use for Hadoop map/reduce testing. This computer runs 4 Linux virtual machines (using Oracle VirtualBox). Each of them has Cloudera with Hadoop (distribution c3u4) installed and serves as a node of the Hadoop cluster. One of those 4 nodes is the master node running the namenode and jobtracker; the others are slave nodes.

    Normally I use this cluster from the local network for testing. However, when I try to access it from another network I cannot send any jobs to it. The computer running the Hadoop cluster has a public IP and can be reached over the internet for other services. For example, I am able to reach the HDFS (namenode) administration site and the map/reduce (jobtracker) administration site (on ports 50070 and 50030 respectively) from the remote network. Also, it is possible to use Hue. Ports 8020 and 8021 are both allowed.

    What is blocking my map/reduce job submissions from reaching the cluster? Is there some setting that I must change first in order to be able to submit map/reduce jobs remotely?

    Here is my mapred-site.xml file:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>master:8021</value>
          </property>
          <!-- Enable Hue plugins -->
          <property>
            <name>mapred.jobtracker.plugins</name>
            <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
            <description>Comma-separated list of jobtracker plug-ins to be activated.</description>
          </property>
          <property>
            <name>jobtracker.thrift.address</name>
            <value>0.0.0.0:9290</value>
          </property>
        </configuration>

    And this is in the /etc/hosts file:

        192.168.1.15 master
        192.168.1.14 slave1
        192.168.1.13 slave2
        192.168.1.9  slave3
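    A quick way to narrow this down is to check whether the JobTracker RPC port is reachable at all from the remote network, and, if the cluster's private host names are the obstacle, to tunnel the RPC ports over SSH. The sketch below is only illustrative: the public host name, the user name and the idea of pointing the remote client's core-site.xml / mapred-site.xml at localhost are assumptions, not details from the question.

        # Minimal sketch: test raw reachability of the NameNode/JobTracker RPC ports
        nc -zv public-cluster.example.com 8020
        nc -zv public-cluster.example.com 8021

        # Possible workaround: tunnel both RPC ports over SSH, then point the remote
        # client's fs.default.name at hdfs://localhost:8020 and mapred.job.tracker at
        # localhost:8021. The client must also be able to resolve the internal name
        # "master" used by the cluster (e.g. via its own /etc/hosts), because the
        # jobtracker hands that name out to clients.
        ssh -N -L 8020:master:8020 -L 8021:master:8021 user@public-cluster.example.com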


  • multiple webapps in tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/] and proxied with an Apache HTTP server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out myself what is the most sensible way to do this. My setup has evolved through 2 scenarios and I'm considering a third for maximal separation of the distinct webapps.

    [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps

        -tomcat
         |_ webapps
            |_ webapp1
            |_ webapp2
            |_ webapp[n]
            |_ WEB-INF (with Cocoon libs)

    This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine, and I did not encounter any memory issues. However, this poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario.

    [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment

        -tomcat
         |_ webapps
            |_ webapp1
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp2
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, this soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing the memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario.

    [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment

        -tomcat
         |_ webapps
            |_ webapp1
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp2
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be instantiated multiple times with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp. I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply?

    On the other hand, this approach would complicate management of the Apache HTTP frontend, as it will require the AJP connectors of the different Tomcat instances to be listening on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added. And there seems to be no way to reload worker.properties without restarting the entire Apache HTTP server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron
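    For what it's worth, a minimal sketch of how a second $CATALINA_BASE instance can be laid out alongside a shared $CATALINA_HOME is shown below. The paths and port numbers are assumptions, not values from the setup above.

        # Sketch: create an additional Tomcat instance that shares one binary install
        export CATALINA_HOME=/opt/tomcat               # assumed location of the shared distribution
        export CATALINA_BASE=/srv/tomcat-webapp2       # assumed per-webapp instance directory
        mkdir -p "$CATALINA_BASE"/{conf,logs,temp,webapps,work}
        cp "$CATALINA_HOME"/conf/* "$CATALINA_BASE"/conf/
        # Edit $CATALINA_BASE/conf/server.xml so the shutdown, HTTP and AJP ports
        # (e.g. 8006/8081/8010) do not clash with the other instances, then start it:
        "$CATALINA_HOME"/bin/startup.sh

    Each instance then gets its own JVM (and its own PermGen), which is the property scenario [3] relies on.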


  • How to shutdown VMware Fusion virtual machine on host shutdown

    - by Nikksno
    I have a Mac mini running Mavericks Server. I installed the Atmail server + webmail VM [a Linux CentOS distribution] in VMware Fusion Professional 6 with the VMware Tools addon. It works flawlessly. I've set it to start on boot and that works very reliably. However, I've been looking for a way to also safely and gracefully shut it down whenever OS X shuts down for whatever reason. The Mac is connected to a UPS and configured to perform an automatic shutdown in case the battery starts running low, so that's no additional problem.

    Now, the first thing I did was to go into Fusion's prefs and select "Power off the vm" when closing it. However, I noticed that for some arcane reason closing the VM window would actually forcibly power off the VM; so then I found this post that showed me how to change the default power options, and I managed to have the VM cleanly shut down when closing its window or quitting Fusion altogether. At this point I was hoping to have solved the problem, but as it turns out, upon invoking system shutdown OS X doesn't wait for the VM to shut down and terminates Fusion before it has a chance to do so.

    At this point I started looking for a way to automate the process of shutting down the guest OS via some advanced setting, but had no luck in doing so. That's when I found a command to shut the VM down, vmrun, and it worked. The only thing left was to find a way to execute this script on OS X shutdown and give it a little time to power off completely. However, this turned out to be a nightmare: I spent hours looking through several ways to do this with Startup Items, rc.shutdown, cron, launchd, etc., but none of them worked the way I had configured them.

    I have to say that I found very limited information on using launchd for shutdown-script execution, and I know it's the latest thing in the OS X world, so I'm hoping someone out there among you will be able to help me out with this. I still think this is an extremely basic feature to ask for, and I was really surprised to find so little documentation on so many different aspects of this problem. Is Fusion too basic of an application for this? I really hope someone can help. Thank you very much in advance.
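    For reference, one commonly suggested pattern is a small wrapper script that a launchd daemon keeps running: launchd sends it SIGTERM at shutdown, and the trap gives the guest a chance to stop cleanly. This is only a sketch under assumptions - the .vmx path is a placeholder, the vmrun location is the usual Fusion 6 one, and the matching launchd plist (with RunAtLoad set and a generous ExitTimeOut so launchd actually waits) is not shown.

        #!/bin/sh
        # Sketch: keep-alive wrapper run by a launchd daemon; on SIGTERM (host shutdown),
        # ask Fusion to shut the guest down gracefully before exiting.
        VMX="/Users/admin/Documents/Virtual Machines.localized/atmail.vmx"   # placeholder path
        VMRUN="/Applications/VMware Fusion.app/Contents/Library/vmrun"       # assumed Fusion 6 location

        stop_vm() {
            "$VMRUN" -T fusion stop "$VMX" soft    # 'soft' asks VMware Tools for a clean guest shutdown
            exit 0
        }
        trap stop_vm TERM INT

        # Idle until launchd tells us to stop
        while true; do
            sleep 3600 &
            wait $!
        done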


  • Partition table is corrupt

    - by Tim
    I have a corrupt partition table on the laptop that is running Ubuntu 10.04. Before the partition table was corrupted I had the following partitions:

    2 primary partitions:

        1st - NTFS
        2nd - Extended

    4 logical partitions built within the 2nd (extended) partition:

        1st NTFS (68 GiB)
        2nd Linux (19 GiB)
        3rd Swap (1.4 GiB)
        4th Linux (24 GiB)

    The physical order of these partitions was the following: ( 4th Linux ) - ( 1st NTFS ) - ( 2nd Linux ) - ( 3rd Swap ). The logical order of the partitions was different: ( 1st NTFS ) - ( 2nd Linux ) - ( 3rd Swap ) - ( 4th Linux ).

    The NTFS partition was big and it resided between the 2 Linux partitions, and neither of those partitions had enough space to install Oracle 11g for my project with prof. Gamper and Markus Innerebner. Therefore, I decided to a) either move the NTFS partition to the left or b) remove it completely and extend the partition where Linux resides. As a tool I chose GParted. But unfortunately it was not able to move the partition, because it found that in the NTFS partition there are some blocks that are referenced multiple times. It was not able to remove the partition either, because in that case the partitions that follow it, ( 2nd Linux ) - ( 3rd Swap ), would in its opinion also have to be removed, since the organization of an extended partition is a linked list.

    Since GParted was not able to do such a thing, I tried to find another tool. I found the diskdrake tool on the PSLinuxOS distribution of Linux. That tool silently deleted the ( 1st NTFS ) partition and I thought that everything was fine. But diskdrake damaged the partition table in a way that I am not able either to boot from the hard disk or to see the partitions with GParted, or even with diskdrake itself! Fortunately I have a live CD of Ubuntu 8.10 and I am able to boot and see the hard disk.

    I have 2 ideas how I can solve the problem:

    1) Manually change the disk partitions and point them to the correct partitions.
    2) Create a partition table with GParted that is as close as possible to the previous one.

    I find the 2nd approach less time consuming, but some data will be lost because it is not possible to place the borders of the partitions exactly where they were before. Moreover, I am not sure if such an approach would work, for example whether the OS would be able to locate files after repartitioning. I feel like it will, but I'm not 100% sure. Are there some ideas how the problem may be solved?
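    Since the old partition boundaries are unknown after the diskdrake mishap, a commonly suggested alternative to hand-editing the table is to let TestDisk search the disk for the old filesystems and rewrite the partition table from what it finds. This is only a sketch under assumptions (package name, the device being /dev/sda, and a working network connection from the live CD):

        # From the Ubuntu live CD
        sudo apt-get install testdisk
        sudo testdisk /dev/sda
        # In the interactive menus: pick the disk -> "Intel" partition map -> Analyse ->
        # Quick Search (then Deeper Search if needed) -> mark the recovered partitions -> Write.
        # Back up anything still readable before writing a new table; a wrong layout can
        # make things worse.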


  • How to make Linux reliably boot on multi-cpu machines?

    - by Adam Tabi
    I've got two machines, one with 4x12 AMD Opteron cores (AMD Opteron(tm) Processor 6176), one with 2x8 Xeon cores (HT disabled; Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz). On both machines I experience difficulties during boot of Linux using recent kernels. The system hangs during the initialization of the kernel, before or just when initramfs started initializing the hardware. The last thing which got displayed was a stacktrace like this: CPU: 31 PID: 0 Comm: swapper/31 Tainted: G D 3.11.6-hardened #11 Hardware name: Supermicro X9DRT-HF+/X9DRT-HF+, BIOS 3.00 07/08/2013 task: ffff880854695500 ti: ffff880854695a28 task.ti: ffff880854695a28 RIP: 0010:[<ffffffff8100a82e>] [<ffffffff8100a82e>] default_idle+0x6/0xe RSP: 0000:ffff8808546b3ec8 EFLAGS: 00000286 RAX: ffffffff8100a828 RBX: ffff880854695a28 RCX: 00000000ffffffff RDX: 0100000000000000 RSI: 0000000000000000 RDI: ffff88107fdec690 RBP: ffff8808546b3ec8 R08: 0000000000000000 R09: ffff880854695500 R10: ffff880854695500 R11: 0000000000000001 R12: ffff880854695a28 R13: ffff880854695a28 R14: ffff880854695a28 R15: 0000000000000000 FS: 0000000000000000(0000) GS:ffff88107fde0000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 000002b43256a960 CR3: 00000000016b5000 CR4: 00000000000607f0 Stack: ffff8808546b3ed8 ffffffff8100aec9 ffff8808546b3f10 ffffffff8109ce25 334ab55852ec7aef 000000000000001f ffffffff8102d6c0 0000000000000000 0000000000000000 ffff8808546b3f48 ffffffff810276e0 ffff8808546b3f28 Call Trace: [<ffffffff8100aec9>] arch_cpu_idle+0x20/0x2b [<ffffffff8109ce25>] cpu_startup_entry+0xed/0x138 [<ffffffff8102d6c0>] ? flat_init_apic_ldr+0x80/0x80 [<ffffffff810276e0>] start_secondary+0x2c9/0x2f8 I compiled the kernel myself and it works fine, if I boot with nolapic. Yet, only one core is used. Also, the kernel of RHEL6 seems to work fine. I suspect that there are some patches used to make things work. Using the kernel config file from RHEL6 and building a more recent kernel yields the same problems. On the Xeon machine, things got better by disabling Hyperthreading completely. The machine now boots successfully on at least 4 out of 5 times. And if it boots, multicore stuff works just fine. However, I'm wondering about what to do about the AMD machine. So to sum it up: Gentoo kernel 3.6 - 3.11 won't reliably boot those machines unless you reduce the amount of cores (e.g. via nolapic). RHEL6 kernel (which is 2.6.32) boots just fine. RH kernel config used to build a 3.x kernel won't yield a working kernel. Not distribution specific (apart from the kernel being used). These stack traces got printed every minute or so. The kernel seems to be stuck in an endless loop. Yet, a recent kernel is needed for various reasons. So the question is: What does the RHEL6 kernel do, what vanilla or gentoo kernels don't do? Is there a boot option that might lead to a reliable boot with all the cores enabled? Best, Adam


  • BUILDROOT files during RPM generation

    - by khmarbaise
    Currently I have the following spec file to create an RPM. The spec file is generated by a Maven plugin to produce an RPM out of it. The question is: will I find the files which are mentioned in the spec file, after the RPM generation, inside the BUILDROOT/SPECS/SOURCES/SRPMS structure?

        %define _unpackaged_files_terminate_build 0
        Name: rpm-1
        Version: 1.0
        Release: 1
        Summary: rpm-1
        License: 2009 my org
        Distribution: My App
        Vendor: my org
        URL: www.my.org
        Group: Application/Collectors
        Packager: my org
        Provides: project
        Requires: /bin/sh
        Requires: jre >= 1.5
        Requires: BASE_PACKAGE
        PreReq: dependency
        Obsoletes: project
        autoprov: yes
        autoreq: yes
        BuildRoot: /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/buildroot

        %description

        %install
        if [ -e $RPM_BUILD_ROOT ]; then
          mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot/* $RPM_BUILD_ROOT
        else
          mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot $RPM_BUILD_ROOT
        fi
        ln -s /usr/myusr/app $RPM_BUILD_ROOT/usr/myusr/app2
        ln -s /tmp/myapp/somefile $RPM_BUILD_ROOT/tmp/myapp/somefile2
        ln -s name.sh $RPM_BUILD_ROOT/usr/myusr/app/bin/oldname.sh

        %files
        %defattr(-,myuser,mygroup,-)
        %dir "/usr/myusr/app"
        "/usr/myusr/app2"
        "/tmp/myapp/somefile"
        "/tmp/myapp/somefile2"
        "/usr/myusr/app/lib"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/start.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter-version.txt"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name-Linux.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter.txt"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/oldname.sh"
        %dir "/usr/myusr/app/conf"
        %config "/usr/myusr/app/conf/log4j.xml"
        "/usr/myusr/app/conf/log4j.xml.deliver"

        %prep
        echo "hello from prepare"

        %pre -p /bin/sh
        #!/bin/sh
        if [ -s "/etc/init.d/myapp" ]
        then
          /etc/init.d/myapp stop
          rm /etc/init.d/myapp
        fi

        %post
        #!/bin/sh
        #create soft link script to services directory
        ln -s /usr/myusr/app/bin/start.sh /etc/init.d/myapp
        chmod 555 /etc/init.d/myapp

        %preun
        #!/bin/sh
        #the argument being passed in indicates how many versions will exist
        #during an upgrade, this value will be 1, in which case we do not want to stop
        #the service since the new version will be running once this script is called
        #during an uninstall, the value will be 0, in which case we do want to stop
        #the service and remove the /etc/init.d script.
        if [ "$1" = "0" ]
        then
          if [ -s "/etc/init.d/myapp" ]
          then
            /etc/init.d/myapp stop
            rm /etc/init.d/myapp
          fi
        fi;

        %triggerin -- dependency, dependency1
        echo "hello from install"

        %changelog
        * Tue May 23 2000 Vincent Danen <[email protected]> 0.27.2-2mdk
        -update BuildPreReq to include rep-gtk and rep-gtkgnome
        * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.2-1mdk
        -0.27.2
        * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.1-2mdk
        -added BuildPreReq
        -change name from Sawmill to Sawfish

    The problem I found is that the files (filter.txt in particular) are present after the generation process on an Ubuntu system but not on a SuSE system, which might be caused by different rpm versions. Currently we have an integration test which fails based on the non-existence of the file (filter.txt under a buildroot folder?).
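    To see where the declared files actually end up, it may help to query the finished package rather than the intermediate tree. A small sketch - the package file name is a placeholder for whatever rpmbuild actually produced:

        # List the payload of the built RPM, with permissions and owners
        rpm -qlvp rpm-1-1.0-1.noarch.rpm
        # Or inspect the cpio payload directly and look for the file in question
        rpm2cpio rpm-1-1.0-1.noarch.rpm | cpio -itv | grep filter.txt
        # Note: the BUILDROOT tree is only a staging area and is usually cleaned up
        # after the build, so the package payload is the more reliable place to check.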


  • Link aggregation with FreeBSD 8 and a Cisco 3550, what am I doing wrong?

    - by Flamewires
    Hey, I am trying to set up link aggregation with LACP (well, anything that provides increased bandwidth and failover using my setup will work). I'm running FreeBSD 8.0 on 3 machines. M1 is running two 10/100 Ethernet cards set up for link aggregation using lagg. For reference:

        ifconfig em0 up
        ifconfig tx0 up
        ifconfig lagg0 create
        ifconfig lagg0 laggproto lacp laggport tx0 laggport em0 192.168.1.16 netmask 255.255.255.0

    I plugged them into ports 1 and 2 of a Cisco 3550, then ran:

        configure terminal
        interface range Fa0/1 - 2
        switchport mode access
        switchport access vlan 1
        channel-group 1 mode active

    (everything's in VLAN 1). Now I'm able to connect the other computers to other ports on the switch and failover works great: I can unplug cables in the middle of a transfer and the traffic gets rerouted. However, I'm not noticing any speed increase.

    My test setup: for load balancing I tried dst and src on the switch; neither seemed to give me a speed increase. I am SCPing two 500 MB files from the lagg computer to other computers (one each), which are also running 10/100 full duplex cards. I get transfer speeds of about 11.2-11.4 Mbps to a single host, and about half that (5.9-6.2 Mbps) when transferring to both at the same time. From what I understood, with destination load balancing the switch was supposed to balance traffic headed for one computer over one port and traffic headed for another over a different (in this case the other) port.

        With destination-MAC address forwarding, when packets are forwarded to an EtherChannel, the packets are distributed across the ports in the channel based on the destination host MAC address of the incoming packet. Therefore, packets to the same destination are forwarded over the same port, and packets to a different destination are sent on a different port in the channel. For the 3550 series switch, when source-MAC address forwarding is used, load distribution based on the source and destination IP address is also enabled for routed IP traffic. All routed IP traffic chooses a port based on the source and destination IP address. Packets between two IP hosts always use the same port in the channel, and traffic between any other pair of hosts can use a different port in the channel. (Link)

    What am I doing wrong / what would I need to do to see a speed increase beyond what I could do with just a single card?
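    One thing worth keeping in mind is that an EtherChannel hashes each flow onto a single member link, so one SCP session can never exceed one card's bandwidth; aggregate gains only show up across multiple concurrent flows to different destinations. A rough way to see that, assuming iperf is installed on all three machines (the receiver addresses below are placeholders):

        # On each receiving host:
        iperf -s
        # On the lagg machine, run both clients at the same time and compare the sum
        # of the two results with a single-stream run:
        iperf -c 192.168.1.20 -t 30 &
        iperf -c 192.168.1.21 -t 30 &
        wait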


  • Complete Active Directory redesign and GPO application

    - by Wolfgang Kuehne
    After much testing and hundreds of tries and hours invested, I decided to consult you experts here.

    Overview: I want to apply a GPO to our users which will add a specific site to the Trusted Sites in the Internet Explorer settings for all users. However, the more I try, the more confusing the results become. The GPO is either applied to one group of users, or to another one. Finally, I came to the conclusion that this weird behavior is caused rather by the poor organization of Users and Groups in Active Directory. As such I want to attack the problem at the root: redesign the Active Directory Users and Groups.

    Scenario: There is one Domain Controller, and we use Terminal Services (so there is a Terminal Server as well). Users usually log on to the Terminal Server using Remote Desktop to perform their daily tasks. I would classify the users in the following way:

        IT: Admins, Software Development
        Business: Administration, Management

    The current structure of the Active Directory Users and Groups is a result of the previous IT management. The company has used Small Business Server, which has created multiple default user groups and containers. Unfortunately, the guys working before me left no documentation at all. Now, as I inherit this structure, I am in no man's land. I have no idea which direction to head in first. As you can see, the Active Directory Users and Groups have become a bit confusing. There is no SBS anymore, but when migrating from SBS to the current Windows Server 2008 R2 environment the guys before me simply copied the same structure.

    The real question: Where should I start cleaning from, ensuring that I won't totally break the current infrastructure? What is a nice organization for the scenario that I have explained above?

    Possibly useful info about the current structure:

        Computers folder: contains the Terminal Services Computers user group
            Members: TerminalServer computer located at Server -> Terminalserver OU
            Member of: NONE
        Foreign Security Principals: EMPTY
        Managed Service Accounts: EMPTY
        Microsoft Exchange Security Groups: not sure if needed; our emails are administered by an external service provider
        Distribution Groups: not sure if needed
        Security Groups: there are a couple of groups which are needed
        SBS users: contains all the users
        Terminalserver: contains only the TerminalServer machine


  • Trying to install Rmagick on Debian

    - by Janak
    One of the things you need to do to get RMagick installed is:

        apt-get install libmagick9-dev

    When I try that I get the following errors:

        Reading package lists... Done
        Building dependency tree... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.

        Since you only requested a single operation it is extremely likely that
        the package is simply not installable and a bug report against
        that package should be filed.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          libmagick9-dev: Depends: libjpeg62-dev but it is not going to be installed
                          Depends: libbz2-dev but it is not going to be installed
                          Depends: libtiff4-dev but it is not going to be installed
                          Depends: libwmf-dev (>= 0.2.7-1) but it is not going to be installed
                          Depends: libz-dev
                          Depends: libpng12-dev but it is not going to be installed
                          Depends: libfreetype6-dev but it is not going to be installed
                          Depends: libexif-dev but it is not going to be installed
                          Depends: libdjvulibre-dev but it is not going to be installed
                          Depends: librsvg2-dev but it is not going to be installed
                          Depends: libgraphviz-dev but it is not going to be installed
        E: Broken packages

    I don't know what to do to fix this.

    EDIT 1: I tried sudo apt-get install librmagick-ruby and it worked fine, but then I needed to install the fleximage gem:

        gem1.8 install fleximage

    and I got the following error message:

        Building native extensions. This could take a while...
        ERROR: Error installing fleximage:
        ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.8 extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for cc... yes
        checking for Magick-config... no
        Can't install RMagick 2.12.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers. Check the mkmf.log file for more
        details. You may need configuration options.

        Provided configuration options:
          --with-opt-dir
          --without-opt-dir
          --with-opt-include
          --without-opt-include=${opt-dir}/include
          --with-opt-lib
          --without-opt-lib=${opt-dir}/lib
          --with-make-prog
          --without-make-prog
          --srcdir=.
          --curdir
          --ruby=/usr/bin/ruby1.8

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2/ext/RMagick/gem_make.out
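    One way to find out which of those unmet dependencies is the real blocker is to ask apt for them directly; the first one that refuses to install usually names the underlying conflict. This is just a diagnostic sketch using the package names from the error above:

        sudo apt-get install libjpeg62-dev libbz2-dev libtiff4-dev libwmf-dev \
            libpng12-dev libfreetype6-dev libexif-dev libdjvulibre-dev \
            librsvg2-dev libgraphviz-dev
        # apt's output should now point at the specific package or version conflict
        # (often a stable/unstable mix) that keeps libmagick9-dev -- and therefore
        # Magick-config -- from being installable.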


  • Why is the server performance so poor? What can be done to improve the speed of the server?

    - by fslsyed
    Very slow processing using Windows Server 2008 R2 Standard with Service Pack 1.

    Situation: Read a text file, using the text data to populate a series of MS SQL tables. The converted data is used to generate monthly PDF invoice files; the PDF files are saved directly to the hard drive. The application is multi-threaded, with one thread used for the text conversion and three threads for PDF invoice generation. The text conversion occurs concurrently with the invoice generation.

    Application software:

        C# using Microsoft Visual Studio 2010 Ultimate
        Crystal Report Writer 2011 with runtime 13_0_3, 64-bit version
        Targeted platform is x64; also tested as x86 and Any CPU with similar results
        Microsoft .NET Framework 4.0
        Microsoft SQL 2008

    Issue: The software is running very slowly. The conversion of the text file is approximately six hundred fifty records per second, and generation of the PDF files is approximately twelve invoices per minute. The text file to be converted is six hundred megabytes, with seven thousand invoices to be generated. The software was installed on three different machines from the same distribution files. The same text file was converted on each machine. The user executing the application was an administrator on each machine. The only variances were the machine and operating system. The configurations are as follows:

        Server:
            Operating System: Windows Server 2008 R2 Standard 64-bit (6.1, Build 7601) SP1
            Service Pack:
            System Manufacturer: IBM
            System Model: System x3550 M3-[7944AC1]-
            BIOS: Default System BIOS
            Processor: Intel® Xeon® CPU E5620 @ 2.4GHz (16 CPUs)
            Memory: 16384MB

        Notebook:
            Operating System: Windows 7 Home Premium Standard 64-bit (6.1, Build 7601)
            System Manufacturer: Hewlett-Packard
            System Model: HP Pavilion dv7 Notebook PC
            BIOS: Default System BIOS
            Processor: AMD Phenom II N640 Dual-Core Processor 2.9GHz (2 CPUs)
            Memory: 6144MB

        Desktop:
            Operating System: Windows 7 Professional 64-bit (6.1, Build 7601) SP1
            System Manufacturer: Dell Inc.
            System Model: OptiPlex 960
            BIOS: Phoenix ROM BIOS PLUS Version 1.10 A11
            Processor: Intel Core™2 Quad CPU Q9650 @ 3.00GHz (4 CPUs)
            Memory: 16384MB

    Processing results per machine (the applications were executed seven times; the averages are displayed below):

        Machine      Text Records Converted Per Minute    Invoices Generated Per Minute
        Server (1)   650                                  12
        Notebook     980                                  17
        Desktop      2,100                                45

        (1) The server is dedicated to execution of this application; no additional applications are being executed.

    Question: Why is the server performance so poor? What can be done to improve the speed of the server?


  • Sony VGN-NR260E "External Device Boot"

    - by user72158
    [A LITTLE BACKGROUND]

    On all modern Dell computers, pushing F12 on BIOS boot will bring up a screen that lets you choose what boot option you need. For example, if I want to boot off of a USB flash drive, to boot into a live Linux distribution in order to clean viruses on netbooks that do not have CD drives to boot from, I would push F12 and choose USB device from the list of options. If this does not show up, then I can always go to the F2 BIOS setup and choose the flash drive to be the first option. When I restart the computer it will boot into the flash device. I understand that I can purchase an external USB CD drive and then boot from that. I do not want to use that option. The reasons for using a flash device instead of a CD are:

        A: This USB flash device has several different boot OSes on it that are used.
        B: The antivirus disks are updated often, and burning CDs and throwing away others is wasteful compared to simply updating a flash drive.

    There is nothing wrong with the flash drive. It works perfectly on many other PCs.

    [PROBLEM]

    Booting this flash drive has been working for years on hundreds of computers... I just have this ONE computer that I cannot figure out how to get it to boot on... I have a Sony Vaio that will not boot to this device. I've tried pushing every key combo I can think of (F12, Esc, Del, F10...) and none of these key combinations will bring up the boot menu. I chose F2 and went into the BIOS and changed the first boot device to the USB flash device. This did not work either. There is an asterisk next to the device, and the note states: "This Drive is available when External Device Boot is Enable."

    [WHAT I NEED]

    I need to know how to enable External Device Boot on the Sony Vaio VGN-NR260E laptop, OR how to bring up the boot menu to allow me to boot off a flash device. Thanks to anyone that can help!


  • Domain: Netlogon event sequence

    - by Bob
    I'm getting really confused reading tutorials from the Samba HOWTO, which is a hell of a mess. Could you write, step by step, what events happen upon NetLogon? In particular, I can't get these things:

    1. I really can't get the mechanism of action of LDAP and its role. Should I think of Active Directory LDS as its superset? What are the other roles of AD, and why is this term nearly a synonym of the term "domain"? What's the role of LDAP in the remote login sequence? Does it store roaming user profiles? Does it store anything else? How is it called (are there any upper-level or lower-level services that use it in the course of NetLogon)?

    2. How do I join a domain? On the client machine I just use the Domain Controller admin credentials, but how do I prepare the Domain Controller for a new machine to join it? What's the deal with machine trust accounts? How are they used?

    3. Suppose I've just configured a machine to join a domain, created its machine trust, and added its data to the domain controller. How would that machine find the WINS server to query it for the Domain Controller's NetBIOS name? Does any computer name ending with the <1C> type correspond to a domain controller?

    4. In what cases are Kerberos and LM/NTLM used for authentication?

    5. Where are password hashes stored in, say, a Windows 2000 domain controller? Right in the registry? What is SAM - is it a service responsible for authentication and sending/storing those passwords and accompanying information, such as group policies etc.? Who calls it? Does it use Active Directory?

    6. What's the role of NetBIOS apart from the name service? Can you exemplify a scenario of its usage as a "datagram distribution service for connectionless communication" or a "session service for connection-oriented communication"? (Quotes taken from the description of NetBIOS roles at http://en.wikipedia.org/wiki/NetBIOS_Frames_protocol.)

    Thanks, and sorry for the many questions.


  • Thinkpad speaker turns mute - Linux Codec issue?

    - by Curlew
    At some point a few days ago the speakers on my Lenovo Thinkpad T410 (Model number: 2537A11) suddenly stopped working randomly. This error happens every time I watch a video or listen to a music file. The sound just abruptly stops. At the moment, I can't produce a single sound no matter what I do. I am using Debian GNU/Linux on this laptop and there doesn't appear to be anything else wrong (the fan is working, no abnormal heat (staying around ~40°C), no other obvious errors or problems). Here is the output of a nice program someone pointed me to: martin@martin:~/Downloads$ sudo python run.py --monitor Using temporary directory: /dev/shm/hda-analyzer You may remove this directory when finished or if you like to download the most recent copy of hda-analyzer tool. Downloading file hda_analyzer.py Downloading file hda_guilib.py Downloading file hda_codec.py Downloading file hda_proc.py Downloading file hda_graph.py Downloading file hda_mixer.py Downloaded all files, executing hda_analyzer.py Watching 1 cards ====================================== Sound is working normally and then it stops and the following lines appear: Diff for codec 0/0 (0x14f15069): --- +++ @@ -164,17 +164,17 @@ Power: setting=D0, actual=D0 Node 0x1f [Pin Complex] wcaps 0x400501: Stereo Pincap 0x00000010: OUT Pin Default 0x901701f0: [Fixed] Speaker at Int N/A Conn = Analog, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT - Power: setting=D0, actual=D0 + Power: setting=D3, actual=D3 Connection: 2 0x10* 0x11 Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE And now there is also an error in the dmesg output hda-intel: IRQ timing workaround is activated for card #0. Suggest a bigger bdl_pos_adj. I changed the bdl_pos_adj to various numbers (-1, 0, 64, 1024) and either there is no change at all or dmesg reports that the adjustment is too big. I wonder if this bdl_pos_adj is the real reason for the error. Here is my hardware information provided by alsa-info.sh website. Okay, i did some serious testing and even installed Windows and now i officially conclude that this is a hard-ware related issue with my Laptop speakers. Reason: The error occurs in my installed Debian Linux, an Ubuntu Live distribution and Windows XP No error-message appears in all of the OS. The sound just keeps running and i can't hear a thing. I tested different setups, including OSS, ALSA and the pulseaudio server on top If i use my new usb-headphones i can hear sound all the time without any sudden silences. So obviously, although hard to believe, my laptop speakers are not okay (never heard of similar cases). I'll award the bounty to anyone who can point me to good tutorials or the procedure how to exchange my T410 speakers (i still have warranty. The laptop was bought in Germany, but now i am in Denmark). Or to someone who can explain me the output from hda-analyzer (big log above).


  • Single domain name potentially resolving to multiple servers

    - by Jace
    First time here at Server Fault, and I apologize in advance that this domain stuff is not really my strength. Any and all suggestions are much appreciated. I am completely lost and incredibly tired! I've inherited an incredibly convoluted system from my predecessor, and I'm trying to find a way to solve it - or I need to be told that it just isn't possible.

    I've got an old site on Server A (some kind of Linux distribution), with the domain SomeDomain.com. There is a new site sitting on Server B (Ubuntu), with the intention of having SomeDomain.com serve it in the future (it is replacing the old site). Server A also has a web app that is currently in use by other departments within the company (accessible at SomeDomain.com/web-app/).

    The goal: To have SomeDomain.com and all extensions of this domain name (sub-domains, URLs etc.) serve the new site on Server B. BUT, the URL SomeDomain.com/web-app/ must serve the web app on Server A.

    The catch: Server A is a shared server with a hosting company with VERY limiting restrictions in place - I cannot adjust DNS settings (apart from name servers; I cannot set A records or anything. I have full access to Server B to do as I wish). Therefore the web app MUST be served from SomeDomain.com/web-app/ and not from a sub-domain or anything. These limitations make migrating the web app from Server A to Server B rather undesirable, AND this web app will be replaced in the near future, so it isn't worth the effort right now. Therefore, ultimately I will want one domain name to resolve to Server B's IP address most of the time, but in the event that the URL is SomeDomain.com/web-app/, it should resolve to Server A's IP. Note: the domain names don't, technically, have to resolve to one IP or another - but ultimately the URLs must stay consistent.

    Some things I have tried:

        - I've looked into mod_rewrite and .htaccess to try and achieve this effect, but it doesn't look like it's going to work for me - though I may have done it wrong (on Server B, I just checked if the request URI was /web-app/ and tried to serve the /web-app/ folder on Server A).
        - I do have the ability to modify the name servers on both servers.
        - I am not able to make a sub-domain on Server A that points back to Server A (I assume because the hosting company's servers use the URL to determine what site to serve). I figured this could be good, as I could set an A record on Server B to point to the web app on Server A - but alas, Server A requires SomeDomain.com.

    If there is any more information I can give, please let me know. I need a nudge in the right direction, ideas or a solution.
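    One direction that may fit these constraints is to point the domain only at Server B and let its Apache reverse-proxy just the /web-app/ path back to Server A, preserving the SomeDomain.com Host header that the shared host needs in order to pick the right site. A rough sketch, assuming Ubuntu's Apache 2.2 layout and using SERVER_A_IP as a stand-in for Server A's real address:

        # On Server B: enable the proxy modules and add a rule for the legacy web app only
        sudo a2enmod proxy proxy_http
        sudo tee /etc/apache2/conf.d/webapp-proxy.conf >/dev/null <<'EOF'
        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass        /web-app/ http://SERVER_A_IP/web-app/
        ProxyPassReverse /web-app/ http://SERVER_A_IP/web-app/
        EOF
        sudo service apache2 reload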


  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2 based web servers, so my experience is many hours, but is limited to CentOS; specifically Amazon's distribution. I'm installing Apache using yum, so therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache. TEST CASE Launch a EC2 Instance, e.g. Amazon Linux AMI 2013.03.1 SSH to the Server Run the commands: $ sudo yum install httpd $ sudo apachectl start $ sudo vi /etc/httpd/conf/httpd.conf $ sudo apachectl restart $ sudo vi /var/www/html/.htaccess In httpd.conf I changed the following, in the DOCROOT section / scope: AllowOverride All In .htaccess, added: (EDIT, I added RewriteEngine On later) RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] Permissions on .htaccess are correct, AFAI can tell: $ ls -al /var/www/html/.htaccess -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess Other info: $ httpd -v Server version: Apache/2.2.24 (Unix) Server built: May 20 2013 21:12:45 $ httpd -M Loaded Modules: core_module (static) ... rewrite_module (shared) ... version_module (shared) Syntax OK EXPECTED BEHAVIOR $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 ACTUAL BEHAVIOR $ curl -I domain.com HTTP/1.1 200 OK Date: Wed, 19 Jun 2013 12:34:10 GMT Server: Apache/2.2.24 (Amazon) Connection: close Content-Type: text/html; charset=UTF-8 TROUBLESHOOTING STEPS In .htaccess, added: BLAH BLAH BLAH ERROR RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an Error log entry: [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration Since I have root access on the server, I then tried moving my rewrite rule directly to the httpd.conf file. THIS WORKED. This tells us several important things are working. $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.


  • Updating the managed debugging API for .NET v4

    - by Brian Donahue
    In any successful investigation, the right tools play a big part in collecting evidence about the state of the "crime scene" as it was before the detectives arrived. Unfortunately for the Crash Scene Investigator, we don't have the budget to fly out to the customer's site, chalk the outline, and eat their doughnuts. We have to rely on the end-user to collect the evidence for us, which means giving them the fingerprint dust and the evidence baggies and leaving them to it.

    With that in mind, the Red Gate support team have been writing tools that can collect vital clues with a minimum of fuss. Years ago we would have asked for a memory dump, where we used to get the customer to run CDB.exe and produce dumps that we could analyze in-house, but those dumps were pretty unwieldy (500MB files) and the debugger often didn't dump exactly where we wanted, or made five or more dumps. What we wanted was just the minimum state information from the program at the time of failure, so we produced a managed debugger that captured every first and second-chance exception and logged the stack and a minimal amount of variables from the memory of the application, which could all be exported as XML. This caused less inconvenience to the end-user because it is much easier to send a 65KB XML file in an email than a 500MB file containing all of the application's memory. We don't need to have the entire victim shipped out to us when we just want to know what was under the fingernails.

    The thing that made creating a managed debugging tool possible was the MDbg Engine example written by Microsoft as part of the Debugging Tools for Windows distribution. Since the ICorDebug interface is a bit difficult to understand, they had kindly created some wrappers that provided an event-driven debugging model that was perfect for our needs, but .NET 4 applications under debugging started complaining that "The debugger's protocol is incompatible with the debuggee". The introduction of .NET Framework v4 had changed the managed debugging API significantly, however, without an update for the MDbg Engine code!

    After a few hours of research, I had finally worked out that most of the version 4 ICorDebug interface still works much the same way in "legacy" v2 mode and there was a relatively easy fix for the problem in that you can still get a reference to legacy ICorDebug by changing the way the interface is created. In .NET v2, the interface was acquired using the CreateDebuggingInterfaceFromVersion method in mscoree.dll. In v4, you must first create IClrMetaHost, enumerate the runtimes, get an ICLRRuntimeInfo interface to the .NET 4 runtime from that, and use the GetInterface method in mscoree.dll to return a "legacy" ICorDebug interface. The rest of the MDbg Engine will continue working the old way. Here is how I had changed the MDbg Engine code to support .NET v4:

        private void InitFromVersion(string debuggerVersion)
        {
            if( debuggerVersion.StartsWith("v1") )
            {
                throw new ArgumentException( "Can't debug a version 1 CLR process (\"" + debuggerVersion + "\"). Run application in a version 2 CLR, or use a version 1 debugger instead." );
            }

            ICorDebug rawDebuggingAPI = null;
            if (debuggerVersion.StartsWith("v4"))
            {
                Guid CLSID_MetaHost = new Guid("9280188D-0E8E-4867-B30C-7FA83884E8DE");
                Guid IID_MetaHost = new Guid("D332DB9E-B9B3-4125-8207-A14884F53216");
                ICLRMetaHost metahost = (ICLRMetaHost)NativeMethods.ClrCreateInterface(CLSID_MetaHost, IID_MetaHost);
                IEnumUnknown runtimes = metahost.EnumerateInstalledRuntimes();
                ICLRRuntimeInfo runtime = GetRuntime(runtimes, debuggerVersion);

                // Defined in metahost.h
                Guid CLSID_CLRDebuggingLegacy = new Guid(0xDF8395B5, 0xA4BA, 0x450b, 0xA7, 0x7C, 0xA9, 0xA4, 0x77, 0x62, 0xC5, 0x20);
                Guid IID_ICorDebug = new Guid("3D6F5F61-7538-11D3-8D5B-00104B35E7EF");
                Object res;
                runtime.GetInterface(ref CLSID_CLRDebuggingLegacy, ref IID_ICorDebug, out res);
                rawDebuggingAPI = (ICorDebug)res;
            }
            else
                rawDebuggingAPI = NativeMethods.CreateDebuggingInterfaceFromVersion((int)CorDebuggerVersion.Whidbey, debuggerVersion);

            if (rawDebuggingAPI != null)
                InitFromICorDebug(rawDebuggingAPI);
            else
                throw new ArgumentException("Support for debugging version " + debuggerVersion + " is not yet implemented");
        }

    The changes above will ensure that the debugger can support .NET Framework v2 and v4 applications with the same codebase, but we do compile two different applications: one targeting v2 and the other v4. As a footnote I need to add that some missing native methods and wrappers, along with the EnumerateRuntimes method code, came from the Mindbg project on Codeplex.

    Another change is that when using the MDbgEngine.CreateProcess to launch a process in the debugger, do not supply a null as the final argument. This does not work any more because GetCORVersion always returns "v2.0.50727" as the function has been deprecated in .NET v4. What's worse is that on a system with only .NET 4, the user will be prompted to download and install .NET v2! Not nice! This works much better:

        proc = m_Debugger.CreateProcess(ProcessName, ProcessArgs, DebugModeFlag.Default,
            String.Format("v{0}.{1}.{2}",
                System.Environment.Version.Major,
                System.Environment.Version.Minor,
                System.Environment.Version.Build));

    Microsoft "unofficially" plan on updating the MDbg samples soon, but if you have an MDbg-based application, you can get it working right now by changing one method a bit and adding a few new interfaces (ICLRMetaHost, IEnumUnknown, and ICLRRuntimeInfo). The new, non-legacy implementation of the MDbg Engine will add new, interesting features like dump-file support and by association I assume garbage-collection/managed object stats, so it will be well worth looking into if you want to extend the functionality of a managed debugger going forward.

    Read the article

  • SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

    - by Pinal Dave
    At conferences and at speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked. The question is, “Why is SQL Server using up all the memory and not releasing it even when idle?” Well, the answer can be long, and with the release of SQL Server 2014 this got even more complicated. SQL Server 2014 introduces the In-Memory OLTP option, which is a completely new concept, and our dependency on memory has increased manifold. In reality, not much changes, but we additionally have memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get the hang of all the innovations and the new features introduced in the next version. So today’s blog is around the report added to SSMS which gives a high-level view of this new feature. This report is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio would not show the report in the list. If we try to launch the report on a database which does not have an In-Memory filegroup defined, then we see a message in the report. To demonstrate, I have created a fresh database called MemoryOptimizedDB with no special filegroup. Here is the query used to identify whether a database has a memory-optimized filegroup or not:
SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'
Once we add the filegroup using the command below, we see a different version of the report.
USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA
GO
The report is still empty because we have not defined any memory-optimized table in the database. Total allocated size is shown as 0 MB. Now, let’s add the folder location into the filegroup and also create a few in-memory tables. We have used the nomenclature of IMO to denote “InMemory Optimized” objects.
USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB] ADD FILE
( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO')
TO FILEGROUP [IMO_FG]
GO
You may have to change the path based on your SQL Server configuration. Below is the script to create the table.
USE MemoryOptimizedDB
GO
--Drop table if it already exists.
IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL
DROP TABLE dbo.SQLAuthority
GO
CREATE TABLE dbo.SQLAuthority
(
ID INT IDENTITY NOT NULL,
Name CHAR(500) COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal',
CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID),
INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
As soon as the above script is executed, both the table and the index are created. If we run the report again, we would see something like below. Notice that table memory is zero but the index is using memory. This is due to the fact that a hash index needs memory to manage the buckets created, so even if the table is empty, the index consumes memory. More about the internals of how in-memory indexes and tables work will be reserved for future posts. Now, use the script below to populate the table with 10,000 rows:
INSERT INTO SQLAuthority VALUES (DEFAULT)
GO 10000
Here is the same report after inserting 10,000 rows into our in-memory table. There are three sections in the whole report:
Total memory consumed by in-memory objects
Pie chart showing memory distribution based on type of consumer – table, index and system
Details of memory usage by each table
The information about all three is taken from one single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system in-memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query (a slightly fuller per-table rollup is sketched at the end of this entry):
USE MemoryOptimizedDB
GO
SELECT OBJECT_NAME(OBJECT_ID), *
FROM sys.dm_db_xtp_table_memory_stats
WHERE OBJECT_ID > 0
GO
This report helps the DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer when designing a solution. I am sure in the future we will discuss the whole concept of in-memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP Series at Balmukund’s Blog. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Memory, SQL Reports
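Following up on the DMV query above, and stepping outside the original post for a moment, here is a hedged sketch of a per-table rollup over the same DMV. The column names are the ones documented for sys.dm_db_xtp_table_memory_stats in SQL Server 2014, but verify them against your build before relying on the output:
USE MemoryOptimizedDB
GO
-- Approximate per-table view of memory consumed by in-memory tables and their indexes.
-- User objects only (system tables in this DMV carry negative object IDs).
SELECT
    OBJECT_NAME(tms.object_id)                      AS [Table],
    tms.memory_allocated_for_table_kb / 1024.0      AS TableAllocatedMB,
    tms.memory_used_by_table_kb / 1024.0            AS TableUsedMB,
    tms.memory_allocated_for_indexes_kb / 1024.0    AS IndexAllocatedMB,
    tms.memory_used_by_indexes_kb / 1024.0          AS IndexUsedMB
FROM sys.dm_db_xtp_table_memory_stats AS tms
WHERE tms.object_id > 0
ORDER BY (tms.memory_allocated_for_table_kb + tms.memory_allocated_for_indexes_kb) DESC;
GO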

    Read the article

  • Can't install wine (or ia32-libs) in Ubuntu 12.10 64 bit

    - by carestad
    As already pointed out here, people seem to have issues with installing wine in the latest version of Ubuntu. I suspect this only happens for 64-bit users. For example, when trying to install wine, wine1.4, wine1.4:i386, wine1.5, wine1.5:i386, ia32-libs or ia32-libs:i386 with apt-get, I get a lot of dependency errors. Doing a sudo apt-get -f install doesn't seem to do the trick, and neither does using aptitude. The errors I get are normally that the packages depend on some :i386 package, but installing those manually doesn't work either because they also have dependency issues (isn't APT supposed to do this automatically?!). I also downloaded CrossOver today and tried installing the .deb manually, but the dependency issues show up there as well. When running sudo apt-get -f install after trying to install the CrossOver .deb, apt-get wants to purge the following packages: ia32-crossover intel-gpu-tools libdrm-nouveau2 libgl1-mesa-dri libva-x11-1 ubuntu-desktop vlc xorg xserver-xorg-video-ati xserver-xorg-video-intel xserver-xorg-video-modesetting xserver-xorg-video-openchrome xserver-xorg-video-radeon xserver-xorg-video-vmware
What I've tried so far (and didn't work): Installing synaptic, reloading my repositories, searching for ia32 and installing ia32-libs. Using Ubuntu Software Center to install Wine and ia32-libs. Using apt-get and aptitude to install all the different varieties of the wine packages, both with and without the :i386 and -amd64 suffixes in package names. Disabling the universe and multiverse repos, running a sudo apt-get update and then re-enabling them again. Booting a newly downloaded Ubuntu 12.10 x64 live USB and trying to install all the different packages there.
What I haven't tried (yet): Booting a newly downloaded Ubuntu 12.10 x32 image and trying to install wine there (I'm just guessing that will work). Reinstalling Ubuntu. Throwing my computer out a window.
wine
alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
The following packages have unmet dependencies:
wine : Depends: wine1.5 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
wine-1.4
alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine1.4
(...)
The following packages have unmet dependencies:
wine1.4 : Depends: wine1.4-i386 (= 1.4.1-0ubuntu1)
E: Unable to correct problems, you have held broken packages.
wine-1.4:i386
alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine1.4:i386
(...)
The following packages have unmet dependencies: libaudio2:i386 : Depends: libxt6:i386 but it is not going to be installed libqtgui4:i386 : Depends: libsm6:i386 but it is not going to be installed libunity-webapps0 : Depends: unity-webapps-service but it is not going to be installed openssh-client : Depends: adduser (>= 3.10) but it is not going to be installed Depends: passwd ssh : Depends: openssh-server wine1.4:i386 : Depends: wine1.4-i386:i386 (= 1.4.1-0ubuntu1) Depends: binfmt-support:i386 (>= 1.1.2) Depends: procps:i386 Recommends: cups-bsd:i386 Recommends: gnome-exe-thumbnailer:i386 but it is not installable or kde-runtime:i386 but it is not going to be installed Recommends: ttf-droid:i386 but it is not installable Recommends: ttf-liberation:i386 but it is not installable Recommends: ttf-mscorefonts-installer:i386 Recommends: ttf-umefont:i386 but it is not installable Recommends: ttf-unfonts-core:i386 but it is not installable Recommends: ttf-wqy-microhei:i386 but it is not installable Recommends: winbind:i386 Recommends: winetricks:i386 but it is not going to be installed Recommends: xdg-utils:i386 but it is not installable E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. wine-1.5 alexander@cosmo:~$ sudo apt-get install wine1.5 (...) The following packages have unmet dependencies: wine1.5 : Depends: wine1.5-i386 (= 1.5.16-0ubuntu1) E: Unable to correct problems, you have held broken packages. wine-1.5:i386 alexander@cosmo:~$ sudo apt-get install wine1.5:i386 (...) The following packages have unmet dependencies: libaudio2:i386 : Depends: libxt6:i386 but it is not going to be installed libqtgui4:i386 : Depends: libsm6:i386 but it is not going to be installed libunity-webapps0 : Depends: unity-webapps-service but it is not going to be installed openssh-client : Depends: adduser (>= 3.10) but it is not going to be installed Depends: passwd ssh : Depends: openssh-server wine1.5:i386 : Depends: wine1.5-i386:i386 (= 1.5.16-0ubuntu1) but it is not going to be installed Depends: binfmt-support:i386 (>= 1.1.2) Depends: procps:i386 Recommends: cups-bsd:i386 Recommends: gnome-exe-thumbnailer:i386 but it is not installable or kde-runtime:i386 but it is not going to be installed Recommends: ttf-droid:i386 but it is not installable Recommends: ttf-liberation:i386 but it is not installable Recommends: ttf-mscorefonts-installer:i386 Recommends: ttf-umefont:i386 but it is not installable Recommends: ttf-unfonts-core:i386 but it is not installable Recommends: ttf-wqy-microhei:i386 but it is not installable Recommends: winbind:i386 Recommends: winetricks:i386 but it is not going to be installed Recommends: xdg-utils:i386 but it is not installable E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. ia32-libs alexander@cosmo:~$ sudo apt-get install ia32-libs (...) The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-multiarch E: Unable to correct problems, you have held broken packages. ia32-libs:i386 alexander@cosmo:~$ sudo apt-get install ia32-libs:i386 (...) Package ia32-libs:i386 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source However the following packages replace it: lib32z1 lib32ncurses5 lib32bz2-1.0 lib32asound2 E: Package 'ia32-libs:i386' has no installation candidate

    Read the article

  • Configure Oracle SOA JMSAdatper to Work with WLS JMS Topics

    - by fip
    WebLogic JMS Topics typically run in a WLS cluster, and so do the SOA composites that receive these Topic messages. In some situations the two clusters are the same, while in others they are separate. The composites in the SOA cluster are subscribers to the JMS Topic in the WebLogic cluster. As a JMS Topic by nature distributes the same copy of a message to all of its subscribers, two questions arise immediately when it comes to load balancing the JMS Topic messages against the SOA composites: How to assure all of the SOA cluster members receive different messages instead of the same (duplicate) messages, even though the SOA cluster members are all subscribers to the Topic? How to make sure the messages are evenly distributed (load balanced) to SOA cluster members? Here we will walk through how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite, so that the JMS Topic messages will be evenly distributed to the same composite running off different SOA cluster nodes without causing duplication.
2. The typical configuration
In this typical configuration, we achieve the load balancing of JMS Topic messages to JmsAdapters by configuring a partitioned distributed topic along with sharable subscriptions. You can reference the documentation for an explanation of PDT, and this blog posting does a very good job of visually explaining how this combination of configurations achieves message load balancing among clients of JMS Topics. Our job is to apply this configuration in the context of SOA JMS Adapters. To do so involves the following steps:
Step A. Configure the JMS Topic to be UDD and PDT, at the WebLogic cluster that houses the JMS Topic
Step B. Configure the JCA Connection Factory with proper ServerProperties at the SOA cluster
Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file)
Here are more details of each step:
Step A. Configure the JMS Topic to be UDD and PDT. You do this at the WebLogic cluster that houses the JMS Topic. You can follow the instructions at Administration Console Online Help to create a Uniform Distributed Topic. If you use the WebLogic Console, then at the same administration screen you can specify "Distribution Type" to be "Uniform", and the Forwarding Policy to be "Partitioned", which makes the JMS Topic a Uniform Distributed Destination and a Partitioned Distributed Topic, respectively.
Step B. Configure ServerProperties of the JCA Connection Factory. You do this step at the SOA cluster. This step makes the JmsAdapter that connects to the JMS Topic through this JCA Connection Factory a certain type of "client". When you configure the JCA Connection Factory for the JmsAdapter, you define the list of properties in the FactoryProperties field, as a semicolon-separated list:
ClientID=myClient;ClientIDPolicy=UNRESTRICTED;SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false
You can refer to Chapter 8.4.10 Accessing Distributed Destinations (Queues and Topics) on the WebLogic Server JMS of the Adapter User Guide for the meaning of these properties. Please note: except for ClientID, the other properties, such as ClientIDPolicy=UNRESTRICTED, SubscriptionSharingPolicy=SHARABLE and TopicMessageDistributionAll=false, are all default settings for the JmsAdapter's connection factory, therefore you do NOT have to specify them explicitly. All you need to do is specify the ClientID.
The ClientID is different from the subscriber ID that we will discuss in later steps. To keep it simple, just remember that you need to specify the client ID and make it unique per connection factory. Here is the example setting:
Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file). In the following example, the value 'MySubscriberID-1' is given as the value of the property 'DurableSubscriber':
<adapter-config name="subscribe" adapter="JMS Adapter" wsdlLocation="subscribe.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/wls/MyTestUDDTopic" UIJmsProvider="WLSJMS" UIConnectionName="ateam-hq24b"/>
  <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message">
    <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
      <property name="DurableSubscriber" value="MySubscriberID-1"/>
      <property name="PayloadType" value="TextMessage"/>
      <property name="UseMessageListener" value="false"/>
      <property name="DestinationName" value="jms/MyTestUDDTopic"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>
You can set the durable subscriber name either in the composite's JmsAdapter wizard, or by directly editing the JmsAdapter's *.jca file within the composite project.
3. The "atypical" configurations
For some systems, there may be restrictions that do not allow the aforementioned "typical" configuration to be applied. For example, some deployments may be required to configure the JMS Topic to be a Replicated Distributed Topic rather than a Partitioned Distributed Topic. We would like to discuss those scenarios here:
Configuration A: The JMS Topic is NOT PDT. In this case, you need to define the message selector 'NOT JMS_WL_DDForwarded' in the adapter's *.jca file, to filter out those "replicated" messages (a sketch of this appears at the end of this entry).
Configuration B: The ClientIDPolicy=RESTRICTED. In this case, you need separate factories for different composites. More accurately, you need separate factories for different *.jca files of the JmsAdapter.
References:
Managing Durable Subscription
WebLogic JMS Partitioned Distributed Topics and Shared Subscriptions
JMS Troubleshooting: Configuring JMS Message Logging
Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API
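To make Configuration A concrete, here is a hedged sketch of how that selector might be expressed in the same activation spec shown above. The property name MessageSelector is the one the JMS Adapter commonly uses for JMS selectors, but treat it as an assumption and verify it against your adapter version; the other values are carried over from the earlier example.
<activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
  <property name="DurableSubscriber" value="MySubscriberID-1"/>
  <!-- Filter out copies forwarded between members of a replicated distributed topic -->
  <property name="MessageSelector" value="NOT JMS_WL_DDForwarded"/>
  <property name="PayloadType" value="TextMessage"/>
  <property name="UseMessageListener" value="false"/>
  <property name="DestinationName" value="jms/MyTestUDDTopic"/>
</activation-spec>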

    Read the article

  • Running Jetty under Windows Azure Using RoleEntryPoint in a Worker Role

    - by Shawn Cicoria
    This post is built upon the work of Mario Kosmiskas and David C. Chou’s prior postings – from here: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx  http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx As Mario points out in his post, when you need to have more control over the process that starts, it is generally better left to a RoleEntryPoint capability, which as of now requires the use of a CLR-based assembly that is deployed as part of the package to Azure. There were things I especially liked about Mario’s post – specifically, the ability to pull down the JRE and Jetty runtimes at role startup and instantiate the process using the extracted bits.  The way Mario initialized the java process (and Jetty) was to take advantage of a role startup task configured as part of the service definition.  This is a great quick way to kick off processes or tasks prior to your role entry point.  However, if you need access to service configuration values or role events, that’s where RoleEntryPoint comes in.  For this PoC sample I moved the logic for retrieving the bits for the jre and jetty to the worker role's OnStart – in addition to moving the process kickoff to the OnStart method.  The Run method at this point is there to loop and just report the status of the java process. Beyond just making things more parameterized, both Mario’s and David’s articles still form the essence of the approach. The solution that accompanies this post provides all the necessary .NET-based Visual Studio project files.  In addition, you’ll need: 1. Jetty 7 runtime http://www.eclipse.org/jetty/downloads.php 2. JRE http://www.oracle.com/technetwork/java/javase/downloads/index.html Once you have these, the first step is to create archives (zips) of the distributions.  For this PoC, the structure of the archive requires that the root of the archive looks as follows: JRE6.zip jetty---.zip Upload the contents to a storage container (block blob); for this example I used /archives as the location.  The service configuration has several settings that provide these things via native configuration support from Azure in a worker role – which is the advantage of using RoleEntryPoint. Storage Explorer You can use development storage for testing this out – the zipped version of the solution is configured for development storage.  When you’re ready to deploy, you update the two settings – one for diagnostics and the other for the storage container where the /archives are going to be stored.
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="HostedJetty" osFamily="2" osVersion="*">
  <Role name="JettyWorker">
    <Instances count="1" />
    <ConfigurationSettings>
      <!--<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />-->
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="JettyArchive" value="jetty-distribution-7.3.0.v20110203b.zip" />
      <Setting name="StartRole" value="true" />
      <Setting name="BlobContainer" value="archives" />
      <Setting name="JreArchive" value="jre6.zip" />
      <!--<Setting name="StorageCredentials" value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>"/>-->
      <Setting name="StorageCredentials" value="UseDevelopmentStorage=true" />
For interacting with Storage you can use several tools – one tool that I like is from the Windows Azure CAT team, located here: http://appfabriccat.com/2011/02/exploring-windows-azure-storage-apis-by-building-a-storage-explorer-application/ and shown in the prior picture. At runtime, during role initialization and startup, Azure will call into your RoleEntryPoint.  At that time the code will do a dynamic pull of the 2 archives and extract them – using the Sharp Zip Lib <link> as Mario had demonstrated in his sample.  The only difference here is the use of CLR code vs. PowerShell (which is really CLR, but that’s another discussion). At this point, once the 2 zips are extracted, the Role’s file system looks as follows: Worker Role approot From there, the OnStart method (which also does the download and unzip using a simple StorageHelper class) kicks off the Java path and now you have Java! Task Manager Jetty Sample Page A couple of things I’m working on to enhance this: extracting the jre and jetty bits not to the appRoot but to a resource location defined as part of the service definition.
ServiceDefinition.csdef
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="HostedJetty" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="JettyWorker">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
    <Endpoints>
      <InputEndpoint name="JettyPort" protocol="tcp" port="80" localPort="8080" />
    </Endpoints>
    <LocalResources>
      <LocalStorage name="Archives" cleanOnRoleRecycle="false" sizeInMB="100" />
    </LocalResources>
As the concept matures a bit, being able to dynamically update the content or jar files as part of a running java solution is something that is possible through continued enhancement of this simple model. The Visual Studio 2010 Solution is located here: HostingJavaSln_NDA.zip
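For orientation, here is a hedged sketch of what the OnStart flow described above can look like. This is not the post's actual StorageHelper; it is an illustration written against the Azure SDK 1.x storage client (Microsoft.WindowsAzure.StorageClient) and SharpZipLib, and the java/jetty folder names, the DownloadAndExtract helper and the setting keys (taken from the configuration above where possible) are assumptions for the example.
using System;
using System.Diagnostics;
using System.IO;
using ICSharpCode.SharpZipLib.Zip;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class JettyWorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Settings come from the ServiceConfiguration shown earlier
        string container = RoleEnvironment.GetConfigurationSettingValue("BlobContainer");
        string jreArchive = RoleEnvironment.GetConfigurationSettingValue("JreArchive");
        string jettyArchive = RoleEnvironment.GetConfigurationSettingValue("JettyArchive");
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageCredentials"));

        // Extract next to the role binaries (approot); a LocalResource would also work
        string root = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot") + @"\", "approot");
        DownloadAndExtract(account, container, jreArchive, root);
        DownloadAndExtract(account, container, jettyArchive, root);

        // Kick off Jetty under the extracted JRE; folder names are illustrative only
        var psi = new ProcessStartInfo
        {
            FileName = Path.Combine(root, @"jre6\bin\java.exe"),
            Arguments = "-jar start.jar",
            WorkingDirectory = Path.Combine(root, "jetty"),
            UseShellExecute = false
        };
        Process.Start(psi);

        return base.OnStart();
    }

    private static void DownloadAndExtract(CloudStorageAccount account, string container,
                                           string archiveName, string targetDir)
    {
        var blob = account.CreateCloudBlobClient()
                          .GetContainerReference(container)
                          .GetBlobReference(archiveName);
        string zipPath = Path.Combine(Path.GetTempPath(), archiveName);
        blob.DownloadToFile(zipPath);                        // SDK 1.x StorageClient API
        new FastZip().ExtractZip(zipPath, targetDir, null);  // SharpZipLib, no file filter
    }
}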

    Read the article

  • NET Math Libraries

    - by JoshReuben
    .NET Mathematical Libraries
.NET Builder for Matlab - The MathWorks Inc. - http://www.mathworks.com/products/netbuilder/
MATLAB Builder NE generates MATLAB-based .NET and COM components with royalty-free deployment. It creates the components by encrypting MATLAB functions and generating either a .NET or COM wrapper around them.
.NET/Link for Mathematica - www.wolfram.com
A product that 2-way integrates Mathematica and Microsoft's .NET platform. Call .NET from Mathematica - use arbitrary .NET types directly from the Mathematica language. Use and control the Mathematica kernel from a .NET program. Turns Mathematica into a scripting shell to leverage the computational services of Mathematica. Write custom front ends for Mathematica or use Mathematica as a computational engine for another program. Comes with full source code. Leverages MathLink - Wolfram Research's protocol for sending data and commands back and forth between Mathematica and other programs. .NET/Link abstracts the low-level details of the MathLink C API.
Extreme Optimization - http://www.extremeoptimization.com/
A collection of general-purpose mathematical and statistical classes built for the .NET framework. It combines a math library, a vector and matrix library, and a statistics library in one package. Download the trial of version 4.0 to try it out. Multi-core ready - full support for Task Parallel Library features including cancellation. Broad base of algorithms covering a wide range of numerical techniques, including: linear algebra (BLAS and LAPACK routines), numerical analysis (integration and differentiation), equation solvers.
Mathematics: leverages parallelism using .NET 4.0's Task Parallel Library. Basic math: complex numbers, 'special functions' like Gamma and Bessel functions, numerical differentiation. Solving equations: solve equations in one variable, or solve systems of linear or nonlinear equations. Curve fitting: linear and nonlinear curve fitting, cubic splines, polynomials, orthogonal polynomials. Optimization: find the minimum or maximum of a function in one or more variables, linear programming and mixed integer programming. Numerical integration: compute integrals over finite or infinite intervals, over 2D and higher dimensional regions. Integrate systems of ordinary differential equations (ODEs). Fast Fourier Transforms: 1D and 2D FFTs using managed or fast native code (32 and 64 bit). BigInteger, BigRational, and BigFloat: perform operations with arbitrary precision.
Vector and Matrix Library: real and complex vectors and matrices. Single and double precision for elements. Structured matrix types, including triangular, symmetrical and band matrices. Sparse matrices. Matrix factorizations: LU decomposition, QR decomposition, singular value decomposition, Cholesky decomposition, eigenvalue decomposition. Portability and performance: calculations can be done in 100% managed code, or in hand-optimized processor-specific native code (32 and 64 bit).
Statistics: data manipulation - sort and filter data, process missing values, remove outliers, etc.; supports .NET data binding. Statistical models: simple, multiple, nonlinear, logistic, Poisson regression; Generalized Linear Models; one and two-way ANOVA. Hypothesis tests: 14 hypothesis tests, including the z-test, t-test, F-test, runs test, and more advanced tests, such as the Anderson-Darling test for normality, one and two-sample Kolmogorov-Smirnov test, and Levene's test for homogeneity of variances. Multivariate statistics: K-means cluster analysis, hierarchical cluster analysis, principal component analysis (PCA), multivariate probability distributions. Statistical distributions: 29 continuous and discrete statistical distributions, including uniform, Poisson, normal, lognormal, Weibull and Gumbel (extreme value) distributions. Random numbers: random variates from any distribution, 4 high-quality random number generators, low discrepancy sequences, shufflers.
New in version 4.0 (November 2010): support for .NET Framework Version 4.0 and Visual Studio 2010; TPL parallelized - multicore ready; sparse linear program solver - can solve problems with more than 1 million variables; mixed integer linear programming using a branch and bound algorithm; special functions: hypergeometric, Riemann zeta, elliptic integrals, Fresnel functions, Dawson's integral; full set of window functions for FFTs.
Product                                                       Price   Update subscription
Single Developer License                                      $999    $399
Team License (3 developers)                                   $1999   $799
Department License (8 developers)                             $3999   $1599
Site License (unlimited developers in one physical location)  $7999   $3199
NMath - http://www.centerspace.net
.NET math and statistics libraries: matrix and vector classes, random number generators, Fast Fourier Transforms (FFTs), numerical integration, linear programming, linear regression, curve and surface fitting, optimization, hypothesis tests, analysis of variance (ANOVA), probability distributions, principal component analysis, cluster analysis. Built on the Intel Math Kernel Library (MKL), which contains highly-optimized, extensively-threaded versions of BLAS (Basic Linear Algebra Subroutines) and LAPACK (Linear Algebra PACKage).
Product                       Price   Update subscription
Single Developer License      $1295   $388
Team License (5 developers)   $5180   $1554
DotNumerics - http://www.dotnumerics.com/NumericalLibraries/Default.aspx - free
DotNumerics is a website dedicated to numerical computing for .NET that includes a C# Numerical Library for .NET containing algorithms for Linear Algebra, Differential Equations and Optimization problems. The Linear Algebra library includes CSLapack, CSBlas and CSEispack, ports from Fortran to C# of LAPACK, BLAS and EISPACK, respectively. Linear Algebra (CSLapack, CSBlas and CSEispack): systems of linear equations, eigenvalue problems, least-squares solutions of linear systems and singular value problems. Differential Equations: initial-value problems for nonstiff and stiff ordinary differential equations (ODEs) (explicit Runge-Kutta, implicit Runge-Kutta, Gear's BDF and Adams-Moulton). Optimization: unconstrained and bounded constrained optimization of multivariate functions (L-BFGS-B, Truncated Newton and Simplex methods).
Math.NET Numerics - http://numerics.mathdotnet.com/ - free
An open source numerical library that includes special functions, linear algebra, probability models, random numbers, interpolation, and integral transforms. A merger of dnAnalytics with Math.NET Iridium; in addition to a purely managed implementation it will also support native hardware optimization. Constants and special functions; complex type support; real and complex, dense and sparse linear algebra (with LU, QR, eigenvalue, and other decompositions); non-uniform probability distributions, multivariate distributions, sample generation; alternative uniform random number generators; descriptive statistics, including order statistics; various interpolation methods, including barycentric approaches and splines; numerical function integration (quadrature) routines; integral transforms, like the Fourier transform (FFT) with arbitrary length support, and Hartley; spectral-space aware sequence manipulation (signal processing); combinatorics, polynomials, quaternions, basic number theory; parallelized where appropriate, to leverage multi-core and multi-processor systems; fully managed or (if available) using native libraries (Intel MKL, ACMS, CUDA, FFTW); provides a native facade for F# developers.
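To give a feel for the sort of API these libraries expose, here is a hedged sketch using the free Math.NET Numerics package listed above. The namespaces and members shown (Normal, DescriptiveStatistics, CumulativeDistribution) are the documented ones, but they have shifted between versions, so treat the exact signatures as assumptions against a recent NuGet package rather than a definitive reference.
using System;
using System.Linq;
using MathNet.Numerics.Distributions;
using MathNet.Numerics.Statistics;

class MathNetSketch
{
    static void Main()
    {
        // Draw samples from a non-uniform distribution (normal, mean 10, stddev 2)
        var normal = new Normal(10.0, 2.0);
        double[] samples = normal.Samples().Take(10000).ToArray();

        // Descriptive statistics over the sample
        var stats = new DescriptiveStatistics(samples);
        Console.WriteLine("Mean: {0:F3}  StdDev: {1:F3}", stats.Mean, stats.StandardDeviation);

        // Cumulative probability from the same distribution object
        Console.WriteLine("P(X <= 12) = {0:F3}", normal.CumulativeDistribution(12.0));
    }
}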

    Read the article

  • Partition Wise Joins

    - by jean-pierre.dijcks
    Some say they are the holy grail of parallel computing: PWJ is the basis of a shared nothing system and the only join method that is available on a shared nothing system (yes, this is oversimplified!). The magic in Oracle is of course that it is one of many ways to join data. And yes, this is the old flexibility vs. simplicity discussion all over, so I won't go there... the point is that what you must do in a shared nothing system, you can do in Oracle with the same speed and methods.
The Theory
A partition wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In shared nothing this is effectively hard partitioning, locating data on a specific node / storage combo. In Oracle it is logical partitioning. If you now join the two tables on that partitioned column you can break up the join into smaller joins exactly along the partitions in the data. Since they are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else... in short, the optimal join method for parallel processing of two large data sets.
PWJs in Oracle
Since we do not hard partition the data across nodes in Oracle, we use the Partitioning option to the database to create the buckets, then set the Degree of Parallelism (or run Auto DOP - see here) and get our PWJs. The main questions always asked are: How many partitions should I create? What should my DOP be? In a shared nothing system the answer is of course: as many partitions as there are nodes, which will be your DOP. In Oracle we do want you to look at the workload and concurrency, and once you know that, to apply the following rules of thumb. Within Oracle we have more ways of joining data, so it is important to understand some of the PWJ ideas and what it means if you have an uneven distribution across processes. Assume we have a simple scenario where we partition the data on a hash key resulting in 4 hash partitions (H1 - H4). We have 2 parallel processes that have been tasked with reading these partitions (P1 - P2). The work is evenly divided assuming the partitions are the same size, and we can scan this in time t1 as shown below. Now assume that we have changed the system and have a 5th partition but still have our 2 workers P1 and P2. The time it takes is actually 50% more, assuming the 5th partition has the same size as the original H1 - H4 partitions. In other words, to scan these 5 partitions the time t2 it takes is not 1/5th more expensive, it is a lot more expensive, and some other join plans may now start to look exciting to the optimizer. Just to post the disclaimer, it is not as simple as I state it here, but you get the idea of how much more expensive this plan may now look... Based on this little example there are a few rules of thumb to follow to get partition wise joins (a short DDL sketch follows the worked example below). First, choose a DOP that is a power of two (2). So always choose something like 2, 4, 8, 16, 32 and so on... Second, choose a number of partitions that is larger than or equal to 2 * DOP. Third, make sure the number of partitions is divisible by 2 without orphans. This is also known as an even number... Fourth, choose a stable partition count strategy, which is typically hash, which can be a sub-partitioning strategy rather than the main strategy (range - hash is a popular one).
Fifth, make sure you do this on the join key between the two large tables you want to join (and this should be the obvious one...). Translating this into an example: DOP = 8 (determined based on concurrency or by using Auto DOP with a cap due to concurrency) says that the number of partitions >= 16. Number of hash (sub) partitions = 32, which gives each process four partitions to work on. This number is somewhat arbitrary and depends on your data and system. In this case my main reasoning is that if you get more room on the box you can easily move the DOP for the query to 16 without repartitioning... and of course it makes for no leftovers on the table... And yes, we recommend up-to-date statistics. And before you start complaining, do read this post on a cool way to do stats in 11.
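As a concrete illustration of that worked example (not taken from the original post), here is a hedged sketch of the DDL and a query shape that sets up a full partition wise join: two tables hash partitioned on the join key into 32 partitions with a DOP of 8. Table and column names are invented for the example.
-- Both tables hash partitioned on the join key (cust_id) into 32 partitions, DOP 8
CREATE TABLE customers
( cust_id   NUMBER
, name      VARCHAR2(100)
)
PARTITION BY HASH (cust_id) PARTITIONS 32
PARALLEL 8;

CREATE TABLE sales
( sale_id   NUMBER
, cust_id   NUMBER
, amount    NUMBER
)
PARTITION BY HASH (cust_id) PARTITIONS 32
PARALLEL 8;

-- Joining on the common hash key lets each parallel process work on
-- matching partition pairs rather than redistributing rows
SELECT /*+ PARALLEL(8) */ c.name, SUM(s.amount)
FROM   sales s
JOIN   customers c ON c.cust_id = s.cust_id
GROUP  BY c.name;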

    Read the article

  • World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

    - by Brian
    Oracle produced a World Record batch throughput for single-system results on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2. The workload includes both online and batch components. The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne Long and Short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute). In order to obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were each executed in separate Oracle Solaris Containers, which enabled optimal distribution of system resources and performance together with scalable and manageable virtualization. One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2 utilized only 55% of the available CPU power. The Oracle DB server in a Shared Server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance. This configuration with the SPARC T4-2 server achieved 33% more Users/core, 47% more UBEs/min and 78% more Users/rack unit than the IBM Power 770 server. The SPARC T4-2 server with 2 processors ran the JD Edwards "Day-in-the-Life" benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server with twice as many processors supported only 12,000 concurrent online users while concurrently executing mixed batch workloads at only 65 UBEs per minute. This benchmark demonstrates more than 2x cost savings by consolidating the complete solution in a single SPARC T4-2 server, compared to earlier published results of 10,000 users and 67 UBEs per minute on two servers, a SPARC T4-2 and a SPARC T4-1. The Oracle DB server used mirrored (RAID 1) volumes for the database, providing high availability for the data without impacting performance.
Performance Landscape
JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark, Consolidated Online with Batch Workload
System                                          Rack Units   Batch Rate (UBEs/m)   Online Users   Users/Unit   Users/Core   Version
SPARC T4-2 (2 x SPARC T4, 2.85 GHz)             3            95.5                  8,000          2,667        500          9.0.2
IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores)   8            65                    12,000         1,500        375          9.0.2
Batch Rate (UBEs/m) — batch transaction rate in UBEs per minute
Configuration Summary
Hardware configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 4 x 300 GB 10K RPM SAS internal disks; 2 x 300 GB internal SSDs; 2 x Sun Storage F5100 Flash Arrays.
Software configuration: Oracle Solaris 10; Oracle Solaris Containers; JD Edwards EnterpriseOne 9.0.2; JD Edwards EnterpriseOne Tools (8.98.4.2); Oracle WebLogic Server 11g (10.3.4); Oracle HTTP Server 11g; Oracle Database 11g Release 2 (11.2.0.1).
Benchmark Description
JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.
Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE (Universal Batch Engine) workload of 61 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE workload runs from the JD Edwards Enterprise Application server. Oracle's UBE processes come in three flavors: Short UBEs (< 1 minute) engage in Business Report and Summary Analysis; Mid UBEs (> 1 minute) create a large report of Account, Balance, and Full Address; Long UBEs (> 2 minutes) simulate Payroll, Sales Order, night-only jobs. The UBE workload generates large numbers of PDF reports and log files. The UBE queues are categorized as QBATCHD, a single-threaded queue for large and medium UBEs, and the QPROCESS queue for short UBEs run concurrently. Oracle's UBE process performance metric is the number of maximum concurrent UBE processes at a transaction rate, in UBEs/minute.
Key Points and Best Practices
Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Servers 11g Release 1 coupled with two Oracle Web Tier HTTP server instances, and one Oracle Database 11g Release 2 database on a single SPARC T4-2 server were hosted in separate Oracle Solaris Containers bound to four processor sets, to demonstrate consolidation of multiple applications, web servers and the database with optimal resource utilization. Interrupt fencing was configured on all Oracle Solaris Containers to channel the interrupts to processors other than the processor sets used for the JD Edwards Application server, Oracle WebLogic servers and the database server. An Oracle WebLogic vertical cluster was configured on each WebServer Container, with twelve managed instances each, to load balance users' requests and to provide the infrastructure that enables scaling to a high number of users with ease of deployment and high availability. The database log writer was run in the real-time RT class and bound to a processor set. The database redo logs were configured on raw disk partitions. The Oracle Solaris Container running the Enterprise Application server completed 61 Short UBEs and 4 Long UBEs concurrently as the mixed-size batch workload. The mixed-size UBEs ran concurrently from the Enterprise Application server with the 8,000 online users driven by LoadRunner. (A rough sketch of the Solaris commands behind these points appears at the end of this entry.)
See Also
SPARC T4-2 Server oracle.com OTN
JD Edwards EnterpriseOne oracle.com OTN
Oracle Solaris oracle.com OTN
Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN
Oracle Fusion Middleware oracle.com OTN
Disclosure Statement
Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.
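Returning to the Key Points above, here is a hedged sketch of the Oracle Solaris resource-management primitives they refer to: creating a processor set, fencing interrupts away from it, binding a process to it, and placing a process (such as the database log writer) in the real-time scheduling class. This is not taken from the benchmark disclosure; CPU IDs and PIDs are placeholders, and the exact commands used in the tested configuration may differ.

psrset -c 8 9 10 11              # create a processor set from CPUs 8-11; prints the new pset id (e.g. 1)
psrset -f 1                      # fence interrupts away from processor set 1
psrset -b 1 12345                # bind process 12345 (e.g. a container workload) to processor set 1
priocntl -s -c RT -i pid 23456   # move process 23456 (e.g. the DB log writer) into the RT class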

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >