Search Results

Search found 32567 results on 1303 pages for 'test manager'.


  • Interpreting and using the Asterisk "timing test" command

    - by zigg
    Timing is very important for certain kinds of applications in Asterisk. If DAHDI is the timing source, the dahdi_test command can be used to check the timing provided by the DAHDI kernel module. If dahdi_test returns exclusively measurements above 99.975%, the DAHDI timing source is generally considered good. Since Asterisk 1.6, new timing sources have become available, such as pthread and timerfd. The accuracy of these timing sources can apparently be measured with the Asterisk CLI timing test command:

        localhost*CLI> timing test
        Attempting to test a timer with 50 ticks per second.
        Using the 'timerfd' timing module for this test.
        It has been 1000 milliseconds, and we got 50 timer ticks

    My concern is that timing 50 ticks seems to be a considerably less stressful test than dahdi_test's 8192 samples in 8000 ms, particularly since just about every system I've tried it on, virtual or otherwise, can handle it. I can ask timing test to ramp it up to what I think are dahdi_test's standards:

        localhost*CLI> timing test 1024
        Attempting to test a timer with 1024 ticks per second.
        Using the 'timerfd' timing module for this test.
        It has been 1000 milliseconds, and we got 1024 timer ticks

    This does indeed break down a bit depending on the system I'm using, usually with a decrease in timer ticks. But I'm not sure whether it is useful to stress it to this level. Is there authoritative guidance on using and interpreting the timing test command to ensure that a given Asterisk system has a timing source that will work well?

    Read the article

  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all Nginx proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and insert sleeps at particular points: between listen and accept to test proxy_connect_timeout; between accept and read to test proxy_send_timeout; between read and send to test proxy_read_timeout. Test: 1) Server code (Python):

        import socket
        import os
        import time
        import threading

        def http_resp(conn):
            conn.send("HTTP/1.1 200 OK\r\n")
            conn.send("Content-Length: 0\r\n")
            conn.send("Content-Type: text/xml\r\n\r\n\r\n")

        def do(conn, addr):
            print 'Connected by', addr
            print 'Sleeping before reading data...'
            time.sleep(0)  # Set to test proxy_send_timeout
            data = conn.recv(1024)
            print 'Sleeping before sending data...'
            time.sleep(0)  # Set to test proxy_read_timeout
            http_resp(conn)
            print 'End of data stream, closing connection'
            conn.close()

        def main():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(('', int(os.environ['PORT'])))
            s.listen(1)
            print 'Sleeping before accept...'
            time.sleep(130)  # Set to test proxy_connect_timeout
            while 1:
                conn, addr = s.accept()
                t = threading.Thread(target=do, args=(conn, addr))
                t.start()

        if __name__ == "__main__":
            main()

    2) Nginx configuration: I have extended the default Nginx configuration by setting proxy_connect_timeout explicitly and adding a proxy_pass pointing to my local HTTP server:

        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200;
        }

    3) Observations: proxy_connect_timeout - even though it is set to 200s and I sleep only 130s between listen and accept, Nginx returns 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at such an early stage (before accept). I would expect 200 here. Please explain! proxy_send_timeout - I am not sure my approach to testing proxy_send_timeout is correct; I think I still do not understand this parameter. After all, a delay between accept and read does not trigger proxy_send_timeout. proxy_read_timeout - this one seems pretty straightforward: setting a delay between read and send does the job. So I guess my assumptions are wrong and I probably do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the above test (modifying it if required)?
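    For reference, this is roughly what the location block looks like with all three directives set explicitly (a sketch only; the 200s values mirror the test above and are not recommendations):

        location / {
            proxy_pass http://localhost:8888;   # backend under test
            proxy_connect_timeout 200s;  # establishing the TCP connection to the backend
            proxy_send_timeout    200s;  # max interval between two successive writes to the backend
            proxy_read_timeout    200s;  # max interval between two successive reads from the backend
        }

    Two details worth knowing: the nginx documentation notes that proxy_connect_timeout usually cannot exceed 75 seconds, and a sleep between listen and accept does not delay the TCP handshake anyway, because the kernel completes the handshake from the listen backlog before the application ever calls accept - which would explain why the connect timeout never fired in the test above.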

    Read the article

  • How to implement a single-instance app manager in Java (CVM PhoneME)?

    - by Marcus
    Hi, I'm working on an application manager for an embedded platform based on the CVM PhoneME VM. The VM is started by a C++ app which configures the CVM and then starts the VM itself. This C++ app is called from the command line, passing the main class name and the classpath of a Java application. There is a main Java app (let's call it Manager) which loads the apps using classloaders. I want this Manager to be a single-instance application so it can track all running apps. In other words: the first time I start an app (app1, for instance), the VM launches and the Manager loads app1. Further calls to load other apps (app2, app3 and so on) should make the same instance of the Manager load those apps. The Manager is working fine, except for the fact that it is not a single instance. Is it possible to do what I want? I found this: http://www.knowledgesutra.com/forums/topic/59760-how-to-implement-single-instance-application-on-java/ This is almost what I want, except for the app-loading part. However, the necessary packages are not available in the CVM implementation. Thanks very much.
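    A technique that is sometimes used for this (a sketch only - it assumes java.net.ServerSocket is available in your CVM/CDC profile, which you would need to verify, and the port number is an arbitrary choice): the first Manager instance binds a well-known loopback port; any later launch finds the port taken, hands the requested app over to the running instance through that socket, and exits.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.InetAddress;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class SingleInstanceManager {
            private static final int PORT = 52000; // arbitrary fixed port

            public static void main(String[] args) throws Exception {
                try {
                    // First instance wins the bind and becomes the real Manager.
                    ServerSocket lock = new ServerSocket(PORT, 5, InetAddress.getByName("127.0.0.1"));
                    launchApp(args[0]);
                    acceptLoop(lock);
                } catch (IOException alreadyRunning) {
                    // Port already bound: forward the request to the running Manager and exit.
                    Socket s = new Socket(InetAddress.getByName("127.0.0.1"), PORT);
                    PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                    out.println(args[0]);
                    s.close();
                }
            }

            private static void acceptLoop(ServerSocket lock) throws Exception {
                while (true) {
                    Socket client = lock.accept();
                    BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                    launchApp(in.readLine()); // load the requested app in this same VM
                    client.close();
                }
            }

            private static void launchApp(String mainClass) {
                // Placeholder: plug in your existing classloader-based loading here.
                System.out.println("Loading " + mainClass);
            }
        }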

    Read the article

  • How do I assert my exception message with the JUnit @Test annotation?

    - by Cshah
    I have written a few JUnit tests with the @Test annotation. If my test method throws a checked exception and I want to assert the message along with the exception, is there a way to do so with the JUnit @Test annotation? AFAIK, JUnit 4.7 doesn't provide this feature, but do any future versions provide it? I know that in .NET you can assert the message and the exception class. I am looking for a similar feature in the Java world. This is what I want:

        @Test (expected = RuntimeException.class, message = "Employee ID is null")
        public void shouldThrowRuntimeExceptionWhenEmployeeIDisNull() { }
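    For what it's worth, JUnit 4.7 did ship one way to do this: the ExpectedException rule, which can assert on the message (by substring) as well as the type. A sketch - the Employee constructor call is hypothetical:

        import org.junit.Rule;
        import org.junit.Test;
        import org.junit.rules.ExpectedException;

        public class EmployeeTest {
            @Rule
            public ExpectedException thrown = ExpectedException.none();

            @Test
            public void shouldThrowRuntimeExceptionWhenEmployeeIDisNull() {
                thrown.expect(RuntimeException.class);
                thrown.expectMessage("Employee ID is null"); // substring match
                new Employee(null); // hypothetical code under test, expected to throw
            }
        }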

    Read the article

  • JUnit run not picking up a src/test/resources file required by a dependency jar

    - by saddy-dj
    Hi, I'm facing an issue where test/resources is not picked up; instead, the jar's main/resources is picked up. The scenario is: MyProject's src/test/resources contains a config.xml which is needed by abc.jar, a dependency of MyProject. When running a test case for MyProject, it loads the config.xml from abc.jar instead of the one in MyProject's test/resources. I need to know the order in which Maven picks up resources - or whether what I'm trying to do is simply not possible. Thanks.
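    One quick diagnostic (a sketch; by Maven's default ordering, target/test-classes should precede both target/classes and dependency jars on the surefire classpath) is to list every config.xml the classloader can see, in search order:

        import java.net.URL;
        import java.util.Enumeration;

        public class ResourceCheck {
            public static void main(String[] args) throws Exception {
                // Prints every config.xml on the classpath; the first one listed is
                // generally the one getResource("config.xml") would return.
                Enumeration<URL> urls =
                        ResourceCheck.class.getClassLoader().getResources("config.xml");
                while (urls.hasMoreElements()) {
                    System.out.println(urls.nextElement());
                }
            }
        }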

    Read the article

  • Can't add client machine to Windows Server 2008 domain controller

    - by Patrick J Collins
    A bit of background before I dive into the gritty details: I have a single server running Windows 2003 Server where I host my ASP.net website and SQL Server + Reports. I've been creating ordinary Windows user accounts to authenticate my users, and I enabled integrated Windows authentication with impersonation. I've set up a bunch of user groups which correspond to certain roles (admin, power user, normal user, etc.) and I test membership to enable or disable certain features. Overall, I'm pretty happy with the solution; it was quick to set up and I don't have to worry about messing around storing passwords and whatnot. Well, what I'm trying to do now is set up a new environment with 3 servers (Web, SQL, Reports) and I'd like these three servers to share common user accounts. I understand that I could add these three machines to a domain, which means installing Active Directory on one of the machines. Am I barking up the wrong tree here? Would you suggest an alternative configuration? Assuming that I stick with AD, I have a couple of questions regarding DNS. To be honest, I'd rather not fiddle around with the DNS settings because my ISP already has their own DNS server which works just fine. It would appear, however, that DNS and AD are intertwined. Firstly, if I am to create a new domain called mycompany.net, do I actually need to be the registered owner of that domain name and ensure the DNS entry points to the IP address of the machine hosting AD? Secondly, for the two other machines that I am trying to add to the domain, do I need to fiddle with their DNS settings? I've tried setting the preferred DNS server IP address to that of my newly installed AD, but no luck. At this point, I can't add the two other machines to the domain. Here are some diagnostics that I have run based on a few suggestions I read on forums (output translated from French). I ran nltest, which seems to indicate that the client can discover the domain controller. When I run dcdiag, the call to DsGetDcName fails with error 1722; I'm not really sure what that means. Any suggestions? Thanks!

        C:\Users\Administrator>nltest /dsgetdc:mycompany.net
                   DC: \\REPORTS.mycompany.net
              Address: \\111.111.111.111
             Dom Guid: 3333a4ec-ca56-4f02-bb9e-76c29c6c3832
             Dom Name: mycompany.net
          Forest Name: mycompany.net
         Dc Site Name: Default-First-Site-Name
        Our Site Name: Default-First-Site-Name
                Flags: PDC GC DS LDAP KDC TIMESERV WRITABLE DNS_DC DNS_DOMAIN DNS_FOREST CLOSE_SITE FULL_SECRET
        The command completed successfully

        C:\Users\Administrator>dcdiag /s:mycompany.net /u:mycompany.net\pcollins /p:somepass
        Directory Server Diagnosis
        Performing initial setup:
           * Identified AD Forest. Done gathering initial info.
        Doing initial required tests
           Testing server: Default-First-Site-Name\REPORTS
              Starting test: Connectivity
                 ......................... REPORTS passed test Connectivity
        Doing primary tests
           Testing server: Default-First-Site-Name\REPORTS
              Starting test: Advertising
                 Fatal error: the DsGetDcName (REPORTS) call failed; error 1722
                 The Locator could not find the server.
                 ......................... REPORTS failed test Advertising
              Starting test: FrsEvent
                 Could not query the File Replication Service event log on server REPORTS.mycompany.net.
                 Error 0x6ba "The RPC server is unavailable."
                 ......................... REPORTS failed test FrsEvent
              Starting test: DFSREvent
                 Could not query the DFS Replication event log on server REPORTS.mycompany.net.
                 Error 0x6ba "The RPC server is unavailable."
                 ......................... REPORTS failed test DFSREvent
              Starting test: SysVolCheck
                 [REPORTS] A net use or LsaPolicy operation failed with error 53, The network path was not found.
                 ......................... REPORTS failed test SysVolCheck
              Starting test: KccEvent
                 Could not query the Directory Service event log on server REPORTS.mycompany.net.
                 Error 0x6ba "The RPC server is unavailable."
                 ......................... REPORTS failed test KccEvent
              Starting test: KnowsOfRoleHolders
                 ......................... REPORTS passed test KnowsOfRoleHolders
              Starting test: MachineAccount
                 Could not open pipe with [REPORTS]: failed with error 53: The network path was not found.
                 Could not get the NetBIOS domain name
                 Failed: cannot test the HOST Service Principal Name (SPN)
                 Failed: cannot test the HOST Service Principal Name (SPN)
                 ......................... REPORTS passed test MachineAccount
              Starting test: NCSecDesc
                 ......................... REPORTS passed test NCSecDesc
              Starting test: NetLogons
                 [REPORTS] A net use or LsaPolicy operation failed with error 53, The network path was not found.
                 ......................... REPORTS failed test NetLogons
              Starting test: ObjectsReplicated
                 ......................... REPORTS passed test ObjectsReplicated
              Starting test: Replications
                 ......................... REPORTS passed test Replications
              Starting test: RidManager
                 ......................... REPORTS passed test RidManager
              Starting test: Services
                 Could not open Remote ipc to [REPORTS.mycompany.net]: error 0x35 "The network path was not found."
                 ......................... REPORTS failed test Services
              Starting test: SystemLog
                 Could not query the System event log on server REPORTS.mycompany.net.
                 Error 0x6ba "The RPC server is unavailable."
                 ......................... REPORTS failed test SystemLog
              Starting test: VerifyReferences
                 ......................... REPORTS passed test VerifyReferences
           Running partition tests on ForestDnsZones
              Starting test: CheckSDRefDom
                 ......................... ForestDnsZones passed test CheckSDRefDom
              Starting test: CrossRefValidation
                 ......................... ForestDnsZones passed test CrossRefValidation
           Running partition tests on DomainDnsZones
              Starting test: CheckSDRefDom
                 ......................... DomainDnsZones passed test CheckSDRefDom
              Starting test: CrossRefValidation
                 ......................... DomainDnsZones passed test CrossRefValidation
           Running partition tests on Schema
              Starting test: CheckSDRefDom
                 ......................... Schema passed test CheckSDRefDom
              Starting test: CrossRefValidation
                 ......................... Schema passed test CrossRefValidation
           Running partition tests on Configuration
              Starting test: CheckSDRefDom
                 ......................... Configuration passed test CheckSDRefDom
              Starting test: CrossRefValidation
                 ......................... Configuration passed test CrossRefValidation
           Running partition tests on mycompany
              Starting test: CheckSDRefDom
                 ......................... mycompany passed test CheckSDRefDom
              Starting test: CrossRefValidation
                 ......................... mycompany passed test CrossRefValidation
           Running enterprise tests on mycompany.net
              Starting test: LocatorCheck
                 Warning: the DcGetDcName(GC_SERVER_REQUIRED) call failed; error 1722
                 A Global Catalog server could not be found - the Global Catalogs are not functioning.
                 Warning: the DcGetDcName(PDC_REQUIRED) call failed; error 1722
                 A Primary Domain Controller could not be found. The server holding the PDC role is not functioning.
                 Warning: the DcGetDcName(TIME_SERVER) call failed; error 1722
                 A Time Server could not be found. The server holding the PDC role is not functioning.
                 Warning: the DcGetDcName(GOOD_TIME_SERVER_PREFERRED) call failed; error 1722
                 A Good Time Server could not be found.
                 Warning: the DcGetDcName(KDC_REQUIRED) call failed; error 1722
                 A Key Distribution Center could not be found - the KDCs are not functioning.
                 ......................... mycompany.net failed test LocatorCheck
              Starting test: Intersite
                 ......................... mycompany.net passed test Intersite

    Update 1: I am under the distinct impression that the problem is caused by some security settings. I have read elsewhere that the client needs to be able to access the sysvol file share. I had to enable Client for Microsoft Networks and File and Printer Sharing, which were previously disabled. When I now run dcdiag the Advertising test passes, which I suppose is forward progress. It currently chokes on the Services step (unable to open remote IPC):

              Starting test: Services
                 Could not open Remote ipc to [REPORTS.locbus.net]: error 0x35 "The network path was not found."
                 ......................... REPORTS failed test Services

    (The original English version of that error message: Could not open Remote ipc to [server].)

    Update 2: I attach some more diagnostics.

    Netsetup.log (client):

        09/24/2009 13:27:09:773 -----------------------------------------------------------------
        09/24/2009 13:27:09:773 NetpValidateName: checking to see if 'WEB' is valid as type 1 name
        09/24/2009 13:27:12:773 NetpCheckNetBiosNameNotInUse for 'WEB' [MACHINE] returned 0x0
        09/24/2009 13:27:12:773 NetpValidateName: name 'WEB' is valid for type 1
        09/24/2009 13:27:12:805 -----------------------------------------------------------------
        09/24/2009 13:27:12:805 NetpValidateName: checking to see if 'WEB' is valid as type 5 name
        09/24/2009 13:27:12:805 NetpValidateName: name 'WEB' is valid for type 5
        09/24/2009 13:27:12:852 -----------------------------------------------------------------
        09/24/2009 13:27:12:852 NetpValidateName: checking to see if 'MYCOMPANY.NET' is valid as type 3 name
        09/24/2009 13:27:12:992 NetpCheckDomainNameIsValid [ Exists ] for 'MYCOMPANY.NET' returned 0x0
        09/24/2009 13:27:12:992 NetpValidateName: name 'MYCOMPANY.NET' is valid for type 3
        09/24/2009 13:27:21:320 -----------------------------------------------------------------
        09/24/2009 13:27:21:320 NetpDoDomainJoin
        09/24/2009 13:27:21:320 NetpMachineValidToJoin: 'WEB'
        09/24/2009 13:27:21:320 OS Version: 6.0
        09/24/2009 13:27:21:320 Build number: 6002
        09/24/2009 13:27:21:320 ServicePack: Service Pack 2
        09/24/2009 13:27:21:414 SKU: Windows Server® 2008 Standard
        09/24/2009 13:27:21:414 NetpDomainJoinLicensingCheck: ulLicenseValue=1, Status: 0x0
        09/24/2009 13:27:21:414 NetpGetLsaPrimaryDomain: status: 0x0
        09/24/2009 13:27:21:414 NetpMachineValidToJoin: status: 0x0
        09/24/2009 13:27:21:414 NetpJoinDomain
        09/24/2009 13:27:21:414 Machine: WEB
        09/24/2009 13:27:21:414 Domain: MYCOMPANY.NET
        09/24/2009 13:27:21:414 MachineAccountOU: (NULL)
        09/24/2009 13:27:21:414 Account: MYCOMPANY.NET\pcollins
        09/24/2009 13:27:21:414 Options: 0x25
        09/24/2009 13:27:21:414 NetpLoadParameters: loading registry parameters...
        09/24/2009 13:27:21:414 NetpLoadParameters: DNSNameResolutionRequired not found, defaulting to '1' 0x2
        09/24/2009 13:27:21:414 NetpLoadParameters: status: 0x2
        09/24/2009 13:27:21:414 NetpValidateName: checking to see if 'MYCOMPANY.NET' is valid as type 3 name
        09/24/2009 13:27:21:523 NetpCheckDomainNameIsValid [ Exists ] for 'MYCOMPANY.NET' returned 0x0
        09/24/2009 13:27:21:523 NetpValidateName: name 'MYCOMPANY.NET' is valid for type 3
        09/24/2009 13:27:21:523 NetpDsGetDcName: trying to find DC in domain 'MYCOMPANY.NET', flags: 0x40001010
        09/24/2009 13:27:22:039 NetpDsGetDcName: failed to find a DC having account 'WEB$': 0x525, last error is 0x79
        09/24/2009 13:27:22:039 NetpDsGetDcName: status of verifying DNS A record name resolution for 'KING.MYCOMPANY.NET': 0x0
        09/24/2009 13:27:22:039 NetpDsGetDcName: found DC '\\KING.MYCOMPANY.NET' in the specified domain
        09/24/2009 13:27:30:039 NetUseAdd to \\KING.MYCOMPANY.NET\IPC$ returned 53
        09/24/2009 13:27:30:039 NetpJoinDomain: status of connecting to dc '\\KING.MYCOMPANY.NET': 0x35
        09/24/2009 13:27:30:039 NetpDoDomainJoin: status: 0x35
        09/24/2009 13:27:30:148 -----------------------------------------------------------------

    ipconfig /all (on client):

        Windows IP Configuration
           Host Name . . . . . . . . . . . . : WEB
           Primary Dns Suffix  . . . . . . . :
           Node Type . . . . . . . . . . . . : Hybrid
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Intel 21140-Based PCI Fast Ethernet Adapter (Emulated)
           Physical Address. . . . . . . . . : **-15-5D-A1-17-**
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : **.***.163.122 (Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : **.***.163.2
           DNS Servers . . . . . . . . . . . : **.***.163.123
           NetBIOS over Tcpip. . . . . . . . : Enabled

    ipconfig /all (on server):

        Windows IP Configuration
           Host Name . . . . . . . . . . . . : KING
           Primary Dns Suffix  . . . . . . . : mycompany.net
           Node Type . . . . . . . . . . . . : Hybrid
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No
           DNS Suffix Search List. . . . . . : locbus.net

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Intel 21140-Based PCI Fast Ethernet Adapter (Emulated)
           Physical Address. . . . . . . . . : **-15-5D-A1-1E-**
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : **.***.163.123 (Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : **.***.163.2
           DNS Servers . . . . . . . . . . . : 127.0.0.1
           NetBIOS over Tcpip. . . . . . . . : Enabled

    nslookup (on client):

        Server:  *******.***.com
        Address:  **.***.163.123

        Name:    mycompany.net
        Addresses:  ****:****:a37b::****:a37b
                  **.****.163.123

    Read the article

  • How has test-first development changed the way you write software?

    - by Toran Billups
    I've started to find that I can't write software without writing a test first. I ask this subjective question because I want to hear what others in the community think about the reasons I can't go back to writing production code without a test first:
    - If you can't write a test for something, you don't understand it
    - Without a regression test you can't clean the code
    - You are going to test it anyway, so spend the time to do it right
    - Evolutionary design is possible without fear
    - You actually write less code yourself
    - Fast feedback cycles save time and money
    - Job security (fewer bugs make your boss happy)
    - It actually makes my work more enjoyable

    Read the article

  • In TDD, should tests be written by the person who implemented the feature under test?

    - by martin
    We are running a project that we want to develop using test-driven development. While setting up the project, a question came up: who should write the unit test for a feature? Should the unit test be written by the programmer implementing the feature? Or should it be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass? If I understand the concept of TDD correctly, the feature-implementing programmer has to write the test himself, because TDD is a procedure of mini-iterations, so it would be too cumbersome to have the tests written by another programmer. What would you say - should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method can do?

    Read the article

  • Any way to release focus on a KVM guest in virt-manager without having to click Ctrl_L + Alt_L?

    - by slm
    Is there a way to move my mouse in and out of a KVM guest in virt-manager without having to click to gain focus of the window and press Ctrl_L + Alt_L to release it? EDIT #1: The guest VM is Win2008R2. I've tried this on CentOS 5, 6, and Fedora 14 boxes running various versions of virt-manager and KVM, and they all exhibit this behavior. EDIT #2: I just tried the solution recommended by @tpow, and that appears to have been the issue. Manually adding a tablet input device resolves the problem, and I can now move the mouse in and out of the KVM guest without having to gain focus first.
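    For reference, the tablet device that fixes this corresponds to a small piece of libvirt domain XML (a sketch; it can be added through virt-manager's Add Hardware dialog or with virsh edit):

        <!-- A USB tablet reports absolute pointer coordinates, so the
             guest cursor tracks the host cursor without pointer grabbing. -->
        <input type='tablet' bus='usb'/>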

    Read the article

  • Unable to access Ubuntu through Windows Boot Manager

    - by Arvind
    I have an ASUS K55VM. It came with DOS; Windows 7 was then installed, and I planned to add another OS, Ubuntu. My computer had 4 partitions of 240GB, 230GB, 230GB and 230GB; I changed this to 5 partitions: 243GB, 230GB, 230GB, 150GB and 80GB. I installed Ubuntu 12.04 to the 80GB partition from a USB flash drive. It installed fine; however, the problem occurs at the boot option in Windows Boot Manager. When I choose Ubuntu, this is what is displayed:

        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem:
        1. Insert your Windows installation disc and restart your computer.
        2. Choose your language settings, and then click "Next."
        3. Click "Repair your computer."
        If you do not have this disc, contact your system administrator or computer manufacturer for assistance.
        File: \ubuntu\winboot\wubildr.mbr
        Status: 0xc000000e
        Info: The selected entry could not be loaded because the application is missing or corrupt.

    I tried to change the boot order and made Ubuntu my first option. It worked; however, Windows 7 does not show up in GRUB. Fearing this, I did the following: restarted my PC and changed boot option 1 back to Windows; formatted the drive to which Ubuntu was installed (the 80GB one) through Windows; restarted again. The problem persists: Windows Boot Manager still shows Ubuntu, and when I choose it I get a couple of lines like "grub rescue..." - but my Windows 7 is working perfectly fine. Now I want a solution to remove Ubuntu from the Windows boot option and reinstall it completely.
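    For the "remove Ubuntu from the Windows boot menu" part, one commonly used route is bcdedit from an elevated Command Prompt (a sketch; {GUID} is a placeholder for the identifier that the first command reports for the Ubuntu entry):

        rem List all boot entries; note the identifier of the Ubuntu entry
        bcdedit /enum

        rem Delete that entry from the Windows Boot Manager menu
        bcdedit /delete {GUID}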

    Read the article

  • Accessing host LVM partition from Windows XP through Virt.manager 0.8.5 / Qemu / KVM

    - by Nico de Smidt
    Hi, the requested use case is having a Windows XP SP3 guest running in 64-bit Ubuntu. (Linux pcs 2.6.35-22-server #35-Ubuntu SMP Sat Oct 16 22:02:33 UTC 2010 x86_64 GNU/Linux) I want this guest to access an LVM LV on the Ubuntu disk. I've set up the following LVM config:

        --- Logical volume ---
        LV Name                /dev/storage/sdc1
        VG Name                storage
        LV UUID                Zg5IMC-OlqB-prL5-fgg4-3A9A-OgKP-oZ0QkJ
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                1.01 GiB
        Current LE             259
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           251:3

    1) I've set up a storage pool for /dev/storage. 2) I've run mkfs.vfat /dev/storage/sdc1. 3) I've made a virtual IDE disk in the virt-manager setup for the guest: Target device: IDE Disk 2, Source path: /dev/storage/sdc1. Now, when running XP (guest), Windows sees a new disk in Disk Manager and wants to create a partition on it, since it believes the drive is empty. After formatting from within Windows, I can put data on the new disk volume. Back in Ubuntu, however, I cannot access this any more, since Windows created a partition table inside an LVM logical volume. Running fdisk -l shows the following:

        root@pcs:/media# fdisk -l /dev/storage/sdc1
        Disk /dev/storage/sdc1: 1086 MB, 1086324736 bytes
        32 heads, 63 sectors/track, 1052 cylinders
        Units = cylinders of 2016 * 512 = 1032192 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x8d72e4f4
                     Device Boot   Start     End    Blocks   Id  System
        /dev/storage/sdc1p1             1    1050  1058368+   c  W95 FAT32 (LBA)

    which seems fine to me, but when trying to mount /dev/storage/sdc1p1 I get the following error:

        mount /dev/storage/sdc1p1 /media/xp
        mount: special device /dev/storage/sdc1p1 does not exist

    which makes sense, since sdc1p1 does not exist in lvdisplay. Main question: I want to mount the vfat partition in both Ubuntu and XP. What am I missing here? Regards, and thanks for your consideration. Nico
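    One approach that is often suggested for exactly this situation (a sketch, not verified against this setup): kpartx, from the multipath-tools package, reads the partition table inside the LV and creates device-mapper nodes for each partition. The mapping name can vary, so check the command's output, and avoid mounting while the XP guest has the disk attached:

        # create /dev/mapper entries for the partitions inside the LV
        sudo kpartx -av /dev/storage/sdc1

        # mount the first mapped partition (name as reported by kpartx)
        sudo mount /dev/mapper/storage-sdc1p1 /media/xp

        # tear the mappings down again when finished
        sudo umount /media/xp
        sudo kpartx -d /dev/storage/sdc1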

    Read the article

  • Configuration Manager Setting Causing Error PRJ0019

    - by Jeff Paterno
    Recently I ran into an issue with a project failing to build on an automated build server using CruiseControl. When I looked into the build log I saw that the Post-Build project was failing with the error message: "error PRJ0019: A tool returned an error code from "Performing Post-Build Event..."" This was most frustrating, especially since the solution was building without issue in my local development environment. The Post-Build project was a C++ project that basically called several batch files to unregister/register assemblies, copy resources and supporting files, and place other dependencies in the GAC. I decided to run each of the batch files manually to see if that would provide more information as to why this project was failing. This led me to determine that the batch file placing assemblies in the GAC was the culprit, and that it was failing to find a particular assembly. The missing assembly was the output of another project. The project that was not producing the expected output was another C++ project that called a batch file. This batch process was actually embedding resource files into an assembly and then copying the assembly to the expected location. The real confusion started when I looked back into my Subversion log and noted that nothing had changed in this project in more than 2 months! It was almost as if the project had stopped building altogether. But what would cause that?! The Configuration Manager, obviously! Checking the solution's Configuration Manager settings, I found that the project that was not producing any output was in fact not selected to be part of the build process when the "Any CPU" platform was selected. This was the problem! I had recently updated the CruiseControl configurations to force the solution to be built targeting the platform "Any CPU". As a result, the project that was at the root of the problem was not configured to be built, and the post-build process was failing when it couldn't find what it needed.

    Read the article

  • Versioning data with the Oracle Database Workspace Manager

    - by Heinz-Wilhelm Fabry (DBA Community)
    How can extremely long transactions be carried out - transactions that lock records exclusively for hours or days - without these long transactions blocking 'normal' transactions on the same records? Such long transactions are not uncommon in the Spatial world, for example. How can different historical states of production data be kept online for an unlimited time? The UNDO data, which holds the entire change volume of a database, generally guarantees only very time-limited access to 'older' data. And the database archives technology, also known as Total Recall, on the one hand allows no changes to the older data, and on the other hand is available exclusively in the Enterprise Edition of the database. How can you modify the most current production data for WHAT-IF analyses while other users continue to work undisturbed on the original data? A SET TRANSACTION READ ONLY allows no changes at all and is likewise limited by the 'reach' of the UNDO information. One could, of course, build a database copy from a backup for such analyses, or implement a standby solution, but that is rather laborious. There is a surprisingly simple answer to these seemingly complicated questions. It is called the Oracle Database Workspace Manager, or WM for short. The WM has been a feature of the database since Oracle9i and is available in both the Standard and the Enterprise Edition. Unlike in the first versions, it has long been part of every installation. It is all the more astonishing that so few customers know about it. This tip is intended to help change that.
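    To give a flavour of the API (a minimal sketch using the documented DBMS_WM package; the table and workspace names are invented):

        -- Version-enable a table, do WHAT-IF changes in a private workspace,
        -- then merge them back into the default LIVE workspace when desired.
        EXECUTE DBMS_WM.EnableVersioning('EMPLOYEES', 'VIEW_WO_OVERWRITE');
        EXECUTE DBMS_WM.CreateWorkspace('WHAT_IF');
        EXECUTE DBMS_WM.GotoWorkspace('WHAT_IF');

        UPDATE employees SET salary = salary * 1.10;  -- visible only inside WHAT_IF
        COMMIT;

        EXECUTE DBMS_WM.GotoWorkspace('LIVE');        -- original data untouched here
        EXECUTE DBMS_WM.MergeWorkspace('WHAT_IF');    -- publish the changes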

    Read the article

  • Simple Introduction to using the Enterprise Manager SOA/BPM Facade API by Jaideep Ganguli

    - by JuergenKress
    There may be times when you need to expose just a small section of what is displayed in the Enterprise Manager console for SOA/BPM (EM console). A simple example is where stakeholders on the systems integration or customer teams want to monitor a dashboard of statistics showing how many instances of a composite have been created and how many have faulted, as displayed in EM. Some of these stakeholders may not know the EM console and just want a quick view of the statistics, without having to navigate EM. This post describes how to use the Oracle Fusion Middleware Infrastructure Management Java API for Oracle SOA Suite (also called the Facade API) to build a custom ADF page to display this information. If you want a quick introduction to using the Facade API, this post is for you. Read the complete article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • JUnit confusion: use 'extends TestCase' or '@Test'?

    - by Rabarberski
    I've found the proper use (or at least the documentation) of JUnit very confusing. This question serves both as a future reference and as a real question. If I've understood correctly, there are two main approaches to creating and running a JUnit test:

    Approach A: create a class that extends TestCase, and start test methods with the word test. When running the class as a JUnit test (in Eclipse), all methods starting with the word test are automatically run.

        import junit.framework.TestCase;

        public class DummyTestA extends TestCase {
            public void testSum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Approach B: create a 'normal' class and prepend a @Test annotation to the method. Note that you do NOT have to start the method with the word test.

        import org.junit.*;
        import static org.junit.Assert.*;

        public class DummyTestB {
            @Test
            public void Sum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Mixing the two seems not to be a good idea; see e.g. this stackoverflow question. Now, my question(s): What is the preferred approach, or when would you use one instead of the other? Approach B allows testing for exceptions by extending the @Test annotation, as in @Test(expected = ArithmeticException.class). But how do you test for exceptions when using approach A? When using approach A, you can group a number of test classes in a test suite:

        TestSuite suite = new TestSuite("All tests");
        suite.addTestSuite(DummyTestA.class);
        suite.addTestSuite(DummyTestAbis.class);

    But this can't be used with approach B (since each test class should subclass TestCase). What is the proper way to group tests for approach B?
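    As a side note on the exception question for approach A: the classic JUnit 3-style idiom is the try/fail/catch pattern, which also lets you assert on the message (a sketch; the division is just an illustration). For grouping approach-B tests, JUnit 4 offers @RunWith(Suite.class) together with @Suite.SuiteClasses.

        import junit.framework.TestCase;

        public class DummyTestExceptionA extends TestCase {
            public void testDivisionByZeroThrows() {
                int zero = 0;
                try {
                    int ignored = 1 / zero; // code expected to throw
                    fail("Expected an ArithmeticException");
                } catch (ArithmeticException expected) {
                    assertEquals("/ by zero", expected.getMessage());
                }
            }
        }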

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Running a Test

    - by StuartBrierley
    The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. It should be used after BizTalk Server has been installed and before any solutions are deployed to the environment. This will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios, performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment. For details on installing the Benchmark Wizard, see my previous post. The BizTalk Benchmark Wizard provides two simple test scenarios, one for messaging and one for orchestrations, which can be used to test your BizTalk implementation.

    Messaging:
    - Loadgen generates a new XML message and sends it over NetTCP.
    - A WCF-NetTCP receive location receives the XML document from Loadgen.
    - The PassThruReceive pipeline performs no processing, and the message is published by the EPM to the MessageBox.
    - The WCF one-way send port, which is the only subscriber to the message, retrieves the message from the MessageBox.
    - The PassThruTransmit pipeline provides no additional processing.
    - The message is delivered to the back-end WCF service by the WCF NetTCP adapter.

    Orchestrations:
    - Loadgen generates a new XML message and sends it over NetTCP.
    - A WCF-NetTCP receive location receives the XML document from Loadgen.
    - The XMLReceive pipeline performs no processing, and the message is published by the EPM to the MessageBox.
    - The message is delivered to a simple orchestration, which consists of a receive location and a send port.
    - The WCF one-way send port, which is the only subscriber to the orchestration message, retrieves the message from the MessageBox.
    - The PassThruTransmit pipeline provides no additional processing.
    - The message is delivered to the back-end WCF service by the WCF NetTCP adapter.

    Below is a quick outline of how to run the BizTalk Benchmark Wizard on a single server, although it should be noted that this is not ideal, as the server is then both generating and processing the load. In order to separate out this load you should run the "Indigo" service on a separate server. To start the BizTalk Benchmark Wizard, click Start > All Programs > BizTalk Benchmark Wizard > BizTalk Benchmark Wizard. On the first screen click Next; a pop-up window will appear. Check the server and database names and tick the "check prerequisites" check-box before pressing OK. The wizard will then check that the appropriate test scenarios are installed. You should then choose the test scenario you wish to run (messaging or orchestration) and the architecture that most closely matches your environment. You will then be asked to confirm the host server for each of the host instances. Next you will be presented with the prepare screen. You will need to start the Indigo service before pressing the Test Indigo Service button. If you are running the Indigo service on a separate server, you can enter the server name here. To start the Indigo service, click Start > All Programs > BizTalk Benchmark Wizard > Start Indigo Service. While the test is running you will be presented with two speed-dial type displays - one for received messages per second and one for processed messages per second. The green dial shows the current rate and the red dial shows the overall average rate. Optionally, you can view the CPU usage of the various servers involved in processing the tests. For my development environment I expected low results, and that is what I got - although looking at the online high-score table and comparing against the quad-core system listed, the results are perhaps not really that bad. At some point I may look at what improvements I can make to this score, but if you are interested in that now, take a look at Benchmark your BizTalk Server (Part 3).

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is Available Now!

    - by Anand Akela
    Oracle today announced the availability of Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2). It is now available for download on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011, and the first time an Enterprise Manager release is available on all platforms simultaneously. This is primarily a stability release, incorporating fixes for many of the issues reported by early adopters along with their feedback. In addition, this release contains many new features and enhancements in areas across the board.

    New Capabilities and Features:
    - Enhanced management capabilities for enterprise private clouds: introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided setup of the PaaS cloud, self-service provisioning, automatic scale-out, and metering and chargeback.
    - Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: combining in-context management of multiple domains, patching, and configuration file synchronization.
    - Integrated hardware/software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components.
    - The latest management capabilities for business-critical applications, including:
      - Business Application Management: a new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences and the cloud infrastructure the monitored application is running on.
      - Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud, and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context.
    - Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, and bulk operations against many events.
    - New and revised plug-ins: several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality:

      Plug-In Name                                             Version
      Enterprise Manager for Oracle Database                   12.1.0.2 (revision)
      Enterprise Manager for Oracle Fusion Middleware          12.1.0.3 (new)
      Enterprise Manager for Chargeback and Capacity Planning  12.1.0.3 (new)
      Enterprise Manager for Oracle Fusion Applications        12.1.0.3 (new)
      Enterprise Manager for Oracle Virtualization             12.1.0.3 (new)
      Enterprise Manager for Oracle Exadata                    12.1.0.3 (new)
      Enterprise Manager for Oracle Cloud                      12.1.0.4 (new)

    Installation and Upgrade:
    - All major platforms have been released simultaneously: Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64 (64-bit).
    - Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2.
    - Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory).
    - Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (as Bundle Patch 1 was) but is achieved through a 1-system upgrade.

    Documentation:
    - The Oracle Enterprise Manager Cloud Control Introduction document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2.
    - All updated Oracle Enterprise Manager documentation, including the Upgrade Guide, can be found on OTN.

    Please feel free to ask questions related to the new Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) at the Oracle Enterprise Manager Forum. You could also share your feedback on Twitter using the hash tag #em12c or on Facebook.

    Read the article

  • Dual pane file manager for Mac OS

    - by Alex Kaushovik
    Is there a good customizable dual-pane file manager for Mac like Total Commander / Far Manager in Windows, or like Krusader / Midnight Commander in Linux? I used to work on Windows for quite a while and mostly used Far Manager and sometimes Total Commander, then I switched to Ubuntu Linux and used Krusader, and now I've switched to Mac OS (Snow Leopard) and I'm having a hard time trying to find a good file manager... Many of the existing applications are trying to replace the Finder with "multimedia capabilities nobody cares about in a file manager - IMHO" (Path Finder, ForkLift); some of them are almost good dual-pane file managers (I can't remember examples), but none of them worked for me, mostly because of one reason: I couldn't integrate my file/folder comparison utility (Araxis Merge for Mac) with them... The way it worked for me in Windows and Linux is that I would set the cursor on one file in the left pane, set the right-pane cursor on another file in the right pane, and then press a hotkey that launched Araxis Merge with the comparison results for those two files/folders. It was very easy to set up in Far Manager (Windows) and Krusader (Linux; actually, in Linux I used "Meld" instead of Araxis Merge...) The tool I'm looking for doesn't necessarily have to be free... Thank you!

    Read the article

  • Optional parens in Ruby for methods starting with an uppercase letter?

    - by RasmusKL
    I just started out using IronRuby (but the behaviour seems consistent when I tested it in plain Ruby) for a DSL in my .NET application - and as part of this I'm defining methods to be called from the DSL via define_method. However, I've run into an issue regarding optional parens when calling methods starting with an uppercase letter. Given the following program:

        class DemoClass
          define_method :test do
            puts "output from test"
          end

          define_method :Test do
            puts "output from Test"
          end

          def run
            puts "Calling 'test'"
            test()
            puts "Calling 'test'"
            test
            puts "Calling 'Test()'"
            Test()
            puts "Calling 'Test'"
            Test
          end
        end

        demo = DemoClass.new
        demo.run

    Running this code in a console (using plain Ruby) yields the following output:

        ruby .\test.rb
        Calling 'test'
        output from test
        Calling 'test'
        output from test
        Calling 'Test()'
        output from Test
        Calling 'Test'
        ./test.rb:13:in `run': uninitialized constant DemoClass::Test (NameError)
                from ./test.rb:19:in `<main>'

    I realize that the Ruby convention is that constants start with an uppercase letter and that the general naming convention for methods in Ruby is lowercase. But the parens are really killing my DSL syntax at the moment. Is there any way around this issue?
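    One workaround, for what it's worth (plain Ruby; whether it is acceptable in the DSL is another question): give the call an explicit receiver or use send, since a bare capitalized name with no receiver, no parens and no arguments is parsed as a constant reference:

        demo = DemoClass.new
        demo.send(:Test)   # bypasses constant lookup entirely
        demo.Test          # an explicit receiver forces method resolution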

    Read the article

  • IXRepository and test problems

    - by Ridermansb
    Recently I had a doubt about how and where to test repository methods. Take the following situation: I have an interface IRepository like this:

        public interface IRepository<T> where T : class, IEntity
        {
            IQueryable<T> Query(Expression<Func<T, bool>> expression);
            // ... Omitted
        }

    And a generic implementation of IRepository:

        public class Repository<T> : IRepository<T> where T : class, IEntity
        {
            public IQueryable<T> Query(Expression<Func<T, bool>> expression)
            {
                return All().Where(expression).AsQueryable();
            }
        }

    This is a base implementation that can be used by any repository. It contains the basic implementation of my ORM. Some repositories have specific filters; in that case we have IEmployeeRepository with a specific filter:

        public interface IEmployeeRepository : IRepository<Employee>
        {
            IQueryable<Employee> GetInactiveEmployees();
        }

    And the implementation of IEmployeeRepository:

        // TODO: I have a dependency on the ORM at this point in Repository<Employee>.
        // How to solve this? How to test the GetInactiveEmployees method?
        public class EmployeeRepository : Repository<Employee>, IEmployeeRepository
        {
            public IQueryable<Employee> GetInactiveEmployees()
            {
                return Query(p => p.Status != StatusEmployeeEnum.Active || p.StartDate < DateTime.Now);
            }
        }

    Questions: Is it right to inherit Repository<Employee>? The goal is to reuse code, since the whole implementation of IRepository has already been written. If EmployeeRepository inherited only IEmployeeRepository, I would have to literally copy and paste the code of Repository<T>. In our example, in EmployeeRepository : Repository<Employee>, our Repository lies in our ORM layer, so we have a dependency on our ORM here, making it impossible to perform a unit test. How do I create a unit test ensuring that the filter GetInactiveEmployees returns all Employees where Status != Active or StartDate < DateTime.Now? I cannot create a Fake/Mock of IEmployeeRepository, because then what would I be testing? I need to test the actual implementation of GetInactiveEmployees. The complete code can be found on Github.
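    One way this is often handled (a sketch only - it assumes the omitted All() member is declared something like protected virtual IQueryable<T> All() in Repository<T>, so a test double can feed in-memory data; FakeEmployeeRepository and the NUnit attributes are illustrative):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        // In-memory fake: overrides the data source so no ORM is involved.
        public class FakeEmployeeRepository : EmployeeRepository
        {
            private readonly List<Employee> _data;
            public FakeEmployeeRepository(List<Employee> data) { _data = data; }

            protected override IQueryable<Employee> All()
            {
                return _data.AsQueryable();
            }
        }

        [TestFixture]
        public class EmployeeRepositoryTests
        {
            [Test]
            public void GetInactiveEmployees_AppliesTheFilter()
            {
                var data = new List<Employee>
                {
                    // excluded: Active and starts in the future
                    new Employee { Status = StatusEmployeeEnum.Active, StartDate = DateTime.Now.AddDays(1) },
                    // included: Status != Active
                    new Employee { Status = StatusEmployeeEnum.Inactive, StartDate = DateTime.Now.AddDays(1) },
                };

                var repo = new FakeEmployeeRepository(data);

                Assert.AreEqual(1, repo.GetInactiveEmployees().Count());
            }
        }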

    Read the article

  • Learning AngularJS by Example – The Customer Manager Application

    - by dwahlin
    I’m always tinkering around with different ideas and toward the beginning of 2013 decided to build a sample application using AngularJS that I call Customer Manager. It’s not exactly the most creative name or concept, but I wanted to build something that highlighted a lot of the different features offered by AngularJS and how they could be used together to build a full-featured app. One of the goals of the application was to ensure that it was approachable by people new to Angular since I’ve never found overly complex applications great for learning new concepts. The application initially started out small and was used in my AngularJS in 60-ish Minutes video on YouTube but has gradually had more and more features added to it and will continue to be enhanced over time. It’ll be used in a new “end-to-end” training course my company is working on for AngularjS as well as in some video courses that will be coming out. Here’s a quick look at what the application home page looks like: In this post I’m going to provide an overview about how the application is organized, back-end options that are available, and some of the features it demonstrates. I’ve already written about some of the features so if you’re interested check out the following posts: Building an AngularJS Modal Service Building a Custom AngularJS Unique Value Directive Using an AngularJS Factory to Interact with a RESTful Service Application Structure The structure of the application is shown to the right. The  homepage is index.html and is located at the root of the application folder. It defines where application views will be loaded using the ng-view directive and includes script references to AngularJS, AngularJS routing and animation scripts, plus a few others located in the Scripts folder and to custom application scripts located in the app folder. The app folder contains all of the key scripts used in the application. There are several techniques that can be used for organizing script files but after experimenting with several of them I decided that I prefer things in folders such as controllers, views, services, etc. Doing that helps me find things a lot faster and allows me to categorize files (such as controllers) by functionality. My recommendation is to go with whatever works best for you. Anyone who says, “You’re doing it wrong!” should be ignored. Contrary to what some people think, there is no “one right way” to organize scripts and other files. As long as the scripts make it down to the client properly (you’ll likely minify and concatenate them anyway to reduce bandwidth and minimize HTTP calls), the way you organize them is completely up to you. Here’s what I ended up doing for this application: Animation code for some custom animations is located in the animations folder. In addition to AngularJS animations (which are defined using CSS in Content/animations.css), it also animates the initial customer data load using a 3rd party script called GreenSock. Controllers are located in the controllers folder. Some of the controllers are placed in subfolders based upon the their functionality while others are placed at the root of the controllers folder since they’re more generic:   The directives folder contains the custom directives created for the application. The filters folder contains the custom filters created for the application that filter city/state and product information. The partials folder contains partial views. This includes things like modal dialogs used in the application. 
The services folder contains AngularJS factories and services used for various purposes in the application. Most of the scripts in this folder provide data functionality. The views folder contains the different views used in the application. Like the controllers folder, the views are organized into subfolders based on their functionality:   Back-End Services The Customer Manager application (grab it from Github) provides two different options on the back-end including ASP.NET Web API and Node.js. The ASP.NET Web API back-end uses Entity Framework for data access and stores data in SQL Server (LocalDb). The other option on the back-end is Node.js, Express, and MongoDB.   Using the ASP.NET Web API Back-End To run the application using ASP.NET Web API/SQL Server back-end open the .sln file at the root of the project in Visual Studio 2012 or higher (the free Express 2013 for Web version is fine). Press F5 and a browser will automatically launch and display the application. Using the Node.js Back-End To run the application using the Node.js/MongoDB back-end follow these steps: In the CustomerManager directory execute 'npm install' to install Express, MongoDB and Mongoose (package.json). Load sample data into MongoDB by performing the following steps: Execute 'mongod' to start the MongoDB daemon Navigate to the CustomerManager directory (the one that has initMongoCustData.js in it) then execute 'mongo' to start the MongoDB shell Enter the following in the mongo shell to load the seed files that handle seeding the database with initial data: use custmgr load("initMongoCustData.js") load("initMongoSettingsData.js") load("initMongoStateData.js") Start the Node/Express server by navigating to the CustomerManager/server directory and executing 'node app.js' View the application at http://localhost:3000 in your browser. Key Features The Customer Manager application certainly doesn’t cover every feature provided by AngularJS (as mentioned the intent was to keep it as simple as possible) but does provide insight into several key areas: Using factories and services as re-useable data services (see the app/services folder) Creating custom directives (see the app/directives folder) Custom paging (see app/views/customers/customers.html and app/controllers/customers/customersController.js) Custom filters (see app/filters) Showing custom modal dialogs with a re-useable service (see app/services/modalService.js) Making Ajax calls using a factory (see app/services/customersService.js) Using Breeze to retrieve and work with data (see app/services/customersBreezeService.js). Switch the application to use the Breeze factory by opening app/services.config.js and changing the useBreeze property to true. Intercepting HTTP requests to display a custom overlay during Ajax calls (see app/directives/wcOverlay.js) Custom animations using the GreenSock library (see app/animations/listAnimations.js) Creating custom AngularJS animations using CSS (see Content/animations.css) JavaScript patterns for defining controllers, services/factories, directives, filters, and more (see any JavaScript file in the app folder) Card View and List View display of data (see app/views/customers/customers.html and app/controllers/customers/customersController.js) Using AngularJS validation functionality (see app/views/customerEdit.html, app/controllers/customerEditController.js, and app/directives/wcUnique.js) More… Conclusion I’ll be enhancing the application even more over time and welcome contributions as well. 
Tony Quinn contributed the initial Node.js/MongoDB code which is very cool to have as a back-end option. Access the standard application here and a version that has custom routing in it here. Additional information about the custom routing can be found in this post.

    Read the article
