Search Results

Search found 2993 results on 120 pages for 'distributed transactions'.


  • Where to start when setting up a gambling website [closed]

    - by molleman
    I don't know if this is the correct place for this question, but I couldn't find anywhere else to ask it. I am looking into building a gambling website. I have good HTML, CSS, and JavaScript skills, but I do not know enough about the server side to actually build a backend for a gambling website. Firstly, where would you start, and with what language? I work with WordPress, so I understand PHP a little, but not a lot. Would PHP be a good place to start? If so, which framework would be able to scale if the number of users on the website increased? Ideally I would like a backend where you could view: 1. Users 2. Transactions 3. Input games which you can bet on, and so on. Also, users should be able to see all their associated information. If anyone could help, that would be great.


  • Realtime website slowness (PHP/MYSQL)

    - by 3s2ng
    We have a website with realtime transactions. Over the past few days we have experienced slowness during certain periods; usually the slowness lasts about 5 minutes. During the slowness we also noticed that sometimes we cannot connect to SSH and FTP, or connecting to those services is very slow. We are currently trying to identify the issue. We have already set up a database monitoring tool and are about to sign up with pingdom.com. My questions are: if the website is slow due to the database (table or row locks), will it affect other services like SSH and FTP? Does the ping correlate with the page load and the connection between my PC and the server? Thanks, Mark
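    Database lock contention by itself will not block SSH or FTP, but the resources the database burns while it is stuck (CPU, disk I/O, swap) are shared with every other service on the box, which would explain both symptoms at once. A minimal, hedged way to check the lock angle during a slow period (standard MySQL commands, nothing specific to this site):

    ```sql
    -- Run in the mysql client while the site is slow:
    SHOW FULL PROCESSLIST;        -- many threads stuck in "Locked" or "Waiting for ..."
                                  -- states point to table/row lock contention
    SHOW ENGINE INNODB STATUS\G   -- check the TRANSACTIONS and SEMAPHORES sections
    ```

    If the process list looks clean, watching `vmstat 1` and `iostat -x 1` on the server during an incident would show whether saturated disk or swap activity is what slows the SSH/FTP logins too. As for ping: it only measures the network round trip to the host, not page load time, so slow pages with normal pings point at the server rather than the network.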


  • Block a Server from reaching a machine

    - by user
    I have a Windows 2003 server that I want to block from accessing a specific IP address. I want to control this from the server, because I control that machine. The traffic is HTTP traffic (a web service call). It uses a non-standard port, so an IP address + port combination would also work. Background: I have a development environment that, for some reason, is ignoring hosts file entries under some circumstances. These hosts file entries point the environment at services in another dev environment. When the hosts file entries are ignored, dev talks to production. This is not my question, rather the motivation for this inquiry. What I want is a failsafe to ensure dev will error out instead of happily engaging in transactions with production. I control the dev server; I do not control the firewalls or the target production machine.
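    One server-side option on Windows Server 2003, where the built-in firewall tooling is limited, is a static IPsec blocking policy. A rough sketch, where the production address 203.0.113.10 and port 8085 are placeholders for the real values:

    ```bat
    rem Hypothetical address/port -- substitute the production host and service port.
    netsh ipsec static add policy name=BlockProd
    netsh ipsec static add filterlist name=ProdFilter
    netsh ipsec static add filter filterlist=ProdFilter srcaddr=me dstaddr=203.0.113.10 protocol=TCP dstport=8085 mirrored=yes
    netsh ipsec static add filteraction name=BlockAction action=block
    netsh ipsec static add rule name=BlockProdRule policy=BlockProd filterlist=ProdFilter filteraction=BlockAction
    netsh ipsec static set policy name=BlockProd assign=y
    ```

    A cruder alternative is a persistent black-hole route (`route -p add 203.0.113.10 mask 255.255.255.255 <unreachable gateway>`), but the IPsec policy is port-specific and easier to unassign later.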


  • Oracle 10g Failover Database - How to fail back?

    - by rrkwells
    I want to know how the failover database concept works after recovery. We have configured our application to connect to a backup database if the production database fails. If this happens, all transactions will happen on that backup database. Once the production DB server is running again, how do we make sure the changes made in the backup database are reflected on the production database? We want to make sure that no changes made while failed over are lost. We are using Oracle 10g.
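    For reference, the standard Oracle mechanism for this scenario is Data Guard: after a failover, the old primary is reinstated as a standby, resynchronized from the new primary, and then switched back. A hedged sketch of the broker commands, assuming a 10.2 Data Guard broker configuration with Flashback Database enabled and 'prod' as a placeholder database name:

    ```
    DGMGRL> REINSTATE DATABASE 'prod';   -- rebuild the old primary as a standby (requires Flashback Database)
    DGMGRL> SWITCHOVER TO 'prod';        -- once it has caught up, make it the primary again
    ```

    Without Flashback Database, the old primary generally has to be re-created from a backup of the new primary before it can receive the changes made while failed over.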


  • RSA Encrypt / Decrypt Problem in .NET

    - by Brendon Randall
    I'm having a problem with C# encrypting and decrypting using RSA. I have developed a web service that will be sent sensitive financial information and transactions. What I would like to be able to do is, on the client side, encrypt certain fields using the client's RSA private key; once it has reached my service, it will be decrypted with the client's public key. At the moment I keep getting a "The data to be decrypted exceeds the maximum for this modulus of 128 bytes." exception. I have not dealt much with C# RSA cryptography, so any help would be greatly appreciated. This is the method I am using to generate the keys: private void buttonGenerate_Click(object sender, EventArgs e) { string secretKey = RandomString(12, true); CspParameters param = new CspParameters(); param.Flags = CspProviderFlags.UseMachineKeyStore; SecureString secureString = new SecureString(); byte[] stringBytes = Encoding.ASCII.GetBytes(secretKey); for (int i = 0; i < stringBytes.Length; i++) { secureString.AppendChar((char)stringBytes[i]); } secureString.MakeReadOnly(); param.KeyPassword = secureString; RSACryptoServiceProvider rsaProvider = new RSACryptoServiceProvider(param); rsaProvider = (RSACryptoServiceProvider)RSACryptoServiceProvider.Create(); rsaProvider.KeySize = 1024; string publicKey = rsaProvider.ToXmlString(false); string privateKey = rsaProvider.ToXmlString(true); Repository.RSA_XML_PRIVATE_KEY = privateKey; Repository.RSA_XML_PUBLIC_KEY = publicKey; textBoxRsaPrivate.Text = Repository.RSA_XML_PRIVATE_KEY; textBoxRsaPublic.Text = Repository.RSA_XML_PUBLIC_KEY; MessageBox.Show("Please note, when generating keys you must sign on to the gateway\n" + " to exchange keys otherwise transactions will fail", "Key Exchange", MessageBoxButtons.OK, MessageBoxIcon.Information); } Once I have generated the keys, I send the public key to the web service, which stores it as an XML file. Now I decided to test this, so here is my method to encrypt a string: public static string RsaEncrypt(string dataToEncrypt) { string rsaPrivate = RSA_XML_PRIVATE_KEY; CspParameters csp = new CspParameters(); csp.Flags = CspProviderFlags.UseMachineKeyStore; RSACryptoServiceProvider provider = new RSACryptoServiceProvider(csp); provider.FromXmlString(rsaPrivate); ASCIIEncoding enc = new ASCIIEncoding(); int numOfChars = enc.GetByteCount(dataToEncrypt); byte[] tempArray = enc.GetBytes(dataToEncrypt); byte[] result = provider.Encrypt(tempArray, true); string resultString = Convert.ToBase64String(result); Console.WriteLine("Encrypted : " + resultString); return resultString; } I do get what seems to be an encrypted value. In the test crypto web method that I created, I then take this encrypted data, try to decrypt it using the client's public key, and send it back in the clear. But this is where the exception is thrown. Here is the method responsible for this:
    public string DecryptRSA(string data, string merchantId) { string clearData = null; try { CspParameters param = new CspParameters(); param.Flags = CspProviderFlags.UseMachineKeyStore; RSACryptoServiceProvider rsaProvider = new RSACryptoServiceProvider(param); string merchantRsaPublic = GetXmlRsaKey(merchantId); rsaProvider.FromXmlString(merchantRsaPublic); byte[] asciiString = Encoding.ASCII.GetBytes(data); byte[] decryptedData = rsaProvider.Decrypt(asciiString, false); clearData = Convert.ToString(decryptedData); } catch (CryptographicException ex) { Log.Error("A cryptographic error occurred trying to decrypt a value for " + merchantId, ex); } return clearData; } If anyone could help me that would be awesome; as I have said, I have not done much with C# RSA encryption/decryption.
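    Three things stand out in the code above for anyone hitting the same exception. First, the decrypt side feeds Encoding.ASCII.GetBytes(data) to Decrypt, but the ciphertext was Base64-encoded after encryption, so it must be decoded with Convert.FromBase64String; the ASCII bytes of a Base64 string are longer than the 128-byte modulus of a 1024-bit key, which is exactly the error reported. Second, RSACryptoServiceProvider cannot encrypt with a private key and decrypt with the public one; that direction is signing (SignData/VerifyData). Third, Encrypt is called with OAEP padding (true) while Decrypt passes false; the flags must match, and Convert.ToString on a byte[] returns the type name rather than the plaintext. A minimal sketch of a consistent round trip (class and method names here are illustrative, not from the original project):

    ```csharp
    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class RsaRoundTrip
    {
        // Encrypt with the RECIPIENT's public key, using OAEP padding.
        public static string Encrypt(string plainText, string publicKeyXml)
        {
            using (var rsa = new RSACryptoServiceProvider(1024))
            {
                rsa.FromXmlString(publicKeyXml);
                byte[] cipher = rsa.Encrypt(Encoding.UTF8.GetBytes(plainText), true);
                return Convert.ToBase64String(cipher);   // safe for transport
            }
        }

        // Decrypt with the matching PRIVATE key and the same padding flag.
        public static string Decrypt(string base64Cipher, string privateKeyXml)
        {
            using (var rsa = new RSACryptoServiceProvider(1024))
            {
                rsa.FromXmlString(privateKeyXml);
                byte[] cipher = Convert.FromBase64String(base64Cipher); // not Encoding.ASCII.GetBytes
                byte[] plain = rsa.Decrypt(cipher, true);               // same padding flag as Encrypt
                return Encoding.UTF8.GetString(plain);                  // not Convert.ToString(byte[])
            }
        }
    }
    ```

    If the actual goal is "prove the client produced this data", that is a signature made with the client's private key and verified with their public key, not encryption.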


  • [Closed] Oracle JDBC connection with Weblogic 10 datasource mapping, giving java.sql.SQLException: Closed Connection

    - by gauravkarnatak
    I am using a WebLogic 10 JNDI datasource to create JDBC connections, and it is giving me java.sql.SQLException: Closed Connection. Below is my config: <?xml version="1.0" encoding="UTF-8"?> <jdbc-data-source xmlns="http://www.bea.com/ns/weblogic/90" xmlns:sec="http://www.bea.com/ns/weblogic/90/security" xmlns:wls="http://www.bea.com/ns/weblogic/90/security/wls" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.bea.com/ns/weblogic/920 http://www.bea.com/ns/weblogic/920.xsd"> <name>XL-Reference-DS</name> <jdbc-driver-params> <url>jdbc:oracle:oci:@abc.XL.COM</url> <driver-name>oracle.jdbc.driver.OracleDriver</driver-name> <properties> <property> <name>user</name> <value>DEV_260908</value> </property> <property> <name>password</name> <value>password</value> </property> <property> <name>dll</name> <value>ocijdbc10</value> </property> <property> <name>protocol</name> <value>oci</value> </property> <property> <name>oracle.jdbc.V8Compatible</name> <value>true</value> </property> <property> <name>baseDriverClass</name> <value>oracle.jdbc.driver.OracleDriver</value> </property> </properties> </jdbc-driver-params> <jdbc-connection-pool-params> <initial-capacity>1</initial-capacity> <max-capacity>100</max-capacity> <capacity-increment>1</capacity-increment> <test-connections-on-reserve>true</test-connections-on-reserve> <test-table-name>SQL SELECT 1 FROM DUAL</test-table-name> </jdbc-connection-pool-params> <jdbc-data-source-params> <jndi-name>ReferenceData</jndi-name> <global-transactions-protocol>OnePhaseCommit</global-transactions-protocol> </jdbc-data-source-params> </jdbc-data-source> When I run a bulk task, where lots of connections are made and closed, it sometimes throws a closed-connection exception for one of the tasks in the bulk run. Below is the detailed exception: java.sql.SQLException: Closed Connection at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:111) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:145) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:207) at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:3512) at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3265) at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3367) Any ideas?
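    One hedged thing to try, not a confirmed fix: the pool above only tests connections on reserve, so a connection the database or network has silently dropped can still sit in the pool between tests. WebLogic's standard pool parameters can retest idle connections more aggressively; the two elements below are regular jdbc-connection-pool-params settings, with illustrative values:

    ```xml
    <jdbc-connection-pool-params>
      <!-- existing settings ... -->
      <test-connections-on-reserve>true</test-connections-on-reserve>
      <test-frequency-seconds>60</test-frequency-seconds>
      <!-- 0 = never trust an idle connection without retesting it first -->
      <seconds-to-trust-an-idle-pool-connection>0</seconds-to-trust-an-idle-pool-connection>
    </jdbc-connection-pool-params>
    ```

    The stack trace shows the statement finding its connection closed at execute time, so it is also worth checking the application for statements cached or reused after their connection has been returned to the pool.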


  • Migrate Spring JPA DAO unit testing to google app engine

    - by twingocerise
    I'm trying to put together a simple environment where I can get Spring, Maven, JPA, Google App Engine, and DAO unit testing working happily all together. The goal is to be able to run a simple DAO unit test that creates an entity and then loads it again with a simple find to check it's been created properly, all of this from my Maven build. My DAO is making use of the JPA entity manager (query(), persist(), etc.). I've got it working with no problem against hsqldb and a datasource, but I'm struggling to get it working with App Engine. My questions are: 1) I'm using an entity manager, injecting my persistence unit as follows. Is it OK? Is there any need for a datasource or something special? I thought not, but correct me if I'm wrong. applicationContext.xml <bean id='entityManagerFactory' class='org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean'> <property name="persistenceUnitName" value="transactions-optional" /> </bean> Persistence.xml <persistence-unit name="transactions-optional"> <provider>org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider</provider> <properties> <property name="datanucleus.NontransactionalRead" value="true"/> <property name="datanucleus.NontransactionalWrite" value="true"/> <property name="datanucleus.ConnectionURL" value="appengine"/> </properties> </persistence-unit> 2) What are the dependencies I need to add to my pom file to be able to run the unit test making use of the entityManager? And which versions? I found loads of things about appengine-api-labs/stubs/testing, but none of them got it working, i.e. I'm getting a missing JDO dependency while I'm using JPA... I also get loads of conflicts when I try to add some jars (DataNucleus and such). So far I'm trying appengine-api-1.0-sdk v1.7.0 - ASM-all v3.3 - datanucleus core/api-jpa/enhancer v3.1.0 - datanucleus-appengine v2.0.1.1 and all the GAE testing jars v1.7.0. 3) Is there anything I need to add to my surefire plugin (test runner) to make sure it picks up all the dependencies? I'm getting a frustrating ClassNotFound on DatastorePersistenceProvider even though it is in my classpath (I checked the jars and the mvn dependency:tree). I had a look at this, but it doesn't seem to be working at all: http://www.vertigrated.com/blog/2011/02/working-maven-3-google-app-engine-plugin-with-gwt-support/ 4) Do I need to use any sort of local helper to test my DAOs? Ideally I'd want to test my DAO layer "as is" with the entity manager... what's your opinion? Has anyone managed to run a unit test using JPA on Google App Engine? 5) Do I need to set up any sort of gae.home somewhere in my pom file? Would anything make use of it (a plugin or something)? 6) Is the gwt-maven plugin of any help if I don't use GWT? I'm writing a simple web service making use of App Engine, not a GWT app... Any help would be much appreciated, as I've been struggling for 2 days now... Cheers, V.
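    On question 2, a sketch of the Maven coordinates involved, using only the versions already named in the question; this is an unverified combination rather than a known-good recipe, since DataNucleus/App Engine version pairings are notoriously picky:

    ```xml
    <!-- Unverified sketch: versions taken from the question above. -->
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-api-1.0-sdk</artifactId>
      <version>1.7.0</version>
    </dependency>
    <dependency>
      <groupId>com.google.appengine.orm</groupId>
      <artifactId>datanucleus-appengine</artifactId>
      <version>2.0.1.1</version>
    </dependency>
    <dependency>
      <groupId>org.datanucleus</groupId>
      <artifactId>datanucleus-core</artifactId>
      <version>3.1.0</version>
    </dependency>
    <dependency>
      <groupId>org.datanucleus</groupId>
      <artifactId>datanucleus-api-jpa</artifactId>
      <version>3.1.0</version>
    </dependency>
    <!-- Local unit testing against the datastore stub -->
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-testing</artifactId>
      <version>1.7.0</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-api-stubs</artifactId>
      <version>1.7.0</version>
      <scope>test</scope>
    </dependency>
    ```

    The missing-JDO symptom usually means the DataNucleus enhancer ran with its default (JDO) API; if the maven-datanucleus-plugin is in use, its api configuration needs to be set to JPA.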


  • Query a recordset

    - by Dion
    I'm trying to work out how to move the $sql_pay_det query outside of the $sql_pay loop (so, in effect, to query the already-fetched $rs_pay_det rather than creating a new $rs_pay_det for each iteration of the loop). At the moment my code looks like: //Get payroll transactions $sql_pay = 'SELECT Field1, DescriptionandURLlink, Field5, Amount, Field4, PeriodName FROM tblgltransactionspayroll WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND PeriodNo = "'.$month.'"'; $rs_pay = mysql_query($sql_pay); $row_pay = mysql_fetch_assoc($rs_pay); while($row_pay = mysql_fetch_array($rs_pay)) { $employee_name = $row_pay['Field1']; $assign_no = $row_pay['DescriptionandURLlink']; $pay_period = $row_pay['Field5']; $mth_name = $row_pay['PeriodName']; $amount = $row_pay['Amount']; $total_amount = $total_amount + $amount; $amount = my_number_format($amount, 2, ".", ","); $sql_pay_det = 'SELECT ElementDesc, Amount, WTEWorked, WTEPaid, WTEContract, Payscale FROM tblpayrolldetail WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND AccountingPeriod = "'.$mth_name.'" AND EmployeeRef = "'.$assign_no.'"'; $rs_pay_det = mysql_query($sql_pay_det); $row_pay_det = mysql_fetch_assoc($rs_pay_det); while($row_pay_det = mysql_fetch_array($rs_pay_det)) { $element_det = $row_pay_det['ElementDesc']; $amount_det = $row_pay_det['Amount']; $wte_worked = $row_pay_det['WTEWorked']; $wte_paid = $row_pay_det['WTEPaid']; $wte_cont = $row_pay_det['WTEContract']; $payscale = $row_pay_det['Payscale']; //Get band/point and annual salary where element is basic pay if ($element_det =="3959#Basic Pay"){ $sql_payscale = 'SELECT txtPayscaleName, Salary FROM tblpayscalemapping WHERE txtPayscale = "'.$payscale.'"'; $rs_payscale = mysql_query($sql_payscale); $row_payscale = mysql_fetch_assoc($rs_payscale); $grade = $row_payscale['txtPayscaleName']; $salary = "£" . my_number_format($row_payscale['Salary'], 0, ".", ","); } } } I've tried doing this: //Get payroll transactions $sql_pay = 'SELECT Field1, DescriptionandURLlink, Field5, Amount, Field4, PeriodName FROM tblgltransactionspayroll WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND PeriodNo = "'.$month.'"'; $rs_pay = mysql_query($sql_pay); $row_pay = mysql_fetch_assoc($rs_pay); //Get payroll detail recordset $sql_pay_det = 'SELECT ElementDesc, Amount, WTEWorked, WTEPaid, WTEContract, Payscale, EmployeeRef FROM tblpayrolldetail WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND AccountingPeriod = "'.$mth_name.'"'; $rs_pay_det = mysql_query($sql_pay_det); while($row_pay = mysql_fetch_array($rs_pay)) { $employee_name = $row_pay['Field1']; $assign_no = $row_pay['DescriptionandURLlink']; $pay_period = $row_pay['Field5']; $mth_name = $row_pay['PeriodName']; $amount = $row_pay['Amount']; $total_amount = $total_amount + $amount; $amount = my_number_format($amount, 2, ".", ","); //Query $rs_pay_det $sql_pay_det2 = 'SELECT ElementDesc, Amount, WTEWorked, WTEPaid, WTEContract, Payscale FROM "'.$rs_pay_det.'" WHERE EmployeeRef = "'.$assign_no.'"'; $rs_pay_det2 = mysql_query($sql_pay_det2); $row_pay_det2 = mysql_fetch_assoc($rs_pay_det2); while($row_pay_det = mysql_fetch_array($rs_pay_det)) { $element_det = $row_pay_det['ElementDesc']; $amount_det = $row_pay_det['Amount']; $wte_worked = $row_pay_det['WTEWorked']; $wte_paid = $row_pay_det['WTEPaid']; $wte_cont = $row_pay_det['WTEContract']; $payscale = $row_pay_det['Payscale']; //Get band/point and annual salary where element is basic pay if ($element_det =="3959#Basic Pay"){ $sql_payscale = 'SELECT txtPayscaleName, Salary FROM tblpayscalemapping WHERE txtPayscale = "'.$payscale.'"'; $rs_payscale = mysql_query($sql_payscale); $row_payscale = mysql_fetch_assoc($rs_payscale); $grade = $row_payscale['txtPayscaleName']; $salary = "£" . my_number_format($row_payscale['Salary'], 0, ".", ","); } } } But I get an error on $row_pay_det2 saying "mysql_fetch_assoc(): supplied argument is not a valid MySQL result resource".
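    The error comes from the second snippet interpolating a result resource into the FROM clause: a result set is not a table, so it cannot be queried with SQL again. One common restructuring, sketched here with the table and column names from the question (the old mysql_* API is kept only to match the original code), is to fetch all detail rows once and index them by EmployeeRef in a PHP array, so the outer loop does a lookup instead of a query. This assumes $mth_name is the same for every outer row, which holds here since it derives from the single $month used in the outer query:

    ```php
    <?php
    // Fetch ALL detail rows for this cost centre / subjective / period once.
    $sql_pay_det = 'SELECT EmployeeRef, ElementDesc, Amount, WTEWorked, WTEPaid,
                           WTEContract, Payscale
                    FROM tblpayrolldetail
                    WHERE CostCentreCode = "' . $cc . '"
                      AND SubjectiveCode = "' . $subj . '"
                      AND AccountingPeriod = "' . $mth_name . '"';
    $rs_pay_det = mysql_query($sql_pay_det);

    // Group the rows by employee so no further queries are needed.
    $details_by_employee = array();
    while ($row = mysql_fetch_assoc($rs_pay_det)) {
        $details_by_employee[$row['EmployeeRef']][] = $row;
    }

    // Inside the main payroll loop, replace the inner query with a lookup:
    // $det_rows = isset($details_by_employee[$assign_no])
    //     ? $details_by_employee[$assign_no] : array();
    // foreach ($det_rows as $row_pay_det) { /* existing processing */ }
    ```

    The same trick applies to the tblpayscalemapping lookup. Separately, interpolating values into SQL like this is injection-prone; the mysqli or PDO APIs with bound parameters are the safer modern route.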


  • Learning to implement DIC in MVC

    - by Tom
    I am learning to apply a DI container (DIC) to an MVC project. I have sketched this DDD-ish, DIC-ready-ish layout to the best of my understanding. I have read many blogs, articles, and wikis over the last few days. However, I am not confident about implementing it correctly. Could you please demonstrate how to put these into a DIC the proper way? I prefer Ninject or Windsor after all the reading, but any DIC will do as long as I can get the correct idea of how to do it. Web controller... public class AccountBriefingController { //create private IAccountServices accountServices { get; set; } public AccountBriefingController(IAccountServices accsrv) { accountServices = accsrv; } //do work public ActionResult AccountBriefing(string userid, int days) { //get days of transaction records for this user BriefingViewModel model = accountServices.GetBriefing(userid, days); return View(model); } } View model... public class BriefingViewModel { //from user repository public string UserId { get; set; } public string AccountNumber {get; set;} public string FirstName { get; set; } public string LastName { get; set; } //from account repository public string Credits { get; set; } public List<string> Transactions { get; set; } } Service layer... public interface IAccountServices { BriefingViewModel GetBriefing(); } public class AccountServices { //create private IUserRepository userRepo {get; set;} private IAccountRepository accRepo {get; set;} public AccountServices(UserRepository ur, AccountRepository ar) { userRepo = ur; accRepo = ar; } //do work public BriefingViewModel GetBriefing(string userid, int days) { var model = new BriefingViewModel(); //<---is that okay to new a model here?? var user = userRepo.GetUser(userid); if(user != null) { model.UserId = userid; model.AccountNumber = user.AccountNumber; model.FirstName = user.FirstName; model.LastName = user.LastName; //account records model.Credits = accRepo.GetUserCredits(userid); model.Transactions = accRepo.GetUserTransactions(userid, days); } return model; } } Domain layer and data models... public interface IUserRepository { UserDataModel GetUser(userid); } public interface IAccountRepository { List<string> GetUserTransactions(userid, days); int GetUserCredits(userid); } // Entity Framework DBContext goes under here Please point out if my implementation is wrong, e.g. I can feel that new-ing a model inside AccountServices.GetBriefing seems wrong to me, but I don't know how to fit this into a DIC. Thank you very much for your help!
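    As a starting point, here is what the wiring could look like in Ninject (one of the two containers mentioned). This is a hedged sketch against the interfaces above, assuming concrete UserRepository/AccountRepository classes exist; note the AccountServices constructor should take IUserRepository/IAccountRepository (the interfaces) rather than the concrete types for these bindings to be useful:

    ```csharp
    using Ninject;
    using Ninject.Modules;

    // Illustrative Ninject module binding each interface from the question
    // to its concrete implementation.
    public class AppModule : NinjectModule
    {
        public override void Load()
        {
            Bind<IAccountServices>().To<AccountServices>();
            Bind<IUserRepository>().To<UserRepository>();
            Bind<IAccountRepository>().To<AccountRepository>();
        }
    }

    // Composition root, e.g. in Global.asax at application start:
    // var kernel = new StandardKernel(new AppModule());
    // Plug the kernel into an MVC dependency resolver / controller factory
    // so AccountBriefingController receives its IAccountServices.
    ```

    On the specific worry in the question: new-ing the view model inside GetBriefing is generally fine. View models are short-lived data carriers, not dependencies with behavior, so they do not need to come from the container.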


  • If I use a facade class with generic methods to access the JPA API, how should I provide additional processing for specific types?

    - by Shaun
    Let's say I'm making a fairly simple web application using Java EE specs (I've heard this is possible). In this app, I only have about 10 domain/data objects, and these are represented by JPA entities. Architecturally, I would consider the JPA API to perform the role of a DAO. Of course, I don't want to use the EntityManager directly in my UI (JSF), and I need to manage transactions, so I delegate these tasks to the so-called service layer. More specifically, I would like to be able to handle these tasks in a single DataService class (often also called CrudService) with generic methods. See this article by Adam Bien for an example interface: http://www.adam-bien.com/roller/abien/entry/generic_crud_service_aka_dao My project differs from that article in that I can't use EJBs, so my service classes are essentially just named beans, and I handle transactions manually. Regardless, what I want is a single interface for simple CRUD operations on my data objects, because having a different class for each data type would lead to a lot of duplicate and/or unnecessary code. Ideally, my views would be able to use a method such as public <T> List<T> findAll(Class<T> type) { ... } to retrieve data. Using JSF, it might look something like this: <h:dataTable value="#{dataService.findAll(data.class)}" var="d"> ... </h:dataTable> Similarly, after validating forms, my controller could submit the data with a method such as: public <T> void add(T entity) { ... } Granted, you'd probably actually want to return something useful to the caller. In any case, this works well if your data can be treated as homogeneous in this manner. Alas, it breaks down when you need to perform additional processing on certain objects before passing them on to JPA. For example, let's say I'm dealing with Books and Authors, which have a many-to-many relationship. Each Book has a set of IDs referring to its authors, and each Author has a set of IDs referring to their books. Normally, JPA can manage this kind of relationship for you, but in some cases it can't (for example, the Google App Engine JPA provider doesn't support this). Thus, when I persist a new book, for example, I may need to update the corresponding author entities. My question, then, is whether there's an elegant way to handle this, or whether I should reconsider the sanity of my whole design. Here are a couple of ways I see of dealing with it: The instanceof operator. I could use this to target certain classes when special processing is needed. Perhaps maintainability suffers and it isn't beautiful code, but if there's only 10 or so domain objects it can't be all that bad... could it? Make a different service for each entity type (i.e., BookService and AuthorService). All services would inherit from a generic DataService base class and override methods if special processing is needed. At this point, you could probably also just call them DAOs instead. As always, I appreciate the help. Let me know if any clarifications are needed, as I left out many smaller details.
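    A middle ground between those two options is to keep the single generic service but let it dispatch to optional per-type hooks, so only the entities that need extra processing (like the Book/Author back-references) get any, and no instanceof chains accumulate in the generic methods. A rough sketch of the idea; DataService, PrePersistHook, and the Book registration are illustrative names, with Book standing in for the entity from the example above:

    ```java
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import javax.persistence.EntityManager;
    import javax.persistence.criteria.CriteriaQuery;
    import javax.persistence.criteria.Root;

    // Single generic service; special cases live in a hook registry.
    public class DataService {

        // Hook for entities that need extra work before persisting.
        public interface PrePersistHook<T> {
            void beforePersist(T entity);
        }

        private final EntityManager em;
        private final Map<Class<?>, PrePersistHook<?>> hooks =
                new HashMap<Class<?>, PrePersistHook<?>>();

        public DataService(EntityManager em) {
            this.em = em;
            // Only Book needs special handling (maintaining the Author side
            // of the relationship by hand); other entities register nothing.
            hooks.put(Book.class, new PrePersistHook<Book>() {
                public void beforePersist(Book book) {
                    // update each referenced Author's book-ID set here
                }
            });
        }

        public <T> List<T> findAll(Class<T> type) {
            CriteriaQuery<T> cq = em.getCriteriaBuilder().createQuery(type);
            Root<T> root = cq.from(type);
            return em.createQuery(cq.select(root)).getResultList();
        }

        @SuppressWarnings("unchecked")
        public <T> void add(T entity) {
            PrePersistHook<T> hook = (PrePersistHook<T>) hooks.get(entity.getClass());
            if (hook != null) {
                hook.beforePersist(entity); // type-specific step, invisible to callers
            }
            em.persist(entity);
        }
    }
    ```

    The hook map keeps the special cases discoverable in one place; with only about 10 entities, either this or option 2 (thin per-entity subclasses of a generic base) stays manageable, while plain instanceof checks tend to spread.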


  • Sun Oracle Database Machine at Romania's Banca Transilvania

    - by Fekete Zoltán
    Oracle press release: Banca Transilvania, first institution in Romania to use Sun Oracle Database Machine (English version). Success story and customer case study available as a PDF. Oracle announced the Database Machine V2 in September 2009, and the first bank in the world to run the Database Machine V2 in production is Romania's Banca Transilvania! Read the press release. Banca Transilvania has 1.5 million customers. "This system, product of Oracle and Sun, is the fastest server in the world for data storage, online transactions processing and data warehousing applications." Robert C. Rekkers, Banca Transilvania CEO, stated: "Business information is accessed 30 times faster using the new system, leading to quicker decisions and a better data base segmentation"; in other words, with the Database Machine they can answer business questions 30 times faster than with the previous system. Leontin Toderici, Banca Transilvania COO, said the following: "The acquisition price was excellent, as the costs were below those of an ordinary system", that is, the system's price was excellent and its cost was lower than that of conventional systems. Sorin Mindrutescu, head of Oracle Romania, is proud that a Romanian company is among the first users of this innovative system: "Oracle Exadata V2 is the result of over 30 years of experience in hardware and software development of two leader companies. I am glad that a top Romanian company is amongst the first in the world to use this innovative product." The Exadata product family and the Database Machine are excellent platforms for running the databases of OLTP systems, data warehouses, and consolidation solutions. A single package contains the software and the "smart" hardware, the compute and storage components, all connected by extremely fast InfiniBand links. Banca Transilvania tested the Database Machine at Oracle's Reading (UK) centre and saw performance ten times, in places seventy-two times, faster than their previous system (a 10-72x performance gain), mentioned Tudor Iliescu, Trend Import-Export CEO. The central Oracle press release: Customers Select Oracle® Exadata for Extreme Performance of Data Warehouse and OLTP Applications


  • SQL SERVER – Difference Between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE

    - by pinaldave
    Today we are going to discuss something very simple but quite commonly confused: two options of ALTER DATABASE. The first one is ALTER DATABASE … ROLLBACK IMMEDIATE and the second one is WITH NO_WAIT. Many people think they are the same, or are not sure of the difference between these two options. Before we continue our explanation, let us go through the explanation given by Books Online. ROLLBACK AFTER integer [SECONDS] | ROLLBACK IMMEDIATE: specifies whether to roll back after a specified number of seconds or immediately. NO_WAIT: specifies that if the requested database state or option change cannot complete immediately, without waiting for transactions to commit or roll back on their own, the request will fail. If you have understood the difference by now, there is no need to proceed further. If you are still confused, continue with the rest of the post. There is one big difference between ROLLBACK and NO_WAIT. In the case of an incomplete transaction, ALTER DATABASE … ROLLBACK IMMEDIATE rolls that incomplete transaction back immediately, whereas ALTER DATABASE … NO_WAIT fails and terminates its own ALTER DATABASE statement instead. I think it can be clearly explained with the help of the following examples. Option 1: ALTER DATABASE … ROLLBACK. Connection 1, simulating some operation using WAITFOR DELAY: WAITFOR DELAY '1:00:00' Connection 2: ALTER DATABASE TestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE; Option 2: ALTER DATABASE … NO_WAIT. Connection 1, simulating some operation using WAITFOR DELAY: WAITFOR DELAY '1:00:00' Connection 2: ALTER DATABASE TestDb SET SINGLE_USER WITH NO_WAIT; Let me know if this example was simple enough. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQLAuthority News – Happy Deepavali and Happy New Year

    - by pinaldave
    Diwali, or Deepavali, is popularly known as the festival of lights. It literally means "array of light" or "row of lamps". Today we make small clay lamps, fill them with oil, and light them up. The significance of lighting the lamp is the triumph of good over evil. I work every single day of the year, but today I am spending my time with family and my little one. I make sure that my daughter is aware of our culture and learns to celebrate the festival with the same passion and values that I have. Every year on this day I do not write a long blog post, but rather a small post with various SQL tips and tricks. After reading them you should quickly get back to your friends and family; it is the most important festival day. Here are a few tips and tricks: Take regular full backups of your database. Avoid cursors if they can be replaced by a set-based process. Keep your index maintenance scripts handy and execute them at intervals. Consider Solid State Drives (SSD) for crucial database and tempdb placement. Update statistics for OLTP transactions at intervals. I guess that's it for today. If you still have more time to learn, here are a few things you should consider: Get FREE books by signing up for tomorrow's webcast by Rick Morelan. Watch the SQL in Sixty Seconds series (FREE SQL learning). Read my earlier 2300+ articles. Well, I am sure that will keep you busy for the rest of the day! Happy Diwali to All of You! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology


  • The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, the Process Management module has, starting with the 11.1.2 release of Oracle's Hyperion Planning, not only received a new name: "Approvals" now offers the possibility to further split Planning Units (each a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so-called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers, and approvers, called a Promotional Path. I'd like to introduce you to the changes and enhancements in this new process management and arouse your curiosity to check out more details on it. One reason for using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn't responsible for all data being entered into that entity, but only for part of it, it was not possible to split the ownership along an additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so-called Planning Unit Hierarchy (PUH) in Approvals, this gap is now closed. Complementary new Shared Services roles for Planning have been created in order to manage the setup and use of Approvals: the Approvals Administrator, consisting of the following roles: Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities); Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned; Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies, and can edit validation rules on data forms for which access is assigned (this includes Planner and Ownership Assigner responsibilities as well). Setup of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUHs or edit existing ones. The following window displays. After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are: All, which pre-selects all entities to be included in the definitions on the subsequent tabs; None, where manual entity selections will be made subsequently; Custom, which offers the selection of an ancestor and the relative generations that should be included for further definitions. Finally a pattern needs to be selected, which will determine the general flow of ownership: Free-form uses the flow/assignment of ownership according to Planning releases prior to 11.1.2. In Bottom-up, data input is done at the leaf member level; ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by additional reviewers defined in the promotional path. Distributed uses data input at the leaf level, while ownership starts at the top level and is then distributed down the organizational hierarchy (entities); after ownership reaches the lower levels, budgets are submitted back to the top through the approval process.
    Proceeding to the next step, a secondary dimension and the respective members from that dimension may now be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. To refine this list, you can click on the icon right beside the Selected Members field and use the check-boxes in the list that appears to deselect members. -------------------------------------------------------------------------------------------------------- TIP: In order to reduce maintenance of the PUH due to changes in the included dimensions (members added, moved, or removed), you should consider dynamically linking those dimensions in the PUH with the dimension hierarchies in the Planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria are applied by right-clicking the name of an entity activated as a planning unit, then selecting an item from the displayed list of include or exclude options (children, descendants, etc.). In any case, in order to apply dimension changes impacting the PUH, a synchronization must be run. Whether this is really necessary is shown on the first screen after selecting, from the menu, Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No, or Locked, where the last one indicates that another user is currently changing or synchronizing the PUH. Select one of the unsynchronized PUHs (status No) and click the Synchronize option in order to execute it. -------------------------------------------------------------------------------------------------------- In the next step, owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right beside the columns for Owner and Reviewer, the respective assignments can be made in the order that you want them to review the planning unit. While it is possible to assign only one owner per entity or per combination of entity + member of the secondary dimension, the selection of reviewers may consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition, optional users may be defined to be notified about promotions for a planning unit. -------------------------------------------------------------------------------------------------------- TIP: Reviewers cannot change data; they can only review data according to their data access permissions and reject or promote planning units. -------------------------------------------------------------------------------------------------------- In order to complete your PUH definitions click Finish; this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions, click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, setup is completed for starting the approvals process.
    Start, stop, and control of the approvals process is now done under the Tools menu, then Manage Approvals. The new PUH feature is complemented by various additional settings and features, at least some of which should be mentioned here: export/import of PUHs, an Out-of-Office agent, validation rules that change the promotional/approval path if violated (including the use of User-Defined Attributes (UDAs)), and various new and helpful reviewer actions with corresponding approval states. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning, and Hyperion Web Analysis.


  • Windows Azure Evolution – Welcome to VS2012

    - by Shaun
    When Microsoft released the first preview version of Windows 8 and Visual Studio, many people in the community were asking if the Windows Azure tools were available for it. The answer was "NO". Microsoft said the Windows Azure tools would support only Visual Studio 2010 for the time being, but promised that they would work once Visual Studio 2012 was finally released. Now, along with the new Windows Azure platform, we have the latest Windows Azure SDK 1.7, which is compatible with Visual Studio 2012 RC. You can retrieve the latest version of the Windows Azure SDK through the Web Platform Installer, which I think is the easiest and simplest way to download and install it, since besides the SDK itself it also needs some other components. To download the latest Windows Azure SDK from the Web Platform Installer, just go to the Windows Azure website, click Develop, then .NET, and click the blue "install" button. You then need to select which version of Visual Studio you want to use, Visual Studio 2010 or Visual Studio 2012 RC. After selecting the version you will download an EXE file. This file will lead you through installing the Web Platform Installer 4.0 (if you haven't installed it) and the latest Windows Azure SDK. You can see the version name is June 2012, 1.7. Finally, WebPI will detect the dependent components you need to download and begin the installation. But if you want to challenge yourself, you can download the components and install them manually. The standalone installations are listed on this page, with instructions on how to install them and the necessary prerequisites. Once you have finished the installation you can open Visual Studio 2012 RC and, as usual, it needs to be run as administrator. If you click the New Project link from the start page and navigate to the Cloud category, you will find that there is no project template available. Is there anything wrong? If you change the target framework from the default .NET 4.5 to .NET 4, you will see the Azure project template. This is because Windows Azure instances do not currently support .NET 4.5. After clicking OK you will see the role creation window, which is similar to what you have seen before. But there are some new role templates in this SDK. Firstly, you have an ASP.NET MVC 4 web role available, which means you can create ASP.NET MVC 4 applications for internet, intranet, mobile, and WebAPI on the cloud. Then there are two new worker role templates, "Cache Worker Role" and "Worker Role with Service Bus Queue". "Worker Role with Service Bus Queue" is a worker role with the necessary references added to access the Windows Azure Service Bus Queue. It also has some basic sample code in the worker role class that reads messages from the queue when started. The "Cache Worker Role" is a worker role that has the in-memory distributed cache feature enabled by default. This feature is different from Windows Azure Caching: it allows role instances to use their memory as an in-memory distributed cache cluster. By using this feature you can have one or more worker roles act as dedicated cache clusters. Alternatively, you can use part of your web role's and worker role's memory as the cache cluster as well. Let's just create an ASP.NET MVC 4 Web Role and press F5 to run it under the local emulator. If you have been working with Azure for a while, you will know that on a fresh Azure SDK installation the local storage emulator normally has to be set up before running locally.
    But in this version, when we start our Azure project, Visual Studio checks whether the storage emulator has been initialized. If not, it runs the initializer automatically. And as you can see, in this version the storage emulator relies on the SQL Server 2012 LocalDB feature. It creates the emulator database and tables in the default local database. You can set the storage emulator to use a standard SQL Server default instance with the command "dsinit /instance:.". The "dsinit" tool is now located at %PROGRAM FILES%\Microsoft SDKs\Windows Azure\Emulator\devstore. After Visual Studio compiled and deployed the package, our website should be shown in the browser. This is the MVC 4 Web Role home page on my Windows 8 machine in IE10. Another thing you might notice is that in this version the compute emulator utilizes IIS Express to host the web roles instead of the full IIS. You can add breakpoints in the code and debug, and you can use the local storage emulator to test your code for accessing the storage service. All of this is the same as what you are doing now on SDK 1.6. You can switch to using IIS to run your web role in the local emulator: just open the Windows Azure project property window and, on the Web page, select "Use IIS Web Server". For more information about this, please have a look at Nuno's blog post. On the role property page in Visual Studio there are no massive changes. You can configure your role settings such as the endpoints, certificates, local storage, etc. One thing that was added is the Caching tab. Here you can specify whether to enable the caching feature and how much memory you want to use as the cache cluster. I will introduce more details about it in future posts. The publish and package features are also unchanged. You can publish your project to Azure directly through Visual Studio 2012, or you can create the package and upload it manually. Below is the SDK version of my deployment, which is 1.7.30602.1703, in the developer portal. Summary: In this post I introduced the new Windows Azure SDK 1.7, especially how it works with the latest Visual Studio 2012 RC. There are no significant changes to the Visual Studio tooling in this version, but there are some small enhancements, such as ASP.NET MVC 4, the Cache Worker Role, and the use of SQL Server 2012 LocalDB and IIS Express. Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes the Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but is not restricted to, how a database table can be mapped to a POJO and vice versa, and provides mechanisms for how a PersistenceContext can be injected into a @Stateless bean and then used for performing different operations on the database table and writing typesafe queries. There are several well-known advantages of RDBMS, but the NoSQL movement has gained traction over the past couple of years. The NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As Philosophy of NoSQL explains, NoSQL databases were designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more of a category of databases that is better known for what it is not than for what it is. The basic principles of NoSQL databases are: No need to have a pre-defined schema, which makes them schema-less databases. Addition of new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives flexibility to change the format of data at any time without downtime or reduced service levels. Also, there are no joins happening on the server, because there is no structure and thus no relations between objects. Scalability and performance are more important than the entire set of functionality typically provided by an RDBMS. This set of databases provides eventual consistency and/or transactions restricted to single items, with more focus on CRUD. They are not restricted to SQL for accessing the information stored in the backing database. Designed to scale out (horizontally) instead of up (vertically), which is important knowing that databases, and everything else as well, are moving into the cloud; RDBMS can scale out using sharding, but that requires complex management and is not for the faint of heart. Unlike RDBMS, which require a separate caching tier, most NoSQL databases come with integrated caching. Designed for less management: simpler data models lead to lower administration as well. There are primarily three types of NoSQL databases: key-value stores (e.g. Cassandra and Riak), document databases (MongoDB or CouchDB), and graph databases (Neo4J). You may think NoSQL is a panacea, but as I mentioned above they are not meant to replace the mainstream databases, and here is why: RDBMS have been around for many years, are very stable, and are functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge and brings excitement to developers, but enterprises are cautious about them. Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises. NoSQL databases are good for CRUD operations, but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data, but that was not the original intention of NoSQL databases, and they are lacking in that area; generating any meaningful information beyond CRUD requires extensive programming.
    They are not suited for complex transactions such as banking systems or other highly transactional applications requiring two-phase commit, and SQL cannot be used with NoSQL databases, so writing even simple queries can be involved. Enough talking, let's take a look at some code. This blog has published multiple posts on how to access an RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show how you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Let's get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as: arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$ ./mongod ./mongod --help for help and startup options Sun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210 port=27017 dbpath=/data/db/ 64-bit Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version 4.5 Sun Jun 26 20:41:11 [initandlisten] git version: 433bbaa14aaba6860da15bd4de8edf600f56501b Sun Jun 26 20:41:11 [initandlisten] build sys info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41 Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017 Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017 The default directory for the database is /data/db and needs to be created as: sudo mkdir -p /data/db/ sudo chown `id -u` /data/db You can specify a different directory using the "--dbpath" option. Refer to the Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add the JavaServer Faces framework. Download the MongoDB Java Driver (2.6.3 as of this writing) and add it to the project library by selecting "Properties", "Libraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like: <h1>Add a new movie</h1><h:form> Name: <h:inputText value="#{movie.name}" size="20"/><br/> Year: <h:inputText value="#{movie.year}" size="6"/><br/> Language: <h:inputText value="#{movie.language}" size="20"/><br/> <h:commandButton actionListener="#{movieSessionBean.createMovie}" action="show" title="Add" value="submit"/></h:form> This page has a simple HTML form with three text boxes and a submit button. The text boxes take the name, year, and language of a movie, and the submit button invokes the "createMovie" method of "movieSessionBean" and then renders "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like: <head> <title><h1>List of movies</h1></title> </head> <body> <h:form> <h:dataTable value="#{movieSessionBean.movies}" var="m" > <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column> <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column> <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column> </h:dataTable> </h:form> This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by the "movieSessionBean.movies" property.
Now create the "Movie" class such that it looks like: import com.mongodb.BasicDBObject;import com.mongodb.BasicDBObject;import com.mongodb.DBObject;import javax.enterprise.inject.Model;import javax.validation.constraints.Size;/** * @author arun */@Modelpublic class Movie { @Size(min=1, max=20) private String name; @Size(min=1, max=20) private String language; private int year; // getters and setters for "name", "year", "language" public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; } public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; } @Override public String toString() { return name + ", " + year + ", " + language; }} Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file and which we added to our project earlier.  The complete javadoc for 2.6.3 can be seen here. Notice, this class also uses Bean Validation constraints and will be honored by the JSF layer. Finally, create "MovieSessionBean" stateless EJB with all the business logic such that it looks like: package org.glassfish.samples;import com.mongodb.BasicDBObject;import com.mongodb.DB;import com.mongodb.DBCollection;import com.mongodb.DBCursor;import com.mongodb.DBObject;import com.mongodb.Mongo;import java.net.UnknownHostException;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.ejb.Stateless;import javax.inject.Inject;import javax.inject.Named;/** * @author arun */@Stateless@Namedpublic class MovieSessionBean { @Inject Movie movie; DBCollection movieColl; @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } } public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); } public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }} The database is initialized in @PostConstruct. Instead of a working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes "find" method on the collection which is equivalent to the SQL query "select * from movies" and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project to see the output as: Enter some values in the text box and click on enter to see the result as: If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play ... SQL to MongoDB mapping shows mapping between traditional SQL -> Mongo query language. Tutorial shows fun things you can do with MongoDB. 
    Try the interactive online shell. The cookbook provides common ways of using MongoDB. In terms of this project, here are some tasks that can be tried: Encapsulate database management in a JPA persistence provider. Is it even worth it, given that the capabilities are going to be very different? MongoDB uses the "BSonObject" class for its JSON representation; add @XmlRootElement on a POJO and see how a compatible JSON representation can be generated. This would make the fromXXX and toXXX methods redundant.

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes the Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but is not restricted to, how a database table can be mapped to a POJO and vice versa, how a PersistenceContext can be injected into a @Stateless bean and then used for performing different operations on the database table, and how to write typesafe queries. There are several well-known advantages of RDBMS, but the NoSQL movement has gained traction over the past couple of years. NoSQL databases are not intended to be a replacement for mainstream RDBMS. As Philosophy of NoSQL explains, NoSQL databases were designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" describes a category of databases better known for what it is not rather than what it is. The basic principles of NoSQL databases are:
    - No need for a pre-defined schema, which makes them schema-less databases. Adding new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives the flexibility to change the format of data at any time without downtime or reduced service levels. There are also no joins happening on the server because there is no structure and thus no relations between objects.
    - Scalability and performance matter more than the entire set of functionality typically provided by an RDBMS. These databases provide eventual consistency and/or transactions restricted to single items, with more focus on CRUD.
    - No restriction to SQL for accessing the information stored in the backing database.
    - Designed to scale out (horizontally) instead of scale up (vertically). This is important knowing that databases, and everything else as well, are moving into the cloud. RDBMS can scale out using sharding, but that requires complex management and is not for the faint of heart.
    - Unlike RDBMS, which require a separate caching tier, most NoSQL databases come with integrated caching.
    - Designed for less management; simpler data models lead to lower administration costs as well.
    There are primarily three types of NoSQL databases:
    - Key-value stores (e.g. Cassandra and Riak)
    - Document databases (MongoDB or CouchDB)
    - Graph databases (Neo4J)
    You may think NoSQL is a panacea, but as mentioned above they are not meant to replace mainstream databases, and here is why:
    - RDBMS have been around for many years, are very stable, and are functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge and brings excitement to developers, but enterprises are cautious about it.
    - Commercial databases like Oracle are well supported by the backing enterprises, with support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises.
    - NoSQL databases are good for CRUD operations, but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data; that was not the original intention of NoSQL databases, and they are lacking in that area. Generating any meaningful information beyond CRUD requires extensive programming.
    - Not suited for complex transactions such as banking systems or other highly transactional applications requiring 2-phase commit.
    - SQL cannot be used with NoSQL databases, and writing even simple queries can be involved.
    Enough talking, let's take a look at some code. This blog has published multiple posts on how to access an RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show how you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Let's get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as:

        arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$ ./mongod
        ./mongod --help for help and startup options
        Sun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210 port=27017 dbpath=/data/db/ 64-bit
        Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version 4.5
        Sun Jun 26 20:41:11 [initandlisten] git version: 433bbaa14aaba6860da15bd4de8edf600f56501b
        Sun Jun 26 20:41:11 [initandlisten] build sys info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
        Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017
        Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017

    The default directory for the database is /data/db and needs to be created as:

        sudo mkdir -p /data/db/
        sudo chown `id -u` /data/db

    You can specify a different directory using the "--dbpath" option. Refer to the Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add the JavaServer Faces framework. Download the MongoDB Java Driver (2.6.3 as of this writing) and add it to the project library by selecting "Properties", "Libraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like:

        <h1>Add a new movie</h1>
        <h:form>
            Name: <h:inputText value="#{movie.name}" size="20"/><br/>
            Year: <h:inputText value="#{movie.year}" size="6"/><br/>
            Language: <h:inputText value="#{movie.language}" size="20"/><br/>
            <h:commandButton actionListener="#{movieSessionBean.createMovie}"
                             action="show" title="Add" value="submit"/>
        </h:form>

    This page has a simple HTML form with three text boxes and a submit button. The text boxes take the name, year, and language of a movie, and the submit button invokes the "createMovie" method of "movieSessionBean" and then renders "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like:

        <head>
            <title><h1>List of movies</h1></title>
        </head>
        <body>
            <h:form>
                <h:dataTable value="#{movieSessionBean.movies}" var="m">
                    <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column>
                    <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column>
                    <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column>
                </h:dataTable>
            </h:form>
        </body>

    This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by the "movieSessionBean.movies" property.
    Now create the "Movie" class such that it looks like:

        import com.mongodb.BasicDBObject;
        import com.mongodb.DBObject;
        import javax.enterprise.inject.Model;
        import javax.validation.constraints.Size;

        /**
         * @author arun
         */
        @Model
        public class Movie {

            @Size(min=1, max=20)
            private String name;

            @Size(min=1, max=20)
            private String language;

            private int year;

            // getters and setters for "name", "year", "language"

            public BasicDBObject toDBObject() {
                BasicDBObject doc = new BasicDBObject();
                doc.put("name", name);
                doc.put("year", year);
                doc.put("language", language);
                return doc;
            }

            public static Movie fromDBObject(DBObject doc) {
                Movie m = new Movie();
                m.name = (String) doc.get("name");
                m.year = (int) doc.get("year");
                m.language = (String) doc.get("language");
                return m;
            }

            @Override
            public String toString() {
                return name + ", " + year + ", " + language;
            }
        }

    Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file, which we added to our project earlier. The complete javadoc for 2.6.3 can be seen here. Notice that this class also uses Bean Validation constraints, which will be honored by the JSF layer. Finally, create the "MovieSessionBean" stateless EJB with all the business logic such that it looks like:

        package org.glassfish.samples;

        import com.mongodb.BasicDBObject;
        import com.mongodb.DB;
        import com.mongodb.DBCollection;
        import com.mongodb.DBCursor;
        import com.mongodb.DBObject;
        import com.mongodb.Mongo;
        import java.net.UnknownHostException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.annotation.PostConstruct;
        import javax.ejb.Stateless;
        import javax.inject.Inject;
        import javax.inject.Named;

        /**
         * @author arun
         */
        @Stateless
        @Named
        public class MovieSessionBean {

            @Inject Movie movie;

            DBCollection movieColl;

            @PostConstruct
            private void initDB() throws UnknownHostException {
                Mongo m = new Mongo();
                DB db = m.getDB("movieDB");
                movieColl = db.getCollection("movies");
                if (movieColl == null) {
                    movieColl = db.createCollection("movies", null);
                }
            }

            public void createMovie() {
                BasicDBObject doc = movie.toDBObject();
                movieColl.insert(doc);
            }

            public List<Movie> getMovies() {
                List<Movie> movies = new ArrayList();
                DBCursor cur = movieColl.find();
                System.out.println("getMovies: Found " + cur.size() + " movie(s)");
                for (DBObject dbo : cur.toArray()) {
                    movies.add(Movie.fromDBObject(dbo));
                }
                return movies;
            }
        }

    The database is initialized in @PostConstruct. Instead of working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and is stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes the "find" method on the collection, which is equivalent to the SQL query "select * from movies", and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project, enter some values in the text boxes, and submit to see the list of movies (the original post shows screenshots of the output here). If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play...
    - SQL to MongoDB mapping shows the mapping between traditional SQL and the Mongo query language (see the sketch below).
    - The tutorial shows fun things you can do with MongoDB.
    - Try the interactive online shell.
    - The cookbook provides common ways of using MongoDB.
    In terms of this project, here are some tasks that can be tried:
    - Encapsulate database management in a JPA persistence provider. Is it even worth it, given that the capabilities are going to be very different?
    - MongoDB uses the "BSONObject" class for JSON representation; add @XmlRootElement on a POJO and see how a compatible JSON representation can be generated. This will make the fromXXX and toXXX methods redundant.
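    As a concrete instance of that SQL-to-Mongo mapping, here is a minimal sketch of a filtered query, written as a hypothetical extra method on the MovieSessionBean above. The "$gt" operator and DBCollection.find(DBObject) are standard 2.x driver API; the method itself is an illustration, not part of the original post:

        // "select * from movies where year > ?" in the Mongo query language;
        // "movieColl" is the DBCollection initialized in initDB() above.
        public List<Movie> getMoviesNewerThan(int year) {
            DBObject query = new BasicDBObject("year", new BasicDBObject("$gt", year));
            List<Movie> movies = new ArrayList<Movie>();
            for (DBObject dbo : movieColl.find(query).toArray()) {
                movies.add(Movie.fromDBObject(dbo));
            }
            return movies;
        }

    Operators such as "$gt" nest inside BasicDBObject the same way they do in the mongo shell, so the shell is a handy place to prototype a query before porting it to Java.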

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx
    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through:
    - What is Cloud Based Load Testing?
    - How have I been using this feature? - Success story!
    - Where can you find more resources on this feature?
    What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also lets you test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barriers to Load Testing & Performance Testing adoption are:
    - The high infrastructure and administration cost that comes with this phase of testing
    - The time taken to procure and set up the test infrastructure
    - Finding a use for this infrastructure investment after completion of testing
    Is cloud the answer?
    - 100% Visual Studio compatible
    - Scalable and realistic
    - Start testing in < 2 minutes
    - Intuitive
    - Pay only for what you need
    - Use existing on-premise tests on the cloud
    There are a lot of vendors out there offering Cloud Based Load Testing; to name a few: LoadStorm, SOASTA, BlazeMeter, Blitz, and others. The question you may want to ask is why you should go with Microsoft's cloud-based load test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API-based testing; for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, and these tests can be run on the cloud test rig and loaded with 'N' concurrent users for performance testing. If your assets are already hosted in Azure, and possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost because of the generated traffic, since the traffic is coming from the same data centre. The licensing or pricing information on Microsoft's cloud-based load test service is yet to be announced, but I would expect this to be priced attractively to match the market competition. The only additional configuration required for running load tests on Microsoft's Cloud Based Load Test service is to set the test run location to "Run tests using Visual Studio Team Foundation Service".
    How have I been using Microsoft's Cloud Based Load Test service? I have been part of the Microsoft Cloud Based Load Test service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out, which gave me the opportunity to test real-world applications at various clients using the Microsoft Load Test service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test service has been an insurance quote generation engine. This insurance quote generation engine is:
    - hosted in Windows Azure
    - expected to get quote requests from across the globe
    - expected to handle 5 million quote requests in a day (not clear how this load will be distributed across the day)
    There was no way I could simulate that kind of load from on premise without standing up additional hardware. But Microsoft's cloud-based Load Test service allowed me to test my key performance testing scenarios: simulating expected load, endurance testing, threshold testing, and testing for latency.
    Simulating expected load - approach to devising a load pattern: My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, generating 1 quote from the quote engine takes 0.5 seconds, i.e. one user generates 2 quotes per second. Now if you do the math (see the sketch at the end of this post):
    - 1 quote request by 1 user = 0.5 seconds
    - quotes generated by 1 user in 24 hours = (2 * 60 * 60) * 24 = 172,800
    - quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000
    This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, then you can devise your own load pattern. Some examples of load test patterns can be found here.
    Endurance Testing: To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.
    Threshold Testing: To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine, incrementing the user load with different variations of incremental user load per minute until the application crashed out and forced an IIS reset.
    Testing for Latency: Currently the Microsoft Load Test service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.
    More resources on Microsoft Cloud Based Load Test service - a few important links to get you started:
    - Download Visual Studio Ultimate 2013 Preview
    - Getting started guide for load testing using Team Foundation Service
    - Troubleshooting guide for FAQs and known issues
    - Team Foundation Service forum for questions and support
    - Detailed demo and presentation (link to Tech-Ed session recording)
    - Detailed demo and presentation (link to Build session recording)
    There are a few limits on the usage of the Microsoft Cloud Based Load Test service that you can read about here. If you have any feedback on the Microsoft Cloud Based Load Test service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
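    For what it's worth, the sizing arithmetic above is easy to generalize. A minimal sketch (plain Java; the numbers are the ones from this post, and the class is purely illustrative - it is not part of the Load Test service):

        // Estimate the concurrent users needed to hit a daily request target,
        // given the response time measured with a single user.
        public class LoadSizing {
            public static void main(String[] args) {
                double secondsPerRequest = 0.5;   // measured: 1 quote takes 0.5s
                long dailyTarget = 5000000L;      // expected quotes per day
                double perUserPerDay = (1 / secondsPerRequest) * 60 * 60 * 24; // 172,800
                long users = (long) Math.ceil(dailyTarget / perUserPerDay);
                // prints 29; the post rounds up to 30 users for headroom
                System.out.println(users + " concurrent user(s) needed");
            }
        }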

    Read the article

  • Bump the Bill

    - by David Dorf
    I'm writing this from 3,400 feet in the air somewhere between Chicago and Austin. GoGo In-flight strikes again. Is there anywhere I can't get a WiFi connection? While listening to Deacon Blues by Steely Dan and skimming the news, I just came across an interesting article on mobile payments. Remember when I wrote about the iPhone Bump application and its possible use in retail? Well, it looks like PayPal updated their mobile payments application to include the bump technology. Now it's possible to transfer money between individuals by bumping iPhones. According to the WSJ, PayPal did 24 million transactions in 2008 and 140 million in 2009 on mobile phones. As the technology gets easier to use, that number is bound to increase. Alternatives to PayPal include Google Checkout, Amazon Payments, wireless carriers ("put it on my phone bill"), smart cards (using your phone's SIM card), and iTunes. That last one comes courtesy of a story Joe Skorupa wrote on mobile payments. It looks like Apple allows iPhone apps to take micro-payments via iTunes accounts, so there may come a time when it's possible to use your iPhone to make a purchase in a retail store and have your credit card charged via your iTunes account. There are still some improvements in usability to be made before using a phone will be easier than swiping a credit card, but it's already better than fussing with cash.

    Read the article

  • Ground Control by David Baum

    - by JuergenKress
    As cloud computing moves out of the early-adopter phase, organizations are carefully evaluating how to get to the cloud. They are examining standard methods for developing, integrating, deploying, and scaling their cloud applications, and after weighing their choices, they are choosing to develop and deploy cloud applications based on Oracle Cloud Application Foundation, part of Oracle Fusion Middleware.
    Oracle WebLogic Server is the flagship software product of Oracle Cloud Application Foundation. Oracle WebLogic Server is optimized to run on Oracle Exalogic Elastic Cloud, the integrated hardware and software platform for the Oracle Cloud Application Foundation family. Many companies, including Reliance Commercial Finance, are adopting this middleware infrastructure to enable private cloud computing and its convenient, on-demand access to a shared pool of configurable computing resources. "Cloud computing has become an extremely critical design factor for us," says Shashi Kumar Ravulapaty, senior vice president and chief technology officer at Reliance Commercial Finance. "It's one of our main focus areas. Oracle Exalogic, especially in combination with Oracle WebLogic, is a perfect fit for rapidly provisioning capacity in a private cloud infrastructure."
    Reliance Commercial Finance provides loans to tens of thousands of customers throughout India. With more than 1,500 employees accessing the company's core business applications every day, the company was having trouble processing more than 6,000 daily transactions with its legacy infrastructure, especially at the end of each month when hundreds of concurrent users needed to access the company's loan processing and approval applications. Read the complete article here.
    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community. Please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • CodePlex Daily Summary for Wednesday, May 05, 2010

    CodePlex Daily Summary for Wednesday, May 05, 2010
    New Projects
    - 2010微软精英大挑战 Heritage of Dragon: We are from Tongji University in Shanghai. Sharing common interests, we have gathered here to build an open web platform dedicated to showcasing traditional handicrafts from all over China through text, pictures, video, and interactive animation, using cloud-hosted, map-based services. Taking full advantage of the web, anyone can take part in adding, revising, and improving the content through an open, collaborative wiki platform. The goal is to record, display, explore, and pass on China's ancient...
    - AutoArchive: Auto-archive your "my documents" to a remote machine. I'm writing this so my wife can put things in "my documents" and it'll automatically archive i...
    - BigDoor .NET Client: A .NET client for the BigDoor Media API. The API enables secured virtual transactions with support for any number of currencies, transactions, awar...
    - bubujie: Dreamweaver Library
    - GeckoGit: GeckoGit is a combination of TortoiseSVN and AnkhSVN, but for Git repositories, and built on the GitSharp library.
    - Global: global, config, mail, http, rest, xml, serialization, helper, path, io
    - Industrial Dashboard Connected Grid webpart: This SharePoint 2007/10 webpart provides a simple way to display grid-based reports populated with data that comes from a SQL Server stored procedu...
    - IpControls: "IpControls" contains IPv4 and IPv6 text boxes, both as Windows Forms and WPF versions. The IPv6 control automatically detects the older hybrid for...
    - LiteME: LiteME is short for LiteMapleStoryEmulator... it is v75, open source, and still going through its alpha stages. It is still in development!
    - Meditel PHP Class: A PHP class that lets you send SMS messages to any Meditel number using the free SMS service on the Meditel.ma website.
    - MoneySafe: Help people.
    - Mouse Zoom - Visual Studio Extension: Mouse Zoom is a Visual Studio 2010 extension that will cause the mouse zoom functionality to zoom at the mouse's cursor instead of at the top of th...
    - Multi-Language Words Memorizer: This .NET application is designed for learning words and helps foreign language learners with lots of automatic features. After you select a list of ...
    - Navigation for ASP.NET Web Forms: Navigation for ASP.NET Web Forms manages movement and data passing between aspx pages in a unit-testable manner. There is no client-side logic, so ...
    - NazTek.Extension.Clr4: CLR 4.0 extensions and utility API
    - Opalis Community Releases: Sample workflows, objects, code, and other items related to System Center's Opalis Integration Server, published by the Opalis team.
    - Power Video Player: Power Video Player is a slim, feature-rich video/DVD player that meets everyday needs in video playback on PC, with a bunch of advanced features on b...
    - SchemeEditor: <WPF> <.NET> <Editor> <Silverlight> <Scheme> <Graphics> <simulink> <schematic>
    - StyleCop+: StyleCop+ is a plug-in that extends the original StyleCop features.
    - timemanager2010: Just another work time manager
    - TweetTunes: Updates Twitter with the current song playing in iTunes - if your Twitter account is linked to Facebook, it will update that too. The twittervb2 down...
    - WCF Discovery Library: WCF Discovery Library is a small collection of utilities that makes it easy to add WCF 4.0 Discovery features into your projects.
    New Releases
    - AjaxControlToolkit additional extenders: ControlToolkitExtended: This build contains a web example with BreadCrumbs
    - AnyCAD: AnyCAD Free Beta1: AnyCAD Free Beta1
    - Baccarat: Single player practice baccarat: This is a simple baccarat game for Windows Mobile. It is single player and is only a practice version, which will help users familiarize themselve...
    - BigDoor .NET Client: BigDoor .NET 2.0 Client (Alpha): Our first iteration of the .NET client. Please fork and/or ask to be added if you want to make any contributions.
    - CBM-Command: 2010-05-04: Release notes - new features: panel navigation now complete. Scroll up and down through directories using the up and down cursor keys. Switch between...
    - Directory Linker: Directory Linker 2.1: This release introduces XP support; more information about all features can be found at http://www.humblecoder.co.uk/?p=141
    - Extend SmallBasic: Teaching Extensions v.015: Added high-low quiz
    - Google AJAX Search Services for jQuery: jquery.gss-0.1.3.js: First official release - use at your own discretion. Thanks, Andrew
    - Industrial Dashboard Connected Grid webpart: Filtered Industrial Grid: Filtered Industrial Grid web part for SharePoint 2007/2010, first release.
    - jQuery Library for SharePoint Web Services: SPServices 0.5.5: IMPORTANT NOTE: This release is in an alpha state. You should only download it if you know what you are getting and are interested in testing it f...
    - Meditel PHP Class: Meditel PHP Class: Zipped file - example: exemplemeditel.php; PHP class: meditel.class.php
    - Multi-Language Words Memorizer: Memorizer 1.0: First release.
    - mwNSPECT: mwNSPECT Plugin DLL: mwNSPECT MapWindow plugin DLL. Place in your MapWindow or BASINS plugins directory. Presently only for testing form functionality (not including...
    - mwNSPECT: mwNSPECT Simple Installer: Simplistic mwNSPECT MapWindow plugin installer using Inno Setup. Installs all the files you'll need for NSPECT into the C:\NSPECT folder and insta...
    - MyWSAT - ASP.NET Membership Administration Tool: MyWSAT v3.5.3: MyWSAT 3.5.3 update notes - May 4th 2010: 1) Added the user search box and a-z navigation menu to all relevant user gridviews. 2) Added a membersh...
    - Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.7: Upgraded UI-generation templates for the special case of associative tables (2-column primary keys). Minor bugfix with the template editor.
    - Open NFSe: Open NFSe 2.0: Version for Belo Horizonte using Windows Services.
    - Power Video Player: PVP 1.1.3776: v1.1.3776 is mainly a rebuild of version 1.1 under the Ms-PL license and is the first version available at CodePlex.
    - PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT-3.1: The following errors have been corrected: PCG ERROR: srcproj -- 3933; PCG ERROR: srcproj -- 2943; PCG ERROR: devproj -- 1474; PCG ERROR: mainprj -- 128...
    - Rehost Image: 1.3.9: Fixed location saving for Mac and Linux platforms.
    - Robot Shootans: Robot Shootans 0.5.1 (Windows): This is the first public release of this game. Instructions on how to play are included in the game itself. Known issues: changing control style wh...
    - SchemeEditor: SchemeEditor Beta: First release. Wait for documentation & updates for some new functions.
    - SharePoint Rsync List: SharePoint Rsync 0.9.0.0: Initial release of sprsync. Comments, questions, feedback, and code enhancements are welcome!
    - Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+01: Sw. Is Hw. Lib. 3.0.0.x+01. UNSUPPORTED, UNTESTED ALPHA RELEASE. Code may disappear. This is just a preview of code that was in progress. Code is s...
    - Software Localization Tool: SharpSLT 1.0.1: Minor release: bug fixes, slight changes in the UI
    - StyleCop+: StyleCop+ 0.6: Several important improvements made for Advanced Naming Rules: added new entities for fields and constants; added new entities for methods (incl...
    - turing machine simulator: First version of turing machine: Overview: first version of the Turing simulator with an example script (transition function). Files: SimulatorGui.exe - main GUI of simulator; TuringMach...
    - VCC: Latest build, v2.1.30504.0: Automatic drop of latest build
    - Vocabulary Training Center: Basic Edition 1.1: A release with medium-large changes. New functionality: multiple-choice questions added; grammatical questions added; evaluation changed accordin...
    - Web Service Software Factory: Web Service Software Factory 2010 RC: To use the Web Service Software Factory 2010, you need the following software installed on your computer: Microsoft Visual Studio 2010 (Ultima...
    - Web Service Software Factory: WSSF2010 Guide: This is the help and guidance for Web Service Software Factory 2010
    - Windows Phone 7 Panorama control: panorama control v0.6 + samples: IMPORTANT NOTE: Please read the following bug + suggested workaround. I'll fix this in a new release shortly. Panorama control source code + sampl...
    - WPF Behavior Library: WPF Behavior Library 0.2 Release: Drag & drop - took away the ItemType and DataTemplate requirements; added functions for inheritors to be able to provide custom logic to handle movi...
    Most Popular Projects
    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - patterns & practices – Enterprise Library
    - Windows Presentation Foundation (WPF)
    - iTuner - The iTunes Companion
    - DotNetNuke® Community Edition
    - ASP.NET
    Most Active Projects
    - patterns & practices – Enterprise Library
    - AJAX Control Framework
    - HydroServer - CUAHSI Hydrologic Information System Server
    - Ionics Isapi Rewrite Filter
    - patterns & practices: Azure Security Guidance
    - Rawr
    - BlogEngine.NET
    - TinyProject
    - NB_Store - Free DotNetNuke Ecommerce Catalog Module
    - All-In-One Code Framework

    Read the article

  • JMS Adapter Step 0 : Configuring the WLS-JMS resources

    - by [email protected]
    Before getting started with the JMS Adapter, we must configure the connection factories/JMS queues on the WLS admin console. In particular, we will be required to follow these steps:
    - Create a connection factory. In our case, we will create an "XA Connection Factory". This step is mandatory if you need your JMS queues to participate in a global transaction.
    - Create the WLS JMS queues.
    Creating the connection factory:
    1) Log in to the WLS Admin console. On my setup, the URL looks like "http://localhost:7001/console".
    2) Select Services -> Messaging -> JMS Modules -> SOAJMSModule as shown below. We could also create a new JMS Module, but I took the easier way out by selecting the SOAJMSModule.
    3) Click on "New" in order to create the connection factory.
    4) Select the "Connection Factory" radio button and click "Next".
    5) Enter the connection factory properties as shown and click on "Finish".
    6) Target the connection factory to your managed server and click on "Finish".
    7) Now, go back and select the connection factory that you've just created (see step 2 above). Click on "Transactions", enable XA, and click on "Save".
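    With the factory and a queue in place, a quick way to verify the configuration is a standalone JNDI lookup from a Java client. A minimal sketch - the JNDI names "jms/testXACF" and "jms/testQueue" are placeholders for whatever you entered on the console, and the WebLogic client jar (e.g. wlfullclient.jar) must be on the classpath:

        import java.util.Hashtable;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class JmsSmokeTest {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:7001");
                Context ctx = new InitialContext(env);

                // Look up the resources created on the admin console (placeholder names)
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/testXACF");
                Queue queue = (Queue) ctx.lookup("jms/testQueue");

                // Send one message outside any global transaction, just to prove connectivity
                Connection conn = cf.createConnection();
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(queue).send(session.createTextMessage("hello"));
                conn.close();
            }
        }

    The XA behavior itself only comes into play once the factory is used from a container-managed (JTA) transaction, such as the JMS Adapter inside a SOA composite; this client simply confirms the JNDI names and targeting are right.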

    Read the article

  • UPK Professional Customer Success Story: Medtronic

    - by [email protected]
    In case you missed the live event, be sure to listen to last week's UPK Customer iSeminar featuring Medtronic. This was the first iSeminar in our quarterly series to showcase UPK Professional (UPK and Knowledge Pathways). Donna Miller and Staci Gilbert gave viewers an inside look at samples of Medtronic's content as they shared their experiences, methodology, and best practices for use of the solution. Here are some highlights of the call:
    • Medtronic initially purchased UPK Professional to support a multi-year, global SAP rollout for 9,000 end users located in 24 countries.
    • As time went on, they expanded their use of UPK Professional to include several of their other enterprise applications: PeopleSoft, Siebel CRM, Hyperion Financial Management, a number of SAP bolt-ons, Documentum, TrackWise, and many others.
    • In combination with their Saba LMS, UPK Professional has allowed Medtronic to create, deploy, track, and certify consistent end user training for critical transactions and processes across their organization worldwide - essential for a company in a heavily regulated industry.
    • For key pieces of content or certain end user populations, some Medtronic business units localize/translate the global UPK content. Staci demonstrated examples of their SAP content which has been translated into Japanese.
    • In the live SAP environment, end users rely on UPK's context-sensitive, in-application performance support. Medtronic has found this to be very helpful post go-live, giving just-in-time support so end users are confident in a new system or when performing tasks they don't often touch (at quarter or year end). UPK also serves as Medtronic's internal Google.
    • Medtronic has realized savings on many fronts: a reduction in support calls due to in-application performance support, elimination of their training clients, and speedier training (1.5 days rather than 5-7 days) of temporary workers by moving from ILT to a blended solution that includes UPK simulations for eLearning.
    Thanks again to Donna and Staci for an exceptional presentation. They offered so many great examples for anyone who's looking for ways to get more out of UPK or interested in learning about UPK Professional: Knowledge Pathways. - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
    As part of my professional development, I've created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I'm reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx
    Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I've found that at least 3 books are necessary to get the right amount of information to me. This is a "technical" work, meaning that it deals with technology and not business, writing, or other facets of my career. I'll have a mix of all of those as I read along. I chose this work in addition to others I've read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it's dated, many examples normally translate. I also saw that it had pretty good reviews.
    What I learned: I learned a great deal about storage, and many useful code snippets. I do think that there could have been more of a focus on the Application Fabric - but of course that wasn't as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric, I would rather see sensitive data left in a hybrid fashion on premise, and connected to from the Azure application. Even so, the examples were very useful. If you're looking for a good "starter" Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It's not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being "stale". Even so, I found this a useful book, which I believe will help me work with Azure better.
    Raw Notes: I take notes as I read, calling that process "reading with a pencil". I find that when I do that I pay attention better, and record some things that I need to know later. I'll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process.
    Programming Windows Azure by Sriram Krishnan:
    - Learning about how to select applications suitable for distributed technology. The Application Fabric gets the least attention, probably because it was newer at the time. Very clear (Chapter One). Good foundation; background and history, but not too much. I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts.
    - Taught me some new aspects of a hypervisor - very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two).
    - Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3).
    - Place storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separate configs for staging and testing - easy to switch in and switch out. (Chapter 4)
    - There are two runtime APIs, one external and one internal. Realizing how powerful this paradigm really is. Some places seem light and drop off, but perhaps that's best. The management API is not charged, which is nice. I don't often think about the price until it comes to an actual deployment. (Chapter 5)
    - Csmanage is something I want to dig into deeper. The API requires package moves to Blob storage first, so it needs a URL. A csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file.
    - Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well thought out. (Chapter 6)
    - Learned how to run full CMD commands in a web window - not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for cloud computing for this reason - things just change too quickly.
    - Windows Azure storage is not eventually consistent - it is instantly consistent with multi-phase commit. The plumbing for this is internal; you are not required to code for it. (Chapter 7)
    - The REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common "notes" feature. There's a decent quick description of REST in this chapter. Learned about the CloudDrive code - a PowerShell sample that mounts Blob storage as a local provider. Works against the dev fabric by default; can be switched to an account.
    - Good treatment in the storage chapters on the differences between using dev storage and Azure storage. These can be mitigated. "No, blobs are not of any size or number" - not a good statement. (Chapter 8)
    - Blob storage is probably Azure's closest play to Infrastructure as a Service (IaaS). Blob change operations must be authenticated, even when public. The chapters on storage are pretty in-depth.
    - Queue messages are base-64 encoded. (Chapter 9) The visibility timeout ensures processing of a message in a disconnected system. Order is not guaranteed for a message, so if you need that, set an increasing number in the queue mechanism. While queues are accessible via REST, they are not public and are secured by default. Interesting - the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system.
    - Each entity (row) in the Azure Table service is atomic - all or nothing. (Chapter 10) An entity can have up to 255 properties. Use "ID" for the class to indicate the key value, or use the [DataServiceKey] attribute. LINQ makes working with the Azure Table service much easier, although interop is certainly possible. Good description of the process of selecting the partition and row key.
    - When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page (see the loop sketch after these notes).
    - On deleting a storage object, it is instantly unavailable; however, a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code.
    - Interesting approach to deleting an index entity without having to read it first - create a local entity with the same keys and apply it to the Azure system regardless of change-state.
    - Although the "Indexes" description is a little vague, it's interesting to see a folding and stemming discussion a-la the Porter stemming algorithm. (Chapter 11) The book presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBAs in Chapter 11. We need to work on getting secondary indexes in Table storage.
    - There is a limited form of transactions called "Entity Group Transactions" that, although they have conditions, make a transactional system more possible. Concurrency also becomes an issue, but is handled well if you're using Data Services in .NET. It watches the ETag and allows you to take action appropriately.
    - I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in Chapter 12 go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. The chapter seems out of place, and should be combined with the Blob chapter.
    - Chapter 13 on SQL Azure is dated, although the base concepts are OK. Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server, really) database.
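    That continuation-token note deserves a concrete shape. The sketch below is deliberately generic - "Page" and "PagedClient" are hypothetical interfaces standing in for whichever storage client you use (the real Azure Table API differs) - but it shows the loop falling out of the check on the last page:

        import java.util.List;

        /** Hypothetical paged result - illustrative only, not the Azure SDK API. */
        interface Page<T> {
            List<T> items();
            String continuationToken(); // null when this is the last page
        }

        /** Hypothetical client that fetches one page per call. */
        interface PagedClient<T> {
            Page<T> query(String table, String continuationToken);
        }

        class PaginationLoop {
            static <T> void readAll(PagedClient<T> client, String table) {
                String token = null;
                do {
                    Page<T> page = client.query(table, token);
                    for (T item : page.items()) {
                        System.out.println(item); // process the row
                    }
                    token = page.continuationToken();
                } while (token != null); // falls out when the last page returns no token
            }
        }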

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >