Search Results

Search found 7776 results on 312 pages for 'configure in'.


  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature of the Windows Azure platform. It is low-cost, and it provides IIS and IISNode so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js application in a worker role. Below are some benefits of using a worker role.
- WAWS leverages IIS and IISNode to host the Node.js application, which runs in x86 WOW mode. This reduces performance compared with x64 in some cases.
- A WACS worker role does not need IIS, so there are no IIS restrictions, such as the 8,000 concurrent request limit.
- WACS gives developers more flexibility and control. For example, we can RDP to the virtual machines of our worker role instances.
- WACS provides service configuration that can be changed while the role is running.
- WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
- Since with a WACS worker role we start node ourselves in a process, we can control the input, output and error streams. We can also control the version of Node.js.

Run Node.js in Worker Role
Node.js can be started by just having its executable. This means that in Windows Azure we can have a worker role containing "node.exe" and the Node.js source files, and start it in the Run method of the worker role entry class. Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right-click on the worker role project and add an existing item. By default Node.js is installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe". Then we need to create the Node.js entry code. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, since we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the "Add New Item" context menu; we have to create a text file and then rename it with the JavaScript extension. After we add these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer"; otherwise they will not be included in the package when deployed. Let's paste a very simple piece of Node.js code into "index.js" as below. As you can see, I created a web server listening on port 12345.

    var http = require("http");
    var port = 12345;

    http.createServer(function (req, res) {
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("Hello World\n");
    }).listen(port);

    console.log("Server running at port %d", port);

Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method. I locate "node.exe" and the entry JavaScript file, then create a new process to run them. Our worker role will wait for the process to exit. If everything is OK, once our web server is up the process will stay there listening for incoming requests and should not terminate. The code in the worker role looks like this.

    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        Trace.WriteLine("NodejsHost entry point called", "Information");

        // retrieve the node.exe and entry node.js source code file name.
        var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
        var js = "index.js";

        // prepare the process start information for node.exe
        var info = new ProcessStartInfo(node, js)
        {
            CreateNoWindow = false,
            ErrorDialog = true,
            WindowStyle = ProcessWindowStyle.Normal,
            UseShellExecute = false,
            WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
        };
        Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");

        // start node.exe with the entry code and wait for exit
        var process = Process.Start(info);
        process.WaitForExit();
    }

Then we can run it locally. In the compute emulator UI the worker role started and executed Node.js, and the Node.js window appeared. Open the browser to verify the website hosted by our worker role. Next let's deploy it to Azure, which requires some additional steps. First, we need to create an input endpoint. By default there is no endpoint defined in a worker role, so we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use; in this case I will use 80. Even though we created a web server, we should add a TCP endpoint to the worker role, since Node.js always listens on TCP rather than HTTP. Then change "index.js" so that our web server listens on 80.

    var http = require("http");
    var port = 80;

    http.createServer(function (req, res) {
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("Hello World\n");
    }).listen(port);

    console.log("Server running at port %d", port);

Then publish it to Windows Azure, and in the browser we can see our Node.js website running in the WACS worker role. We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator has registered port 80 and maps the 80 endpoint to 81, but our Node.js process cannot detect this remapping. So when it tries to listen on 80 it fails, since 80 is already in use.

Use NPM Modules
When we use WAWS to host Node.js, we can simply install the modules we need and then publish or upload all files to WAWS. But if we are using a WACS worker role, we have to do some extra steps to make the modules work. Assume that we plan to use "express" in our application. First of all we should download and install this module through the NPM command. But after the install finishes, the files are just on disk and not included in the worker role project; if we deploy the worker role right now the module will not be packaged and uploaded to Azure. Hence we need to add them to the project. In the Solution Explorer window click the "Show All Files" button, select the "node_modules" folder and choose "Include In Project" from the context menu. But that's not enough. We also need to set all files in this module to "Copy always" or "Copy if newer", so that they are uploaded to Azure together with "node.exe" and "index.js". This is a painful step since there might be many files in a module. So I created a small tool which can update a C# project file and mark all of its items as "Copy always". The code is very simple.

    static void Main(string[] args)
    {
        if (args.Length < 1)
        {
            Console.WriteLine("Usage: copyallalways [project file]");
            return;
        }

        var proj = args[0];
        File.Copy(proj, string.Format("{0}.bak", proj));

        var xml = new XmlDocument();
        xml.Load(proj);
        var nsManager = new XmlNamespaceManager(xml.NameTable);
        nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");

        // add the output setting to copy always
        var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
        UpdateNodes(contentNodes, xml, nsManager);
        var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
        UpdateNodes(noneNodes, xml, nsManager);
        xml.Save(proj);

        // remove the namespace attributes
        var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
        xml.LoadXml(content);
        xml.Save(proj);
    }

    static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
    {
        foreach (XmlNode node in nodes)
        {
            var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
            if (copyToOutputDirectoryNode == null)
            {
                var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
                n.InnerText = "Always";
                node.AppendChild(n);
            }
            else
            {
                if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
                {
                    copyToOutputDirectoryNode.InnerText = "Always";
                }
            }
        }
    }

Please be careful when using this tool. I created it only for demo purposes, so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command-line argument, and it will set all items to "Copy always". Then reload the worker role project. Now let's change "index.js" to use express.

    var express = require("express");
    var app = express();

    var port = 80;

    app.configure(function () {
    });

    app.get("/", function (req, res) {
        res.send("Hello Node.js!");
    });

    app.get("/User/:id", function (req, res) {
        var id = req.params.id;
        res.json({
            "id": id,
            "name": "user " + id,
            "company": "IGT"
        });
    });

    app.listen(port);

Finally let's publish it and have a look in the browser.

Use Windows Azure SQL Database
We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js on worker role hosting as well. Since we can control the version of Node.js, here we can use the x64 version of "node-sqlserver". This is better than hosting Node.js on WAWS, which only supports x86. Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project and run my tool to set them to "Copy always". Finally update "index.js" to use WASD.

    var express = require("express");
    var sql = require("node-sqlserver");

    var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    var port = 80;

    var app = express();

    app.configure(function () {
        app.use(express.bodyParser());
    });

    app.get("/", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/text/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/sproc/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.post("/new", function (req, res) {
        var key = req.body.key;
        var culture = req.body.culture;
        var val = req.body.val;

        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.send(200, "Inserted Successful");
                    }
                });
            }
        });
    });

    app.listen(port);

Publish to Azure, and now we can see our Node.js application working with WASD through the x64 version of "node-sqlserver".

Summary
In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it's possible to do some preparatory jobs before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting. But there are some problems with the approach I described here. The first is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second is that in this way we cannot retrieve the cloud service configuration information. For example, we defined the endpoint in the worker role properties, but we also hard-coded the listening port in Node.js. It should be changed so that our Node.js application can retrieve the endpoint, but I can tell you it won't work here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
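[Editor's sketch] One way to avoid hard-coding the port on the C# side is to read the TCP endpoint defined in the role properties and pass it to "node.exe" as a command-line argument. This is only a hedged illustration and not the approach the author develops in the follow-up post; the endpoint name "NodeEndpoint" is a hypothetical placeholder, and the API shown is the Windows Azure SDK's RoleEnvironment class from that era.

    using System;
    using System.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // Hypothetical: a TCP input endpoint named "NodeEndpoint" defined in the role properties.
            var port = RoleEnvironment.CurrentRoleInstance
                                      .InstanceEndpoints["NodeEndpoint"].IPEndpoint.Port;

            var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
            var info = new ProcessStartInfo(node, "index.js " + port)
            {
                UseShellExecute = false,
                WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
            };

            // index.js could then read the port from process.argv instead of hard-coding it.
            Process.Start(info).WaitForExit();
        }
    }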

  • Post-Purchase Social Media

    - by David Dorf
    When you make a particularly good purchase, the natural tendency is to share the experience with friends. You show them your cool new toy or garment, then explain how you discovered such a great deal, all the while implying you are the world's most savvy shopper. My wife does it with clothes, housewares, and books, and I do it with wiz-bang techie stuff. Post-purchase euphoria or Buyer's remorse are associated with most purchases beyond day-to-day needs. So now let's add social media to the mix. Haul videos are a YouTube phenomenon where a shopper describes their latest haul on video. Blair Fowler, aka juicystar07, is an excellent example. She and her older sister's haul videos have been viewed 75,000,000 times, at times causing particular items to sell out after being showcased. If you're not already on this bandwagon, checkout Blair's haul video from her trip to Forever21. There are a couple good articles on this trend from ABC's GMA, Slate, and NPR. Some retailers are already sending free products to these fashionistas in the hopes they'll be reviewed on camera. For those less willing to exert themselves, there's Blippy, a service that automatically tweets your purchases. Similar to Twitter, your purchases are tweeted so your friends can see what you've purchased and your network can make comments. In the example to the right, co-founder Philip Kaplan purchased a gift for his wife from the store Does Your Mother Know, proving the point that the need for privacy is overblown. Blippy has partnerships with selected merchants like Apple, Amazon, and Netflix and can also get purchases from the credit cards you've registered. When you register, you can configure whether to automatically tweet each purchase, or approve them first. No sense in broadcasting my need for Rogaine, right? This is a good thing for retailers, as it helps spread the word about purchases and gives other people ideas. Rick just bought an ooma from Amazon. What the heck is ooma? Oh, its like Vonage but no monthly bills. I'm there.

  • Using DNFS for test purposes

    - by rene.kundersma
    Because of other priorities, such as bringing the first V2 Database Machine in the Netherlands into production, I spent less time on my blog than planned. I would, however, like to tell you some things about DNFS, the built-in NFS client we have had in the Oracle RDBMS since 11.1. What DNFS is and how to set it up can all be found here. As you see, this documentation is actually the "Clusterware Installation Guide". I think that is odd; I would expect this to be part of the Admin Guide, especially the "Tablespace" chapter. I do, however, want to show what I did not find in the documentation that quickly (and solved after talking to my famous colleague "the prutser").

First, a quick setup:

1. The standard ODM library needs to be replaced with the NFS ODM library:

    [oracle@ocm01 ~]$ cp $ORACLE_HOME/lib/libodm11.so $ORACLE_HOME/lib/libodm11.so_stub
    [oracle@ocm01 ~]$ ln -s $ORACLE_HOME/lib/libnfsodm11.so $ORACLE_HOME/lib/libodm11.so

After changing to this library you will notice the following in your alert.log:

    Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 2.0

2. The intention is to mount the datafiles over normal NAS (like NetApp). But in case you want to test yourself and use an exported NFS filesystem, it should look like the following:

    [oracle@ocm01 ~]$ cat /etc/exports
    /u01/scratch/nfs *(rw,sync,insecure)

Please note the "insecure" option in the export; you will not be able to use DNFS without it if you export a filesystem from a host. Without the "insecure" option the NFS server considers the port used by the database "insecure" and the database is unable to acquire the mount:

    Direct NFS: NFS3ERR 1 Not owner. path ocm01.nl.oracle.com mntport 930 nfsport 2049

3. Before configuring the new Oracle stanza for NFS we still need to configure a regular kernel NFS mount:

    [root@ocm01 ~]# cat /etc/fstab | grep nfs
    ocm01.nl.oracle.com:/u01/scratch/nfs /incoming nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

4. Then a so-called Oracle 'oranfstab' needs to be created that specifies the available exports to use:

    [oracle@ocm01 ~]$ cat /etc/oranfstab
    server:ocm01.nl.oracle.com
    path:192.168.1.40
    export:/u01/scratch/nfs
    mount:/incoming

5. Create a tablespace with a datafile on the NFS location:

    SQL> create tablespace rk datafile '/incoming/rk.dbf' size 10M;
    Tablespace created.

Be aware that it may happen that you do not specify the insecure option (like I did). In that case you will still see output from the query v$dnfs_servers:

    SQL> select * from v$dnfs_servers;

    ID SVRNAME              DIRNAME           MNTPORT  NFSPORT  WTMAX  RTMAX
    -- -------------------- ----------------- -------- -------- ------ ------
     1 ocm01.nl.oracle.com  /u01/scratch/nfs  684      2049     32768  32768

But querying v$dnfs_files and v$dnfs_channels will not return any result, and indeed, you will see the following message in the alert log when you create a file:

    Direct NFS: NFS3ERR 1 Not owner. path ocm01.nl.oracle.com mntport 930 nfsport 2049

After correcting the export:

    SQL> select * from v$dnfs_files;

    FILENAME          FILESIZE  PNUM  SVR_ID
    ----------------- --------- ----- ------
    /incoming/rk.dbf  10493952  20    1

Rene Kundersma
Oracle Technology Services, The Netherlands

  • Oracle Linux Training Calendar

    - by Antoinette O'Sullivan
    The Oracle Linux System Administrator Curriculum is designed to provide you with the knowledge and skills necessary to effectively administer an Oracle Linux environment. These classes will help you prepare to install, configure, and manage your enterprise Linux environment, as well as prepare you for the Oracle Linux Certification. You can take these courses as a:

Live-Virtual event: Follow the instructor-led classes from your own desk; no travel required. There is an extensive list of events on the schedule to suit different time zones. See the full list at http://oracle.com/education/linux.

In-Class event: Travel to an education center to take these classes. Below is a sample of in-class events on the schedule.

Unix and Linux Essentials: This 3-day class is for those new to the Linux operating system. You learn to manage files and directories from the command line, perform remote connections, file transfers and more.

    Location                    | Date             | Delivery Language
    Nairobi, Kenya              | 3 December 2012  | English
    Riyadh, Saudi Arabia        | 5 January 2013   | English
    Cape Town, South Africa     | 9 January 2013   | English
    Durban, South Africa        | 9 January 2013   | English
    Johannesburg, South Africa  | 9 January 2013   | English
    Woodmead, South Africa      | 15 July 2013     | English
    Denver, United States       | 23 January 2013  | English
    Columbia, United States     | 2 January 2013   | English
    East Lansing, United States | 9 January 2013   | English
    Roseville, United States    | 1 April 2013     | English
    Morrisville, United States  | 11 February 2013 | English
    Jakarta, Indonesia          | 26 December 2012 | English
    Kuala Lumpur, Malaysia      | 29 January 2013  | English
    Auckland, New Zealand       | 12 December 2012 | English
    Makati City, Philippines    | 14 January 2013  | English
    Singapore                   | 13 February 2013 | English
    North Sydney, Australia     | 4 February 2013  | English
    Brisbane, Australia         | 29 April 2013    | English
    Melbourne, Australia        | 29 January 2013  | English

Oracle Linux System Administration: This 5-day course covers a broad range of Oracle Linux system administration tasks, from installing the operating system to preparing the system for Oracle Database. The course also provides extensive hands-on experience with key system administration tasks. You will gain comprehensive skills in installing, configuring, and managing an Oracle Linux system, as well as insight into ULN, Ksplice and UEK.

    Location                     | Date             | Delivery Language
    Brussels, Belgium            | 26 November 2012 | English
    Windhof, Luxembourg          | 17 December 2012 | English
    Utrecht, Netherlands         | 11 February 2013 | Dutch
    Warsaw, Poland               | 25 February 2013 | Polish
    Gabarone, Botswana           | 22 April 2013    | English
    Nairobi, Kenya               | 10 December 2012 | English
    Johannesburg, South Africa   | 11 March 2013    | English
    Belmont, CA, United States   | 11 February 2013 | English
    Irvine, CA, United States    | 25 March 2013    | English
    Roseville, MN, United States | 26 November 2013 | English
    Irving, TX, United States    | 14 January 2013  | English
    Jakarta, Indonesia           | 3 December 2012  | English
    Singapore                    | 26 November 2012 | English
    Canberra, Australia          | 21 January 2013  | English
    Sydney, Australia            | 21 January 2013  | English
    Melbourne, Australia         | 11 February 2013 | English

To test your Oracle Linux System Administration skills, take the Oracle Linux 6 Implementation Essentials Certification Exam. For more information on the Oracle Linux Curriculum, or to express interest in additional events, go to http://oracle.com/education/linux.

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
    Many things are new in Solaris 11, Autoinstall is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new. I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning. First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.   What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple: # zfs create -o mountpoint=/export/repo rpool/ai/repo # zfs create rpool/ai/repo/s11 # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt # rsync -aP /mnt/repo /export/repo/s11 # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@fcs # pkgrepo info -s /export/repo/sol11/repo PUBLISHER PACKAGES STATUS UPDATED solaris 4292 online 2012-03-12T20:47:15.378639Z That's all there's to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher: # pkg set-publisher -g file:///export/repo/sol11/repo solaris In case I'd want to access this repository through a (virtual) network, i'll now quickly activate the repository-service: # svccfg -s application/pkg/server \ setprop pkg/inst_root=/export/repo/sol11/repo # svccfg -s application/pkg/server setprop pkg/readonly=true # svcadm refresh application/pkg/server # svcadm enable application/pkg/server That's all you need - now point your browser to http://localhost/ to view your beautiful repository-server. Step 1 is done.  All of this, by the way, is nicely documented in the README file that's contained in the repository image. Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also get both: A full repository with all updates and at the same time incremental ones up to each of the updates: # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*' # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@sru4 # zfs set snapdir=visible rpool/ai/repo/sol11 # svcadm restart svc:/application/pkg/server:default The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs . If you like, you can also create another repository service for each update, running on a separate port. But now lets continue with the AI server.  Just a little bit of reading in the dokumentation makes it clear that we will need to run a DHCP server for this.  
Since I already have one active (for my SunRay installation) and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first: # zfs create -o mountpoint=/export/install rpool/ai/install # zfs create -o mountpoint=/zones rpool/zones # zonecfg -z ai-server zonecfg:ai-server> create create: Using system default template 'SYSdefault' zonecfg:ai-server> set zonepath=/zones/ai-server zonecfg:ai-server> add dataset zonecfg:ai-server:dataset> set name=rpool/ai/install zonecfg:ai-server:dataset> set alias=install zonecfg:ai-server:dataset> end zonecfg:ai-server> commit zonecfg:ai-server> exit # zoneadm -z ai-server install # zoneadm -z ai-server boot ; zlogin -C ai-server Give it a hostname and IP address at first boot, and there's the Zone.  For a publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now: #zlogin ai-server root@ai-server:~# pkg install install/installadm root@ai-server:~# installadm create-service -n x86-fcs -a i386 \ -s pkg://solaris/install-image/[email protected],5.11-0.175.0.0.0.2.1482 \ -d /export/install/fcs -i 192.168.2.20 -c 3 With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP-Server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default". The ISC-server comes with several SMF-services. We at least need "svc:/network/dhcp/server:ipv4".  The command "installadm create-service" creates the installation-service. It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the installserver itself runs on a x86 machine.) The boot-environment for clients is created in /export/install/fcs and the DHCP-server is configured for 3 IP-addresses starting at 192.168.2.20.  This configuration is stored in a very human readable form in /etc/inet/dhcpd4.conf.  An AI-service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option. Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query it's configuration interactively at first boot.  This makes it very clear that an AI-server is really only a boot-server.  The true source of packets to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients. The configuration of a client is controlled by manifests and profiles.  The manifest controls which packets are installed and how the filesystems are layed out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.  
Hence, profiles are very similar to the old sysid.cfg file. The easiest way to get your hands on a manifest is to ask the AI server we just created to give us it's default one.  Then modify that to our liking and give it back to the installserver to use: root@ai-server:~# mkdir -p /export/install/configs/manifests root@ai-server:~# cd /export/install/configs/manifests root@ai-server:~# installadm export -n x86-fcs -m orig_default \ -o orig_default.xml root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml root@ai-server:~# vi s11-fcs.small.local.xml root@ai-server:~# more s11-fcs.small.local.xml <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install> <ai_instance name="S11 Small fcs local"> <target> <logical> <zpool name="rpool" is_root="true"> <filesystem name="export" mountpoint="/export"/> <filesystem name="export/home"/> <be name="solaris"/> </zpool> </logical> </target> <software type="IPS"> <destination> <image> <!-- Specify locales to install --> <facet set="false">facet.locale.*</facet> <facet set="true">facet.locale.de</facet> <facet set="true">facet.locale.de_DE</facet> <facet set="true">facet.locale.en</facet> <facet set="true">facet.locale.en_US</facet> </image> </destination> <source> <publisher name="solaris"> <origin name="http://192.168.2.12/"/> </publisher> </source> <!-- By default the latest build available, in the specified IPS repository, is installed. If another build is required, the build number has to be appended to the 'entire' package in the following form: <name>pkg:/[email protected]#</name> --> <software_data action="install"> <name>pkg:/[email protected],5.11-0.175.0.0.0.2.0</name> <name>pkg:/group/system/solaris-small-server</name> </software_data> </software> </ai_instance> </auto_install> root@ai-server:~# installadm create-manifest -n x86-fcs -d \ -f ./s11-fcs.small.local.xml root@ai-server:~# installadm list -m -n x86-fcs Manifest Status Criteria -------- ------ -------- S11 Small fcs local Default None orig_default Inactive None The major points in this new manifest are: Install "solaris-small-server" Install a few locales less than the default.  I'm not that fluid in French or Japanese... Use my own package service as publisher, running on IP address 192.168.2.12 Install the initial release of Solaris 11:  pkg:/[email protected],5.11-0.175.0.0.0.2.0 Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  
The modular approach makes it easy to configure numerous clients later on: root@ai-server:~# mkdir -p /export/install/configs/profiles root@ai-server:~# cd /export/install/configs/profiles root@ai-server:~# sysconfig create-profile -o default.xml root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml root@ai-server:~# cp default.xml user.xml root@ai-server:~# vi general.xml mars.xml user.xml root@ai-server:~# more general.xml mars.xml user.xml :::::::::::::: general.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/timezone"> <instance enabled="true" name="default"> <property_group type="application" name="timezone"> <propval type="astring" name="localtime" value="Europe/Berlin"/> </property_group> </instance> </service> <service version="1" type="service" name="system/environment"> <instance enabled="true" name="init"> <property_group type="application" name="environment"> <propval type="astring" name="LANG" value="C"/> </property_group> </instance> </service> <service version="1" type="service" name="system/keymap"> <instance enabled="true" name="default"> <property_group type="system" name="keymap"> <propval type="astring" name="layout" value="US-English"/> </property_group> </instance> </service> <service version="1" type="service" name="system/console-login"> <instance enabled="true" name="default"> <property_group type="application" name="ttymon"> <propval type="astring" name="terminal_type" value="vt100"/> </property_group> </instance> </service> <service version="1" type="service" name="network/physical"> <instance enabled="true" name="default"> <property_group type="application" name="netcfg"> <propval type="astring" name="active_ncp" value="DefaultFixed"/> </property_group> </instance> </service> <service version="1" type="service" name="system/name-service/switch"> <property_group type="application" name="config"> <propval type="astring" name="default" value="files"/> <propval type="astring" name="host" value="files dns"/> <propval type="astring" name="printer" value="user files"/> </property_group> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="system/name-service/cache"> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="network/dns/client"> <property_group type="application" name="config"> <property type="net_address" name="nameserver"> <net_address_list> <value_node value="192.168.2.1"/> </net_address_list> </property> </property_group> <instance enabled="true" name="default"/> </service> </service_bundle> :::::::::::::: mars.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="network/install"> <instance enabled="true" name="default"> <property_group type="application" name="install_ipv4_interface"> <propval type="astring" name="address_type" value="static"/> <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/> <propval type="astring" name="name" value="net0/v4"/> <propval type="net_address_v4" name="default_route" value="192.168.2.1"/> </property_group> <property_group type="application" name="install_ipv6_interface"> <propval type="astring" name="stateful" value="yes"/> <propval type="astring" name="stateless" value="yes"/> <propval type="astring" 
name="address_type" value="addrconf"/> <propval type="astring" name="name" value="net0/v6"/> </property_group> </instance> </service> <service version="1" type="service" name="system/identity"> <instance enabled="true" name="node"> <property_group type="application" name="config"> <propval type="astring" name="nodename" value="mars"/> </property_group> </instance> </service> </service_bundle> :::::::::::::: user.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/config-user"> <instance enabled="true" name="default"> <property_group type="application" name="root_account"> <propval type="astring" name="login" value="root"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="role"/> </property_group> <property_group type="application" name="user_account"> <propval type="astring" name="login" value="stefan"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="normal"/> <propval type="astring" name="description" value="Stefan Hinker"/> <propval type="count" name="uid" value="12345"/> <propval type="count" name="gid" value="10"/> <propval type="astring" name="shell" value="/usr/bin/bash"/> <propval type="astring" name="roles" value="root"/> <propval type="astring" name="profiles" value="System Administrator"/> <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/> </property_group> </instance> </service> </service_bundle> root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \ -c ipv4=192.168.2.100 root@ai-server:~# installadm list -p Service Name Profile ------------ ------- x86-fcs general.xml mars.xml user.xml root@ai-server:~# installadm list -n x86-fcs -p Profile Criteria ------- -------- general.xml None mars.xml ipv4 = 192.168.2.100 user.xml None Here's the idea behind these files: "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same. "user.xml" only contains user definitions.  That is, a root password and a primary user.Both of these profiles will be valid for all clients (for now). "mars.xml" defines network settings for an individual client.  This profile is associated with an IP-Address.  For this to work, I'll have to tweak the DHCP-settings in the next step: root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs root@ai-server:~# vi /etc/inet/dhcpd4.conf root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf host 080027AA3DB1 { hardware ethernet 08:00:27:AA:3D:B1; fixed-address 192.168.2.100; filename "01080027AA3DB1"; } This completes the client preparations.  I manually added the IP-Address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP-replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  
The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead. root@ai-server:~# cd /etc/netboot root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org root@ai-server:~# vi menu.lst.01080027AA3DB1 root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org 1,2c1,2 < default=1 < timeout=10 --- > default=0 > timeout=30 root@ai-server:~# more menu.lst.01080027AA3DB1 default=1 timeout=10 min_mem64=0 title Oracle Solaris 11 11/11 Text Installer and command line kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=htt p://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_addre ss=$serverIP:5555 module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive title Oracle Solaris 11 11/11 Automated Install kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,inst all_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,inst all_svc_address=$serverIP:5555,livemode=text module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive Now just boot the client off the network using PXE-boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there's to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1

    - by shiju
    In this post, I will demonstrate web application development using ASP. NET MVC 3, Razor and EF code First. This post will also cover Dependency Injection using Unity 2.0 and generic Repository and Unit of Work for EF Code First. The following frameworks will be used for this step by step tutorial. ASP.NET MVC 3 EF Code First CTP 5 Unity 2.0 Define Domain Model Let’s create domain model for our simple web application Category class public class Category {     public int CategoryId { get; set; }     [Required(ErrorMessage = "Name Required")]     [StringLength(25, ErrorMessage = "Must be less than 25 characters")]     public string Name { get; set;}     public string Description { get; set; }     public virtual ICollection<Expense> Expenses { get; set; } }   Expense class public class Expense {             public int ExpenseId { get; set; }            public string  Transaction { get; set; }     public DateTime Date { get; set; }     public double Amount { get; set; }     public int CategoryId { get; set; }     public virtual Category Category { get; set; } } We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. In this post, we will be focusing on CRUD operations for the entity Category and will be working on the Expense entity with a View Model object in the later post. And the source code for this application will be refactored over time. The above entities are very simple POCO (Plain Old CLR Object) classes and the entity Category is decorated with validation attributes in the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities for defining model objects for the Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling with any API or database library. This approach lets you focus on domain model which will enable Domain-Driven Development for applications. EF code first support is currently enabled with a separate API that is runs on top of the Entity Framework 4. EF Code First is reached CTP 5 when I am writing this article. Creating Context Class for Entity Framework We have created our domain model and let’s create a class in order to working with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add reference to the assembly EntitFramework.dll. You can also use NuGet to download add reference to EEF Code First.    public class MyFinanceContext : DbContext {     public MyFinanceContext() : base("MyFinance") { }     public DbSet<Category> Categories { get; set; }     public DbSet<Expense> Expenses { get; set; }         }   The above class MyFinanceContext is derived from DbContext that can connect your model classes to a database. The MyFinanceContext class is mapping our Category and Expense class into database tables Categories and Expenses using DbSet<TEntity> where TEntity is any POCO class. When we are running the application at first time, it will automatically create the database. EF code-first look for a connection string in web.config or app.config that has the same name as the dbcontext class. If it is not find any connection string with the convention, it will automatically create database in local SQL Express database by default and the name of the database will be same name as the dbcontext class. You can also define the name of database in constructor of the the dbcontext class. 
Unlike NHibernate, we don’t have to use any XML based mapping files or Fluent interface for mapping between our model and database. The model classes of Code First are working on the basis of conventions and we can also use a fluent API to refine our model. The convention for primary key is ‘Id’ or ‘<class name>Id’.  If primary key properties are detected with type ‘int’, ‘long’ or ‘short’, they will automatically registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes in the System.ComponentModel.DataAnnotations namespace and it automatically enforces validation rules when a model object is updated or saved. Generic Repository for EF Code First We have created model classes and dbcontext class. Now we have to create generic repository pattern for data persistence with EF code first. If you don’t know about the repository pattern, checkout Martin Fowler’s article on Repository Let’s create a generic repository to working with DbContext and DbSet generics. public interface IRepository<T> where T : class     {         void Add(T entity);         void Delete(T entity);         T GetById(long Id);         IEnumerable<T> All();     }   RepositoryBasse – Generic Repository class public abstract class RepositoryBase<T> where T : class { private MyFinanceContext database; private readonly IDbSet<T> dbset; protected RepositoryBase(IDatabaseFactory databaseFactory) {     DatabaseFactory = databaseFactory;     dbset = Database.Set<T>(); }   protected IDatabaseFactory DatabaseFactory {     get; private set; }   protected MyFinanceContext Database {     get { return database ?? (database = DatabaseFactory.Get()); } } public virtual void Add(T entity) {     dbset.Add(entity);            }        public virtual void Delete(T entity) {     dbset.Remove(entity); }   public virtual T GetById(long id) {     return dbset.Find(id); }   public virtual IEnumerable<T> All() {     return dbset.ToList(); } }   DatabaseFactory class public class DatabaseFactory : Disposable, IDatabaseFactory {     private MyFinanceContext database;     public MyFinanceContext Get()     {         return database ?? (database = new MyFinanceContext());     }     protected override void DisposeCore()     {         if (database != null)             database.Dispose();     } } Unit of Work If you are new to Unit of Work pattern, checkout Fowler’s article on Unit of Work . According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let’s create a class for handling Unit of Work   public interface IUnitOfWork {     void Commit(); }   UniOfWork class public class UnitOfWork : IUnitOfWork {     private readonly IDatabaseFactory databaseFactory;     private MyFinanceContext dataContext;       public UnitOfWork(IDatabaseFactory databaseFactory)     {         this.databaseFactory = databaseFactory;     }       protected MyFinanceContext DataContext     {         get { return dataContext ?? (dataContext = databaseFactory.Get()); }     }       public void Commit()     {         DataContext.Commit();     } }   The Commit method of the UnitOfWork will call the commit method of MyFinanceContext class and it will execute the SaveChanges method of DbContext class.   
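[Editor's sketch] The MyFinanceContext class shown earlier derives from DbContext but does not include the Commit method that the UnitOfWork calls here. Based on the description above (Commit simply forwards to SaveChanges), a minimal sketch of what that member presumably looks like is:

    // Presumed shape of the context, inferred from the text above: Commit delegates to SaveChanges.
    public class MyFinanceContext : DbContext
    {
        public MyFinanceContext() : base("MyFinance") { }

        public DbSet<Category> Categories { get; set; }
        public DbSet<Expense> Expenses { get; set; }

        public void Commit()
        {
            // persist all pending changes tracked by the context
            SaveChanges();
        }
    }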
Repository class for Category In this post, we will be focusing on the persistence against Category entity and will working on other entities in later post. Let’s create a repository for handling CRUD operations for Category using derive from a generic Repository RepositoryBase<T>.   public class CategoryRepository: RepositoryBase<Category>, ICategoryRepository     {     public CategoryRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface ICategoryRepository : IRepository<Category> { } If we need additional methods than generic repository for the Category, we can define in the CategoryRepository. Dependency Injection using Unity 2.0 If you are new to Inversion of Control/ Dependency Injection or Unity, please have a look on my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity to store container in the current HttpContext.   public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable {     public override object GetValue()     {         return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];     }     public override void RemoveValue()     {         HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);     }     public override void SetValue(object newValue)     {         HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;     }     public void Dispose()     {         RemoveValue();     } }   Let’s create controller factory for Unity in the ASP.NET MVC 3 application. public class UnityControllerFactory : DefaultControllerFactory { IUnityContainer container; public UnityControllerFactory(IUnityContainer container) {     this.container = container; } protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType) {     IController controller;     if (controllerType == null)         throw new HttpException(                 404, String.Format(                     "The controller for path '{0}' could not be found" +     "or it does not implement IController.",                 reqContext.HttpContext.Request.Path));       if (!typeof(IController).IsAssignableFrom(controllerType))         throw new ArgumentException(                 string.Format(                     "Type requested is not a controller: {0}",                     controllerType.Name),                     "controllerType");     try     {         controller= container.Resolve(controllerType) as IController;     }     catch (Exception ex)     {         throw new InvalidOperationException(String.Format(                                 "Error resolving controller {0}",                                 controllerType.Name), ex);     }     return controller; }   }   Configure contract and concrete types in Unity Let’s configure our contract and concrete types in Unity for resolving our dependencies.   
private void ConfigureUnity() {     //Create UnityContainer               IUnityContainer container = new UnityContainer()                 .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>())     .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>())     .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>());                 //Set container for Controller Factory                ControllerBuilder.Current.SetControllerFactory(             new UnityControllerFactory(container)); }   In the above ConfigureUnity method, we are registering our types onto Unity container with custom lifetime manager HttpContextLifetimeManager. Let’s call ConfigureUnity method in the Global.asax.cs for set controller factory for Unity and configuring the types with Unity.   protected void Application_Start() {     AreaRegistration.RegisterAllAreas();     RegisterGlobalFilters(GlobalFilters.Filters);     RegisterRoutes(RouteTable.Routes);     ConfigureUnity(); }   Developing web application using ASP.NET MVC 3 We have created our domain model for our web application and also have created repositories and configured dependencies with Unity container. Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let’s create controller class for Category Category Controller   public class CategoryController : Controller {     private readonly ICategoryRepository categoryRepository;     private readonly IUnitOfWork unitOfWork;           public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork)     {         this.categoryRepository = categoryRepository;         this.unitOfWork = unitOfWork;     }       public ActionResult Index()     {         var categories = categoryRepository.All();         return View(categories);     }     [HttpGet]     public ActionResult Edit(int id)     {         var category = categoryRepository.GetById(id);         return View(category);     }       [HttpPost]     public ActionResult Edit(int id, FormCollection collection)     {         var category = categoryRepository.GetById(id);         if (TryUpdateModel(category))         {             unitOfWork.Commit();             return RedirectToAction("Index");         }         else return View(category);                 }       [HttpGet]     public ActionResult Create()     {         var category = new Category();         return View(category);     }           [HttpPost]     public ActionResult Create(Category category)     {         if (!ModelState.IsValid)         {             return View("Create", category);         }                     categoryRepository.Add(category);         unitOfWork.Commit();         return RedirectToAction("Index");     }       [HttpPost]     public ActionResult Delete(int  id)     {         var category = categoryRepository.GetById(id);         categoryRepository.Delete(category);         unitOfWork.Commit();         var categories = categoryRepository.All();         return PartialView("CategoryList", categories);       }        }   Creating Views in Razor Now we are going to create views in Razor for our ASP.NET MVC 3 application.  Let’s create a partial view CategoryList.cshtml for listing category information and providing link for Edit and Delete operations. 
CategoryList.cshtml @using MyFinance.Helpers; @using MyFinance.Domain; @model IEnumerable<Category>      <table>         <tr>         <th>Actions</th>         <th>Name</th>          <th>Description</th>         </tr>     @foreach (var item in Model) {             <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.CategoryId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" })                           </td>             <td>                 @item.Name             </td>             <td>                 @item.Description             </td>         </tr>          }       </table>     <p>         @Html.ActionLink("Create New", "Create")     </p> The delete link is providing Ajax functionality using the Ajax.ActionLink. This will call an Ajax request for Delete action method in the CategoryCotroller class. In the Delete action method, it will return Partial View CategoryList after deleting the record. We are using CategoryList view for the Ajax functionality and also for Index view using for displaying list of category information. Let’s create Index view using partial view CategoryList  Index.chtml @model IEnumerable<MyFinance.Domain.Category> @{     ViewBag.Title = "Index"; }    <h2>Category List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>    <div id="divCategoryList">               @Html.Partial("CategoryList", Model) </div>   We can call the partial views using Html.Partial helper method. Now we are going to create View pages for insert and update functionality for the Category. Both view pages are sharing common user interface for entering the category information. So I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name of entity object so that we can refer it on view pages using @Html.EditorFor(model => model) . So let’s create template with name Category. Let’s create view page for insert Category information   @model MyFinance.Domain.Category   @{     ViewBag.Title = "Save"; }   <h2>Create</h2>   <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>   @using (Html.BeginForm()) {     @Html.ValidationSummary(true)     <fieldset>         <legend>Category</legend>                @Html.EditorFor(model => model)               <p>             <input type="submit" value="Create" />         </p>     </fieldset> }   <div>     @Html.ActionLink("Back to List", "Index") </div> ViewStart file In Razor views, we can add a file named _viewstart.cshtml in the views directory  and this will be shared among the all views with in the Views directory. The below code in the _viewstart.cshtml, sets the Layout page for every Views in the Views folder.      @{     Layout = "~/Views/Shared/_Layout.cshtml"; }   Source Code You can download the source code from http://efmvc.codeplex.com/ . The source will be refactored on over time.   Summary In this post, we have created a simple web application using ASP.NET MVC 3 and EF Code First. We have discussed on technologies and practices such as ASP.NET MVC 3, Razor, EF Code First, Unity 2, generic Repository and Unit of Work. In my later posts, I will modify the application and will be discussed on more things. 
Stay tuned to my blog for more posts on step-by-step application building.

    Read the article

  • Coherence Based WebLogic Server Session Management

    - by [email protected]
    Specifications – Supported configurations: WebLogic Server 10.3.2 (or 10.3.1) with Coherence 3.5.2/463. If you use another version, please check the following matrix: for WebLogic Server 9.2 MP1 – WebLogic Smart Update Patch ID AJQB, minimum Coherence release level/MetaLink Patch ID 3.4.2 Patch 2 (Patch ID 8429415); for WebLogic Server 10.3 – WebLogic Smart Update Patch ID 6W2W, minimum Coherence release level/MetaLink Patch ID 3.4.2 Patch 6 (Patch ID 11399293). Environment variables: %COHERENCE_HOME% is the Coherence installation directory; %DOMAIN_HOME% is the WebLogic domain folder.
    Instructions: We will create two WebLogic domains, domain_a and domain_b, and configure them with Coherence-based session management. A change to a session variable value in one domain will then propagate to the other domain.
    Main steps – WebLogic Server: create domain_a (the standard domain-creation steps are omitted here); copy %COHERENCE_HOME%\lib\coherence.jar to %DOMAIN_HOME%\lib; start up the domain; deploy %COHERENCE_HOME%\lib\coherence-web-spi.war as a Shared Library; repeat steps 1–4 for domain_b.
    Coherence: duplicate %COHERENCE_HOME%\bin\cache-server.cmd in the same folder and rename it to web-cache-server.cmd; modify web-cache-server.cmd to run: java -server -Xms512m -Xmx512m -cp %coherence_home%/lib/coherence.jar;%coherence_home%/lib/coherence-web-spi.war -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.cacheconfig=WEB-INF/classes/session-cache-config.xml -Dtangosol.coherence.session.localstorage=true com.tangosol.net.DefaultCacheServer; then start web-cache-server.cmd.
    Testing: develop a web app with OEPE or JDeveloper that implements changing, viewing and listing session variables (or download the sample code here); modify weblogic.xml with the following content: <?xml version="1.0" encoding="UTF-8"?> <wls:weblogic-web-app xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.0/weblogic-web-app.xsd"> <wls:weblogic-version>10.3.2</wls:weblogic-version> <wls:context-root>CoherenceWeb</wls:context-root> <wls:library-ref> <wls:library-name>coherence-web-spi</wls:library-name> <wls:specification-version>1.0.0.0</wls:specification-version> <wls:exact-match>true</wls:exact-match> </wls:library-ref> </wls:weblogic-web-app>; deploy the web app to domain_a and domain_b; change a session variable value at domain_a and check whether it changed at domain_b.
    References: Using Oracle Coherence*Web 3.4.2 with Oracle WebLogic Server 10gR3; Oracle Coherence*Web 3.4.2 with Oracle WebLogic Server 10gR3.
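For the testing step above, the web app only needs to read and write HTTP session attributes so that a value set in domain_a can be checked from domain_b. A minimal sketch of such a test servlet is shown below; the class name and the "counter" attribute are illustrative choices, not part of the original instructions.

    package example;

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Increments a session attribute on every request; deploy it to both
    // domains and the value should carry over if Coherence*Web session
    // replication is working.
    public class SessionTestServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            HttpSession session = request.getSession(true);
            Integer counter = (Integer) session.getAttribute("counter");
            counter = (counter == null) ? 1 : counter + 1;
            session.setAttribute("counter", counter);
            response.setContentType("text/plain");
            response.getWriter().println("counter = " + counter);
        }
    }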

    Read the article

  • How can I install Age of Mythology on Ubuntu?

    - by edgarsalguero93
    I have a problem trying to install Age of Mythology Gold Edition (v 1.03). When I launch the installer with Wine (1.3.28) on my Ubuntu 11.10, after entering the serial and clicking Next, I get the error "Unable to load PidGen.dll" and it returns to the previous window. In Configure Wine, on the Libraries tab, I added the DLL it asks for, but nothing changed. I have tried everything to get it installed, but it does not work. I have also tried installing it in a VMware Workstation 8.0.1 virtual machine running Microsoft Windows XP SP3, and there I get a video card compatibility error: "Video Card 0: vmx_fb.dll VMware SVGA II Vendor(0x15AD) Device(0x405)", after which Age of Mythology closes. I think the solution in that case would be to increase the amount of video memory VMware makes available to the virtual machine, or to find a driver for this card, but I do not know how to do that. Or maybe someone knows a way for VMware to use my computer's video card directly, or to dedicate part of the video memory to the virtual machine. I have searched several sites, but none of them solved my problem. If someone has already found the answer, please reply to this question. Thanks in advance, and greetings from Latin America.

    Read the article

  • Restore Files from Backups on Windows Home Server

    - by Mysticgeek
    If you use Windows Home Server to back up the machines on your network, you're in luck if you accidentally delete important files or they become corrupted. Today we take a look at getting your data back from backups on your home server. Open the Windows Home Server Console and select the Computers and Backup tab. Right-click on the computer you need to restore files for and select View Backups. This will open a list of your recent backups. Highlight the one you want to open, then click the Open button in the Restore or View Files section. If this is the first time you're restoring a file, you'll be asked to verify installation of the device software. Check the box next to Always trust software from Microsoft Corporation and click Install. Now wait while the backup data is retrieved. After the backup data has been retrieved, an Explorer window opens up to drive (Z:), which is the backup data. It's just as if you were opening a drive on your local machine. Now you can browse through the backup and find the files you're missing. You can open the files directly, or drag them onto your machine to the location where you want to restore them. Restoring your data is actually a very easy process with Windows Home Server. Of course you'll want to make sure the computers on your network are being backed up to WHS. If you need help with that, check out our article on how to configure your computer to back up to WHS. If you want to back up your home server shares, check out our article on how to back up WHS folders to an external drive.

    Read the article

  • Sixeyed.Caching available now on NuGet and GitHub!

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/22/sixeyed.caching-available-now-on-nuget-and-github.aspxThe good guys at Pluralsight have okayed me to publish my caching framework (as seen in Caching in the .NET Stack: Inside-Out) as an open-source library, and it’s out now. You can get it here: Sixeyed.Caching source code on GitHub, and here: Sixeyed.Caching package v1.0.0 on NuGet. If you haven’t seen the course, there’s a preview here on YouTube: In-Process and Out-of-Process Caches, which gives a good flavour. The library is a wrapper around various cache providers, including the .NET MemoryCache, AppFabric cache, and  memcached*. All the wrappers inherit from a base class which gives you a set of common functionality against all the cache implementations: •    inherits OutputCacheProvider, so you can use your chosen cache provider as an ASP.NET output cache; •    serialization and encryption, so you can configure whether you want your cache items serialized (XML, JSON or binary) and encrypted; •    instrumentation, you can optionally use performance counters to monitor cache attempts and hits, at a low level. The framework wraps up different caches into an ICache interface, and it lets you use a provider directly like this: Cache.Memory.Get<RefData>(refDataKey); - or with configuration to use the default cache provider: Cache.Default.Get<RefData>(refDataKey); The library uses Unity’s interception framework to implement AOP caching, which you can use by flagging methods with the [Cache] attribute: [Cache] public RefData GetItem(string refDataKey) - and you can be more specific on the required cache behaviour: [Cache(CacheType=CacheType.Memory, Days=1] public RefData GetItem(string refDataKey) - or really specific: [Cache(CacheType=CacheType.Disk, SerializationFormat=SerializationFormat.Json, Hours=2, Minutes=59)] public RefData GetItem(string refDataKey) Provided you get instances of classes with cacheable methods from the container, the attributed method results will be cached, and repeated calls will be fetched from the cache. You can also set a bunch of cache defaults in application config, like whether to use encryption and instrumentation, and whether the cache system is enabled at all: <sixeyed.caching enabled="true"> <performanceCounters instrumentCacheTotalCounts="true" instrumentCacheTargetCounts="true" categoryNamePrefix ="Sixeyed.Caching.Tests"/> <encryption enabled="true" key="1234567890abcdef1234567890abcdef" iv="1234567890abcdef"/> <!-- key must be 32 characters, IV must be 16 characters--> </sixeyed.caching> For AOP and methods flagged with the cache attribute, you can override the compile-time cache settings at runtime with more config (keyed by the class and method name): <sixeyed.caching enabled="true"> <targets> <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheConfiguredInternal" enabled="false"/> <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheExpiresConfiguredInternal" seconds="1"/> </targets> It’s released under the MIT license, so you can use it freely in your own apps and modify as required. I’ll be adding more content to the GitHub wiki, which will be the main source of documentation, but for now there’s an FAQ to get you started. * - in the course the framework library also wraps NCache Express, but there's no public redistributable library that I can find, so it's not in Sixeyed.Caching.
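To make the AOP usage concrete, here is a minimal sketch of a cacheable service class built only from the attribute forms quoted above; the RefData type and the lookup it wraps are illustrative assumptions rather than parts of the library, and the instance has to be resolved from the configured Unity container for the interception to kick in.

    // Sketch only: RefData and LoadFromSource are assumed stand-ins for your
    // own types; the [Cache] attribute usage is as described in the post.
    public class RefDataService
    {
        [Cache(CacheType = CacheType.Memory, Hours = 2)]
        public virtual RefData GetItem(string refDataKey)
        {
            // Expensive call we only want to pay for once per cache window.
            return LoadFromSource(refDataKey);
        }

        private RefData LoadFromSource(string refDataKey)
        {
            // Stand-in for a database or service call.
            return new RefData { Key = refDataKey };
        }
    }

Repeated calls with the same key within two hours should then be served from the memory cache, provided the RefDataService instance came from the container with interception configured.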

    Read the article

  • Passthrough Objects – Duck Typing++

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Can't see a genuine use for this, but I got the idea in my head and wanted to work it through. It's an extension to the idea of duck typing, for scenarios where types have similar behaviour, but implemented in differently-named members. So you may have a set of objects you want to treat as an interface, which don't implement the interface explicitly, and don't have the same member names so they can't be duck-typed into implicitly implementing the interface. In a fictitious example, I want to call Get on whichever ICache implementation is current, and have the call passed through to the relevant method – whether it's called Read, Retrieve or whatever: A sample implementation is up on github here: PassthroughSample. This uses Castle's DynamicProxy behind the scenes in the same way as my duck typing sample, but allows you to configure the passthrough to specify how the inner (implementation) and outer (interface) members are mapped:       var setup = new Passthrough();     var cache = setup.Create("PassthroughSample.Tests.Stubs.AspNetCache, PassthroughSample.Tests")                             .WithPassthrough("Name", "CacheName")                             .WithPassthrough("Get", "Retrieve")                             .WithPassthrough("Set", "Insert")                             .As<ICache>(); - or using some ugly Lambdas to avoid the strings :     Expression<Func<ICache, string, object>> get = (o, s) => o.Get(s);     Expression<Func<Memcached, string, object>> read = (i, s) => i.Read(s);     Expression<Action<ICache, string, object>> set = (o, s, obj) => o.Set(s, obj);     Expression<Action<Memcached, string, object>> insert = (i, s, obj) => i.Put(s, obj);       ICache cache = new Passthrough<ICache, Memcached>()                     .Create()                     .WithPassthrough(o => o.Name, i => i.InstanceName)                     .WithPassthrough(get, read)                     .WithPassthrough(set, insert)                     .As();   - or even in config:   ICache cache = Passthrough.GetConfigured<ICache>(); ...  <passthrough>     <types>       <typename="PassthroughSample.Tests.Stubs.ICache, PassthroughSample.Tests"             passesThroughTo="PassthroughSample.Tests.Stubs.AppFabricCache, PassthroughSample.Tests">         <members>           <membername="Name"passesThroughTo="RegionName"/>           <membername="Get"passesThroughTo="Out"/>           <membername="Set"passesThroughTo="In"/>         </members>       </type>   Possibly useful for injecting stubs for dependencies in tests, when your application code isn't using an IoC container. Possibly it also has an alternative implementation using .NET 4.0 dynamic objects, rather than the dynamic proxy.
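To make the mapping concrete, here is a minimal sketch of what the outer interface and one inner implementation might look like, using only the member names that appear in the configuration above (Name/Get/Set on ICache, InstanceName/Read/Put on Memcached); the bodies are illustrative assumptions, not the actual PassthroughSample stubs.

    using System.Collections.Generic;

    // Outer interface that calling code works against.
    public interface ICache
    {
        string Name { get; }
        object Get(string key);
        void Set(string key, object value);
    }

    // Inner implementation with differently-named members; the passthrough
    // proxy maps Name -> InstanceName, Get -> Read and Set -> Put as configured.
    public class Memcached
    {
        private readonly Dictionary<string, object> store = new Dictionary<string, object>();

        public string InstanceName
        {
            get { return "memcached-01"; }
        }

        public object Read(string key)
        {
            object value;
            return store.TryGetValue(key, out value) ? value : null;
        }

        public void Put(string key, object value)
        {
            store[key] = value;
        }
    }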

    Read the article

  • Nginx and PHP Fundamentals

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/08/01/nginx-and-php-fundamentals.aspxHot on the heels of my .NET caching course, I’ve had my first “fundamentals” course released on Pluralsight: Nginx and PHP Fundamentals. It’s a practical look at two of the biggest technologies on the web – Nginx, which is the fastest growing HTTP server around (currently hosting 100+ million sites), and PHP, which powers more websites than any other server-side framework (currently 240+ million sites). The two technologies work well together, both are open-source and cross-platform and both are lightweight and easy to get started with - you just need to download and unzip the runtimes, and with a text editor you can create and host dynamic websites. I’ve used PHP as a second (sometimes third) language since 2005 when I was brought cold into an established codebase to help improve performance, and Nginx to host tier 2 apps for the last couple of years. As with any training course, you learn new things as you produce it, and it was good to focus on a different stack from my commercial .NET world. In the course I start with a website in two parts – one which is just static content, and one which processes a user registration form using ASP.NET MVC, both running in IIS. Over four modules I migrate the app to Nginx and PHP: Hosting Static Content in Nginx – how to deploy and configure Nginx for a basic website; PHP Part 1: Basic Web Forms – installing PHP and an IDE, and building a simple form with server-side validation; PHP Part 2: Packages and Integration – using PECL and Composer for packages to connect to Azure, AWS, Mongo and reCAPTCHA; Hosting PHP in Nginx – configuring Nginx to host our PHP site. Along the way I run some performance stats with JMeter, and the headlines are that Nginx running on Linux outperforms IIS on Windows for static content,by 800 requests per second over 1000 concurrent requests; and Linux+Ngnix+PHP outperforms Windows+IIS+ASP.NET MVC by 700 request per second with the same load. Of course, the headline stats don’t tell the whole story, and when you add OpCode caching for PHP and the ASP.NET Output Cache, the results are very different. As Web architecture moves away from heavy server-side processing, to Single Page Apps with client-side frameworks like AngularJS and Knockout, I think there’s an increasing need for high-performance, low-cost server technologies, and the combination of Nginx and PHP makes a compelling case.
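For readers who have not seen the two sides wired together, a minimal nginx server block for the kind of site described above might look like the sketch below; the port, paths and PHP-FPM address are illustrative assumptions, not values from the course.

    server {
        listen       8080;
        server_name  localhost;
        root         /var/www/site;          # static content lives here
        index        index.php index.html;

        # Serve static files straight from disk.
        location / {
            try_files $uri $uri/ =404;
        }

        # Hand .php requests to a PHP-FPM (FastCGI) backend.
        location ~ \.php$ {
            include        fastcgi_params;
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass   127.0.0.1:9000;
        }
    }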

    Read the article

  • Oracle collaborates with leading IT vendors on Cloud Management Standards

    - by Anand Akela
    During the last couple of days, two key specifications for cloud management standards have been announced. Oracle collaborated with leading technology vendors from the IT industry on both of these cloud management specifications. One of the specifications focuses "Infrastructure as a Service" ( IaaS )  cloud service model , while the other specification announced today focuses on "Platform as a Service" ( PaaS ) cloud service model. Please see The NIST Definition of Cloud Computing to learn more about IaaS and PaaS . Earlier today Oracle , CloudBees, Cloudsoft, Huawei, Rackspace, Red Hat, and Software AG   announced the Cloud Application Management for Platforms (CAMP) specification that will be submitted to Organization for the Advancement of Structured Information Standards (OASIS) for development of an industry standard, in an effort to help ensure interoperability for deploying and managing applications across cloud environments.  Typical PaaS architecture - Source : CAMP specification The CAMP specification defines the artifacts and APIs that need to be offered by a PaaS cloud to manage the building, running, administration, monitoring and patching of applications in the cloud. Its purpose is to enable interoperability among self-service interfaces to PaaS clouds by defining artifacts and formats that can be used with any conforming cloud and enable independent vendors to create tools and services that interact with any conforming cloud using the defined interfaces. Cloud vendors can use these interfaces to develop new PaaS offerings that will interact with independently developed tools and components. In a separate cloud standards announcement yesterday, the Distributed Management Task Force ( DMTF ), the organization bringing the IT industry together to collaborate on systems management standards development, validation, promotion and adoption, released the new Cloud Infrastructure Management Interface (CIMI) specification. Oracle collaborated with various technology vendors and industry organizations on this specification. CIMI standardizes interactions between cloud environments to achieve interoperable cloud infrastructure management between service providers and their consumers and developers, enabling users to manage their cloud infrastructure use easily and without complexity. DMTF developed CIMI as a self-service interface for infrastructure clouds ( IaaS focus ) , allowing users to dynamically provision, configure and administer their cloud usage with a high-level interface that greatly simplifies cloud systems management. Mark Carlson, Principal Cloud Strategist at Oracle provides more details about CAMP  and CIMI his blog . Stay Connected: Twitter |  Face book |  You Tube |  Linked in |  Newsletter

    Read the article

  • WLS MBeans

    - by Jani Rautiainen
    WLS provides a set of Managed Beans (MBeans) to configure, monitor and manage WLS resources. We can use the WLS MBeans to automate some of the tasks related to the configuration and maintenance of the WLS instance. The MBeans can be accessed a number of ways; using various UIs and programmatically using Java or WLST Python scripts.For customization development we can use the features to e.g. manage the deployed customization in MDS, control logging levels, automate deployment of dependent libraries etc. This article is an introduction on how to access and use the WLS MBeans. The goal is to illustrate the various access methods in a single article; the details of the features are left to the linked documentation.This article covers Windows based environment, steps for Linux would be similar however there would be some differences e.g. on how the file paths are defined. MBeansThe WLS MBeans can be categorized to runtime and configuration MBeans.The Runtime MBeans can be used to access the runtime information about the server and its resources. The data from runtime beans is only available while the server is running. The runtime beans can be used to e.g. check the state of the server or deployment.The Configuration MBeans contain information about the configuration of servers and resources. The configuration of the domain is stored in the config.xml file and the configuration MBeans can be used to access and modify the configuration data. For more information on the WLS MBeans refer to: Understanding WebLogic Server MBeans WLS MBean reference Java Management Extensions (JMX)We can use JMX APIs to access the WLS MBeans. This allows us to create Java programs to configure, monitor, and manage WLS resources. In order to use the WLS MBeans we need to add the following library into the class-path: WL_HOME\lib\wljmxclient.jar Connecting to a WLS MBean server The WLS MBeans are contained in a Mbean server, depending on the requirement we can connect to (MBean Server / JNDI Name): Domain Runtime MBean Server weblogic.management.mbeanservers.domainruntime Runtime MBean Server weblogic.management.mbeanservers.runtime Edit MBean Server weblogic.management.mbeanservers.edit To connect to the WLS MBean server first we need to create a map containing the credentials; Hashtable<String, String> param = new Hashtable<String, String>(); param.put(Context.SECURITY_PRINCIPAL, "weblogic");        param.put(Context.SECURITY_CREDENTIALS, "weblogic1");        param.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote"); These define the user, password and package containing the protocol. Next we create the connection: JMXServiceURL serviceURL =     new JMXServiceURL("t3","127.0.0.1",7101,     "/jndi/weblogic.management.mbeanservers.domainruntime"); JMXConnector connector = JMXConnectorFactory.connect(serviceURL, param); MBeanServerConnection connection = connector.getMBeanServerConnection(); With the connection we can now access the MBeans for the WLS instance. For a complete example see Appendix A of this post. For more details refer to Accessing WebLogic Server MBeans with JMX Accessing WLS MBeans The WLS MBeans are structured hierarchically; in order to access content we need to know the path to the MBean we are interested in. The MBean is accessed using “MBeanServerConnection. getAttribute” API.  
WLS provides entry points to the hierarchy allowing us to navigate all the WLS MBeans in the hierarchy (MBean Server / JMX object name): Domain Runtime MBean Server com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean Runtime MBean Servers com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean Edit MBean Server com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean For example we can access the Domain Runtime MBean using: ObjectName service = new ObjectName( "com.bea:Name=DomainRuntimeService," + "Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean"); Same syntax works for any “child” WLS MBeans e.g. to find out all application deployments we can: ObjectName domainConfig = (ObjectName)connection.getAttribute(service,"DomainConfiguration"); ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig,"AppDeployments"); Alternatively we could access the same MBean using the full syntax: ObjectName domainConfig = new ObjectName("com.bea:Location=DefaultDomain,Name=DefaultDomain,Type=Domain"); ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig,"AppDeployments"); For more details refer to Accessing WebLogic Server MBeans with JMX Invoking operations on WLS MBeans The WLS MBean operations can be invoked with MBeanServerConnection. invoke API; in the following example we query the state of “AppsLoggerService” application: ObjectName appRuntimeStateRuntime = new ObjectName("com.bea:Name=AppRuntimeStateRuntime,Type=AppRuntimeStateRuntime"); Object[] parameters = { "AppsLoggerService", "DefaultServer" }; String[] signature = { "java.lang.String", "java.lang.String" }; String result = (String)connection.invoke(appRuntimeStateRuntime,"getCurrentState",parameters, signature); The result returned should be "STATE_ACTIVE" assuming the "AppsLoggerService" application is up and running. WebLogic Scripting Tool (WLST) The WebLogic Scripting Tool (WLST) is a command-line scripting environment that we can access the same WLS MBeans. The tool is located under: $MW_HOME\oracle_common\common\bin\wlst.bat Do note that there are several instances of the wlst script under the $MW_HOME, each of them works, however the commands available vary, so we want to use the one under “oracle_common”. The tool is started in offline mode. In offline mode we can access and manipulate the domain configuration. In online mode we can access the runtime information. We connect to the Administration Server : connect("weblogic","weblogic1", "t3://127.0.0.1:7101") In both online and offline modes we can navigate the WLS MBean using commands like "ls" to print content and "cd" to navigate between objects, for example: All the commands available can be obtained with: help('all') For details of the tool refer to WebLogic Scripting Tool and for the commands available WLST Command and Variable Reference. Also do note that the WLST tool can be invoked from Java code in Embedded Mode. Running Scripts The WLST tool allows us to automate tasks using Python scripts in Script Mode. The script can be manually created or recorded by the WLST tool. 
Example commands of recording a script: startRecording("c:/temp/recording.py") <commands that we want to record> stopRecording() We can run the script from WLST: execfile("c:/temp/recording.py") We can also run the script from the command line: C:\apps\Oracle\Middleware\oracle_common\common\bin\wlst.cmd c:/temp/recording.py There are various sample scripts are provided with the WLS instance. UI to Access the WLS MBeans There are various UIs through which we can access the WLS MBeans. Oracle Enterprise Manager Fusion Middleware Control Oracle WebLogic Server Administration Console Fusion Middleware Control MBean Browser In the integrated JDeveloper environment only the Oracle WebLogic Server Administration Console is available to us. For more information refer to the documentation, one noteworthy feature in the console is the ability to record WLST scripts based on the navigation. In addition to the UIs above the JConsole included in the JDK can be used to access the WLS MBeans. The JConsole needs to be started with specific parameter to force WLS objects to be used and jar files in the classpath: "C:\apps\Oracle\Middleware\jdk160_24\bin\jconsole" -J-Djava.class.path=C:\apps\Oracle\Middleware\jdk160_24\lib\jconsole.jar;C:\apps\Oracle\Middleware\jdk160_24\lib\tools.jar;C:\apps\Oracle\Middleware\wlserver_10.3\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote For more details refer to the Accessing Custom MBeans from JConsole. Summary In this article we have covered various ways we can access and use the WLS MBeans in context of integrated WLS in JDeveloper to be used for Fusion Application customization development. References Developing Custom Management Utilities With JMX for Oracle WebLogic Server Accessing WebLogic Server MBeans with JMX WebLogic Server MBean Reference WebLogic Scripting Tool WLST Command and Variable Reference Appendix A package oracle.apps.test; import java.io.IOException;import java.net.MalformedURLException;import java.util.Hashtable;import javax.management.MBeanServerConnection;import javax.management.MalformedObjectNameException;import javax.management.ObjectName;import javax.management.remote.JMXConnector;import javax.management.remote.JMXConnectorFactory;import javax.management.remote.JMXServiceURL;import javax.naming.Context;/** * This class contains simple examples on how to access WLS MBeans using JMX. */public class BlogExample {    /**     * Connection to the WLS MBeans     */    private MBeanServerConnection connection;    /**     * Constructor that takes in the connection information for the      * domain and obtains the resources from WLS MBeans using JMX.     * @param hostName host name to connect to for the WLS server     * @param port port to connect to for the WLS server     * @param userName user name to connect to for the WLS server     * @param password password to connect to for the WLS server     */    public BlogExample(String hostName, String port, String userName,                       String password) {        super();        try {            initConnection(hostName, port, userName, password);        } catch (Exception e) {            throw new RuntimeException("Unable to connect to the domain " +                                       hostName + ":" + port);        }    }    /**     * Default constructor.     * Tries to create connection with default values. Runtime exception will be     * thrown if the default values are not used in the local instance.     
*/    public BlogExample() {        this("127.0.0.1", "7101", "weblogic", "weblogic1");    }    /**     * Initializes the JMX connection to the WLS Beans     * @param hostName host name to connect to for the WLS server     * @param port port to connect to for the WLS server     * @param userName user name to connect to for the WLS server     * @param password password to connect to for the WLS server     * @throws IOException error connecting to the WLS MBeans     * @throws MalformedURLException error connecting to the WLS MBeans     * @throws MalformedObjectNameException error connecting to the WLS MBeans     */    private void initConnection(String hostName, String port, String userName,                                String password)                                 throws IOException, MalformedURLException,                                        MalformedObjectNameException {        String protocol = "t3";        String jndiroot = "/jndi/";        String mserver = "weblogic.management.mbeanservers.domainruntime";        JMXServiceURL serviceURL =            new JMXServiceURL(protocol, hostName, Integer.valueOf(port),                              jndiroot + mserver);        Hashtable<String, String> h = new Hashtable<String, String>();        h.put(Context.SECURITY_PRINCIPAL, userName);        h.put(Context.SECURITY_CREDENTIALS, password);        h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,              "weblogic.management.remote");        JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);        connection = connector.getMBeanServerConnection();    }    /**     * Main method used to invoke the logic for testing     * @param args arguments passed to the program     */    public static void main(String[] args) {        BlogExample blogExample = new BlogExample();        blogExample.testEntryPoint();        blogExample.testDirectAccess();        blogExample.testInvokeOperation();    }    /**     * Example of using an entry point to navigate the WLS MBean hierarchy.     */    public void testEntryPoint() {        try {            System.out.println("testEntryPoint");            ObjectName service =             new ObjectName("com.bea:Name=DomainRuntimeService,Type=" +"weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");            ObjectName domainConfig =                (ObjectName)connection.getAttribute(service,                                                    "DomainConfiguration");            ObjectName[] appDeployments =                (ObjectName[])connection.getAttribute(domainConfig,                                                      "AppDeployments");            for (ObjectName appDeployment : appDeployments) {                String resourceIdentifier =                    (String)connection.getAttribute(appDeployment,                                                    "SourcePath");                System.out.println(resourceIdentifier);            }        } catch (Exception e) {            throw new RuntimeException(e);        }    }    /**     * Example of accessing WLS MBean directly with a full reference.     * This does the same thing as testEntryPoint in slightly difference way.     
*/    public void testDirectAccess() {        try {            System.out.println("testDirectAccess");            ObjectName appDeployment =                new ObjectName("com.bea:Location=DefaultDomain,"+                               "Name=AppsLoggerService,Type=AppDeployment");            String resourceIdentifier =                (String)connection.getAttribute(appDeployment, "SourcePath");            System.out.println(resourceIdentifier);        } catch (Exception e) {            throw new RuntimeException(e);        }    }    /**     * Example of invoking operation on a WLS MBean.     */    public void testInvokeOperation() {        try {            System.out.println("testInvokeOperation");            ObjectName appRuntimeStateRuntime =                new ObjectName("com.bea:Name=AppRuntimeStateRuntime,"+                               "Type=AppRuntimeStateRuntime");            String identifier = "AppsLoggerService";            String serverName = "DefaultServer";            Object[] parameters = { identifier, serverName };            String[] signature = { "java.lang.String", "java.lang.String" };            String result =                (String)connection.invoke(appRuntimeStateRuntime, "getCurrentState",                                          parameters, signature);            System.out.println("State of " + identifier + " = " + result);        } catch (Exception e) {            throw new RuntimeException(e);        }    }}
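As a lighter-weight alternative to the JMX client in Appendix A, the same application-state query can be scripted in WLST. The sketch below reuses the connection details and the getCurrentState operation shown above; the navigation path into the domain runtime tree is an assumption based on the standard AppRuntimeStateRuntime MBean location.

    # Minimal WLST sketch - check the state of the AppsLoggerService deployment.
    connect('weblogic', 'weblogic1', 't3://127.0.0.1:7101')

    domainRuntime()                                  # switch to the domain runtime tree
    cd('AppRuntimeStateRuntime/AppRuntimeStateRuntime')
    state = cmo.getCurrentState('AppsLoggerService', 'DefaultServer')
    print 'State of AppsLoggerService = ' + state

    disconnect()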

    Read the article

  • Add a Sleep Timer to Windows 7 Media Center

    - by DigitalGeekery
    Do you make it a habit of falling asleep at night while watching Windows Media Center? Today we are going to take a look at the MC7 Sleep Timer for Windows 7 Media Center. This simple little plugin allows you to schedule an automatic shutdown time in Media Center. Note: at this point MC7 Sleep Timer doesn't work with extenders. If you're using ClamAV or Panda, it may detect this plugin as a virus; we've tested it and this is a false positive for these two antivirus apps. Installation and Usage Download and install MC7 Sleep Timer (see download below). After the installation is finished, you will find MC7 Sleep Timer in the Media Center Extras Library. Click on the tile to open the timer and configure your settings. The MC7 Sleep Timer will open in full-screen mode. You can choose to shut down the PC after 30 or 60 minutes, create a custom-length shutdown timer at any 5-minute interval, or select the exact time you want the PC to shut down. After setting your PC to shut down, you'll get an audio confirmation. To set a custom timer length, scroll to the "Custom timer" option and click right or left on your Media Center remote, or use the right or left arrow keys, to choose how many minutes before shutdown. To schedule a shutdown for a certain time, browse to the "Shutdown at time" button and scroll right or left with the arrow keys on the keyboard or remote. When you've chosen your time, hit "Enter" on the keyboard or "OK" on the remote. Clicking the "Monitor Off" button will turn off only the monitor, and "Cancel Timer" will cancel your shutdown request. Conclusion If you often find yourself falling asleep every night watching Media Center and then fumbling and stumbling in the middle of the night to shut down your computer, MC7 Sleep Timer might just be a perfect addition to your Media Center setup. Download MC7 Sleep Timer

    Read the article

  • SQL SERVER – Automation Process Good or Ugly

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by SQL Server Insane Asylum. The idea of this post really caught my attention. Automation – something getting itself done after the initial programming, is my understanding of the subject. The very next thought was – is it good or evil? The reality is there is no right answer. However, what if we quickly note a few things, then I would like to request your help to complete this post. We will start with the positive parts in SQL Server where automation happens. The Good If I start thinking of SQL Server and Automation the very first thing that comes to my mind is SQL Agent, which runs various jobs. Once I configure any task or job, it runs fine (till something goes wrong!). Well, automation has its own advantages. We all have used SQL Agent for so many things – backup, various validation jobs, maintenance jobs and numerous other things. What other kinds of automation tasks do you run in your database server? The Ugly This part is very interesting, because it can get really ugly(!). During my career I have found so many bad automation agent jobs. Client had an agent job where he was dropping the clean buffers every hour Client using database mail to send regular emails instead of necessary alert related emails The best one – A client used new Missing Index and Unused Index scripts in SQL Agent Job to follow suggestions 100%. Believe me, I have never seen such a badly performing and hard to optimize database. (I ended up dropping all non-clustered indexes on the development server and ran production workload on the development server again, then configured with optimal indexes). Shrinking database is performance killer. It should never be automated. SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance The one I hate the most is AutoShrink Database. It has given me hard time in my career quite a few times. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server Automation is necessary but common sense is a must when creating automation. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
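On the auto-shrink point, the check itself is a good candidate for sensible automation; the snippet below is a small illustrative sketch (not from the original post) that lists every database with AUTO_SHRINK enabled and generates the statement to turn it off.

    -- List databases that still have auto-shrink enabled and build the fix.
    SELECT name,
           'ALTER DATABASE ' + QUOTENAME(name) + ' SET AUTO_SHRINK OFF;' AS FixStatement
    FROM sys.databases
    WHERE is_auto_shrink_on = 1;

    -- Review the generated statements before running them on your servers.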

    Read the article

  • Alert visualization recipe: Get out your blender, drop in some sp_send_dbmail, Google Charts API, add your favorite colors and sprinkle with html. Blend till it’s smooth and looks pretty enough to taste.

    - by Maria Zakourdaev
      I really like database monitoring. My email inbox have a constant flow of different types of alerts coming from our production servers with all kinds of information, sometimes more useful and sometimes less useful. Usually database alerts look really simple, it’s usually a plain text email saying “Prod1 Database data file on Server X is 80% used. You’d better grow it manually before some query triggers the AutoGrowth process”. Imagine you could have received email like the one below.  In addition to the alert description it could have also included the the database file growth chart over the past 6 months. Wouldn’t it give you much more information whether the data growth is natural or extreme? That’s truly what data visualization is for. Believe it or not, I have sent the graph below from SQL Server stored procedure without buying any additional data monitoring/visualization tool.   Would you like to visualize your database alerts like I do? Then like myself, you’d love the Google Charts. All you need to know is a little HTML and have a mail profile configured on your SQL Server instance regardless of the SQL Server version. First of all, I hope you know that the sp_send_dbmail procedure has a great parameter @body_format = ‘HTML’, which allows us to send rich and colorful messages instead of boring black and white ones. All that we need is to dynamically create HTML code. This is how, for instance, you can create a table and populate it with some data: DECLARE @html varchar(max) SET @html = '<html>' + '<H3><font id="Text" style='color: Green;'>Top Databases: </H3>' + '<table border="1" bordercolor="#3300FF" style='background-color:#DDF8CC' width='70%' cellpadding='3' cellspacing='3'>' + '<tr><font color="Green"><th>Database Name</th><th>Size</th><th>Physical Name</th></tr>' + CAST( (SELECT TOP 10                             td = name,'',                             td = size * 8/1024 ,'',                             td = physical_name              FROM sys.master_files               ORDER BY size DESC             FOR XML PATH ('tr'),TYPE ) AS VARCHAR(MAX)) + '</table>' EXEC msdb.dbo.sp_send_dbmail @recipients = '[email protected]', @subject ='Top databases', @body = @html, @body_format = 'HTML' This is the result:   If you want to add more visualization effects, you can use Google Charts Tools https://google-developers.appspot.com/chart/interactive/docs/index which is a free and rich library of data visualization charts, they’re also easy to populate and embed. There are two versions of the Google Charts Image based charts: https://google-developers.appspot.com/chart/image/docs/gallery/chart_gall This is an old version, it’s officially deprecated although it will be up for a next few years or so. I really enjoy using this one because it can be viewed within the email body. For mobile devices you need to change the “Load remote images” property in your email application configuration.           Charts based on JavaScript classes: https://google-developers.appspot.com/chart/interactive/docs/gallery This API is newer, with rich and highly interactive charts, and it’s much more easier to understand and configure. The only downside of it is that they cannot be viewed within the email body. Outlook, Gmail and many other email clients, as part of their security policy, do not run any JavaScript that’s placed within the email body. However, you can still enjoy this API by sending the report as an email attachment. 
Here is an example of the old version of Google Charts API, sending the same top databases report as in the previous example but instead of a simple table, this script is using a pie chart right from  the T-SQL code DECLARE @html  varchar(8000) DECLARE @Series  varchar(800),@Labels  varchar(8000),@Legend  varchar(8000);     SET @Series = ''; SET @Labels = ''; SET @Legend = ''; SELECT TOP 5 @Series = @Series + CAST(size * 8/1024 as varchar) + ',',                         @Labels = @Labels +CAST(size * 8/1024 as varchar) + 'MB'+'|',                         @Legend = @Legend + name + '|' FROM sys.master_files ORDER BY size DESC SELECT @Series = SUBSTRING(@Series,1,LEN(@Series)-1),         @Labels = SUBSTRING(@Labels,1,LEN(@Labels)-1),         @Legend = SUBSTRING(@Legend,1,LEN(@Legend)-1) SET @html =   '<H3><font color="Green"> '+@@ServerName+' top 5 databases : </H3>'+    '<br>'+    '<img src="http://chart.apis.google.com/chart?'+    'chf=bg,s,DDF8CC&'+    'cht=p&'+    'chs=400x200&'+    'chco=3072F3|7777CC|FF9900|FF0000|4A8C26&'+    'chd=t:'+@Series+'&'+    'chl='+@Labels+'&'+    'chma=0,0,0,0&'+    'chdl='+@Legend+'&'+    'chdlp=b"'+    'alt="'+@@ServerName+' top 5 databases" />'              EXEC msdb.dbo.sp_send_dbmail @recipients = '[email protected]',                             @subject = 'Top databases',                             @body = @html,                             @body_format = 'HTML' This is what you get. Isn’t it great? Chart parameters reference: chf     Gradient fill  bg - backgroud ; s- solid cht     chart type  ( p - pie) chs        chart size width/height chco    series colors chd        chart data string        1,2,3,2 chl        pir chart labels        a|b|c|d chma    chart margins chdl    chart legend            a|b|c|d chdlp    chart legend text        b - bottom of chart   Line graph implementation is also really easy and powerful DECLARE @html varchar(max) DECLARE @Series varchar(max) DECLARE @HourList varchar(max) SET @Series = ''; SET @HourList = ''; SELECT @HourList = @HourList + SUBSTRING(CONVERT(varchar(13),last_execution_time,121), 12,2)  + '|' ,              @Series = @Series + CAST( COUNT(1) as varchar) + ',' FROM sys.dm_exec_query_stats s     CROSS APPLY sys.dm_exec_sql_text(plan_handle) t WHERE last_execution_time > = getdate()-1 GROUP BY CONVERT(varchar(13),last_execution_time,121) ORDER BY CONVERT(varchar(13),last_execution_time,121) SET @Series = SUBSTRING(@Series,1,LEN(@Series)-1) SET @html = '<img src="http://chart.apis.google.com/chart?'+ 'chco=CA3D05,87CEEB&'+ 'chd=t:'+@Series+'&'+ 'chds=1,350&'+ 'chdl= Proc executions from cache&'+ 'chf=bg,s,1F1D1D|c,lg,0,363433,1.0,2E2B2A,0.0&'+ 'chg=25.0,25.0,3,2&'+ 'chls=3|3&'+ 'chm=d,CA3D05,0,-1,12,0|d,FFFFFF,0,-1,8,0|d,87CEEB,1,-1,12,0|d,FFFFFF,1,-1,8,0&'+ 'chs=600x450&'+ 'cht=lc&'+ 'chts=FFFFFF,14&'+ 'chtt=Executions for from' +(SELECT CONVERT(varchar(16),min(last_execution_time),121)          FROM sys.dm_exec_query_stats          WHERE last_execution_time > = getdate()-1) +' till '+ +(SELECT CONVERT(varchar(16),max(last_execution_time),121)     FROM sys.dm_exec_query_stats) + '&'+ 'chxp=1,50.0|4,50.0&'+ 'chxs=0,FFFFFF,12,0|1,FFFFFF,12,0|2,FFFFFF,12,0|3,FFFFFF,12,0|4,FFFFFF,14,0&'+ 'chxt=y,y,x,x,x&'+ 'chxl=0:|1|350|1:|N|2:|'+@HourList+'3:|Hour&'+ 'chma=55,120,0,0" alt="" />' EXEC msdb.dbo.sp_send_dbmail @recipients = '[email protected]', @subject ='Daily number of executions', @body = @html, @body_format = 'HTML' Chart parameters reference: chco    series colors chd        series data chds    scale 
format chdl    chart legend chf        background fills chg        grid line chls    line style chm        line fill chs        chart size cht        chart type chts    chart style chtt    chart title chxp    axis label positions chxs    axis label styles chxt    axis tick mark styles chxl    axis labels chma    chart margins If you don’t mind to get your charts as an email attachment, you can enjoy the Java based Google Charts which are even easier to configure, and have much more advanced graphics. In the example below, the sp_send_email procedure uses the parameter @query which will be executed at the time that sp_send_dbemail is executed and the HTML result of this execution will be attached to the email. DECLARE @html varchar(max),@query varchar(max) DECLARE @SeriesDBusers  varchar(800);     SET @SeriesDBusers = ''; SELECT @SeriesDBusers = @SeriesDBusers +  ' ["'+DB_NAME(r.database_id) +'", ' +cast(count(1) as varchar)+'],' FROM sys.dm_exec_requests r GROUP BY DB_NAME(database_id) ORDER BY count(1) desc; SET @SeriesDBusers = SUBSTRING(@SeriesDBusers,1,LEN(@SeriesDBusers)-1) SET @query = ' PRINT '' <html>   <head>     <script type="text/javascript" src="https://www.google.com/jsapi"></script>     <script type="text/javascript">       google.load("visualization", "1", {packages:["corechart"]});        google.setOnLoadCallback(drawChart);       function drawChart() {                      var data = google.visualization.arrayToDataTable([                        ["Database Name", "Active users"],                        '+@SeriesDBusers+'                      ]);                        var options = {                        title: "Active users",                        pieSliceText: "value"                      };                        var chart = new google.visualization.PieChart(document.getElementById("chart_div"));                      chart.draw(data, options);       };     </script>   </head>   <body>     <table>     <tr><td>         <div id="chart_div" style='width: 800px; height: 300px;'></div>         </td></tr>     </table>   </body> </html> ''' EXEC msdb.dbo.sp_send_dbmail    @recipients = '[email protected]',    @subject ='Active users',    @body = @html,    @body_format = 'HTML',    @query = @Query,     @attach_query_result_as_file = 1,     @query_attachment_filename = 'Results.htm' After opening the email attachment in the browser you are getting this kind of report: In fact, the above is not only for database alerts. It can be used for applicative reports if you need high levels of customization that you cannot achieve using standard methods like SSRS. If you need more information on how to customize the charts, you can try the following: Image Based Charts wizard https://google-developers.appspot.com/chart/image/docs/chart_wizard  Live Image Charts Playground https://google-developers.appspot.com/chart/image/docs/chart_playground Image Based Charts Parameters List https://google-developers.appspot.com/chart/image/docs/chart_params Java Script Charts Playground https://code.google.com/apis/ajax/playground/?type=visualization Use the above examples as a starting point for your procedures and I’d be more than happy to hear of your implementations of the above techniques. Yours, Maria

    Read the article

  • Devoxx 2011 Trip Report + Pictures

    - by arungupta
    3350 attendees from 40 countries lived in "paradise" for 5 days last week. This paradise had 170+ rock star speakers delivering 200+ hours of technical content in about 150 sessions. And it truly was a paradise with a clear differentiation from other Java conferences. There were several Oracle speakers at the paradise covering the entire gamut of Java platform. I delivered a Java EE 6 hands-on lab (new content), showcased Java EE 7 and GlassFish 4.0 early work at the keynote, and participated in a panel to talk about Contexts and Dependency Injection. The demo in the keynote showed how to deploy a Java EE application in a managed environment. The demo showed a Conference Planner application that can be used by conference organizers to display sessions, tracks, and speaker information. This same application can be deployed and display data from JavaOne 2011 or Devoxx 2011 based upon the SQL chosen for database initialization. If javaone-sf-2011.sql is chosen for datbase initialization then the application looks like as shown: If devoxx-2011.sql is chosen then the application looks like as shown: And of course, clicking on Tracks, Speakers, Sessions shows you information from the respective conference. The complete source code for the application and detailed instructions are availaable at glassfish.org/javaone2011. In short: Download the sample app and unzip Download GlassFish build b05. Download platform-specific Load Balancer template Run "bin/install.sh" to configure GlassFish Pick javaone-sf-2011.sql or devoxx-2011.sql for database initialization You can also watch the application in action in this video: A breaking news shared at the conference was that Devoxx France is coming from April 18- 20 and 75% of the talks will be in French. Stay tuned for more details on that. I'm sure Antonio and gang will put up a great show out there! Just a tip for the first timers to Devoxx ... A bus leaves from Brussels airport to Antwerp city center between 4am - 11pm at the top of every hour, takes about 45 minutes, and costs 10 euros (only cash). Take a tram #6 (going towards Luchtbal) from Astrid station (next to the city center) and get off at the last station for Metropolis. It takes about 15 minutes. Purchase a day pass at the station using kiosks (much cheaper) or you can buy in the bus as well (about double the price). Either way, cash only. Here are a few pictures captured from the event: And the complete album here: Thank you Stephan for giving me an opportunity to speak at my first Devoxx. I hope to be back next year, just in time for Java EE 7 going final!

    Read the article

  • How do I resolve not fully installed package (python3-setuptools)?

    - by user3737693
    I was trying to install python3-setuptools, and when i run $ sudo apt-get install python3-setuptools I get this error: Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up python3-setuptools (0.6.34-0ubuntu1) ... Traceback (most recent call last): File "/usr/bin/py3compile", line 36, in <module> from debpython import files as dpf File "/usr/share/python3/debpython/files.py", line 25, in <module> from debpython.pydist import PUBLIC_DIR_RE File "/usr/share/python3/debpython/pydist.py", line 28, in <module> from debpython.tools import memoize File "/usr/share/python3/debpython/tools.py", line 25, in <module> from datetime import datetime ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct dpkg: error processing python3-setuptools (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: python3-setuptools E: Sub-process /usr/bin/dpkg returned an error code (1) I tried apt-get clean, apt-get autoclean, apt-get remove python3-setuptools, dpkg --remove python3-setuptools, apt-get install -f, dpkg -P --force-remove-reinstreq, dpkg -P --force-all --force-remove-reinstreq and dpkg --purge, but none of them worked. Output of sudo dpkg -P --force-all --force-remove-reinstreq python3-setuptools (Reading database ... 225309 files and directories currently installed.) Removing python3-setuptools ... Traceback (most recent call last): File "/usr/bin/py3clean", line 32, in <module> from debpython import files as dpf File "/usr/share/python3/debpython/files.py", line 25, in <module> from debpython.pydist import PUBLIC_DIR_RE File "/usr/share/python3/debpython/pydist.py", line 28, in <module> from debpython.tools import memoize File "/usr/share/python3/debpython/tools.py", line 25, in <module> from datetime import datetime ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct dpkg: error processing python3-setuptools (--purge): subprocess installed pre-removal script returned error exit status 1 Traceback (most recent call last): File "/usr/bin/py3compile", line 36, in <module> from debpython import files as dpf File "/usr/share/python3/debpython/files.py", line 25, in <module> from debpython.pydist import PUBLIC_DIR_RE File "/usr/share/python3/debpython/pydist.py", line 28, in <module> from debpython.tools import memoize File "/usr/share/python3/debpython/tools.py", line 25, in <module> from datetime import datetime ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: python3-setuptools

    Read the article

  • Configuring JPA Primary key sequence generators

    - by pachunoori.vinay.kumar(at)oracle.com
    This article describes the JPA feature of generating and assigning the unique sequence numbers to JPA entity .This article provides information on jpa sequence generator annotations and its usage. UseCase Description Adding a new Employee to the organization using Employee form should assign unique employee Id. Following description provides the detailed steps to implement the generation of unique employee numbers using JPA generators feature Steps to configure JPA Generators 1.Generate Employee Entity using "Entities from Table Wizard". View image2.Create a Database Connection and select the table "Employee" for which entity will be generated and Finish the wizards with default selections. View image 3.Select the offline database sources-Schema-create a Sequence object or you can copy to offline db from online database connection. View image 4.Open the persistence.xml in application navigator and select the Entity "Employee" in structure view and select the tab "Generators" in flat editor. 5.In the Sequence Generator section,enter name of sequence "InvSeq" and select the sequence from drop down list created in step3. View image 6.Expand the Employees in structure view and select EmployeeId and select the "Primary Key Generation" tab.7.In the Generated value section,select the "Use Generated value" check box ,select the strategy as "Sequence" and select the Generator as "InvSeq" defined step 4. View image   Following annotations gets added for the JPA generator configured in JDeveloper for an entity To use a specific named sequence object (whether it is generated by schema generation or already exists in the database) you must define a sequence generator using a @SequenceGenerator annotation. Provide a unique label as the name for the sequence generator and refer the name in the @GeneratedValue annotation along with generation strategy  For  example,see the below Employee Entity sample code configured for sequence generation. EMPLOYEE_ID is the primary key and is configured for auto generation of sequence numbers. EMPLOYEE_SEQ is the sequence object exist in database.This sequence is configured for generating the sequence numbers and assign the value as primary key to Employee_id column in Employee table. @SequenceGenerator(name="InvSeq", sequenceName = "EMPLOYEE_SEQ")   @Entity public class Employee implements Serializable {    @Id    @Column(name="EMPLOYEE_ID", nullable = false)    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator="InvSeq")   private Long employeeId; }   @SequenceGenerator @GeneratedValue @SequenceGenerator - will define the sequence generator based on a  database sequence object Usage: @SequenceGenerator(name="SequenceGenerator", sequenceName = "EMPLOYEE_SEQ") @GeneratedValue - Will define the generation strategy and refers the sequence generator  Usage:     @GeneratedValue(strategy = GenerationType.SEQUENCE, generator="name of the Sequence generator defined in @SequenceGenerator")
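Once the generator is configured as above, no identifier needs to be supplied when persisting a new row; the sequence strategy assigns it. Below is a minimal sketch of the persist call; the persistence unit name, the getter and the plain Java SE bootstrap are illustrative assumptions (in the JDeveloper/ADF scenario the EntityManager would normally be injected).

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class EmployeeClient {
        public static void main(String[] args) {
            // "EmployeePU" is an assumed persistence unit name for illustration.
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("EmployeePU");
            EntityManager em = emf.createEntityManager();

            em.getTransaction().begin();
            Employee employee = new Employee();
            // employeeId is not set here - the SEQUENCE generator assigns it.
            em.persist(employee);
            em.getTransaction().commit();

            // Assumes a standard getter on the generated entity.
            System.out.println("Generated employee id: " + employee.getEmployeeId());

            em.close();
            emf.close();
        }
    }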

    Read the article

  • Oracle VM Deep Dives

    - by rickramsey
    "With IT staff now tasked to deliver on-demand services, datacenter virtualization requirements have gone beyond simple consolidation and cost reduction. Simply provisioning and delivering an operating environment falls short. IT organizations must rapidly deliver services, such as infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). Virtualization solutions need to be application-driven and enable:" "Easier deployment and management of business critical applications" "Rapid and automated provisioning of the entire application stack inside the virtual machine" "Integrated management of the complete stack including the VM and the applications running inside the VM." Application Driven Virtualization, an Oracle white paper That was published in August of 2011. The new release of Oracle VM Server delivers significant virtual networking performance improvements, among other things. If you're not sure how virtual networks work or how to use them, these two articles by Greg King and friends might help. Looking Under the Hood at Virtual Networking by Greg King Oracle VM Server for x86 lets you create logical networks out of physical Ethernet ports, bonded ports, VLAN segments, virtual MAC addresses (VNICs), and network channels. You can then assign channels (or "roles") to each logical network so that it handles the type of traffic you want it to. Greg King explains how you go about doing this, and how Oracle VM Server for x86 implements the network infrastructure you configured. He also describes how the VM interacts with paravirtualized guest operating systems, hardware virtualized operating systems, and VLANs. Finally, he provides an example that shows you how it all looks from the VM Manager view, the logical view, and the command line view of Oracle VM Server for x86. Fundamental Concepts of VLAN Networks by Greg King and Don Smerker Oracle VM Server for x86 supports a wide range of options in network design, varying in complexity from a single network to configurations that include network bonds, VLANS, bridges, and multiple networks connecting the Oracle VM servers and guests. You can create separate networks to isolate traffic, or you can configure a single network for multiple roles. Network design depends on many factors, including the number and type of network interfaces, reliability and performance goals, the number of Oracle VM servers and guests, and the anticipated workload. The Oracle VM Manager GUI presents four different ways to create an Oracle VM network: Bonds and ports VLANs Both bond/ports and VLANS A local network This article focuses the second option, designing a complex Oracle VM network infrastructure using only VLANs, and it steps through the concepts needed to create a robust network infrastructure for your Oracle VM servers and guests. More Resources Virtual Networking for Dummies Download Oracle VM Server for x86 Find technical resources for Oracle VM Server for x86 -Rick Follow me on: Blog | Facebook | Twitter | Personal Twitter | YouTube | The Great Peruvian Novel

    Read the article

  • Haskell's cabal dependency problem with happy

    - by wirrbel
    I have problems installing ghc-mod on my Linux machine. cabal complains about "happy" not being available in version >= 1.17:

        $ cabal install ghc-mod
        Resolving dependencies...
        [1 of 1] Compiling Main ( /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/Setup.hs, /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/dist/setup/Main.o )
        Linking /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/dist/setup/setup ...
        Configuring haskell-src-exts-1.14.0...
        setup: The program happy version >=1.17 is required but it could not be found.
        Failed to install haskell-src-exts-1.14.0
        cabal: Error: some packages failed to install:
        ghc-mod-3.1.3 depends on haskell-src-exts-1.14.0 which failed to install.
        haskell-src-exts-1.14.0 failed during the configure step. The exception was:
        ExitFailure 1
        hlint-1.8.53 depends on haskell-src-exts-1.14.0 which failed to install.

    However, happy is installed, even at v1.19, as you can see here:

        $ cabal install happy
        Resolving dependencies...
        [1 of 1] Compiling Main ( /tmp/happy-1.19.0-1124/happy-1.19.0/Setup.lhs, /tmp/happy-1.19.0-1124/happy-1.19.0/dist/setup/Main.o )
        Linking /tmp/happy-1.19.0-1124/happy-1.19.0/dist/setup/setup ...
        Configuring happy-1.19.0...
        Building happy-1.19.0...
        Preprocessing executable 'happy' for happy-1.19.0...
        [ 1 of 18] Compiling NameSet ( src/NameSet.hs, dist/build/happy/happy-tmp/NameSet.o )
        [ 2 of 18] Compiling Target ( src/Target.lhs, dist/build/happy/happy-tmp/Target.o )
        [ 3 of 18] Compiling AbsSyn ( src/AbsSyn.lhs, dist/build/happy/happy-tmp/AbsSyn.o )
        [ 4 of 18] Compiling ParamRules ( src/ParamRules.hs, dist/build/happy/happy-tmp/ParamRules.o )
        [ 5 of 18] Compiling GenUtils ( src/GenUtils.lhs, dist/build/happy/happy-tmp/GenUtils.o )
        [ 6 of 18] Compiling ParseMonad ( src/ParseMonad.lhs, dist/build/happy/happy-tmp/ParseMonad.o )
        [ 7 of 18] Compiling Lexer ( src/Lexer.lhs, dist/build/happy/happy-tmp/Lexer.o )
        [ 8 of 18] Compiling Parser ( dist/build/happy/happy-tmp/Parser.hs, dist/build/happy/happy-tmp/Parser.o )
        [ 9 of 18] Compiling AttrGrammar ( src/AttrGrammar.lhs, dist/build/happy/happy-tmp/AttrGrammar.o )
        [10 of 18] Compiling AttrGrammarParser ( dist/build/happy/happy-tmp/AttrGrammarParser.hs, dist/build/happy/happy-tmp/AttrGrammarParser.o )
        [11 of 18] Compiling Grammar ( src/Grammar.lhs, dist/build/happy/happy-tmp/Grammar.o )
        [12 of 18] Compiling First ( src/First.lhs, dist/build/happy/happy-tmp/First.o )
        [13 of 18] Compiling LALR ( src/LALR.lhs, dist/build/happy/happy-tmp/LALR.o )
        [14 of 18] Compiling Paths_happy ( dist/build/autogen/Paths_happy.hs, dist/build/happy/happy-tmp/Paths_happy.o )
        [15 of 18] Compiling ProduceCode ( src/ProduceCode.lhs, dist/build/happy/happy-tmp/ProduceCode.o )
        [16 of 18] Compiling ProduceGLRCode ( src/ProduceGLRCode.lhs, dist/build/happy/happy-tmp/ProduceGLRCode.o )
        [17 of 18] Compiling Info ( src/Info.lhs, dist/build/happy/happy-tmp/Info.o )
        [18 of 18] Compiling Main ( src/Main.lhs, dist/build/happy/happy-tmp/Main.o )
        Linking dist/build/happy/happy ...
        Installing executable(s) in /home/hope/.cabal/bin
        Installed happy-1.19.0

    Any ideas? (cabal-install version 1.16.0.2, using version 1.16.0 of the Cabal library.)
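    Not part of the original question, but a hedged diagnostic sketch of a common cause of this symptom: cabal installs the happy executable into ~/.cabal/bin (as the build log above shows), and the configure step of haskell-src-exts can only find it if that directory is on the PATH. A quick check along these lines would confirm or rule that out:

        $ which happy                            # is the executable visible at all?
        $ happy --version                        # cabal needs happy >= 1.17 here
        $ echo $PATH | tr ':' '\n' | grep cabal  # is ~/.cabal/bin on the PATH?
        $ export PATH=$HOME/.cabal/bin:$PATH     # if not, add it (e.g. in ~/.bashrc)
        $ cabal install ghc-mod                  # and retry the original install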

    Read the article

  • What’s new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks, Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on the Visual Studio Team System (VSTS) ALM-related parts:

    Visual Studio Ultimate 2010
    - NEW: IntelliTrace® (aka the historical debugger)
    - NEW: Architecture Tools
      - New project type: Modeling Project
      - UML diagrams: Use Case, Class, Sequence (supports reverse engineering), Activity, and Component
      - Layer Diagram (with Team Build integration for layer validation)
      - Architecture Explorer
      - Dependency visualization (DGML)
    - Web & Load Tests

    Visual Studio Premium 2010
    - NEW: Architecture Tools - read-only model viewer
    - Development Tools
      - Code Analysis: new rules like SQL injection detection, rule sets
      - Code Profiler: multi-tier profiling, JScript profiling, profiling applications on virtual machines in sampling mode
      - Code Metrics
    - Test Tools: Code Coverage, NEW: Test Impact Analysis, NEW: Coded UI Test
    - Database Tools (DB schema versioning & deployment)

    Visual Studio Professional 2010
    - Debugger: mixed-mode debugging for 64-bit applications, export/import of breakpoints and data tips

    Visual Studio Test Professional 2010
    - Microsoft Test Manager (MTM, formerly known as "Camano")
    - Fast Forward Testing

    Visual Studio Team Foundation Server 2010
    - Work Item Tracking and Project Management
      - New MSF templates for Agile and CMMI (v5.0)
      - Hierarchical work items and custom work item link types
      - Ready-to-use Excel agile project management workbooks for managing your backlogs (including capacity planning)
      - Convert a work item query to an Excel report
      - MS Excel integration: support for work item hierarchies; formatting is preserved after doing a 'Refresh'
      - MS Project integration: hierarchy and successor/predecessor info is now synchronized
      - NEW: Test Case Management
    - Version Control: public workspaces, branch & merge visualization, tracking of changesets & work items, gated check-in
    - Team Build: build controllers and agents, Workflow 4-based build process
    - NEW: Lab Management (only a pre-release is available at the moment!)
    - Project Portal & Reporting: dashboards (on SharePoint Portal), burndown chart, TFS web parts (to show data from TFS)
    - Administration & Operations
      - Topology enhancements: application-tier network load balancing (NLB), SQL Server scale-out, improved SharePoint flexibility, Report Server flexibility, zone support, Kerberos support, separation of TFS and SQL administration
      - Setup: separate install from configure, improved installation wizards, optional components, simplified account requirements, improved Reporting Services configuration, setup consolidation, upgrading from previous TFS versions, improved IIS flexibility
      - Administration: consolidation of command-line tools, user rename support, project collections (archive/restore individual project collections, move team project collections, server consolidation, team project collection split, team project collection isolation), server request cancellation
      - Licensing: TFS server license included in MSDN subscriptions

    Removed features (former features not part of Visual Studio 2010): Debug » Start With Application Verifier, Object Test Bench, IntelliSense for C++/CLI, debugging support for SQL 2000

    Read the article

  • Deploying Socket.IO App to Windows Azure Web Site with Azure CLI

    - by shiju
    In this blog post, I will demonstrate how to deploy a Socket.IO app to a Windows Azure Website using the Windows Azure Cross-Platform Command-Line Interface (Azure CLI), leveraging the new Web Sockets support in Windows Azure Websites. Windows Azure recently announced a lot of enhancements, including support for Web Sockets in Windows Azure Websites, which lets Node.js developers deploy Socket.IO apps there. In this post I am using the Azure CLI to create and deploy the Windows Azure Website.

    Install the Windows Azure CLI
    The Windows Azure CLI is available as an NPM module, so you can install it with NPM. After installing azure-cli, just enter the command "azure", which lists the useful commands provided by the Azure CLI.

    Import the Windows Azure Subscription Account
    In order to import our Azure subscription account, we need to download the Windows Azure subscription profile. The Azure CLI command "account download" lets you download the Windows Azure subscription profile: it redirects you to the Windows Azure portal login and lets you download the publish settings file. The "account import" command then imports the downloaded publish settings file so that you can create and manage Websites, Cloud Services, Virtual Machines and Mobile Services in Windows Azure.

    Create the Windows Azure Website and Enable Web Sockets
    We are going to deploy the Socket.IO app to a Windows Azure Website using the Web Sockets support provided by Windows Azure. Let's create a Website named "socketiochatapp" using the Azure CLI. This creates a Windows Azure Website and also initializes a Git repository with a remote named azure. The newly created Website is visible in the Azure portal. By default, Web Sockets are disabled, so enable them by navigating to the Configure tab of the Website, selecting "ON" for the Web Sockets option, and saving the configuration changes.

    Deploy a Node.js Socket.IO App to Windows Azure
    Now that our Windows Azure Website supports Web Sockets, we can deploy a Socket.IO app to it. Let's add a Node.js chat app that leverages the Socket.IO module. Note that you have to list the npm module dependencies in the package.json file so that Windows Azure can install them when deploying the app. Add the Node.js app files to the git repository, commit the changes to the local repository, and push them to the Windows Azure production environment. The successful deployment can be seen in the Windows Azure portal by navigating to the Deployments tab of the selected Website, and the chat app then runs successfully.

    You can follow me on Twitter @shijucv
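    For readers who want something concrete to start from, below is a minimal sketch of a Socket.IO chat server of the kind described above. It is an illustrative assumption rather than the author's actual application: the file name server.js, the index.html page, and the "chat message" event name are made up for the example; only the use of process.env.PORT (the port Windows Azure Websites hands to the app) and the Socket.IO 0.9-style API reflect the platform described in the post.

        // server.js - minimal Socket.IO chat server sketch (hypothetical file name)
        var app = require('http').createServer(handler);
        var io = require('socket.io').listen(app);   // socket.io 0.9-style API
        var fs = require('fs');
        var port = process.env.PORT || 3000;         // Azure Websites supply the port via PORT

        function handler(req, res) {
          // Serve a single chat page; index.html is assumed to sit next to server.js.
          fs.readFile(__dirname + '/index.html', function (err, data) {
            if (err) {
              res.writeHead(500);
              return res.end('Error loading index.html');
            }
            res.writeHead(200, { 'Content-Type': 'text/html' });
            res.end(data);
          });
        }

        io.sockets.on('connection', function (socket) {
          // Relay every chat message to all connected clients.
          socket.on('chat message', function (msg) {
            io.sockets.emit('chat message', msg);
          });
        });

        app.listen(port);
        console.log('Chat server listening on port ' + port);

    With socket.io listed in package.json, deployment then follows the steps above, roughly: npm install azure-cli -g, azure site create socketiochatapp --git, then git add ., git commit -m "chat app", and git push azure master (command syntax as I recall it for the legacy azure-cli of that era, so double-check it against the current tooling).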

    Read the article

  • How can I troubleshoot flash player/hardware conflict?

    - by sparthikas
    OBJECTIVE: Have a web browser on my Ubuntu install that can play YouTube and Hulu videos. Would also like to understand the problem so that I can fix it again if I change software. Workarounds welcome; technical understanding and a solution preferable.

    SYMPTOMS:
    - Flash does not run - cannot make the right-click menu appear; an empty box sits where video should be, changing to a black box when hovering over other links. The "The Adobe Flash plugin has crashed" message does not appear with its sad lego face.
    - Cannot activate the proprietary graphics driver - doing so causes the system to reboot to a prompt.

    SOLUTIONS TRIED:
    - Replaced the OS (tried Slackware 13.37, Fedora 17, Linux Mint 13 Maya, Gentoo, Lubuntu, and even WinXP). Lubuntu confirmed to work, don't remember how much tweaking, if any, this required. Slackware, Fedora, Mint, and Gentoo all failed to run Flash just like Ubuntu.
    - Many reinstalls of Flash Player via different methods, including cleaning up old installs first; also tried Gnash and Lightspark.
    - Replaced the graphics card (swapped the HIS IceQ Radeon HD 4670 AGP for an older GeForce 5700 LE) - no change in the problem.
    - Flash does work on the WinXP installation with the Catalyst AGP hotfix driver applied; however, I consider Windows wholly unacceptable for web browsing due to lack of security. The Lubuntu install also works, but I do not want to be tied down to just using Lubuntu on this computer.

    SYSINFO: Latest versions of Ubuntu, Firefox, and Flash on a fresh Ubuntu install. Gigabyte 7S748 motherboard with an Athlon XP 2800+ and 3 GB of RAM, a Radeon HD 4670 AGP card, and a Dell SoundBlaster Live series sound card (due to malfunction of the onboard sound on the motherboard). Wired internet connection, Maxtor 6Y120L3 HDD, Sony DVD RW AW-Q170A, Dell M993s monitor.

    NOTES: I do not know if the graphics driver issue and the Flash troubles are linked; substituting the older graphics card and getting the same Flash troubles seems to suggest they aren't. My troubleshooting method is rather reductionist, consisting mainly of "replace things until you find out what was causing the error by process of elimination", only it seems that this must be a conflict which arises when software decides how to configure itself on my hardware. That is, I know the hardware can run Flash, and I know that on other systems the same software can too, but somehow the combination fails. Consequently I feel out of my depth. I will keep trying things off and on, but I have spent probably about 30 man-hours in the last 4 months working on this problem with no joy other than the Lubuntu workaround. Any help will be appreciated; I will be checking back and posting updates. Any pertinent questions regarding me or my computer will be answered (outputs from config files can be accessed and posted - IDK which ones or what parts to post).

    Read the article
