Search Results

Search found 31206 results on 1249 pages for 'version detection'.


  • submitting form values xml flash php

    - by Menew
    I want to dynamically submit form values into this XML file:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <globals soundFXvolume="5" fadeInTime="0.25" glowKnockout="false" glowInner="false" glowQuality="1" glowStrength="1" glowBlurRadius="2" glowAlpha="1"/>

    However, I was trying to change the original XML file to this:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <globals>
      <soundFXvolume>5</soundFXvolume>
      <fadeInTime>0.25</fadeInTime>
      <glowKnockout>false</glowKnockout>
      <glowInner>false</glowInner>
      <glowQuality>1</glowQuality>
      <glowStrength>1</glowStrength>
      <glowBlurRadius>2</glowBlurRadius>
      <glowAlpha>1</glowAlpha>
    </globals>

    Then from there, I will use PHP to submit the form values. Is there a way to submit my form values to the original XML below?

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <globals soundFXvolume="want this to be form values" fadeInTime="0.25" glowKnockout="false" glowInner="false" glowQuality="1" glowStrength="1" glowBlurRadius="want this to be form values" glowAlpha="1"/>

    Thanks
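
    One way to do this (a minimal sketch, not from the original thread; it assumes the file is saved as globals.xml and that the form posts fields with matching names) is to keep the attribute-based XML as-is and update the attributes in place with PHP's SimpleXML, which supports attribute assignment through array access on an existing element:

        <?php
        // Load the original attribute-based XML (file name assumed).
        $xml = simplexml_load_file('globals.xml');

        // Overwrite the two attributes with the posted form values.
        $xml['soundFXvolume']  = $_POST['soundFXvolume'];
        $xml['glowBlurRadius'] = $_POST['glowBlurRadius'];

        // Write the modified document back to disk.
        $xml->asXML('globals.xml');
        ?>

    This avoids restructuring the file into child elements at all, which was the original motivation for the question.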


  • RESTful API: How to model JSON representation?

    - by Jan P.
    I am designing a RESTful API for a booking application. You can request a list of accommodations, and that's where I don't really know how to design the JSON representation. This is my XML representation:

    <?xml version="1.0" encoding="utf-8"?>
    <accommodations>
      <accommodation>
        <name>...</name>
        <category>couch</category>
      </accommodation>
      <accommodation>
        <name>...</name>
        <category>room</category>
      </accommodation>
    </accommodations>

    My first try to convert this to JSON resulted in this output (1):

    { "0": { "name": "...", "category": "couch" }, "1": { "name": "...", "category": "room" } }

    But as I looked at how other APIs did it, I found something looking more like this (2):

    [ { "name": "...", "category": "couch" }, { "name": "...", "category": "room" } ]

    I know version 1 is an object and version 2 an array. But which one is better in this case?
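
    A brief note on the trade-off (an addition, not part of the original question): the array, version 2, is the conventional JSON mapping of an XML collection and is what most clients expect for a homogeneous list. If collection-level fields might be needed later (paging, totals), a common compromise is to wrap the array in an object:

        {
          "accommodations": [
            { "name": "...", "category": "couch" },
            { "name": "...", "category": "room" }
          ]
        }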


  • JAR file: Could not find main class

    - by ApertureT3CH
    Okay, I have a strange problem. I wanted to run one of my programs as a .jar file, but when I open it by double-clicking it, I get an error message like "Could not find main class, program is shutting down". I'm pretty sure I did everything right; the jar should work afaik. I also tried other programs, and it's the same with every single one. (I'm creating the .jars through BlueJ.) There is no problem when I run them through a .bat file. And here comes the strangest thing of all: the .jars worked some time ago (one or two months, I guess), and I don't remember doing anything differently. It's the same BlueJ version. Okay, maybe Java updated and something got messed up... I googled, but I couldn't find a solution. (Some people seem to have a similar problem, and it seems to be only them who can't run their .jars; they uploaded them and other people say the .jars run fine.) What could be the problem? How can I solve it? I'd really appreciate some help here. Thank you :) ApertureT3CH

    EDIT: okay guys, you're making me unsure here. Imma check the manifest again, at this unholy time (1:34 am) :P

    EDIT2: This is my MANIFEST.MF:

        Manifest-Version: 1.0
        Class-Path:
        Main-Class: LocalChatClientGUI
        [empty line]
        [empty line]

    The Main class is correct.

    EDIT3: Thanks to hgrey: there is nothing wrong with the jar. I can run it from a bat file, which actually should not be different from double-clicking the jar, right? Yet I get the error when clicking it, and it works fine through the bat.

    EDIT4: I finally solved the problem. I re-installed the JRE and now it works, although I can't see any version differences. Thanks to everyone!
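
    A general troubleshooting note for this class of problem (not from the original thread): when "java -jar" works but double-clicking does not, the Windows file association for .jar is a common suspect, and reinstalling the JRE rewrites that association, which is consistent with the fix in EDIT4. Assuming Windows, the two launch paths can be compared from a command prompt (the jar name is illustrative):

        REM Run the jar directly; if this works, the jar itself is fine.
        java -jar LocalChatClient.jar

        REM Show what double-clicking a .jar actually invokes.
        assoc .jar
        ftype jarfile

    If ftype points at a stale or wrong javaw.exe path, double-click launches fail even though the jar and its manifest are correct.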


  • MySQL: check and compare data, and create it if necessary

    - by user2979677
    Hello, I have three different tables:

    Talk: ID, type_id, YEAR, NUM, NUM_LETTER, DATE, series_id, TALK, speaker_id, SCRIPREF, DONE, DOUBLE_SID, DOUBLE_TYP, LOCAL_CODE, MISSING, RESTRICTED, WHY_RESTRI, SR_S_1, SR_E_1, BCV_1, SR_S_2, SR_E_2, BCV_2, SR_S_3, SR_E_3, BCV_3, SR_S_4, SR_E_4, BCV_4, QTY_IV, organisation_id, recommended, topic_id, thumbnail, mp3_file_size, duration, version

    Product_component: id, product_id, talk_id, position, version

    Product: id, created, product_type_id, last_modified, last_modified_by, num_sold, current_stock, min_stock, max_stock, comment, organisation_id, series_id, name, subscription_type_id, recipient_id, discount_start, discount_amount, discount_desc, discount_finish, discount_percent, voucher_amount, audio_points, talk_id, price_override, product_desc, instant_download_status_id, downloads, instant_downloads, promote_start, promote_finish, promote_desc, restricted, from_tape, discontinued, discontinued_reason, discontinued_date, external_url, version

    I want to create a procedure that selects the id from talk and compares it with the product table to see if a corresponding product exists, and if it doesn't, creates it. My problem is that talk and product can't talk to each other directly: id in talk is related to talk_id in product_component, and id in product is related to product_id in product_component. Is there any way for this to be done? I tried this:

    CREATE DEFINER=`sthelensmedia`@`localhost` PROCEDURE `CreateSingleCDProducts`()
    BEGIN
        DECLARE t_id INT;
        DECLARE t_restricted BOOLEAN;
        DECLARE t_talk CHAR(255);
        DECLARE t_series_id INT;
        DECLARE p_id INT;
        DECLARE done INT DEFAULT 0;
        DECLARE cur1 CURSOR FOR SELECT id,restricted,talk,series_id FROM talk WHERE organisation_id=2;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1;

        UPDATE product SET restricted=TRUE WHERE product_type_id=3;

        OPEN cur1;
        create_loop: LOOP
            FETCH cur1 INTO t_id, t_restricted, t_talk, t_series_id;
            IF done=1 THEN
                LEAVE create_loop;
            END IF;
            INSERT INTO product (created,product_type_id,last_modified,organisation_id,series_id,name,restricted)
            VALUES (NOW(),3,NOW(),2,t_series_id,t_talk,t_restricted);
            SELECT LAST_INSERT_ID() INTO p_id;
            INSERT INTO product_component (product_id,talk_id,position) VALUES (p_id,t_id,0);
        END LOOP create_loop;
        CLOSE cur1;
    END

    Just wondering if anyone could help me.
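
    One way to narrow the problem (a sketch against the columns listed above, not tested against the real schema): since talk and product only meet through product_component, the talks that still need a product can be found with an anti-join, and the cursor loop can then be restricted to exactly that set instead of every talk:

        SELECT t.id, t.restricted, t.talk, t.series_id
        FROM talk t
        LEFT JOIN product_component pc ON pc.talk_id = t.id
        WHERE pc.talk_id IS NULL        -- no product has been linked to this talk yet
          AND t.organisation_id = 2;

    Using this SELECT as the cursor query makes the procedure idempotent: re-running it will not create duplicate products for talks that already have one.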


  • SQL Server 2005: which is optimal for searching a large column, LIKE '%<term>%' or CONTAINS()?

    - by Spud1
    I've got a function written by another developer which I am trying to modify for a slightly different use. It is used by an SP to check if a certain phrase exists in a text document stored in the DB, and returns 1 if the value is found or 0 if it's not. This is the query:

        SELECT @mres=1 from documents where id=@DocumentID and contains(text, @search_term)

    The document contains mostly XML, and the search_term is a GUID formatted as an nvarchar(40). This seems to run quite slowly to me (taking 5-6 seconds to execute this part of the process), but in the same script file there is also this version of the above, commented out:

        SELECT @mres=1 from documents where id=@DocumentID and text like '%' + @search_term + '%'

    This version runs MUCH quicker, taking 4ms compared to 15ms for the first example. So, my question is why use the first over the second? I assume this developer (who is no longer working with me) had a good reason, but at the moment I am struggling to find it. Is it possibly something to do with the full-text indexing? (This is a dev DB I am working with, so the production version may have better indexing...) I am not that clued up on FTI really, so I'm not quite sure at the moment. Thoughts/ideas?
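
    One thing worth checking before choosing (a sketch, assuming the table is named documents, not from the original thread): CONTAINS only works against a populated full-text index, and an index that is missing, disabled or mid-population on a dev box can produce exactly this kind of slow or inconsistent behaviour, whereas LIKE '%...%' always falls back to a plain column scan:

        -- Returns 1 if the table has an active full-text index, 0 if not.
        SELECT OBJECTPROPERTY(OBJECT_ID('documents'),
                              'TableHasActiveFulltextIndex') AS has_fulltext_index;

    If the dev database lacks the index that production has, the timing comparison between the two queries is not a fair one.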


  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature of the Windows Azure platform. It's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on a worker role. Below are some benefits of using a worker role.
    - WAWS leverages IIS and IISNode to host the Node.js application, which runs in x86 WOW mode. This reduces performance compared with x64 in some cases.
    - A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000 concurrent request limitation.
    - WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances.
    - WACS provides service configuration features which can be changed while the role is running.
    - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
    - Since with a WACS worker role we start node.exe ourselves in a process, we can control the input, output and error streams. We can also control the version of Node.js.

    Run Node.js in Worker Role

    Node.js can be started by just having its executable file. This means that in Windows Azure we can have a worker role with "node.exe" and the Node.js source files, then start it in the Run method of the worker role entry class. Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right-click on the worker role project and add an existing item. By default Node.js is installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe". Then we need to create the entry code of Node.js. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu "Add new item". We have to create a text file and then rename it to the JavaScript extension. After we add these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer". Otherwise they will not be included in the package when deployed. Let's paste a very simple piece of Node.js code into "index.js" as below. As you can see, I created a web server listening on port 12345.

    1: var http = require("http");
    2: var port = 12345;
    3:
    4: http.createServer(function (req, res) {
    5:     res.writeHead(200, { "Content-Type": "text/plain" });
    6:     res.end("Hello World\n");
    7: }).listen(port);
    8:
    9: console.log("Server running at port %d", port);

    Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method. I find the "node.exe" path and the entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to exit.
    If everything is OK, once our web server is open the process will stay there listening for incoming requests, and should not terminate. The code in the worker role would be like this.

    1: public override void Run()
    2: {
    3:     // This is a sample worker implementation. Replace with your logic.
    4:     Trace.WriteLine("NodejsHost entry point called", "Information");
    5:
    6:     // retrieve the node.exe and entry node.js source code file name.
    7:     var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
    8:     var js = "index.js";
    9:
    10:     // prepare the process starting of node.exe
    11:     var info = new ProcessStartInfo(node, js)
    12:     {
    13:         CreateNoWindow = false,
    14:         ErrorDialog = true,
    15:         WindowStyle = ProcessWindowStyle.Normal,
    16:         UseShellExecute = false,
    17:         WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
    18:     };
    19:     Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");
    20:
    21:     // start the node.exe with entry code and wait for exit
    22:     var process = Process.Start(info);
    23:     process.WaitForExit();
    24: }

    Then we can run it locally. In the compute emulator UI the worker role started and executed Node.js, and the Node.js window appeared. Open the browser to verify the website hosted by our worker role. Next let's deploy it to Azure. But we need some additional steps first. We need to create an input endpoint: by default there's no endpoint defined in a worker role. So we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use. In this case I will use 80. Even though we created a web server, we should add a TCP endpoint to the worker role, since Node.js always listens on TCP rather than HTTP. Then change "index.js" to let our web server listen on 80.

    1: var http = require("http");
    2: var port = 80;
    3:
    4: http.createServer(function (req, res) {
    5:     res.writeHead(200, { "Content-Type": "text/plain" });
    6:     res.end("Hello World\n");
    7: }).listen(port);
    8:
    9: console.log("Server running at port %d", port);

    Then publish it to Windows Azure, and in the browser we can see our Node.js website running on a WACS worker role. We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator has registered 80 and maps the 80 endpoint to 81, and Node.js cannot detect this remapping. So when it tries to listen on 80 it fails, since 80 is already in use.

    Use NPM Modules

    When we use WAWS to host Node.js, we can simply install the modules we need and then just publish or upload all files to WAWS. But if we are using a WACS worker role, we have to take some extra steps to make the modules work. Assume that we plan to use "express" in our application. First of all we should download and install this module through the NPM command. But after the install finishes, the files are just on disk, not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to Azure, hence we need to add the files to the project. In the solution explorer window click the "Show all files" button, select the "node_modules" folder and in the context menu select "Include In Project". But that's not enough. We also need to set all files in this module to "Copy always" or "Copy if newer", so that they are uploaded to Azure along with "node.exe" and "index.js". This is a painful step since there might be many files in a module.
    So I created a small tool which updates a C# project file, marking all of its items as "Copy always". The code is very simple.

    1: static void Main(string[] args)
    2: {
    3:     if (args.Length < 1)
    4:     {
    5:         Console.WriteLine("Usage: copyallalways [project file]");
    6:         return;
    7:     }
    8:
    9:     var proj = args[0];
    10:     File.Copy(proj, string.Format("{0}.bak", proj));
    11:
    12:     var xml = new XmlDocument();
    13:     xml.Load(proj);
    14:     var nsManager = new XmlNamespaceManager(xml.NameTable);
    15:     nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");
    16:
    17:     // add the output setting to copy always
    18:     var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
    19:     UpdateNodes(contentNodes, xml, nsManager);
    20:     var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
    21:     UpdateNodes(noneNodes, xml, nsManager);
    22:     xml.Save(proj);
    23:
    24:     // remove the namespace attributes
    25:     var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
    26:     xml.LoadXml(content);
    27:     xml.Save(proj);
    28: }
    29:
    30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
    31: {
    32:     foreach (XmlNode node in nodes)
    33:     {
    34:         var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
    35:         if (copyToOutputDirectoryNode == null)
    36:         {
    37:             var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
    38:             n.InnerText = "Always";
    39:             node.AppendChild(n);
    40:         }
    41:         else
    42:         {
    43:             if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
    44:             {
    45:                 copyToOutputDirectoryNode.InnerText = "Always";
    46:             }
    47:         }
    48:     }
    49: }

    Please be careful when using this tool. I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, and it will set all items to "Copy always". Then reload the worker role project. Now let's change "index.js" to use express.

    1: var express = require("express");
    2: var app = express();
    3:
    4: var port = 80;
    5:
    6: app.configure(function () {
    7: });
    8:
    9: app.get("/", function (req, res) {
    10:     res.send("Hello Node.js!");
    11: });
    12:
    13: app.get("/User/:id", function (req, res) {
    14:     var id = req.params.id;
    15:     res.json({
    16:         "id": id,
    17:         "name": "user " + id,
    18:         "company": "IGT"
    19:     });
    20: });
    21:
    22: app.listen(port);

    Finally let's publish it and have a look in the browser.

    Use Windows Azure SQL Database

    We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js on worker role hosting as well. Since we can control the version of Node.js, here we can use the x64 version of "node-sqlserver". This is better than hosting Node.js on WAWS, which only supports x86. Just install the "node-sqlserver" module from NPM and copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder. Include them in the worker role project and run my tool to set them to "Copy always". Finally update "index.js" to use WASD.
    1: var express = require("express");
    2: var sql = require("node-sqlserver");
    3:
    4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    5: var port = 80;
    6:
    7: var app = express();
    8:
    9: app.configure(function () {
    10:     app.use(express.bodyParser());
    11: });
    12:
    13: app.get("/", function (req, res) {
    14:     sql.open(connectionString, function (err, conn) {
    15:         if (err) {
    16:             console.log(err);
    17:             res.send(500, "Cannot open connection.");
    18:         }
    19:         else {
    20:             conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
    21:                 if (err) {
    22:                     console.log(err);
    23:                     res.send(500, "Cannot retrieve records.");
    24:                 }
    25:                 else {
    26:                     res.json(results);
    27:                 }
    28:             });
    29:         }
    30:     });
    31: });
    32:
    33: app.get("/text/:key/:culture", function (req, res) {
    34:     sql.open(connectionString, function (err, conn) {
    35:         if (err) {
    36:             console.log(err);
    37:             res.send(500, "Cannot open connection.");
    38:         }
    39:         else {
    40:             var key = req.params.key;
    41:             var culture = req.params.culture;
    42:             var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
    43:             conn.queryRaw(command, function (err, results) {
    44:                 if (err) {
    45:                     console.log(err);
    46:                     res.send(500, "Cannot retrieve records.");
    47:                 }
    48:                 else {
    49:                     res.json(results);
    50:                 }
    51:             });
    52:         }
    53:     });
    54: });
    55:
    56: app.get("/sproc/:key/:culture", function (req, res) {
    57:     sql.open(connectionString, function (err, conn) {
    58:         if (err) {
    59:             console.log(err);
    60:             res.send(500, "Cannot open connection.");
    61:         }
    62:         else {
    63:             var key = req.params.key;
    64:             var culture = req.params.culture;
    65:             var command = "EXEC GetItem '" + key + "', '" + culture + "'";
    66:             conn.queryRaw(command, function (err, results) {
    67:                 if (err) {
    68:                     console.log(err);
    69:                     res.send(500, "Cannot retrieve records.");
    70:                 }
    71:                 else {
    72:                     res.json(results);
    73:                 }
    74:             });
    75:         }
    76:     });
    77: });
    78:
    79: app.post("/new", function (req, res) {
    80:     var key = req.body.key;
    81:     var culture = req.body.culture;
    82:     var val = req.body.val;
    83:
    84:     sql.open(connectionString, function (err, conn) {
    85:         if (err) {
    86:             console.log(err);
    87:             res.send(500, "Cannot open connection.");
    88:         }
    89:         else {
    90:             var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
    91:             conn.queryRaw(command, function (err, results) {
    92:                 if (err) {
    93:                     console.log(err);
    94:                     res.send(500, "Cannot retrieve records.");
    95:                 }
    96:                 else {
    97:                     res.send(200, "Inserted Successful");
    98:                 }
    99:             });
    100:         }
    101:     });
    102: });
    103:
    104: app.listen(port);

    Publish to Azure, and now we can see our Node.js working with WASD through the x64 version of "node-sqlserver".

    Summary

    In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it's possible to do some preparatory work before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting. But there are some problems with the approach I mentioned here. The first is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second is that this way we cannot retrieve the cloud service configuration information.
    For example, we defined the endpoint in the worker role properties, but we also hard-coded the listening port in Node.js. This should be changed so that Node.js can retrieve the endpoint, but I can tell you it won't work with this approach. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Grandparent – Parent – Child Reports in SQL Developer

    - by thatjeffsmith
    You'll never see one of these family stickers on my car, but I promise not to judge...much. Parent – Child reports are pretty straightforward in Oracle SQL Developer. You have a 'parent' report, and then one or more 'child' reports which are based off of a value in a selected row or value from the parent. If you need a quick tutorial to get up to speed on the subject, go ahead and take 5 minutes to run through one first. Shortly before I left for vacation 2 weeks ago, I got an interesting question from one of my Twitter followers: @thatjeffsmith any luck with the #Oracle awr reports in #SQLDeveloper? This is easy with multi generation parent>child Done in #dbvisualizer — Ronald Rood (@Ik_zelf) August 26, 2012 Now that I'm back from vacation, I can tell Ronald and everyone else that the answer is 'Yes!' And here's how.

    Time to Get Out Your XML Editor

    Don't have one? That's OK, SQL Developer can edit XML files. While the reporting interface doesn't surface the ability to create multi-generational reports, the underlying code definitely supports it. We just need to hack away at the XML that powers a report. For this example I'm going to start simple: a query that brings back DEPARTMENTs, then EMPLOYEES, then JOBs. We can build the first two parts of the report using the report editor. A Parent-Child report in Oracle SQL Developer (Departments – Employees).

    Save the Report to XML

    Once you've generated the XML file, open it with your favorite XML editor. For this example I'll be using the built-in XML editor in SQL Developer. SQL Developer reports in their raw XML glory! Right after the PDF element in the XML document, we can start a new 'child' report by inserting a DISPLAY element. I just copied and pasted the existing 'display' down so I wouldn't have to worry about screwing anything up. Note I also needed to change the 'master' name so it wouldn't confuse SQL Developer when I try to import/open a report that has the same name. I also needed to update the binds tags to reflect the names from the child versus the original parent report. This is pretty easy to figure out on your own actually – I mean I'm no real developer and I got it pretty quick.
    <?xml version="1.0" encoding="UTF-8" ?>
    <displays>
      <display id="92857fce-0139-1000-8006-7f0000015340" type="" style="Table" enable="true">
        <name><![CDATA[Grandparent]]></name>
        <description><![CDATA[]]></description>
        <tooltip><![CDATA[]]></tooltip>
        <drillclass><![CDATA[null]]></drillclass>
        <CustomValues>
          <TYPE>horizontal</TYPE>
        </CustomValues>
        <query>
          <sql><![CDATA[select * from hr.departments]]></sql>
        </query>
        <pdf version="VERSION_1_7" compression="CONTENT">
          <docproperty title="" author="" subject="" keywords="" />
          <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" />
          <column>
            <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" />
            <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" />
            <blob blob="NONE" zip="false" />
          </column>
          <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" />
          <header enable="false" generatedate="false">
            <data> null </data>
          </header>
          <footer enable="false" generatedate="false">
            <data value="null" />
          </footer>
          <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA">
            <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" />
          </security>
          <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" />
        </pdf>
        <display id="null" type="" style="Table" enable="true">
          <name><![CDATA[Parent]]></name>
          <description><![CDATA[]]></description>
          <tooltip><![CDATA[]]></tooltip>
          <drillclass><![CDATA[null]]></drillclass>
          <CustomValues>
            <TYPE>horizontal</TYPE>
          </CustomValues>
          <query>
            <sql><![CDATA[select * from hr.employees where department_id = :DEPARTMENT_ID]]></sql>
            <binds>
              <bind id="DEPARTMENT_ID">
                <prompt><![CDATA[DEPARTMENT_ID]]></prompt>
                <tooltip><![CDATA[DEPARTMENT_ID]]></tooltip>
                <value><![CDATA[NULL_VALUE]]></value>
              </bind>
            </binds>
          </query>
          <pdf version="VERSION_1_7" compression="CONTENT">
            <docproperty title="" author="" subject="" keywords="" />
            <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" />
            <column>
              <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" />
              <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" />
              <blob blob="NONE" zip="false" />
            </column>
            <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" />
            <header enable="false" generatedate="false">
              <data> null </data>
            </header>
            <footer enable="false" generatedate="false">
              <data value="null" />
            </footer>
            <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA">
              <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" />
            </security>
            <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" />
          </pdf>
          <display id="null" type="" style="Table" enable="true">
            <name><![CDATA[Child]]></name>
            <description><![CDATA[]]></description>
            <tooltip><![CDATA[]]></tooltip>
            <drillclass><![CDATA[null]]></drillclass>
            <CustomValues>
              <TYPE>horizontal</TYPE>
            </CustomValues>
            <query>
              <sql><![CDATA[select * from hr.jobs where job_id = :JOB_ID]]></sql>
              <binds>
                <bind id="JOB_ID">
                  <prompt><![CDATA[JOB_ID]]></prompt>
                  <tooltip><![CDATA[JOB_ID]]></tooltip>
                  <value><![CDATA[NULL_VALUE]]></value>
                </bind>
              </binds>
            </query>
            <pdf version="VERSION_1_7" compression="CONTENT">
              <docproperty title="" author="" subject="" keywords="" />
              <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" />
              <column>
                <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" />
                <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" />
                <blob blob="NONE" zip="false" />
              </column>
              <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" />
              <header enable="false" generatedate="false">
                <data> null </data>
              </header>
              <footer enable="false" generatedate="false">
                <data value="null" />
              </footer>
              <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA">
                <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" />
              </security>
              <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" />
            </pdf>
          </display>
        </display>
      </display>
    </displays>

    Save the file and 'Open Report...'. You'll see your new report name in the tree. You just need to double-click it to open it. Here's what it looks like running: a 3 generation family.

    Now Let's Build an AWR Text Report

    Ronald wanted to have the ability to query AWR snapshots and generate the AWR reports. That requires a few inputs, including a START and STOP snapshot ID. That basically tells AWR what time period to use for generating the report. And here's where it gets tricky. We'll need to use aliases for the SNAP_ID column. Since we're using the same column name from 2 different queries, we need to use different bind variables. Fortunately for us, SQL Developer's clever enough to use the column alias as the BIND. Here's what I mean:

    Grandparent Query
    SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc

    Parent Query
    SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc

    And here's where it gets even trickier – you can't reference a bind from outside the parent query. My grandchild report can't reference a value from the grandparent report. So I just carry the selected value down to the parent. In my parent query SELECT you see the ':START1' at the end? That's making that value available to me when I use it in my grandchild query. To complicate things a bit further, I can't have a column name with a ':' in it, or SQL Developer will get confused when I try to reference the value of the variable with the ':' – and '::Name' doesn't work. But that's OK, just alias it.

    Grandchild Query
    Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1, :CARRY, :STOP1));

    Ok, and the last trick – I hard-coded my report to use my database's DB_ID and INST_ID in the AWR package call.
    Now a smart person could figure out a way to make that work on any database, but I got lazy and ran out of time. But this should be far enough for you to take it from here. Here's what my report looks like now. Caution: don't run this if you haven't licensed Enterprise Edition with Diagnostic Pack.

    The Raw XML for this AWR Report

    <?xml version="1.0" encoding="UTF-8" ?>
    <displays>
      <display id="927ba96c-0139-1000-8001-7f0000015340" type="" style="Table" enable="true">
        <name><![CDATA[AWR Start Stop Report Final]]></name>
        <description><![CDATA[]]></description>
        <tooltip><![CDATA[]]></tooltip>
        <drillclass><![CDATA[null]]></drillclass>
        <CustomValues>
          <TYPE>horizontal</TYPE>
        </CustomValues>
        <query>
          <sql><![CDATA[SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc]]></sql>
        </query>
        <display id="null" type="" style="Table" enable="true">
          <name><![CDATA[Stop SNAP_ID]]></name>
          <description><![CDATA[]]></description>
          <tooltip><![CDATA[]]></tooltip>
          <drillclass><![CDATA[null]]></drillclass>
          <CustomValues>
            <TYPE>horizontal</TYPE>
          </CustomValues>
          <query>
            <sql><![CDATA[SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc]]></sql>
          </query>
          <display id="null" type="" style="Table" enable="true">
            <name><![CDATA[AWR Report]]></name>
            <description><![CDATA[]]></description>
            <tooltip><![CDATA[]]></tooltip>
            <drillclass><![CDATA[null]]></drillclass>
            <CustomValues>
              <TYPE>horizontal</TYPE>
            </CustomValues>
            <query>
              <sql><![CDATA[Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1,:CARRY, :STOP1 ))]]></sql>
            </query>
          </display>
        </display>
      </display>
    </displays>

    Should We Build Support for Multiple Levels of Reports into the User Interface?

    Let us know! A comment here or a suggestion on our SQL Developer Exchange might help your case!


  • 7u10: JavaFX packaging tools update

    - by igor
    The last few weeks have been very busy here at Oracle. JavaOne 2012 is next week. Come and see us there! Meanwhile I'd like to quickly update you on recent developments in the area of packaging tools. This is an area of ongoing development for the team, and we are continuing to refine and improve both the tools and the process. Thanks to everyone who shared experiences and suggestions with us. We are listening, and we have fixed many of the reported issues. Please keep them coming as comments on the blog or (even better) file issues directly in JIRA. In this post I'll focus on several new packaging features added in JDK 7 update 10:
    - Self-Contained Applications: Select Java Runtime to bundle
    - Self-Contained Applications: Create Package without Java Runtime
    - Self-Contained Applications: Package non-JavaFX application
    - Option to disable proxy setup in the JavaFX launcher
    - Ability to specify codebase for WebStart application
    - Option to update existing jar file
    - Self-Contained Applications: Specify application icon
    - Self-Contained Applications: Pass parameters on the command line

    All these features and a number of other important bug fixes are available in the developer preview builds of JDK 7 update 10 (build 8 or later). Please give them a try and share your feedback!

    Self-Contained Applications: Select Java Runtime to bundle

    The packager tools in 7u6 assume the current JDK (based on the java.home property) is the source of the embedded runtime. This is a useful simplification for many scenarios, but there are cases where the ability to specify what to embed explicitly is handy. For example, the IDE may be using a fixed JDK to build the project and this is not the version you want to bundle into your application. To make this more flexible we now allow you to specify the location of the base JDK explicitly. It is optional, and if you do not specify it then the current JDK will be used (i.e. this change is fully backward compatible). A new 'basedir' attribute was added to the <fx:platform> tag. Its value is the location of the JDK to be used. It is OK to point to either the JRE inside the JDK or the JDK top-level folder. However, it must be a JDK and not a JRE, as we need other JDK tools for proper packaging, and it must be a recent version of the JDK that is bundled with JavaFX (i.e. Java 7 update 6 or later). Here are examples (<fx:platform> is part of the <fx:deploy> task):

    <fx:platform basedir="${java.home}"/>
    <fx:platform basedir="c:\tools\jdk7"/>

    Hint: this feature enables you to use the packaging tools from JDK 7 update 10 (and benefit from the bug fixes and other features described below) to create an application package with the bundled FCS version of JRE 7 update 6.

    Self-Contained Applications: Create Package without Java Runtime

    This may sound a bit puzzling at first glance. A package without an embedded Java Runtime is not really self-contained and obviously will not help with:
    - Deployment on fresh systems. The JRE needs to be installed separately (and this step will require admin permissions).
    - Possible compatibility issues due to updates of the system runtime.

    However, these packages are much, much smaller in size. If download size matters and you are confident that users have the recommended system JRE installed, then this may be a good option to consider if you want to improve the user experience for install and launch. Technically, this is implemented as an extension of the previous feature. Pass an empty string as the value of the 'basedir' attribute and this will be treated as a request to not bundle the Java runtime, e.g.
    <fx:platform basedir=""/>

    Self-Contained Applications: Package non-JavaFX application

    One of the popular questions people ask about self-contained applications is: can I package my Java application as a self-contained application? Absolutely. This is true even for the tools shipped with JDK 7 update 6. Simply follow the steps for creating a package for a Swing application with integrated JavaFX content and they will work even if your application does not use JavaFX. What's wrong with that? Well, there are a few caveats:
    - the bundle size is larger because JavaFX is bundled whilst it is not really needed
    - the main application jar needs to be packaged to comply with JavaFX packaging requirements (and this may not be trivial to achieve in your existing build scripts)
    - the JavaFX application launcher may not work well with the startup logic of your application (for example, the launcher will initialize the networking stack, and this may void custom networking settings in your application code)

    In JDK 7 update 10, <fx:deploy> was updated to accept an arbitrary executable jar as an input. The self-contained application package will be created preserving the input jar as-is, i.e. no JavaFX launcher will be embedded. This does not help with the first point above but resolves the other two. More formally, the following assertions must be true for packaging to succeed:
    - the application can be launched as "java -jar YourApp.jar" from the command line
    - the mainClass attribute of <fx:application> refers to the application main class
    - <fx:resources> lists all resources needed for the application

    To give you an example, let's assume we need to create a bundle for an application consisting of 3 jars:
        dist/javamain.jar
        dist/lib/somelib.jar
        dist/morelibs/anotherlib.jar
    where javamain.jar has a manifest with
        Main-Class: app.Main
        Class-Path: lib/somelib.jar morelibs/anotherlib.jar

    Here is sample ant code to package it:

    <target name="package-bundle">
      <taskdef resource="com/sun/javafx/tools/ant/antlib.xml"
               uri="javafx:com.sun.javafx.tools.ant"
               classpath="${javafx.tools.ant.jar}"/>
      <fx:deploy nativeBundles="all" width="100" height="100"
                 outdir="native-packages/" outfile="MyJavaApp">
        <info title="Sample project" vendor="Me"
              description="Test built from Java executable jar"/>
        <fx:application id="myapp" version="1.0" mainClass="app.Main" name="MyJavaApp"/>
        <fx:resources>
          <fx:fileset dir="dist">
            <include name="javamain.jar"/>
            <include name="lib/somelib.jar"/>
            <include name="morelibs/anotherlib.jar"/>
          </fx:fileset>
        </fx:resources>
      </fx:deploy>
    </target>

    Option to disable proxy setup in the JavaFX launcher

    Since JavaFX 2.2 (part of JDK 7u6), properly packaged JavaFX applications have proxy settings initialized according to the Java Runtime configuration settings. This is handy for most applications accessing the network, with one exception. If your application explicitly sets networking properties (e.g. socksProxyHost), then they must be set before the networking stack is initialized. Proxy detection will initialize the networking stack, and therefore your custom settings will be ignored. One way to disable proxy setup by the embedded JavaFX launcher is to pass "-Djavafx.autoproxy.disable=true" on the command line. This is good for troubleshooting (proxy detection may cause significant startup time increases if the network is misconfigured) but not really user friendly. Now proxy setup will be disabled if the manifest of the main application jar has a "JavaFX-Feature-Proxy" entry with the value "None".
    Here is a simple example of adding this entry using the <fx:jar> task:

    <fx:jar destfile="dist/sampleapp.jar">
      <fx:application refid="myapp"/>
      <fx:resources refid="myresources"/>
      <fileset dir="build/classes"/>
      <manifest>
        <attribute name="JavaFX-Feature-Proxy" value="None"/>
      </manifest>
    </fx:jar>

    Ability to specify codebase for WebStart application

    JavaFX applications do not need to specify a codebase (i.e. the absolute location where the application code will be deployed) for most real-world deployment scenarios. This is convenient, as the application does not need to be modified when it is moved from the development to the deployment environment. However, some developers want to ensure that copies of their application JNLP file will redirect to the master location. This is where codebase is needed. To avoid the need to edit the JNLP file manually, the <fx:deploy> task now accepts an optional codebase attribute. If the attribute is not specified the packager will generate the same no-codebase files as before. If a codebase value is explicitly specified then the generated JNLP files (including the JNLP content embedded into the web page) will use it. Here is an example:

    <fx:deploy width="600" height="400"
               outdir="Samples" codebase="http://localhost/codebaseTest"
               outfile="TestApp">
        ....
    </fx:deploy>

    Option to update existing jar file

    JavaFX packaging procedures are optimized for new applications that can use ant or the command line javafxpackager utility. This may lead to some redundant steps when you add them to your existing build process. One typical situation is that you might already have a build procedure that produces an executable jar file with a custom manifest. To properly package it as a JavaFX executable jar you would need to unpack it and then use javafxpackager or <fx:jar> to create the jar again (and you need to make sure you pass all the important details from your custom manifest). We added an option to pass a jar file as an input to javafxpackager and <fx:jar>. This simplifies integrating the JavaFX packaging tools into an existing build process as a postprocessing step. By the way, we are looking for ways to simplify this further. Please share your suggestions! On the technical side this works as follows: both <fx:jar> and javafxpackager will attempt to update an existing jar file if it is the only input file. The update process will add the JavaFX launcher classes and update the jar manifest with JavaFX attributes. Your custom attributes will be preserved. The update can be performed in place, or the result may be saved to a different file. The Main-Class and Class-Path elements (if present) of the manifest of the input jar file will be used for the JavaFX application unless they are explicitly overridden in the packaging command you use; e.g. the mainClass attribute of <fx:application> (or -appclass in the javafxpackager case) overrides an existing Main-Class in the jar manifest. Note that the class specified in the Main-Class attribute can either extend JavaFX Application or provide a static main() method.
    Here are examples of updating a jar file using javafxpackager:

    Create a new JavaFX executable jar as a copy of a given jar file:
    javafxpackager -createjar -srcdir dist -srcfiles fish_proto.jar -outdir dist -outfile fish.jar

    Update an existing jar file to be a JavaFX executable jar and use test.Fish as the main application class:
    javafxpackager -createjar -srcdir dist -appclass test.Fish -srcfiles fish.jar -outdir dist -outfile fish.jar

    And here is an example of using <fx:jar> to create a new JavaFX executable jar from the existing fish_proto.jar:

    <fx:jar destfile="dist/fish.jar">
      <fileset dir="dist">
        <include name="fish_proto.jar"/>
      </fileset>
    </fx:jar>

    Self-Contained Applications: Specify application icon

    The only way to specify an application icon for a self-contained application using the tools in JDK 7 update 6 is to use drop-in resources. Now this bug is resolved and you can also specify the icon using the <fx:icon> tag. Here is an example:

    <fx:deploy ...>
      <fx:info>
        <fx:icon href="default.png"/>
      </fx:info>
      ...
    </fx:deploy>

    A few things to keep in mind:
    - Only the default kind of icon is applicable to self-contained applications (as of now)
    - The icon should follow platform-specific rules for sizes and image format (e.g. .ico on Windows and .icns on Mac)

    Self-Contained Applications: Pass parameters on the command line

    JavaFX applications support two types of application parameters: named and unnamed (see the API for Application.Parameters). Static named parameters can be added to the application package using <fx:param>, and unnamed parameters can be added using <fx:argument>. They are applicable to all execution modes, including standalone applications. It is also possible to pass parameters to a JavaFX application from a web page that hosts it, using <fx:htmlParam>. Prior to JavaFX 2.2 this was only supported for embedded applications; starting from JavaFX 2.2, <fx:htmlParam> is applicable to Web Start applications also. See the JavaFX deployment guide for more details. However, there was no way to pass dynamic parameters to a self-contained application. This has been improved, and native launchers will now delegate parameters from the command line to the application code. I.e., to pass a parameter to the application you simply need to run it as "myapp.exe somevalue" and then use getParameters().getUnnamed().get(0) to get "somevalue".
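
    To make that last point concrete, here is a minimal sketch of reading the delegated parameters inside a JavaFX application (the class name and output are illustrative, not from the original post):

        import javafx.application.Application;
        import javafx.stage.Stage;

        public class ParamDemo extends Application {
            @Override
            public void start(Stage stage) {
                // With a self-contained package, running "ParamDemo.exe somevalue"
                // surfaces "somevalue" here via the native launcher.
                for (String arg : getParameters().getUnnamed()) {
                    System.out.println("unnamed param: " + arg);
                }
                // Named parameters (e.g. from <fx:param>) arrive as a map.
                for (java.util.Map.Entry<String, String> e :
                        getParameters().getNamed().entrySet()) {
                    System.out.println("named param: " + e.getKey() + "=" + e.getValue());
                }
            }

            public static void main(String[] args) {
                launch(args);
            }
        }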


  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself. This post is designed to help beginners to the Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here].

    ADF Business Components – triggers, default table values and instead-of views

    Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However, it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects. This is where unexpected problems can sneak in.

    The crime

    Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error:

    JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x]

    ...where x is the primary key value of the row at hand. In a production environment with multiple users this error may be legit: one of the other users has updated the row since you queried it. Yet in a development environment this error is just plain confusing. If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users! It must be the phantom ADF developer! [insert dramatic music here] The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page.

    A false conclusion

    What can possibly cause this issue if it isn't our phantom ADF developer? Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user? How can our phantom ADF developer even take out a lock if this is the case? Maybe ADF has a bug; maybe ADF isn't implementing record locking at all? Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record? Why do we see JBO-25014? Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy. It is determined by the Application Module's jbo.locking.mode property. The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic; the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database. We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console. At runtime this option turns on instrumentation within the ADF BC layer which, among a lot of extra detail displayed in the log window, will show the actual SQL statements issued to the database, including the LOCK statement we're looking to confirm.
    Setting our locking mode to pessimistic and opening the Business Components Browser or a JSF page that allows us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see, among others, the following log entries:

    [421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
    [422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
    [423] Where binding param 1: 1206

    As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database. Later, when we commit the record, we see:

    [441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
    [442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
    [443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
    [444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
    [445] Update binding param 1: N
    [446] Where binding param 2: 1206
    [447] BookingsView1 notify COMMIT ...
    [448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
    [449] EntityCache close prepared statement

    ...and as a result the changes are saved to the database, and the lock is released. Let's see what happens when we use the optimistic locking mode, this time changing the same BOOKINGS record's CHARGEABLE column again. As soon as we edit the record we see little activity in the logs: nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However, when we save our record by issuing a commit, the following is recorded in the logs:

    [509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
    [510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
    [511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
    [512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
    [513] Where binding param 1: 1205
    [514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
    [515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
    [516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
    [517] Update binding param 1: Y
    [518] Where binding param 2: 1205
    [519] BookingsView1 notify COMMIT ...
    [520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
    [521] EntityCache close prepared statement

    Again, even though we see the mid-tier delay the LOCK statement until commit time, it does in fact occur on line 512 and is released as part of the commit issued on line 519. Therefore, with either optimistic or pessimistic locking, a lock is indeed issued. Our conclusion at this point must be that, unless there's the unlikely cause that the LOCK statement never really hits the database, or the even less likely cause that the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it. So there's no way our phantom ADF developer could even modify the record if he tried, without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.

    Who is the phantom?

    At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't yet understand what's causing it.
    This leads on to two further questions: how does ADF know another user has changed the row, and what's been changed anyway? To answer the first question, the Fusion Guide's section 4.10.11, "How to Protect Against Losing Simultaneously Updated Data", which details the Entity Object Change-Indicator property, gives us the answer:

    At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.

    The guide further suggests how to make this check more efficient:

    You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values.

    We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates. Rather, it keeps a copy of the original record fetched, separate from the user-changed version of the record, and it compares the original record against the one in the database when the lock is taken out. If the values don't match, be it with the default compare-all-columns behaviour or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question. Now that we know the mechanism by which ADF identifies a changed row, we still don't know what changed and who changed it.

    The real culprit

    What's changed? We know the record in the mid-tier has been changed by the user, but ADF doesn't use the changed record in the mid-tier to compare to the database record; it uses a copy of the original record before it was changed. This leaves us to conclude the database record has changed, but how, and by whom? There are three potential causes:

    Database triggers

    The database trigger, among other uses, can be configured to fire PL/SQL code on a database table insert, update or delete. In particular, on an insert or update the trigger can override the value assigned to a particular column. The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. The reason this causes our ADF-specific issue is that when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database. However, the cached record is instantly out of date, as the database trigger has modified the record that was actually written to the database. Thus, when we update the record we just inserted or updated a second time, ADF compares its original copy of the record to that in the database, detects that the record has been changed, and gives us JBO-25014. This is probably the most common cause of this problem (a sketch of such a trigger appears at the end of this article).

    Default values

    A second reason this issue can occur is another database feature: default column values.
    When creating a database table, the schema designer can define default values for specific columns. For example, a CREATED_BY column could be set to SYSDATE, or a flag column to Y or N. Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL; the database in this case will overwrite the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.

    Instead-of-trigger views

    I must admit I haven't double-checked this scenario, but it seems plausible: the Oracle database's instead-of-trigger view (sometimes referred to as an instead-of view). A view in the database is based on a query and, depending on the query's complexity, may support insert, update and delete functionality to a limited degree. In order to support fully insertable, updatable and deletable views, Oracle introduced the instead-of view, which gives the view designer the ability to define not only the view query but also a set of programmatic PL/SQL triggers in which the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application. On inserting or updating a record in the instead-of view, the data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data. Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database. The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion. Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if two users modify the same record. At this point, under project timeline pressure, the obvious fix for developers is to drop both the database triggers and the default values from the underlying tables. However, we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems. Dropping the database triggers or default values that existing Oracle Forms applications assume and require to be in place could cause unexpected behaviour and bugs in the Forms application. Proficient software engineers would recognize that such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise; not ideal.

    Solving the mystery once and for all

    Luckily, ADF has built-in functionality to deal with this issue, which is no surprise, as Oracle, the author of ADF, also built the database and is fully aware of the Oracle database's feature set. At the Entity Object attribute level there are the Refresh After Insert and Refresh After Update properties.
Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record. Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and the JBO-25014 "Another user has changed the row" error is no longer an issue. [Post edit - as per the comment from Oracle's Steven Davelaar below, he correctly points out that the above solution will not work for instead-of trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view] Alternatively you can set the Change Indicator on one of the attributes. This will work as long as the related column for the attribute in the database itself isn't inadvertently updated. In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off, the original issue will return.

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case and of serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
    Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5-element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object. 
    % cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32-bit or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: _ET_DYN (shared object), _ET_EXEC (executable object), and _ET_REL (relocatable object). STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object. 
    When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BINDING Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are: BITS (the section is not of type SHT_NOBITS) and NOBITS (the section is of type SHT_NOBITS). SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32-bit and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • Detecting HTML5/CSS3 Features using Modernizr

    - by dwahlin
    HTML5, CSS3, and related technologies such as canvas and web sockets bring a lot of useful new features to the table that can take Web applications to the next level. These new technologies allow applications to be built using only HTML, CSS, and JavaScript, allowing them to be viewed on a variety of form factors including tablets and phones. Although HTML5 features offer a lot of promise, it’s not realistic to develop applications using the latest technologies without worrying about supporting older browsers in the process. If history has taught us anything it’s that old browsers stick around for years and years, which means developers have to deal with backward compatibility issues. This is especially true when deploying applications to the Internet that target the general public. This begs the question, “How do you move forward with HTML5 and CSS3 technologies while gracefully handling unsupported features in older browsers?” Although you can write code by hand to detect different HTML5 and CSS3 features, it’s not always straightforward. For example, to check for canvas support you need to write code similar to the following:   <script> window.onload = function () { if (canvasSupported()) { alert('canvas supported'); } }; function canvasSupported() { var canvas = document.createElement('canvas'); return (canvas.getContext && canvas.getContext('2d')); } </script> If you want to check for local storage support the following check can be made. It’s more involved than it should be due to a bug in older versions of Firefox. <script> window.onload = function () { if (localStorageSupported()) { alert('local storage supported'); } }; function localStorageSupported() { try { return ('localStorage' in window && window['localStorage'] != null); } catch(e) {} return false; } </script> Looking through the previous examples you can see that there’s more than meets the eye when it comes to checking browsers for HTML5 and CSS3 features. It takes a lot of work to test every possible scenario and every version of a given browser. Fortunately, you don’t have to resort to writing custom code to test what HTML5/CSS3 features a given browser supports. By using a script library called Modernizr you can add checks for different HTML5/CSS3 features into your pages with a minimal amount of code on your part. Let’s take a look at some of the key features Modernizr offers.   Getting Started with Modernizr The first time I heard the name “Modernizr” I thought it “modernized” older browsers by adding missing functionality. In reality, Modernizr doesn’t actually handle adding missing features or “modernizing” older browsers. The Modernizr website states, “The name Modernizr actually stems from the goal of modernizing our development practices (and ourselves)”. Because it relies on feature detection rather than browser sniffing (a common technique used in the past – that never worked that great), Modernizr definitely provides a more modern way to test features that a browser supports and can even handle loading additional scripts called shims or polyfills that fill in holes that older browsers may have. It’s a great tool to have in your arsenal if you’re a web developer. Modernizr is available at http://modernizr.com. Two different types of scripts are available including a development script and custom production script. To generate a production script, the site provides a custom script generation tool rather than providing a single script that has everything under the sun for HTML5/CSS3 feature detection. 
    Using the script generation tool you can pick the specific test functionality that you need and ignore everything that you don’t need. That way the script is kept as small as possible. An example of the custom script download screen is shown next. Notice that specific CSS3, HTML5, and related feature tests can be selected. Once you’ve downloaded your custom script you can add it into your web page using the standard <script> element and you’re ready to start using Modernizr. <script src="Scripts/Modernizr.js" type="text/javascript"></script>   Modernizr and the HTML Element Once you’ve added a script reference to Modernizr in a page it’ll go to work for you immediately. In fact, by adding the script several different CSS classes will be added to the page’s <html> element at runtime. These classes define what features the browser supports and what features it doesn’t support. Features that aren’t supported get a class name of “no-FeatureName”, for example “no-flexbox”. Features that are supported get a CSS class name based on the feature such as “canvas” or “websockets”. An example of classes added when running a page in Chrome is shown next:   <html class=" js flexbox canvas canvastext webgl no-touch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths"> Here’s an example of what the <html> element looks like at runtime with Internet Explorer 9:   <html class=" js no-flexbox canvas canvastext no-webgl no-touch geolocation postmessage no-websqldatabase no-indexeddb hashchange no-history draganddrop no-websockets rgba hsla multiplebgs backgroundsize no-borderimage borderradius boxshadow no-textshadow opacity no-cssanimations no-csscolumns no-cssgradients no-cssreflections csstransforms no-csstransforms3d no-csstransitions fontface generatedcontent video audio localstorage sessionstorage no-webworkers no-applicationcache svg inlinesvg smil svgclippaths">   When using Modernizr it’s a common practice to define an <html> element in your page with a no-js class added as shown next:   <html class="no-js">   You’ll see starter projects such as HTML5 Boilerplate (http://html5boilerplate.com) or Initializr (http://initializr.com) follow this approach (see my previous post for more information on HTML5 Boilerplate). By adding the no-js class it’s easy to tell if a browser has JavaScript enabled or not. If JavaScript is disabled then no-js will stay on the <html> element. If JavaScript is enabled, no-js will be removed by Modernizr and a js class will be added along with other classes that define supported/unsupported features. Working with HTML5 and CSS3 Features You can use the CSS classes added to the <html> element directly in your CSS files to determine what style properties to use based upon the features supported by a given browser. 
    For example, the following CSS can be used to render a box shadow for browsers that support that feature and a simple border for browsers that don’t support the feature: .boxshadow #MyContainer { border: none; -webkit-box-shadow: #666 1px 1px 1px; -moz-box-shadow: #666 1px 1px 1px; } .no-boxshadow #MyContainer { border: 2px solid black; }   If a browser supports box-shadows the boxshadow CSS class will be added to the <html> element by Modernizr. It can then be associated with a given element. This example associates the boxshadow class with a div with an id of MyContainer. If the browser doesn’t support box shadows then the no-boxshadow class will be added to the <html> element and it can be used to render a standard border around the div. This provides a great way to leverage new CSS3 features in supported browsers while providing a graceful fallback for older browsers. In addition to using the CSS classes that Modernizr provides on the <html> element, you can also use the global Modernizr object that’s created. This object exposes different properties that can be used to detect the availability of specific HTML5 or CSS3 features. For example, the following code can be used to detect canvas and local storage support. You can see that the code is much simpler than the code shown at the beginning of this post. It also has the added benefit of being tested by a large community of web developers around the world running a variety of browsers.   $(document).ready(function () { if (Modernizr.canvas) { //Add canvas code } if (Modernizr.localstorage) { //Add local storage code } }); The global Modernizr object can also be used to test for the presence of CSS3 features. The following code shows how to test support for border-radius and CSS transforms:   $(document).ready(function () { if (Modernizr.borderradius) { $('#MyDiv').addClass('borderRadiusStyle'); } if (Modernizr.csstransforms) { $('#MyDiv').addClass('transformsStyle'); } });   Several other CSS3 feature tests can be performed, such as support for opacity, rgba, text-shadow, CSS animations, CSS transitions, multiple backgrounds, and more. A complete list of supported HTML5 and CSS3 tests that Modernizr supports can be found at http://www.modernizr.com/docs.   Loading Scripts using Modernizr In cases where a browser doesn’t support a specific feature you can either provide a graceful fallback or load a shim/polyfill script to fill in missing functionality where appropriate (more information about shims/polyfills can be found at https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills). Modernizr has a built-in script loader that can be used to test for a feature and then load a script if the feature isn’t available. The script loader is built-into Modernizr and is also available as a standalone yepnope script (http://yepnopejs.com). It’s extremely easy to get started using the script loader and it can really simplify the process of loading scripts based on the availability of a particular browser feature. To load scripts dynamically you can use Modernizr’s load() function which accepts properties defining the feature to test (test property), the script to load if the test succeeds (yep property), the script to load if the test fails (nope property), and a script to load regardless of whether the test succeeds or fails (both property). 
    An example of using load() with these properties is shown next: Modernizr.load({ test: Modernizr.canvas, yep: 'html5CanvasAvailable.js', nope: 'excanvas.js', both: 'myCustomScript.js' }); In this example Modernizr is used to not only load scripts but also to test for the presence of the canvas feature. If the target browser supports the HTML5 canvas then the html5CanvasAvailable.js script will be loaded along with the myCustomScript.js script (use of the yep property in this example is a bit contrived – it was added simply to demonstrate how the property can be used in the load() function). Otherwise, a polyfill script named excanvas.js will be loaded to add missing canvas functionality for Internet Explorer versions prior to 9. Once excanvas.js is loaded the myCustomScript.js script will be loaded. Because Modernizr handles loading scripts, you can also use it in creative ways. For example, you can use it to load local scripts when a 3rd party Content Delivery Network (CDN) such as one provided by Google or Microsoft is unavailable for whatever reason. The Modernizr documentation provides the following example that demonstrates the process for providing a local fallback for jQuery when a CDN is down:   Modernizr.load([ { load: '//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.js', complete: function () { if (!window.jQuery) { Modernizr.load('js/libs/jquery-1.6.4.min.js'); } } }, { // This will wait for the fallback to load and // execute if it needs to. load: 'needs-jQuery.js' } ]); This code attempts to load jQuery from the Google CDN first. Once the script is downloaded (or if it fails) the function associated with complete will be called. The function checks to make sure that the jQuery object is available and if it’s not, Modernizr is used to load a local jQuery script. After all of that occurs a script named needs-jQuery.js will be loaded. Conclusion If you’re building applications that use some of the latest and greatest features available in HTML5 and CSS3 then Modernizr is an essential tool. By using it you can reduce the amount of custom code required to test for browser features and provide graceful fallbacks or even load shim/polyfill scripts for older browsers to help fill in missing functionality. 
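    As a closing example, here is a minimal sketch that ties the two techniques above together: a Modernizr property check driving the built-in script loader. The file names localStoragePolyfill.js and app.js are hypothetical placeholders for your own scripts, not files that ship with Modernizr:

    <script src="Scripts/Modernizr.js" type="text/javascript"></script>
    <script>
    // Minimal sketch: feature-test local storage, polyfill it when missing,
    // then run application code either way. The two file names below are
    // hypothetical placeholders, not part of Modernizr itself.
    Modernizr.load({
        test: Modernizr.localstorage,
        // Loaded only when the feature test fails
        nope: 'Scripts/localStoragePolyfill.js',
        // Loaded in both cases, after any polyfill above
        both: 'Scripts/app.js',
        complete: function () {
            // Safe to touch localStorage here, natively or via the polyfill
            window.localStorage.setItem('lastVisit', new Date().toString());
        }
    });
    </script>

    As with the jQuery CDN fallback shown earlier, the complete callback fires once the listed scripts have finished loading, which makes it a convenient place to kick off code that depends on the tested feature.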

    Read the article

  • CodePlex Daily Summary for Thursday, June 09, 2011

    CodePlex Daily Summary for Thursday, June 09, 2011Popular ReleasesNetOffice - The easiest way to use Office in .NET: NetOffice Release 0.9: Changes: - fix examples (include issue 16026) - add new examples - 32Bit/64Bit Walkthrough is now available in technical Documentation. Includes: - Runtime Binaries and Source Code for .NET Framework:......v2.0, v3.0, v3.5, v4.0 - Tutorials in C# and VB.Net:..............................................................COM Proxy Management, Events, etc. - Examples in C# and VB.Net:............................................................Excel, Word, Outlook, PowerPoint, Access - COMAddi...Reusable Library: V1.1.3: A collection of reusable abstractions for enterprise application developerDevpad: 0.4: Whats new for Devpad 0.4: New JavaScript support New Options dialog Minor Bug Fix's, improvements and speed upsClosedXML - The easy way to OpenXML: ClosedXML 0.54.0: New on this release: 1) Mayor performance improvements. 2) AdjustToContents now take into account the text rotation. 3) Fixed issues 6782, 6784, 6788HTML-IDEx: HTML-IDEx .15 ALPHA: This release fixes line counting a little bit and adds the masshighlight() sub, which highlights pasted and inserted code.AutoLoL: AutoLoL v2.0.3: - Improved summoner spells are now displayed - Fixed some of the startup errors people got - Double clicking an item selects it - Some usability changes that make using AutoLoL just a little easier - Bug fixes AutoLoL v2 is not an update, but an entirely new version! Please install to a different directory than AutoLoL v1VidCoder: 0.9.2: Updated to HandBrake 4024svn. This fixes problems with mpeg2 sources: corrupted previews, incorrect progress indicators and encodes that incorrectly report as failed. Fixed a problem that prevented target sizes above 2048 MB.SharePoint Search XSL Samples: SharePoint 2010 Samples: I have updated some of the samples from the 2007 release. These all work in SharePoint 2010. I removed the Pivot on File Extension because SharePoint 2010 search has refiners that perform the same function.Fingertip detection via OpenNI: Fingertip Detection 1.0.0.0: This release will allow you to recognize fingertips. To do that shake you hand, after pressing button. Known Issues : 1. When your hand is near to some objects, recognition is not working very well 2. When you hand is far away from sensor, recognition is not working very wellAcDown - Anime&Comic Downloader: AcDown v3.0 Beta5: [The original Chinese release notes were reduced to "?" characters by encoding damage; the recoverable details are that this anime/comic downloader supports the Acfun site, runs on 32-bit and 64-bit Windows XP/Vista/7, and on Windows XP requires the .NET Framework 2.0 (x86 or x64).]
    VFPX: GoFish 4 Beta 1: Current beta is Build 144 (released 2011-06-07 ) See the GoFish4 info page for details and video link: http://vfpx.codeplex.com/wikipage?title=GoFishSharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.0.3: Fixed User Management screen when "RequiresQuestionAndAnswer" set to true Reply to Email Address can now be customized User Management page now only displays users that reside in the membership database Web parts have been changed to inherit from System.Web.UI.WebControls.WebParts.WebPart, so that they will display on anonymous application pages For installation and configuration steps see here.SizeOnDisk: 1.0.8.2: With installerTerrariViewer: TerrariViewer v2.5: Added new items associated with Terraria v1.0.3 to the character editor. Fixed multiple bugs with Piggy Bank EditorySterling NoSQL OODB for .NET 4.0, Silverlight 4 and 5, and Windows Phone 7: Sterling OODB v1.5: Welcome to the Sterling 1.5 RTM. This version is backwards compatible without modification to the 1.4 beta. For the 1.0, you will need to upgrade your database. Please see this discussion for details. You must modify your 1.0 code for persistence. The 1.5 version defaults to an in-memory driver. To save to isolated storage or use one of the new mechanisms, see the available drivers and pass an instance of the appropriate one to your database (different databases may use different drivers). ...EnhSim: EnhSim 2.4.6 BETA: 2.4.6 BETAThis release supports WoW patch 4.1 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the proper...Grammar and Spell Checking Plugin for Windows Live Writer: Grammar Checker Plugin v1.0: First version of the grammar checker plugin for Windows Live Writer. You can show your appreciation for this plugin and support further development by donating via PayPal. Any amount will be appreciated. Thank you. Donatepatterns & practices: Project Silk: Project Silk Community Drop 10 - June 3, 2011: Changes from previous drop: Many code changes: please see the readme.mht for details. New "Application Notifications" chapter. Updated "Server-Side Implementation" chapter. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To install and run the reference implementation, you must perform the fol...Claims Based Identity & Access Control Guide: Release Candidate: Highlights of this release This is the release candidate drop of the new "Claims Identity Guide" edition. In this release you will find: All code samples, including all ACS v2: ACS as a Federation Provider - Showing authentication with LiveID, Google, etc. ACS as a FP with Multiple Business Partners. ACS and REST endpoints. Using a WP7 client with REST endpoints. All ACS specific chapters. Two new chapters on SharePoint (SSO and Federation) All revised v1 chapters We are now ...Terraria Map Generator: TerrariaMapTool 1.0.0.4 Beta: 1) Fixed the generated map.html file so that the file:/// is included in the base path. 
2) Added the ability to use parallelization during generation. This will cause the program to use as many threads as there are physical cores. 3) Fixed some background overdraw.New ProjectsAlumni Association Website Centre: We named our project 'MCIS - Ma Chung Information System Association Centre'. This project is only simulation for us to try building a complex website that have functional features.ASP.NET without ClientID's: Render ASP.NET Controls without ClientID - reduce HTML output responseBLPImage: .BLP files are texture files used in games made by Blizzard Entertainment, also used in other games like Neverwinter Nights.Breaker Dev IRC Client: Breaker Dev IRC is just another IRC client that's Open source.Cloud Upload: CloudUpload shows how to use Shared Access Signatures to upload, download, delete and list blobs in a Azure Storage container. Using shared acces signatures permits to partition a storage account and NOT to use the storage shared key in the application. The project uses REST API.Clounter, count how many lines you have coded: A tiny, small, lightweight, ultra simple app to count how many lines of code you have programmed so far. Or hoe many lines of anything there's on a folder. You can filter by directory and by file extension.Color picker control for Silverlight: Color picker is a custom Silverlight control that makes it easy to add color picking functionality to the Silverlight projects.CommonSample: a code sampleCompactMapper: A simple object/relational mapper for the Compact Framework. Based on attributes and supporting SQLite Tracker https://www.pivotaltracker.com/projects/303523dbkk101 hello codeplex: Trying out ClickOnce deployment/updatesDesktop Browser: Web based file explorer, runs locally on your browser, designed for media desktops.EFMagic: EFMagic is a library based on Entity Framework 4.1 that manage the schema of your database, automatically generates sql script to update your database and generate with Code First your schema.EWF_Install: Installs EWF (Enhanced Write Filter) in Windows XP without hassle. Changes system settings, accordingly to EWF characteristics. Extreme Download: Projeto desenvolvido no tempo livre para auxiliar a gerenciar download de vários arquivos.Flac2Wav: Converts FLAC file into WAV. Example of using libFLAC.dll in C#.Flac2Wma: Converts FLAC files into lossless WMA files. Uses libFLAC.dll, taglib-sharp.dll, MediaInfo32.dll. Converts directory tree, with only one click. Retains information from Tags.GeoMedia PostGIS data server: GeoMedia PostGIS data server is a GDO (Geographic Database Object) component that enables read and write to a Postgre/PostGIS database from Intergraph GeoMedia product family.Google Page Speed for .Net: This little piece of software sends a request to the Google page speed API, then parses the JSON response and presents them in a nice class. I use this class on this webpage to show the page speed score at the bottom right corner of my site: http://schaffhauser.meHubLog: Tool for application administrator: collects and joins logs from multiple instances of an application, and displays them. Example of usage in C#: the telnet protocol, dynamically compiled functions as elements of configuration.hydrodesktop2: HydroDesktop is a free and open source desktop application developed in C# .NET that serves as a client for CUAHSI HIS WaterOneFlow web services data and includes data discovery, download, visualization, editing, and integration with other analysis and modeling tools. 
ImageResizing.Net: Makes resizing and cropping images easy and fast - just change the URL. Mature and popular HttpModule for ASP.NET and IIS. Works great with jQuery, jCrop, Galleria, and and can be easily integrated into any CMS.License Header Manager for Visual Studio: Automatically insert license headers into your source code files in Visual Studio.Ma Chung Voice website: Ma Chung University website project. created by Yogi Rinaldi & Irfan using ASP.NET Ma Chung University, Indonesia.MCFC Futsal Machung Malang Indonesia: Websites We discuss the sport of futsal. Here we make in such a way that can be accessed easily by the user. please try.Mp3Cleaner: Mp3Cleaner is used to "clean up" and organize the music files external and internal (tags) information. Despite the name, supports most types of music files (flac, wma, wav .... ogg, …), it uses the tagLib library.MSBuild CI Tasks: Provides MSBuild utility tasks for continuous integration. NetMemory: let us remember ancestor better!nhibernate of mvcmusicstore: convert by http://mvcmusicstore.codeplex.com/ ·Shows data access via nhibernate & fluent tool: ·http://nmg.codeplex.com/ demo:http://www.pcme.info Parallelminds Asp.net Web Parts Demo Code: This is Asp.net web part demo code by www.parallelminds.biz to help new developers, community understand what are Asp.net web parts and how to use it for various purposes.This will continue exploring more asp.net related web parts functionality.PivotalTrackerAgent: Pivotal Tracker Agent provides ways to update your projects on PivotalTracker.com. Your developers will no longer have to update it through its web-based GUI if you incorporate it with your event handler of source control and/or bug tracking system.Programa voluntario: Sistema de gerenciamento de instituições e voluntáriosRaytracer for fun: Simple raytracingengine written for fun.SCOPE PHOTOGRAPHY MA CHUNG INDONESIA: This website contains a collection of people who hobby photography. We can shared photo and shared more info about photography. And we can give a comment in other member photo galery. Look and Join us. sqlscriptmover: Exports stored procedures, function, views, tables and triggers to individual files or imports same and attempts to create in designated database.UKM MA CHUNG FUTSAL: This Website Created to all of Ma Chung Student for develop his talent of futsal especially with user friendly interfaceVideoStreamer Iphone/Ipad: Video streamer that support single Range. it will handle video streaming especially MP4, for iphone and ipadVolleyExtracurriculer: We named our project 'UMC Information System Volley Extracurricular’. It doesn't mean this project is really used to be official website for volley extracurricular in our campus, but it only simulation for us to tried to built one complex website.Web Gozar Manager: HI

    Read the article


  • Ubuntu missing from the Grub menu

    - by varevarao
    Recently I've had some audio issues with Ubuntu (using precise), and in the process of trying to resolve that I ran a dist-upgrade. Everything went just fine, and the sound seemed good, until I rebooted my machine for the first time since the dist-upgrade. All I see now in the Grub menu at startup is memtest86+, another memtest variant, and Windows 7. It's not showing any of the linux kernels that Ubuntu is running on. I am attaching my bootinfoscript: Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info November 20th 2012] ============================= Boot Info Summary: =============================== => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos6)/boot/grub on this drive. sda1: __________________________________________________________________________ File system: vfat Boot sector type: Dell Utility: FAT16 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda2: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: sda4: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: sda5: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: According to the info in the boot sector, sda5 starts at sector 2048. Operating System: Boot files: sda6: __________________________________________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99-2.00) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda6 and looks at sector 220046240 of the same hard drive for core.img. core.img is at this location and looks for (,msdos6)/boot/grub on this drive. 
Operating System: Ubuntu 12.04.1 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sda7: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 63 273,104 273,042 de Dell Utility /dev/sda2 * 274,432 19,406,847 19,132,416 7 NTFS / exFAT / HPFS /dev/sda3 19,406,848 218,274,364 198,867,517 7 NTFS / exFAT / HPFS /dev/sda4 218,275,838 625,139,711 406,863,874 f W95 Extended (LBA) /dev/sda5 328,630,272 625,139,711 296,509,440 7 NTFS / exFAT / HPFS /dev/sda6 218,275,840 324,030,463 105,754,624 83 Linux /dev/sda7 324,032,512 328,626,175 4,593,664 82 Linux swap / Solaris "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 07DA-0512 vfat DellUtility /dev/sda2 8834146034145392 ntfs RECOVERY /dev/sda3 48E2189DE21890F4 ntfs OS /dev/sda5 BC2A44C02A447982 ntfs Varshneya /dev/sda6 34731459-4b0f-46ac-a9bf-cb360a2c947c ext4 /dev/sda7 dcb9ce9b-799a-4c65-b008-887b01775670 swap /dev/sr0 iso9660 Ubuntu 12.04.1 LTS i386 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sda6 /mnt ext4 (rw) /dev/sr0 /cdrom iso9660 (ro,noatime) =========================== sda6/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 34731459-4b0f-46ac-a9bf-cb360a2c947c if loadfont /boot/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 34731459-4b0f-46ac-a9bf-cb360a2c947c set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; 
then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 34731459-4b0f-46ac-a9bf-cb360a2c947c linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 34731459-4b0f-46ac-a9bf-cb360a2c947c linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 8834146034145392 chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda6/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda6 during installation UUID=34731459-4b0f-46ac-a9bf-cb360a2c947c / ext4 errors=remount-ro,user_xattr 0 1 # swap was on /dev/sda7 during installation UUID=dcb9ce9b-799a-4c65-b008-887b01775670 none swap sw 0 0 -------------------------------------------------------------------------------- =================== sda6: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 104.851909637 = 112.583880704 boot/grub/core.img 1 121.191410065 = 130.128285696 boot/grub/grub.cfg 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda4 00000000 eb 0f 2a 5d f4 b7 75 f2 e9 56 12 b8 50 b4 79 ec |..*]..u..V..P.y.| 00000010 89 91 ca c3 16 40 31 d0 ae c4 53 3d c7 dd d7 98 |[email protected]=....| 00000020 bd a4 f2 a4 e8 ab fc ea 36 30 1b 34 cf 8a 28 30 |........60.4..(0| 00000030 43 95 6c 31 3e 76 93 58 84 37 99 c3 ae 3a 88 a3 |C.l1>v.X.7...:..| 00000040 c2 a6 36 2a f8 e0 e1 03 91 8d a1 50 cd ad b0 b5 |..6*.......P....| 00000050 ad 69 3a 49 63 1f 4a 33 97 6e 0c 71 bf 7d bd 35 |.i:Ic.J3.n.q.}.5| 00000060 86 c5 17 93 b4 9f e5 af e0 c4 6f f4 6f f9 4b dd |..........o.o.K.| 00000070 14 39 e2 9e b9 36 ca b1 56 5b d9 b1 66 2c 05 b2 |.9...6..V[..f,..| 00000080 5d 5b 99 c0 db e6 81 27 ab c2 e1 55 00 ac 0b 2c |][.....'...U...,| 00000090 24 d3 8e 54 b0 3d ab 58 e4 23 fc 3a 79 93 fb 5e |$..T.=.X.#.:y..^| 000000a0 94 5a 3a c2 16 4e 56 cb 1b 7f 7e b3 4c 38 ca 5b |.Z:..NV...~.L8.[| 000000b0 ca ab c1 2c 2a 64 e7 77 fe 2a ba ee 08 33 b5 9b |...,*d.w.*...3..| 000000c0 d0 c2 b4 a8 fc 73 4f 01 fd 03 61 75 eb 6d 1a 74 |.....sO...au.m.t| 000000d0 5f 79 31 7f ed e6 f5 99 21 36 16 ed 25 d9 6d 2b |_y1.....!6..%.m+| 000000e0 5f f4 42 b8 9d 01 89 10 fe df a4 98 e7 ab ab ea |_.B.............| 000000f0 1d 1c 44 e1 49 d9 19 c9 ab f5 41 eb 4a 32 c2 39 |..D.I.....A.J2.9| 00000100 87 57 f6 f6 f3 b5 4d 17 72 f2 b1 16 19 aa ec 24 |.W....M.r......$| 00000110 39 bd e3 b1 68 b3 b0 7f fa 2a 3a 2e 99 ed db 8a |9...h....*:.....| 00000120 f8 61 b4 ef 9d 7d 85 95 ed ad eb 9e 71 f4 27 d3 |.a...}......q.'.| 00000130 f3 04 8b 8a 69 98 02 72 df e1 f9 83 27 5b 01 4c |....i..r....'[.L| 00000140 d4 9a b9 3b db ca 1e 40 35 db 6f c1 52 c0 7f 27 |...;[email protected]..'| 00000150 8a 1d bc 34 89 24 b6 e3 fd ec a1 2a e5 9e d1 8f |...4.$.....*....| 00000160 77 e0 d5 52 c0 4c c4 38 38 3c 28 19 bf 20 f0 03 |w..R.L.88<(.. ..| 00000170 38 a4 b1 b5 ed 6a b8 f7 a9 7b 65 b1 7b 64 4a 33 |8....j...{e.{dJ3| 00000180 66 1a 60 29 38 1d 5b 52 40 31 de a5 0c 0f cc 6f |f.`)8.[[email protected]| 00000190 dd 31 6d 3d f0 2a 32 85 67 66 ca 4f 02 aa 0d 30 |.1m=.*2.gf.O...0| 000001a0 66 c9 b2 33 c2 4b 8a fa 3c 7b 52 02 00 88 8e cf |f..3.K..<{R.....| 000001b0 67 1e d4 20 49 1d 1a b8 71 ad c2 d4 37 9d 00 fe |g.. 
I...q...7...| 000001c0 ff ff 07 fe ff ff 02 e0 93 06 00 60 ac 11 00 fe |...........`....| 000001d0 ff ff 05 fe ff ff 01 00 00 00 01 b0 4d 06 00 00 |............M...| 000001e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200 ADDITIONAL INFORMATION : =================== log of boot-repair 2012-11-24__09h45 =================== boot-repair version : 3.195~ppa2~precise boot-sav version : 3.195~ppa2~precise glade2script version : 3.2.2~ppa45~precise boot-sav-extra version : 3.195~ppa2~precise boot-repair is executed in live-session (Ubuntu 12.04.1 LTS, precise, Ubuntu, i686) CPU op-mode(s): 32-bit, 64-bit file=/cdrom/preseed/ubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- =================== os-prober: /dev/sda2:Windows 7 (loader):Windows:chain /dev/sda6:Ubuntu 12.04.1 LTS (12.04):Ubuntu:linux =================== blkid: /dev/sda1: SEC_TYPE="msdos" LABEL="DellUtility" UUID="07DA-0512" TYPE="vfat" /dev/sda2: LABEL="RECOVERY" UUID="8834146034145392" TYPE="ntfs" /dev/sda3: LABEL="OS" UUID="48E2189DE21890F4" TYPE="ntfs" /dev/sda5: LABEL="Varshneya" UUID="BC2A44C02A447982" TYPE="ntfs" /dev/loop0: TYPE="squashfs" /dev/sda6: UUID="34731459-4b0f-46ac-a9bf-cb360a2c947c" TYPE="ext4" /dev/sda7: UUID="dcb9ce9b-799a-4c65-b008-887b01775670" TYPE="swap" /dev/sr0: LABEL="Ubuntu 12.04.1 LTS i386" TYPE="iso9660" 1 disks with OS, 2 OS : 1 Linux, 0 MacOS, 1 Windows, 0 unknown type OS. Windows not detected by os-prober on sda3. Warning: extended partition does not start at a cylinder boundary. DOS and Linux will interpret the contents differently. =================== /mnt/etc/default/grub : # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) 
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" =================== /mnt/etc/grub.d/ : drwxr-xr-x 2 root root 4096 Nov 22 16:15 grub.d total 56 -rwxr-xr-x 1 root root 6743 Sep 12 20:19 00_header -rwxr-xr-x 1 root root 5522 Sep 12 20:05 05_debian_theme -rwxr-xr-x 1 root root 7407 Sep 12 20:19 10_linux -rwxr-xr-x 1 root root 6335 Sep 12 20:19 20_linux_xen -rwxr-xr-x 1 root root 1588 Sep 24 2010 20_memtest86+ -rwxr-xr-x 1 root root 7603 Sep 12 20:19 30_os-prober -rwxr-xr-x 1 root root 214 Sep 12 20:19 40_custom -rwxr-xr-x 1 root root 95 Sep 12 20:19 41_custom -rw-r--r-- 1 root root 483 Sep 12 20:19 README =================== No kernel in /mnt/boot: grub memtest86+.bin memtest86+_multiboot.bin =================== UEFI/Legacy mode: This live-session is not EFI-compatible. SecureBoot maybe enabled. =================== PARTITIONS & DISKS: sda1 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, not-far, /mnt/boot-sav/sda1. sda2 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, is-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, bootmgr, is-winboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, not-far, /mnt/boot-sav/sda2. sda3 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, is-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, haswinload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda3. sda5 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda5. sda6 : sda, not-sepboot, grubenv-ok grub2, grub-pc, update-grub, 64, no-kernel, is-os, not--efi--part, fstab-without-boot, fstab-without-efi, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, apt-get, grub-install, with--usr, fstab-without-usr, not-sep-usr, standard, farbios, /mnt. 
sda : not-GPT, BIOSboot-not-needed, has-no-EFIpart, not-usb, has-os, 63 sectors * 512 bytes =================== parted -l: Model: ATA ST9320423AS (scsi) Disk /dev/sda: 320GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 140MB 140MB primary fat16 diag 2 141MB 9936MB 9796MB primary ntfs boot 3 9936MB 112GB 102GB primary ntfs 4 112GB 320GB 208GB extended lba 6 112GB 166GB 54.1GB logical ext4 7 166GB 168GB 2352MB logical linux-swap(v1) 5 168GB 320GB 152GB logical ntfs Model: HL-DT-ST DVD+-RW GA31N (scsi) Disk /dev/sr0: 4700MB Sector size (logical/physical): 2048B/2048B Partition Table: msdos Number Start End Size Type File system Flags 1 131kB 2916MB 2916MB primary boot, hidden =================== parted -lm: BYT; /dev/sda:320GB:scsi:512:512:msdos:ATA ST9320423AS; 1:32.3kB:140MB:140MB:fat16::diag; 2:141MB:9936MB:9796MB:ntfs::boot; 3:9936MB:112GB:102GB:ntfs::; 4:112GB:320GB:208GB:::lba; 6:112GB:166GB:54.1GB:ext4::; 7:166GB:168GB:2352MB:linux-swap(v1)::; 5:168GB:320GB:152GB:ntfs::; BYT; /dev/sr0:4700MB:scsi:2048:2048:msdos:HL-DT-ST DVD+-RW GA31N; 1:131kB:2916MB:2916MB:::boot, hidden; =================== mount: /cow on / type overlayfs (rw) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) /dev/sr0 on /cdrom type iso9660 (ro,noatime) /dev/loop0 on /rofs type squashfs (ro,noatime) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) tmpfs on /tmp type tmpfs (rw,nosuid,nodev) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) gvfs-fuse-daemon on /home/ubuntu/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=ubuntu) /dev/sda6 on /mnt type ext4 (rw) /dev on /mnt/dev type none (rw,bind) /proc on /mnt/proc type none (rw,bind) /sys on /mnt/sys type none (rw,bind) /usr on /mnt/usr type none (rw,bind) /dev/sda1 on /mnt/boot-sav/sda1 type vfat (rw) /dev/sda2 on /mnt/boot-sav/sda2 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda3 on /mnt/boot-sav/sda3 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda5 on /mnt/boot-sav/sda5 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) =================== ls: /sys/block/sda (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro sda1 sda2 sda3 sda4 sda5 sda6 sda7 size slaves stat subsystem trace uevent /sys/block/sr0 (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro size slaves stat subsystem trace uevent /dev (filtered): autofs block bsg btrfs-control bus cdrom cdrw char console core cpu cpu_dma_latency disk dri dvd dvdrw ecryptfs fb0 fd full fuse fw0 hidraw0 hpet input kmsg log mapper mcelog mei mem net network_latency network_throughput null oldmem port ppp psaux ptmx pts random rfkill rtc rtc0 sda sda1 sda2 sda3 sda4 sda5 sda6 sda7 sg0 sg1 shm snapshot snd sr0 stderr stdin stdout uinput urandom usbmon0 usbmon1 usbmon2 v4l vga_arbiter video0 zero ls /dev/mapper: control =================== df -Th: Filesystem Type Size Used Avail Use% Mounted on /cow overlayfs 
1.9G 113M 1.8G 6% / udev devtmpfs 1.9G 12K 1.9G 1% /dev tmpfs tmpfs 777M 872K 776M 1% /run /dev/sr0 iso9660 696M 696M 0 100% /cdrom /dev/loop0 squashfs 667M 667M 0 100% /rofs tmpfs tmpfs 1.9G 20K 1.9G 1% /tmp none tmpfs 5.0M 0 5.0M 0% /run/lock none tmpfs 1.9G 176K 1.9G 1% /run/shm /dev/sda6 ext4 51G 27G 22G 56% /mnt /dev/sda1 vfat 134M 9.1M 125M 7% /mnt/boot-sav/sda1 /dev/sda2 fuseblk 9.2G 5.6G 3.6G 61% /mnt/boot-sav/sda2 /dev/sda3 fuseblk 95G 80G 16G 84% /mnt/boot-sav/sda3 /dev/sda5 fuseblk 142G 130G 12G 92% /mnt/boot-sav/sda5 =================== fdisk -l: Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xb8000000 Device Boot Start End Blocks Id System /dev/sda1 63 273104 136521 de Dell Utility /dev/sda2 * 274432 19406847 9566208 7 HPFS/NTFS/exFAT /dev/sda3 19406848 218274364 99433758+ 7 HPFS/NTFS/exFAT /dev/sda4 218275838 625139711 203431937 f W95 Ext'd (LBA) /dev/sda5 328630272 625139711 148254720 7 HPFS/NTFS/exFAT /dev/sda6 218275840 324030463 52877312 83 Linux /dev/sda7 324032512 328626175 2296832 82 Linux swap / Solaris Partition table entries are not in disk order =================== Repair blockers 64bits detected. Please use this software in a 64bits session. (Please use Ubuntu-Secure-Remix-64bits (www.sourceforge.net/p/ubuntu-secured) which contains a 64bits-compatible version of this software.) This will enable this feature. =================== Final advice in case of recommended repair The boot files of [Ubuntu 12.04.1 LTS] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, >200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]. (https://help.ubuntu.com/community/BootPartition) =================== Default settings Recommended-Repair This setting would reinstall the grub2 of sda6 into the MBR of sda, using the following options: kernel-purge Additional repair would be performed: unhide-bootmenu-10s fix-windows-boot =================== Settings chosen by the user Boot-Info This setting will not act on the MBR. No change has been performed on your computer. See you soon! pastebinit packages needed dpkg-preconfigure: unable to re-open stdin: No such file or directory pastebin.com ko (), using paste.ubuntu Please report this message to [email protected] Any help would be great, I'm really missing Ubuntu (hate being stuck in the Windows world).
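Boot-Repair's "No kernel in /mnt/boot" note suggests the dist-upgrade removed the old kernels without installing replacements, so os-prober has no Linux entry left to add. A minimal recovery sketch from the Ubuntu live session, assuming (as the boot info above indicates) that the Ubuntu root is /dev/sda6 and Grub belongs in the MBR of /dev/sda:

sudo mount /dev/sda6 /mnt
for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo cp /etc/resolv.conf /mnt/etc/resolv.conf   # so apt can resolve names inside the chroot
sudo chroot /mnt
apt-get update
apt-get install linux-image-generic linux-generic   # restore a bootable kernel
update-grub                                         # regenerate grub.cfg with the new kernel entries
grub-install /dev/sda                               # rewrite the MBR to point at sda6's /boot/grub
exit
sudo reboot

After rebooting, the Grub menu should again list the Ubuntu kernels alongside memtest86+ and Windows 7.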

    Read the article

  • mkfs.xfs: libxfs_device_zero write failed: Input/output error

    - by Crazy_Bash
I can't find a way to create a filesystem on one of my disks. First, I'm getting the following output:

[root@~]# mkfs.xfs /dev/sdb1
mkfs.xfs: /dev/sdb1 appears to contain a partition table (dos).
mkfs.xfs: Use the -f option to force overwrite.

After using the -f flag:

[root@~]# mkfs.xfs -f /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=32, agsize=22892696 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=732566272, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=357698, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
mkfs.xfs: libxfs_device_zero write failed: Input/output error

/dev/sdb:
Disk /dev/sdb: 3001GB
1  1049kB  3001GB  3001GB  primary

Linux: CentOS 6.3
Linux 1 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

What I've tried so far: recreating the partition with parted (rm 1).
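One plausible cause, given the 3 TB drive: the disk still carries a dos (MBR) partition label, and MBR partition entries cannot address sectors beyond 2 TiB, so the writes mkfs.xfs makes near the end of the device fail with an I/O error. A sketch of relabelling the disk as GPT instead (destructive; it assumes /dev/sdb holds nothing worth keeping):

parted -s /dev/sdb mklabel gpt                # replace the dos label with GPT
parted -s /dev/sdb mkpart primary 1MiB 100%   # one aligned partition spanning the disk
mkfs.xfs -f /dev/sdb1

If the error persists after relabelling, check dmesg and the drive's SMART data; a genuinely failing disk produces the same message.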

    Read the article

  • PHP suddenly failed after IIS update

    - by James Hay
All my application pools were stopped this morning after I got to work. I can restart them, but when I try to load the website the app pool crashes again.

Update: I've looked in the GAC as the error below suggests, and it seems that the file is not there. How do I get it back?

Update 2: I found a further error in the event log saying "The Module name FastCgiModule path C:\WINDOWS\System32\inetsrv\iisfcgi.dll returned an error from registration. The data is the error." So, following the information from here (http://forums.iis.net/t/1153937.aspx), I removed CGI and my sites are working again. This has fixed the initial problem, but now I don't have FastCGI, so I'm fairly sure that PHP will no longer be working (I don't have any PHP at the moment to test).

Original Post: I'm getting this error in the event viewer:

IISMANAGER_ERROR_LOADING_PROVIDER_TYPE
IIS Manager could not load type 'Web.Management.PHP.PHPProvider, Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' for module provider 'PHP' that is declared in %windir%\system32\inetsrv\config\administration.config. Verify that the type is correct, and that the assembly that contains the module provider is in the Global Assembly Cache (GAC).
Exception: System.IO.FileNotFoundException: Could not load file or assembly 'Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' or one of its dependencies. The system cannot find the file specified.
File name: 'Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d'
at System.RuntimeTypeHandle._GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName)
at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
at System.RuntimeType.PrivateGetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
at System.Type.GetType(String typeName, Boolean throwOnError)
at Microsoft.Web.Management.Server.AdministrationModuleProvider.GetModuleProvider(String userName, String connectionName)
WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].
Process: InetMgr
Connection: CT211511\Administrator

Everything was working fine last night when I left work, and since they've done the maintenance it's all broken.
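To get PHP running again after removing CGI, the FastCGI module has to be re-enabled and the PHP handler re-registered. A hedged sketch of that (the C:\PHP path and the handler name are assumptions; substitute the actual php-cgi.exe location):

dism /online /enable-feature /featurename:IIS-CGI
cd %windir%\system32\inetsrv
appcmd set config /section:system.webServer/fastCgi /+"[fullPath='C:\PHP\php-cgi.exe']"
appcmd set config /section:system.webServer/handlers /+"[name='PHP_via_FastCGI',path='*.php',verb='*',modules='FastCgiModule',scriptProcessor='C:\PHP\php-cgi.exe',resourceType='Either']"

If iisfcgi.dll fails to register again after re-adding the feature, repairing the CGI role service first and only then re-adding the handler is the safer order.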

    Read the article

  • Unable to install Windows 2008R2 x64 on HP ProLiant ML 150 G6 with Smart Array B110i

    - by lrosa
I am installing Windows 2008 R2 x64 (Microsoft OEM version) on a ProLiant ML 150 with a Smart Array B110i controller. I load the B110i drivers from a USB thumb drive, and Windows Setup can create, delete and format the partitions on the RAID set correctly, but when I exit the disk manager to start setup, Windows says that it cannot find a primary partition to install Windows on. The EasySetup disc provided with the system does not support Windows 2008 R2 x64 Microsoft OEM. I downloaded the latest version of EasySetup from the HP website, but it does not support Windows 2008 R2 x64 Microsoft OEM either. Any suggestions?

    Read the article

  • MDAC 2.8/WDAC 6.0 on Windows Server 2008 IIS 7

    - by ajdams
What I am trying to do is simple. I have some Classic ASP with include headers like the following, referencing MDAC 2.5:

<!-- METADATA TYPE="TypeLib" NAME="Microsoft ActiveX Data Objects 2.5 Library" UUID="{00000205-0000-0010-8000-00AA006D2EA4}" VERSION="2.5" -->

Now I am trying to call this same include file on a Windows 2008 box with IIS 7, using MDAC 2.8 with this:

<!-- METADATA TYPE="typelib" NAME="Microsoft ActiveX Data Objects 2.8 Library" UUID="{2A75196C-D9EB-4129-B803-931327F72D5C}" VERSION="2.8" -->

My question is: why does this not work anymore? Has the tag reference changed? I ran MDAC checker and everything is installed correctly. What am I missing? Thanks in advance!

    Read the article

  • How would you apply a BIOS update for a Dell M610 blade with VMware ESXi installed?

    - by Guamaniac
Hello, all. We've got a Dell M610 blade with VMware ESXi 4 installed, and we need to update its BIOS to the latest version. Unfortunately, Dell only makes Windows (.exe) and Linux (.bin) versions of the BIOS update program available (as well as a bootable DOS floppy version that is too big to fit on a 1.44 MB floppy!). We've tried using various "LiveCD" versions of Linux distributions but keep running into errors. Anyone out there with experience with Dell blades who could give us a hint or two to get this working? Thanks a lot, in advance. Joe
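One workaround that may be worth a try (a sketch, not a Dell-documented procedure): Dell's Linux .BIN packages are self-contained installers qualified against RHEL, so a CentOS/RHEL-based live environment tends to behave better than generic live CDs. The filename below is hypothetical; use the actual M610 BIOS package from Dell's support site:

# from a CentOS/RHEL live session, with ESXi shut down on the blade
chmod +x ./M610_BIOS_LX_2.x.x.BIN   # hypothetical filename
sudo ./M610_BIOS_LX_2.x.x.BIN       # Dell's .BIN payload runs directly from the shell

If it aborts immediately on a 64-bit live session, missing 32-bit compatibility libraries (glibc.i686 and friends) are a common culprit.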

    Read the article

  • Sage 50 Accounts 2010 won't run on Windows 7

    - by admintech
I have successfully installed Sage 50 Accounts 2010 onto my 32-bit Windows 7 machine, yet whenever I try to run it I encounter:

Log Name: Application
Source: Application Error
Date: 24/05/2010 17:14:13
Event ID: 1000
Task Category: (100)
Level: Error
Keywords: Classic
User: N/A
Computer: LukeThomas-PC.domain.co.uk
Description:
Faulting application name: Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe, version: 2.0.0.91, time stamp: 0x4a8c22fe
Faulting module name: igdumd32.dll, version: 8.15.10.1872, time stamp: 0x4a848a05
Exception code: 0xc0000409
Fault offset: 0x00012f96
Faulting process id: 0x1778
Faulting application start time: 0x01cafb5c2493b609
Faulting application path: C:\Program Files\Common Files\Sage SBD\Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe
Faulting module path: C:\Windows\system32\igdumd32.dll
Report Id: 63e2246b-674f-11df-96ba-002564c97988

Any help would be appreciated, as I can't find anything about this error.

    Read the article

  • OS X 10.9 Mavericks Kernel Panics out of the box

    - by Kevin
    OS X Kernel panics after a fresh install of OS X 10.9 on a 17" Macbook Pro. Anonymous UUID: D002464D-24B7-C2B5-3D83-1C0B02873B29 Wed Oct 30 11:08:17 2013 panic(cpu 1 caller 0xffffff8006edc19e): Kernel trap at 0xffffff7f88e0a96c, type 14=page fault, registers: CR0: 0x000000008001003b, CR2: 0xffffef7f88e309b8, CR3: 0x0000000009c2d000, CR4: 0x0000000000000660 RAX: 0x0fffffd0c7b30000, RBX: 0xffffef7f88e309b0, RCX: 0x0000000000000001, RDX: 0x000002f384d06471 RSP: 0xffffff80eff03d80, RBP: 0xffffff80eff03e70, RSI: 0x0000031384cfb168, RDI: 0xffffff80e8f05148 R8: 0xffffff801b0f8670, R9: 0x0000000000000005, R10: 0x0000000000004a24, R11: 0x0000000000000202 R12: 0xffffff801938b800, R13: 0x0000000000000005, R14: 0xffffff80e8f05148, R15: 0xffffff7f88e2ee20 RFL: 0x0000000000010006, RIP: 0xffffff7f88e0a96c, CS: 0x0000000000000008, SS: 0x0000000000000010 Fault CR2: 0xffffef7f88e309b8, Error code: 0x0000000000000002, Fault CPU: 0x1 Backtrace (CPU 1), Frame : Return Address 0xffffff80eff03a10 : 0xffffff8006e22f69 0xffffff80eff03a90 : 0xffffff8006edc19e 0xffffff80eff03c60 : 0xffffff8006ef3606 0xffffff80eff03c80 : 0xffffff7f88e0a96c 0xffffff80eff03e70 : 0xffffff7f88e09b89 0xffffff80eff03f30 : 0xffffff8006edda5c 0xffffff80eff03f50 : 0xffffff8006e3757a 0xffffff80eff03f90 : 0xffffff8006e378c8 0xffffff80eff03fb0 : 0xffffff8006ed6aa7 Kernel Extensions in backtrace: com.apple.driver.AppleIntelCPUPowerManagement(216.0)[A6EE4D7B-228E-3A3C-95BA-10ED6F331236]@0xffffff7f88e07000->0xffffff7f88e31fff BSD process name corresponding to current thread: kernel_task Mac OS version: 13A603 Kernel version: Darwin Kernel Version 13.0.0: Thu Sep 19 22:22:27 PDT 2013; root:xnu-2422.1.72~6/RELEASE_X86_64 Kernel UUID: 1D9369E3-D0A5-31B6-8D16-BFFBBB390393 Kernel slide: 0x0000000006c00000 Kernel text base: 0xffffff8006e00000 System model name: MacBookPro5,2 (Mac-F2268EC8) System uptime in nanoseconds: 4634353513870 last loaded kext at 39203945245: com.viscosityvpn.Viscosity.tun 1.0 (addr 0xffffff7f89200000, size 32768) last unloaded kext at 147930318702: com.apple.driver.AppleFileSystemDriver 3.0.1 (addr 0xffffff7f89110000, size 8192) loaded kexts: com.viscosityvpn.Viscosity.tun 1.0 com.viscosityvpn.Viscosity.tap 1.0 com.apple.driver.AudioAUUC 1.60 com.apple.driver.AppleHWSensor 1.9.5d0 com.apple.filesystems.autofs 3.0 com.apple.iokit.IOBluetoothSerialManager 4.2.0f6 com.apple.driver.AGPM 100.14.11 com.apple.driver.AppleMikeyHIDDriver 124 com.apple.driver.AppleHDA 2.5.2fc2 com.apple.iokit.BroadcomBluetoothHostControllerUSBTransport 4.2.0f6 com.apple.GeForceTesla 8.1.8 com.apple.driver.AppleMikeyDriver 2.5.2fc2 com.apple.iokit.IOUserEthernet 1.0.0d1 com.apple.driver.AppleUpstreamUserClient 3.5.13 com.apple.driver.AppleMuxControl 3.4.12 com.apple.driver.ACPI_SMC_PlatformPlugin 1.0.0 com.apple.driver.AppleSMCLMU 2.0.4d1 com.apple.Dont_Steal_Mac_OS_X 7.0.0 com.apple.driver.AppleHWAccess 1 com.apple.driver.AppleMCCSControl 1.1.12 com.apple.driver.AppleLPC 1.7.0 com.apple.driver.SMCMotionSensor 3.0.4d1 com.apple.driver.AppleUSBTCButtons 240.2 com.apple.driver.AppleUSBTCKeyboard 240.2 com.apple.driver.AppleIRController 325.7 com.apple.AppleFSCompression.AppleFSCompressionTypeDataless 1.0.0d1 com.apple.AppleFSCompression.AppleFSCompressionTypeZlib 1.0.0d1 com.apple.BootCache 35 com.apple.iokit.SCSITaskUserClient 3.6.0 com.apple.driver.XsanFilter 404 com.apple.iokit.IOAHCIBlockStorage 2.4.0 com.apple.driver.AppleUSBHub 650.4.4 com.apple.driver.AppleUSBEHCI 650.4.1 com.apple.driver.AppleFWOHCI 4.9.9 com.apple.driver.AirPort.Brcm4331 700.20.22 
com.apple.driver.AppleAHCIPort 2.9.5 com.apple.nvenet 2.0.21 com.apple.driver.AppleUSBOHCI 650.4.1 com.apple.driver.AppleSmartBatteryManager 161.0.0 com.apple.driver.AppleRTC 2.0 com.apple.driver.AppleHPET 1.8 com.apple.driver.AppleACPIButtons 2.0 com.apple.driver.AppleSMBIOS 2.0 com.apple.driver.AppleACPIEC 2.0 com.apple.driver.AppleAPIC 1.7 com.apple.driver.AppleIntelCPUPowerManagementClient 216.0.0 com.apple.nke.applicationfirewall 153 com.apple.security.quarantine 3 com.apple.driver.AppleIntelCPUPowerManagement 216.0.0 com.apple.kext.triggers 1.0 com.apple.iokit.IOSerialFamily 10.0.7 com.apple.AppleGraphicsDeviceControl 3.4.12 com.apple.driver.DspFuncLib 2.5.2fc2 com.apple.vecLib.kext 1.0.0 com.apple.iokit.IOAudioFamily 1.9.4fc11 com.apple.kext.OSvKernDSPLib 1.14 com.apple.iokit.IOBluetoothHostControllerUSBTransport 4.2.0f6 com.apple.iokit.IOSurface 91 com.apple.iokit.IOBluetoothFamily 4.2.0f6 com.apple.nvidia.classic.NVDANV50HalTesla 8.1.8 com.apple.driver.AppleSMBusPCI 1.0.12d1 com.apple.driver.AppleGraphicsControl 3.4.12 com.apple.driver.IOPlatformPluginLegacy 1.0.0 com.apple.driver.AppleBacklightExpert 1.0.4 com.apple.iokit.IOFireWireIP 2.2.5 com.apple.driver.AppleHDAController 2.5.2fc2 com.apple.iokit.IOHDAFamily 2.5.2fc2 com.apple.driver.AppleSMBusController 1.0.11d1 com.apple.nvidia.classic.NVDAResmanTesla 8.1.8 com.apple.driver.IOPlatformPluginFamily 5.5.1d27 com.apple.iokit.IONDRVSupport 2.3.6 com.apple.iokit.IOGraphicsFamily 2.3.6 com.apple.driver.AppleSMC 3.1.6d1 com.apple.driver.AppleUSBMultitouch 240.6 com.apple.iokit.IOUSBHIDDriver 650.4.4 com.apple.driver.AppleUSBMergeNub 650.4.0 com.apple.driver.AppleUSBComposite 650.4.0 com.apple.driver.CoreStorage 380 com.apple.iokit.IOSCSIMultimediaCommandsDevice 3.6.0 com.apple.iokit.IOBDStorageFamily 1.7 com.apple.iokit.IODVDStorageFamily 1.7.1 com.apple.iokit.IOCDStorageFamily 1.7.1 com.apple.iokit.IOAHCISerialATAPI 2.6.0 com.apple.iokit.IOSCSIArchitectureModelFamily 3.6.0 com.apple.iokit.IOUSBUserClient 650.4.4 com.apple.iokit.IOFireWireFamily 4.5.5 com.apple.iokit.IO80211Family 600.34 com.apple.iokit.IOAHCIFamily 2.6.0 com.apple.iokit.IONetworkingFamily 3.2 com.apple.iokit.IOUSBFamily 650.4.4 com.apple.driver.NVSMU 2.2.9 com.apple.driver.AppleEFINVRAM 2.0 com.apple.driver.AppleEFIRuntime 2.0 com.apple.iokit.IOHIDFamily 2.0.0 com.apple.iokit.IOSMBusFamily 1.1 com.apple.security.sandbox 278.10 com.apple.kext.AppleMatch 1.0.0d1 com.apple.security.TMSafetyNet 7 com.apple.driver.AppleKeyStore 2 com.apple.driver.DiskImages 371.1 com.apple.iokit.IOStorageFamily 1.9 com.apple.iokit.IOReportFamily 21 com.apple.driver.AppleFDEKeyStore 28.30 com.apple.driver.AppleACPIPlatform 2.0 com.apple.iokit.IOPCIFamily 2.8 com.apple.iokit.IOACPIFamily 1.4 com.apple.kec.pthread 1 com.apple.kec.corecrypto 1.0 System Profile: Model: MacBookPro5,2, BootROM MBP52.008E.B05, 2 processors, Intel Core 2 Duo, 2.8 GHz, 8 GB, SMC 1.42f4 Graphics: NVIDIA GeForce 9400M, NVIDIA GeForce 9400M, PCI, 256 MB Graphics: NVIDIA GeForce 9600M GT, NVIDIA GeForce 9600M GT, PCIe, 512 MB Memory Module: BANK 0/DIMM0, 4 GB, DDR3, 1333 MHz, 0x04CD, 0x46332D3130363636434C392D344742535100 Memory Module: BANK 1/DIMM0, 4 GB, DDR3, 1333 MHz, 0x04CD, 0x46332D3130363636434C392D344742535100 AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x8D), Broadcom BCM43xx 1.0 (5.106.98.100.22) Bluetooth: Version 4.2.0f6 12982, 3 services, 15 devices, 1 incoming serial ports Network Service: Wi-Fi, AirPort, en1 Serial ATA Device: Samsung SSD 840 Series, 120.03 GB Serial ATA Device: 
MATSHITADVD-R UJ-868 USB Device: Built-in iSight USB Device: BRCM2046 Hub USB Device: Bluetooth USB Host Controller USB Device: Apple Internal Keyboard / Trackpad USB Device: IR Receiver Thunderbolt Bus:

    Read the article

  • MemCached on Windows x64

    - by Django Reinhardt
This question has previously been asked, but that was a year ago and I wanted to know if there have been any developments since then. Basically, we'd like to use a MemCached server on a Windows Server 2008 R2 machine... which is x64 only, obviously. I haven't found any details on a Win64 version of MemCached, but there is still the solution from the previous thread (which I haven't tried yet): a piece of software called MemCacheD Manager running MemCached 1.2.6. However, the current version of memcached is 1.4.4, and I was wondering if there had been any improvements since then.
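For what it's worth, the 32-bit Windows port runs under WOW64 on Server 2008 R2, and it can register itself as a Windows service. A sketch using the Windows port's service flags (the install path is an assumption):

cd C:\memcached
memcached.exe -d install   :: register memcached as a Windows service (Windows port only)
memcached.exe -d start     :: start the service
memcached.exe -d stop      :: stop it again when needed

The memory cap (-m) for the service is typically set by appending the flag to the service's ImagePath entry in the registry.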

    Read the article

  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After a few reading about jvm memory (here, here, here, others I forgot...), I am expecting the resident set size of my java process to be roughly equal to the current heap space capacity. That's not what the numbers are saying, it seems to be roughly equal to the max heap space capacity: Resident set size: # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc 11507912 # ps -C java -O rss | gawk '{ count ++; sum += $2 }; END {count --; print "Number of processes =",count; print "Memory usage per process =",sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" ;};' Number of processes = 1 Memory usage per process = 11237.8 MB Total memory usage = 11237.8 MB Java heap # jmap -heap 1 Attaching to process ID 1, please wait... Debugger attached successfully. Server compiler detected. JVM version is 24.55-b03 using thread-local object allocation. Garbage-First (G1) GC with 18 thread(s) Heap Configuration: MinHeapFreeRatio = 10 MaxHeapFreeRatio = 20 MaxHeapSize = 10737418240 (10240.0MB) NewSize = 1363144 (1.2999954223632812MB) MaxNewSize = 17592186044415 MB OldSize = 5452592 (5.1999969482421875MB) NewRatio = 2 SurvivorRatio = 8 PermSize = 20971520 (20.0MB) MaxPermSize = 85983232 (82.0MB) G1HeapRegionSize = 2097152 (2.0MB) Heap Usage: G1 Heap: regions = 2560 capacity = 5368709120 (5120.0MB) used = 1672045416 (1594.586769104004MB) free = 3696663704 (3525.413230895996MB) 31.144272834062576% used G1 Young Generation: Eden Space: regions = 627 capacity = 3279945728 (3128.0MB) used = 1314914304 (1254.0MB) free = 1965031424 (1874.0MB) 40.089514066496164% used Survivor Space: regions = 49 capacity = 102760448 (98.0MB) used = 102760448 (98.0MB) free = 0 (0.0MB) 100.0% used G1 Old Generation: regions = 147 capacity = 1986002944 (1894.0MB) used = 252273512 (240.5867691040039MB) free = 1733729432 (1653.413230895996MB) 12.702574926293766% used Perm Generation: capacity = 39845888 (38.0MB) used = 38884120 (37.082786560058594MB) free = 961768 (0.9172134399414062MB) 97.58628042120682% used 14654 interned Strings occupying 2188928 bytes. Are my expectations wrong? What should I expect? I need the heap space to be able to grow during spikes (to avoid very slow Full GC), but I would like to have the resident set size as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that? Linux 3.13.0-32-generic x86_64 java version "1.7.0_55" Running in Docker version 1.1.2 Java is running elasticsearch 1.2.0: /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt elasticsearch/logs/heapdump.hprof -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/elasticsearch -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch There actually are 5 elasticsearch nodes, each in a different docker container. All have about the same memory usage. 
Some stats about the index: size: 9.71Gi (19.4Gi) docs: 3,925,398 (4,052,694)
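That behaviour matches how the JVM works here: RSS counts pages the process has committed and touched, and G1 on Java 7 grows the committed heap toward -Xmx during load but almost never hands it back to the OS afterwards, so RSS sits near the high-water mark rather than the current heap usage. A quick way to watch committed versus used capacity (a sketch, assuming the JDK's jstat is available in the container):

# committed capacities in KB for young/old/perm; compare the totals with RSS
jstat -gccapacity 1
# sample once per second, five times, to watch the committed size move
jstat -gccapacity 1 1s 5

If idle footprint matters more than spike latency, lowering -Xmx (or moving to a JVM whose collector actually uncommits memory) is the realistic lever; MaxHeapFreeRatio alone rarely shrinks G1's committed heap on this version.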

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
Storyline

Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x. PostgreSQL version: 8.4.x. I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

A long time ago in a galaxy far, far away...

Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere – upgrading would break scripts!):

sudo mkdir -p /home/postgres/main
sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
sudo chown -R postgres.postgres /home/postgres
sudo chmod -R 700 /home/postgres
sudo usermod -d /home/postgres/ postgres

All good to here. Next, restart the server and configure the database using these installation instructions:

sudo apt-get install postgresql pgadmin3
sudo /etc/init.d/postgresql-8.4 stop
sudo vi /etc/postgresql/8.4/main/postgresql.conf   (change data_directory to /home/postgres/main)
sudo /etc/init.d/postgresql-8.4 start
sudo -u postgres psql postgres
\password postgres
sudo -u postgres createdb climate
pgadmin3

Use pgadmin3 to configure the database and create a schema.

A New Hope

The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

perl Makefile.PL
sudo make install
sudo apt-get install perl-doc   (strangely, it is not called perldoc)
perldoc SQL::Translator::Manual

Extract a PostgreSQL-friendly DDL and all the MySQL data:

sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

The Database Strikes Back

Recreate the structure in PostgreSQL as follows:

1. pgadmin3 (switch to it)
2. Click the Execute arbitrary SQL queries icon
3. Open climate-pg-ddl.sql
4. Search for TABLE " and replace with TABLE climate." (insert the schema name climate)
5. Search for on " and replace with on climate." (insert the schema name climate)
6. Press F5 to execute

This results in: Query returned successfully with no result in 122 ms.

Replies of the Jedi

At this point I am stumped. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that it can be executed against PostgreSQL? How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment, to ease the transition)? How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?

Resources

A fair bit of information was needed to get this far:
https://help.ubuntu.com/community/PostgreSQL
http://articles.sitepoint.com/article/site-mysql-postgresql-1
http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
http://pgfoundry.org/frs/shownotes.php?release_id=810
http://sqlfairy.sourceforge.net/

Thank you!
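For the data leg of the journey, one alternative to hand-converting climate-my.sql is pgloader, which streams rows straight from the running MySQL server into PostgreSQL and handles type and encoding conversion along the way (a sketch using pgloader's modern command-line form; the credentials are assumptions):

pgloader mysql://user:password@localhost/climate postgresql://postgres@localhost/climate

# afterwards, realign each serial sequence with the imported data, for example
# (climate.station and its id column are hypothetical names):
psql -d climate -c "SELECT setval(pg_get_serial_sequence('climate.station', 'id'), (SELECT COALESCE(MAX(id), 1) FROM climate.station));"

The setval() call addresses the sequence question directly: point each sequence at the largest imported key, and new inserts will continue from there without colliding with existing primary keys.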

    Read the article

  • IIS 7.5 Error 1007

    - by darkdog
I have a Windows 2008 R2 virtual server. I removed the "Default Web Site" using IIS Manager, and now all my other sites report an error saying they can't access the site (or something similar). I thought I could just remove the role, re-install IIS and start with a clean slate. After I re-installed IIS, it reported the following error in the event log:

The World Wide Web Publishing Service (WWW service) failed to register the URL prefix of "http://www.mash-guild.com:80:192.168.245.132/" for the website of "4". The required network connectivity may already be used. The site has been disabled. The data field contains the error number.

This is the full version, with the German error note translated into English (I'm from Germany):

The WWW Publishing Service (WWW service) could not register the URL prefix "http://www.mash-guild.com:80:192.168.245.132/" for website "4". The required network connection may already be in use. The website has been disabled. The data field contains the error number.

Event XML:

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-IIS-W3SVC" Guid="{05448E22-93DE-4A7A-BBA5-92E27486A8BE}" EventSourceName="W3SVC" />
    <EventID Qualifiers="49152">1007</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2011-03-26T16:14:56.000000000Z" />
    <EventRecordID>1435</EventRecordID>
    <Correlation />
    <Execution ProcessID="0" ThreadID="0" />
    <Channel>System</Channel>
    <Computer>WIN-DCJ8SN0QI5J</Computer>
    <Security />
  </System>
  <EventData>
    <Data Name="UrlPrefix">http://www.mash-guild.com:80:192.168.245.132/</Data>
    <Data Name="SiteID">4</Data>
    <Binary>B7000780</Binary>
  </EventData>
</Event>

I would like to know if there is a way to fix this, or how to get back to the IIS and server settings I had right after I installed Windows 2008.
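The failing prefix "http://www.mash-guild.com:80:192.168.245.132/" has the host name and IP address swapped: IIS bindings take the form IP:port:hostheader, so site 4 carries a malformed binding rather than one that is genuinely in use. A sketch of inspecting and repairing it with appcmd (the site name "mash-guild" is an assumption; take the real one from the list output):

%windir%\system32\inetsrv\appcmd.exe list site
%windir%\system32\inetsrv\appcmd.exe set site "mash-guild" /bindings:http/192.168.245.132:80:www.mash-guild.com
%windir%\system32\inetsrv\appcmd.exe start site "mash-guild"

Fixing the binding in place should be less drastic than reinstalling the role again; the malformed binding itself is what Windows logs as event 1007.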

    Read the article

< Previous Page | 336 337 338 339 340 341 342 343 344 345 346 347  | Next Page >