Search Results


  • JEE6 Global JNDI Name and Maven Deployment

    - by wobblycogs
    I'm having some problems with the global JNDI names of my EJB resources, which is (or at least will be) causing my JNDI lookups to fail. The project is developed in Netbeans and is a standard Maven web application. When my application is deployed to GlassFish 3.0 the application name is set to something like com.example_myapp_war_1.0-SNAPSHOT, which is all well and good from Netbeans' point of view because it ensures the name is unique, but it also means all the EJBs get global names such as this:

        java:global/com.example_myapp_war_1.0-SNAPSHOT/CustomerService

    This, of course, is going to cause problems, because every time the version changes all the global names change (I've tested this by changing the version, and the names indeed changed). The name is generated from the POM file as a concatenation of:

        <groupId>com.example</groupId>
        <artifactId>myapp</artifactId>
        <packaging>war</packaging>
        <version>1.0-SNAPSHOT</version>

    Up until now I've got away with just injecting all the resources using @EJB, but now I need to access the CustomerService EJB from a JSF Converter, so I'm doing a JNDI lookup like this:

        try {
            Context ctx = new InitialContext();
            CustomerService customerService = (CustomerService)ctx.lookup(
                "java:global/com.example_myapp_war_1.0-SNAPSHOT/CustomerService" );
            return customerService.get(submittedValue);
        } catch( Exception e ) {
            logger.error( "Failed to convert customer.", e );
            return null;
        }

    which will clearly break when the application is properly released and the module name changes. So, the million dollar question: how can I set the module name in Maven, or how do I recover the module name so that I can programmatically build the JNDI name at runtime? I've tried setting it in the web.xml file as suggested by that link, but it was ignored. I think I'd rather build the name at runtime, as that means there is less scope for screw-ups when the application is deployed. Many thanks for any help, I've been tearing my hair out all day on this.
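
    One way to avoid hard-coding the module name is to read it from the standard java:module/ModuleName (or java:app/AppName) environment entries that Java EE 6 containers expose, and build the global name from that. A minimal sketch, assuming the converter runs inside the same WAR as the EJB (CustomerService and the names come from the question above):

        try {
            Context ctx = new InitialContext();
            // standard EE 6 entry holding the current module's name,
            // e.g. "com.example_myapp_war_1.0-SNAPSHOT" when deployed from Netbeans
            String moduleName = (String) ctx.lookup("java:module/ModuleName");
            CustomerService customerService = (CustomerService) ctx.lookup(
                "java:global/" + moduleName + "/CustomerService");
            return customerService.get(submittedValue);
        } catch (Exception e) {
            logger.error("Failed to convert customer.", e);
            return null;
        }

    Within the same module, the shorter java:module/CustomerService lookup should also resolve without needing the module name at all. On the Maven side, setting <finalName>myapp</finalName> in the POM's build section should also stabilize the deployed name, since GlassFish derives it from the war file name by default.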

  • Duplicate column name by JPA with @ElementCollection and @Inheritance

    - by gerry
    I've created the following scenario:

        @javax.persistence.Entity
        @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
        public class MyEntity implements Serializable {
            @Id
            @GeneratedValue
            protected Long id;
            ...
            @ElementCollection
            @CollectionTable(name = "ENTITY_PARAMS")
            @MapKeyColumn(name = "ENTITY_KEY")
            @Column(name = "ENTITY_VALUE")
            protected Map<String, String> parameters;
            ...
        }

    as well as:

        @javax.persistence.Entity
        public class Sensor extends MyEntity {
            @Id
            @GeneratedValue
            protected Long id;
            ...
            // so here "protected Map<String, String> parameters;" is inherited !!!!
            ...
        }

    Running this example, no tables are created and I get the following message:

        WARNUNG: Got SQLException executing statement "CREATE TABLE ENTITY_PARAMS
        (Entity_ID BIGINT NOT NULL, ENTITY_VALUE VARCHAR(255), ENTITY_KEY VARCHAR(255),
        Sensor_ID BIGINT NOT NULL, ENTITY_VALUE VARCHAR(255))":
        com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate column name 'ENTITY_VALUE'

    I also tried overriding the attributes on the Sensor class...

        @AttributeOverrides({
            @AttributeOverride(name = "ENTITY_KEY", column = @Column(name = "SENSOR_KEY")),
            @AttributeOverride(name = "ENTITY_VALUE", column = @Column(name = "SENSOR_VALUE"))
        })

    ...but got the same error. Can anybody help me?

  • Getting in touch with a domain owner

    - by David
    There is a domain name I want to use for a new business I am starting. It is a perfect fit and I really have my heart set on getting it. Only the .com of the name is registered, and I'm pretty sure the owner has forgotten about the domain. No changes have been made in 3 years, and the WHOIS information is an (almost funny) dead-end:

    - The listed email bounces
    - The listed telephone goes to a wrong number
    - The listed mailing address physically no longer exists (I looked it up on Google Street View; the nearby houses have been demolished and it looks like the lot is being turned into an apartment complex)
    - The owner name is "D Smith" (do I have to call every D Smith in the region?)

    My question: is there any way to track down the owner of a domain besides the WHOIS record?

  • How to test DNS glue record?

    - by Sunnz
    Hello, I have just set up a DNS server for my domain example.org with two name servers, ns1.example.org and ns2.example.org, and I have attempted to set up glue records for ns1 and ns2 at my registrar. It seems to work for now when I do a dig example.org, but when I do a whois example.org it lists ns1.example.org and ns2.example.org without their IP addresses, which should be set up as glue records. So I am wondering: how do I check for the existence of a glue record? Do I do it with whois? I have seen .com and .net whois records that have both the domain name and the IP addresses of the name servers; is .org different? What's the proper way to test this? Thanks.
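
    whois output is not a reliable indicator here; glue lives in the parent (TLD) zone, so a more direct test is to ask one of the .org TLD name servers for the domain's NS records and look at the ADDITIONAL section of the reply. A sketch with dig (the TLD server name in the second command is whatever the first command returns, not a fixed value):

        # 1. find the name servers of the .org zone
        dig org. NS +short

        # 2. ask one of them directly, without recursion
        dig @a0.org.afilias-nst.info example.org NS +norecurse

    If the glue is in place, the ADDITIONAL section of the second answer contains A (and/or AAAA) records for ns1.example.org and ns2.example.org.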

  • Make IP Address point to webroot instead of virtual hosts' documentroot

    - by Reuben L.
    I used to have a one-to-one mapping between domain name and IP. Recently I've paid for a second domain name and decided to host it on the same box and IP. As such, I added virtual hosts to point each domain name to a different document root (i.e. /var/www/webbie1 and /var/www/webbie2). The question I have is: can I still make the IP itself, e.g. http://XXX.XXX.XXX.XXX, point to the webroot, i.e. /var/www/? If so, how do I go about doing it? For a fuller picture, the box runs Ubuntu Server and I'm using Apache 2. The changes I made to enable the virtual hosts were in the apache2.conf file, with <VirtualHost [IP address]> ... </VirtualHost> tags. Thanks.
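
    With Apache name-based virtual hosting, a request that matches no ServerName is served by the first virtual host defined for that address, so one way to get this behaviour is to declare a catch-all vhost for /var/www before the two domain vhosts. A minimal sketch (the domain names are placeholders):

        NameVirtualHost *:80

        # first vhost = default: handles http://XXX.XXX.XXX.XXX/ and any unknown Host header
        <VirtualHost *:80>
            DocumentRoot /var/www
        </VirtualHost>

        <VirtualHost *:80>
            ServerName webbie1.example.com
            DocumentRoot /var/www/webbie1
        </VirtualHost>

        <VirtualHost *:80>
            ServerName webbie2.example.com
            DocumentRoot /var/www/webbie2
        </VirtualHost>

    NameVirtualHost is needed on Apache 2.2 and earlier; Apache 2.4 ignores it.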

  • Google Apps for Mail - MX entry related doubt

    - by niting
    I have signed up for Google Apps recently for my organisation. The Google Apps guide says that I need to edit the MX entry for my domain so that mail gets redirected to the Google mail servers instead of my default mail server. But I am unsure whether to edit the MX entry at my domain name provider or on the hosting server. My domain name provider is godaddy.com and my server is ServInt. And, moreover, what difference does it make whether I edit the MX entries at my hosting provider or my domain name provider? Thanks, niting
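
    The MX records have to be changed wherever the domain's authoritative DNS is actually hosted, and that is determined by the domain's NS records: if they point at GoDaddy's name servers, edit the MX entries in GoDaddy's DNS manager; if they point at the ServInt server, edit them there. MX changes made anywhere else are simply never seen by the rest of the internet. A quick way to check both, assuming a Unix-like shell with dig installed:

        # who hosts the authoritative DNS?
        dig example.org NS +short

        # what do the live MX records say?
        dig example.org MX +short

    After the change propagates, the second command should list Google's mail servers (aspmx.l.google.com and friends).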

  • jquery get radiobuttonlist by name dynamically

    - by Cindy
    I have two radiobuttonlists and one checkboxlist on the page. Ideally, based on the checkbox's selected value, I want to enable/disable the corresponding radiobuttonlist with a jQuery function. But somehow $("input[name*=" + columnName + "]") always returns an empty result. It cannot find the radiobuttonlist by its name?

        $(function() {
            function checkBoxClicked() {
                var isChecked = $(this).is(":checked");
                var columnName = "rblColumn" + $(this).parent().attr("alt");
                if (isChecked) {
                    $("input[name*=" + columnName + "]").removeAttr("disabled");
                }
                else {
                    $("input[name*=" + columnName + "]").attr("disabled", "disabled");
                    $("input[name*=" + columnName + "] input").each(function() {
                        $(this).attr("checked", "")
                    });
                }
            }
            //intercept any check box click event inside the #list Div
            $(":checkbox").click(checkBoxClicked);
        });

    The markup:

        <asp:Panel ID="TestPanel" runat="server">
            <asp:CheckBoxList ID="chkColumn" runat="server" RepeatDirection="Horizontal">
                <asp:ListItem id="Column1" runat="server" Text="Column 1" Value="1" alt="1" class="HeadColumn" />
                <asp:ListItem id="Column2" runat="server" Text="Column 2" Value="2" alt="2" class="HeadColumn" />
            </asp:CheckBoxList>
            <table>
                <tr>
                    <td>
                        <asp:RadioButtonList ID="rblColumn1" runat="server" RepeatDirection="Vertical" disabled="disabled">
                            <asp:ListItem id="liColumn1p" runat="server" />
                            <asp:ListItem id="liColumn1n" runat="server" />
                        </asp:RadioButtonList>
                    </td>
                    <td>
                        <asp:RadioButtonList ID="rblColumn2" runat="server" RepeatDirection="Vertical" disabled="disabled">
                            <asp:ListItem id="liColumn2p" runat="server" />
                            <asp:ListItem id="liColumn2n" runat="server" />
                        </asp:RadioButtonList>
                    </td>
                </tr>
            </table>
        </asp:Panel>

    The rendered source:

        <div id="TestPanel">
            <table id="chkColumn" border="0">
                <tr>
                    <td><span id="Column1" alt="1" class="HeadColumn"><input id="chkColumn_0" type="checkbox" name="chkColumn$0" /><label for="chkColumn_0">Column 1</label></span></td>
                    <td><span id="Column2" alt="2" class="HeadColumn"><input id="chkColumn_1" type="checkbox" name="chkColumn$1" /><label for="chkColumn_1">Column 2</label></span></td>
                </tr>
            </table>
            <table>
                <tr>
                    <td>
                        <table id="rblColumn1" class="myRadioButtonList" disabled="disabled" border="0">
                            <tr>
                                <td><span id="liColumn1p"><input id="rblColumn1_0" type="radio" name="rblColumn1" value="" /></span></td>
                            </tr>
                            <tr>
                                <td><span id="liColumn1n"><input id="rblColumn1_1" type="radio" name="rblColumn1" value="" /></span></td>
                            </tr>
                        </table>
                    </td>
                    <td>
                        <table id="rblColumn2" class="myRadioButtonList" disabled="disabled" border="0">
                            <tr>
                                <td><span id="liColumn2p"><input id="rblColumn2_0" type="radio" name="rblColumn2" value="" /></span></td>
                            </tr>
                            <tr>
                                <td><span id="liColumn2n"><input id="rblColumn2_1" type="radio" name="rblColumn2" value="" /></span></td>
                            </tr>
                        </table>
                    </td>
                </tr>
            </table>
        </div>
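
    Two details in the script are worth a look: the value in an attribute-contains selector should be quoted, and $("input[name*=...] input") searches for inputs nested inside inputs, which can never match. Since ASP.NET renders each RadioButtonList as a <table> whose id is the control ID (see the rendered source above), selecting by that id is the simplest route. A sketch of the handler with those changes:

        function checkBoxClicked() {
            var isChecked = $(this).is(":checked");
            var columnName = "rblColumn" + $(this).parent().attr("alt");
            // all radio inputs inside the rendered <table id="rblColumnN">
            var $radios = $("#" + columnName + " input:radio");
            if (isChecked) {
                $radios.removeAttr("disabled");
            } else {
                $radios.attr("disabled", "disabled").removeAttr("checked");
            }
        }

    The quoted name form $("input[name*='" + columnName + "']") should also work if selecting by name is preferred.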

  • Losing jQuery functionality after postback

    - by David Lozzi
    I have seen a TON of people reporting this issue online, but no actual solutions. I'm not using AJAX or UpdatePanels, just a dropdown that posts back on selected index change. My HTML is:

        <div id="myList">
            <table id="ctl00_PlaceHolderMain_dlFields" cellspacing="0" border="0" style="border-collapse:collapse;">
                <tr><td>
                    <tr>
                        <td class="ms-formlabel" style="width: 175px; padding-left: 10px">
                            <span id="ctl00_PlaceHolderMain_dlFields_ctl00_lblDestinationField">Body</span>
                        </td>
                        <td class="ms-formbody" style="width: 485px">
                            <input name="ctl00$PlaceHolderMain$dlFields$ctl00$txtSource" type="text" id="ctl00_PlaceHolderMain_dlFields_ctl00_txtSource" class="ms-input" style="width:230px" />
                            <select name="ctl00$PlaceHolderMain$dlFields$ctl00$ddlSourceFields" id="ctl00_PlaceHolderMain_dlFields_ctl00_ddlSourceFields" class="ms-input">
                                <option value="Some Field Name 1">Some Field Name 1</option>
                                <option value="Some Field Name 2">Some Field Name 2</option>
                                <option value="Some Field Name 3">Some Field Name 3</option>
                                <option value="Some Field Name 4">Some Field Name 4</option>
                            </select>
                            <a href="#" id="appendSelect">append</a>
                        </td>
                    </tr>
                </td></tr>
                <tr><td>
                    <tr>
                        <td class="ms-formlabel" style="width: 175px; padding-left: 10px">
                            <span id="ctl00_PlaceHolderMain_dlFields_ctl01_lblDestinationField">Expires</span>
                        </td>
                        <td class="ms-formbody" style="width: 485px">
                            <input name="ctl00$PlaceHolderMain$dlFields$ctl01$txtSource" type="text" id="ctl00_PlaceHolderMain_dlFields_ctl01_txtSource" class="ms-input" style="width:230px" />
                            <select name="ctl00$PlaceHolderMain$dlFields$ctl01$ddlSourceFields" id="ctl00_PlaceHolderMain_dlFields_ctl01_ddlSourceFields" class="ms-input">
                                <option value="Some Field Name 1">Some Field Name 1</option>
                                <option value="Some Field Name 2">Some Field Name 2</option>
                                <option value="Some Field Name 3">Some Field Name 3</option>
                                <option value="Some Field Name 4">Some Field Name 4</option>
                            </select>
                            <a href="#" id="appendSelect">append</a>
                        </td>
                    </tr>
                </td></tr>
                <tr><td>
                    <tr>
                        <td class="ms-formlabel" style="width: 175px; padding-left: 10px">
                            <span id="ctl00_PlaceHolderMain_dlFields_ctl02_lblDestinationField">Title</span>
                        </td>
                        <td class="ms-formbody" style="width: 485px">
                            <input name="ctl00$PlaceHolderMain$dlFields$ctl02$txtSource" type="text" id="ctl00_PlaceHolderMain_dlFields_ctl02_txtSource" class="ms-input" style="width:230px" />
                            <select name="ctl00$PlaceHolderMain$dlFields$ctl02$ddlSourceFields" id="ctl00_PlaceHolderMain_dlFields_ctl02_ddlSourceFields" class="ms-input">
                                <option value="Some Field Name 1">Some Field Name 1</option>
                                <option value="Some Field Name 2">Some Field Name 2</option>
                                <option value="Some Field Name 3">Some Field Name 3</option>
                                <option value="Some Field Name 4">Some Field Name 4</option>
                            </select>
                            <a href="#" id="appendSelect">append</a>
                        </td>
                    </tr>
                </td></tr>
            </table>
        </div>

    The above div tag is static, and the table is generated from a DataList object. On postback the DataList reloads using a new dataset, for example:

        <div id="myList">
            <table id="ctl00_PlaceHolderMain_dlFields" cellspacing="0" border="0" style="border-collapse:collapse;">
                <tr><td>
                    <tr>
                        <td class="ms-formlabel" style="width: 175px; padding-left: 10px">
                            <span id="ctl00_PlaceHolderMain_dlFields_ctl00_lblDestinationField">Notes</span>
                        </td>
                        <td class="ms-formbody" style="width: 485px">
                            <input name="ctl00$PlaceHolderMain$dlFields$ctl00$txtSource" type="text" id="ctl00_PlaceHolderMain_dlFields_ctl00_txtSource" class="ms-input" style="width:230px" />
                            <select name="ctl00$PlaceHolderMain$dlFields$ctl00$ddlSourceFields" id="ctl00_PlaceHolderMain_dlFields_ctl00_ddlSourceFields" class="ms-input">
                                <option value="Some Field Name 1">Some Field Name 1</option>
                                <option value="Some Field Name 2">Some Field Name 2</option>
                                <option value="Some Field Name 3">Some Field Name 3</option>
                                <option value="Some Field Name 4">Some Field Name 4</option>
                            </select>
                            <a href="#" id="appendSelect">append</a>
                        </td>
                    </tr>
                </td></tr>
                <tr><td>
                    <tr>
                        <td class="ms-formlabel" style="width: 175px; padding-left: 10px">
                            <span id="ctl00_PlaceHolderMain_dlFields_ctl01_lblDestinationField">URL</span>
                        </td>
                        <td class="ms-formbody" style="width: 485px">
                            <input name="ctl00$PlaceHolderMain$dlFields$ctl01$txtSource" type="text" id="ctl00_PlaceHolderMain_dlFields_ctl01_txtSource" class="ms-input" style="width:230px" />
                            <select name="ctl00$PlaceHolderMain$dlFields$ctl01$ddlSourceFields" id="ctl00_PlaceHolderMain_dlFields_ctl01_ddlSourceFields" class="ms-input">
                                <option value="Some Field Name 1">Some Field Name 1</option>
                                <option value="Some Field Name 2">Some Field Name 2</option>
                                <option value="Some Field Name 3">Some Field Name 3</option>
                                <option value="Some Field Name 4">Some Field Name 4</option>
                            </select>
                            <a href="#" id="appendSelect">append</a>
                        </td>
                    </tr>
                </td></tr>
            </table>
        </div>

    After the postback, when the DataList has reloaded, my jQuery doesn't work anymore. No errors, nothing. I don't see any actual changes in the HTML objects that should cause this. How do I fix this? Any workarounds or band-aids I can apply? My jQuery is below:

        <script type='text/javascript'>
            $(document).ready(function () {
                $('#myList a').live("click", function () {
                    var $selectValue = $(this).siblings('select').val();
                    var $thatInput = $(this).siblings('input');
                    var val = $thatInput.val() + ' |[' + $selectValue + ']|';
                    $thatInput.val(jQuery.trim(val));
                })
            });
        </script>

    Thanks!!
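
    A couple of things worth checking here. First, the rendered markup reuses id="appendSelect" on every row; duplicate ids are invalid HTML and can make element lookups behave unpredictably, so switching the anchors to a class is a cheap fix. Second, binding through delegation on a node that survives the postback is the most robust pattern; .live() delegates from the document root and was deprecated and removed in later jQuery versions. A sketch of the equivalent delegated handler (jQuery 1.7+; on older versions, $("#myList").delegate("a", "click", handler) is the analogue):

        <script type="text/javascript">
            $(document).ready(function () {
                // delegated: works for anchors present now or re-rendered after postback
                $(document).on("click", "#myList a", function (e) {
                    e.preventDefault();
                    var $selectValue = $(this).siblings("select").val();
                    var $thatInput = $(this).siblings("input");
                    var val = $thatInput.val() + " |[" + $selectValue + "]|";
                    $thatInput.val(jQuery.trim(val));
                });
            });
        </script>

    If the postback turns out to be a partial one (SharePoint-style pages often wrap controls in script managers without obvious UpdatePanels), re-running the binding from the page request manager's endRequest event is the other common fix.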

  • Amazon S3 Change file download name

    - by Daveo
    I have files stored on S3 with a GUID as the key name, and I am using a pre-signed URL to download, as per the S3 REST API. I store the original file name in my own database. When a user clicks to download a file from my web application I want to return their original file name, but currently all they get is a GUID. How can I achieve this? My web app is on Salesforce, so I do not have much control: doing response.redirects, or downloading the file to the web server and then renaming it, runs into governor limitations. Is there some HTML redirect, meta refresh or JavaScript I can use? Is there some way to change the download file name for S3? (The only thing I can think of is copying the object to a new name, downloading it, then deleting it.) I want to avoid creating a bucket per user, as we will have a lot of users and there is still no guarantee each file within each bucket will have a unique name. Any other solutions?
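
    S3 can override the Content-Disposition header per request: a GET on a pre-signed URL accepts a response-content-disposition query parameter, which makes the browser save the object under whatever file name you pass, regardless of the GUID key. The parameter has to be included when the URL is signed. A sketch of the resulting URL, shown split across lines for readability (bucket, key and signature values are placeholders):

        https://mybucket.s3.amazonaws.com/9f1c0b2a-....-a7b2
            ?response-content-disposition=attachment%3B%20filename%3D%22report-2010.pdf%22
            &AWSAccessKeyId=AKIA...
            &Expires=1300000000
            &Signature=...

    So the original file name from the database goes into the filename=... part at signing time, and no copy/rename of the object is needed.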

  • Liquid templates - accessing members by name

    - by egarcia
    I'm using Jekyll to create a new blog. It uses Liquid underneath. Jekyll defines certain "variables": site, content, page, post and paginator. These "variables" have several "members". For instance, post.date will return the date of a post, while post.url will return its url. My question is: can I access a variable's member using another variable as the member name? See the following example:

        {% if my_condition %}
            {% assign name = 'date' %}
        {% else %}
            {% assign name = 'url' %}
        {% endif %}

    I have a variable called name which is either 'date' or 'url'. How can I make the liquid equivalent of post[name] in ruby? The only way I've found is using a for loop to iterate over all the pairs (key-value) of post. Beware! It is quite horrible:

        {% for property in post %}
            {% if property[0] == name %}
                {{ property[1] }}
            {% endif %}
        {% endfor %}

    Argh! I hope there is a better way. Thanks.
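
    Liquid's bracket notation accepts a variable, so the Ruby-style post[name] should work directly; a sketch under that assumption, worth testing against the Liquid version your Jekyll ships with:

        {% if my_condition %}
            {% assign name = 'date' %}
        {% else %}
            {% assign name = 'url' %}
        {% endif %}
        {{ post[name] }}

    Whether this resolves depends on the object backing post exposing its members as hash-like keys, which Jekyll's drops generally do.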

  • Specify Windows Service Name on install with Setup Project

    - by sympatric greg
    Objective: in support of a Windows Service that may have multiple instances on a single machine, use a Setup Project to create an MSI capable of:

    1. Receiving user input for the Service Name
    2. Installing the service
    3. Serializing the Service Name from step 1 (so that the proper name can be used in logging and uninstall)

    My initial hope was to set the Service Name in App.config and then retrieve it during uninstall upon instantiation of the ServiceInstaller. This seems to have been naive, because it is not accessible during the install. If MyInstaller extends Installer, it can call base.Install(); however, my attempts to write to App.config (within MyInstaller.Install() and after base.Install()) are ineffective. So while the service can be installed with a custom Service Name, that name is not serialized, and the installer is most displeased upon uninstall. How should this be done?
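
    One approach that avoids App.config entirely: pass the user's input into the custom action via CustomActionData, set ServiceInstaller.ServiceName from Context.Parameters before calling base.Install(), and persist the name in the installer's state dictionary, which the .NET installer infrastructure serializes to the .InstallState file and hands back at uninstall time. A sketch, assuming the Setup Project passes /servicename=[SERVICENAME] in the Install custom action's CustomActionData (SERVICENAME coming from a textbox dialog) and that serviceInstaller is the designer-created ServiceInstaller:

        using System.Collections;
        using System.ComponentModel;
        using System.Configuration.Install;

        [RunInstaller(true)]
        public partial class MyInstaller : Installer
        {
            public override void Install(IDictionary stateSaver)
            {
                // value typed by the user in the MSI dialog
                string name = Context.Parameters["servicename"];
                serviceInstaller.ServiceName = name;   // must happen before base.Install
                stateSaver["servicename"] = name;      // serialized into the .InstallState file
                base.Install(stateSaver);
            }

            public override void Uninstall(IDictionary savedState)
            {
                // recover the name chosen at install time
                serviceInstaller.ServiceName = (string)savedState["servicename"];
                base.Uninstall(savedState);
            }
        }

    For logging inside the running service, a common companion trick is to append the chosen name to the service's ImagePath arguments at install time so the process receives it in Main(string[] args), since a per-instance App.config cannot vary.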

  • How to Set ActiveX Control name on the install window of Internet Explorer

    - by Gohan
    I created an ActiveX control using ATL and have already packaged it in a signed .cab. I want to use it on a web page, but in the install prompt the name shown is MyActiveX.cab, with no link. The MyActiveX.cab name can be changed by modifying the codebase attribute of the HTML page's object tag, but the name is still in the form "XXX.cab", with no hyperlink. I found an ActiveX control from a Chinese website that has its own name and link, and its object tags are nothing different:

        <object ID="CMBPB_OCX"
                CODEBASE="http://szdl.cmbchina.com/download/PB/pb50.cab#version=5,3,1,1"
                classid="clsid:F2EB8999-766E-4BF6-AAAD-188D398C0D0B"
                width="0" height="0">
        </object>

    (The picture was taken from MSDN pages; it has a link.) I really want to know how to set the ActiveX control name. I tried to get help from "How to Set ActiveX Control Name", but am still stuck.
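
    The name and hyperlink in IE's Authenticode install prompt come from the digital signature on the .cab, not from the object tag: the signing tool's description and description-URL fields are what IE displays. A sketch with signtool from the Windows SDK (certificate file, password, display name and URL are placeholders):

        signtool sign /f mycert.pfx /p password ^
            /d "My Control Display Name" ^
            /du "http://www.example.com/about-my-control" ^
            MyActiveX.cab

    /d sets the display name and /du the hyperlink; the older signcode.exe equivalents are -n and -i.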

  • Hibernate not using schema and catalog name in id generation with strategy increment

    - by Ben
    Hi, I am using the Hibernate increment strategy to create the IDs on my entities:

        @GenericGenerator(name = "increment-strategy", strategy = "increment")
        @Id
        @GeneratedValue(generator = "increment-strategy")
        @Column(name = "HDR_ID", unique = true, nullable = false)
        public int getHdrId() {
            return this.hdrId;
        }

    The entity has the following table annotation:

        @Table(name = "PORDER.PUB.PO_HEADER", schema = "UVOSi", catalog = "VIRT_UVOS")

    Please note I have two datasources. When I try to insert an entity, Hibernate creates the following SQL statement:

        select max(hdr_id) from PORDER.PUB.PO_HEADER

    which causes the following error:

        Group specified is ambiguous, resubmit the query by fully qualifying group name.

    When I create a query by hand with entityManager.createQuery(), Hibernate uses the fully qualified name

        select XXX from VIRT_UVOS.UVOSi.PORDER.PUB.PO_HEADER

    and that works fine. So how do I get Hibernate to use the fully qualified name in the ID autogeneration? Btw, I am using Hibernate 3.2 and Seam 2.2 running on JBoss 4.2.3. Regards, Immo
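
    If the built-in increment generator won't qualify the table name, one workaround is a small custom generator that issues the fully qualified select max(...) itself and is plugged in through @GenericGenerator. A sketch against the Hibernate 3.2 API, with the qualified table name hard-coded for illustration:

        import java.io.Serializable;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import org.hibernate.HibernateException;
        import org.hibernate.engine.SessionImplementor;
        import org.hibernate.id.IdentifierGenerator;

        public class QualifiedIncrementGenerator implements IdentifierGenerator {

            public Serializable generate(SessionImplementor session, Object object)
                    throws HibernateException {
                try {
                    PreparedStatement ps = session.connection().prepareStatement(
                        "select max(HDR_ID) from VIRT_UVOS.UVOSi.PORDER.PUB.PO_HEADER");
                    try {
                        ResultSet rs = ps.executeQuery();
                        rs.next();
                        return Integer.valueOf(rs.getInt(1) + 1);  // 1 when the table is empty
                    } finally {
                        ps.close();
                    }
                } catch (SQLException e) {
                    throw new HibernateException("Could not fetch next HDR_ID", e);
                }
            }
        }

    Registered with @GenericGenerator(name = "increment-strategy", strategy = "com.example.QualifiedIncrementGenerator"), this keeps the rest of the mapping unchanged; like the stock increment generator, it is only safe when a single process inserts into the table.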

  • How to take first file name from a folder in C#

    - by riad
    Hi all, I need to get the first file name from a folder. How do I do it in C#? My code below returns all the file names; please guide:

        DirectoryInfo di = new DirectoryInfo(imgfolderPath);
        foreach (FileInfo fi in di.GetFiles())
        {
            if (fi.Name != "." && fi.Name != ".." && fi.Name != "Thumbs.db")
            {
                string fileName = fi.Name;
                string fullFileName = fileName.Substring(0, fileName.Length - 4);
                MessageBox.Show(fullFileName);
            }
        }

    I just need the first file name. Please guide. Thanks, Riad
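
    DirectoryInfo.GetFiles never returns "." or "..", so only the Thumbs.db check is needed, and the loop can simply stop after the first match. A sketch, replacing the manual Substring with Path.GetFileNameWithoutExtension, which is the safer way to drop the extension:

        using System.IO;

        DirectoryInfo di = new DirectoryInfo(imgfolderPath);
        foreach (FileInfo fi in di.GetFiles())
        {
            if (fi.Name != "Thumbs.db")
            {
                string fullFileName = Path.GetFileNameWithoutExtension(fi.Name);
                MessageBox.Show(fullFileName);
                break;   // first file only
            }
        }

    Note that "first" here means first in the order GetFiles returns, which is not guaranteed to be alphabetical; sort the array first if a specific order matters.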

  • Name of dropdownlist is renamed automatically?

    - by Akawan
    Hello, I have a problem with a DropDownList name in ASP.NET MVC. In my EditorTemplate I have:

        <%: Html.DropDownList("PoolGeometry", Model.selectVm.PoolGeometry, new { id = "poolgeometry" }) %>

    In the generated HTML I get:

        <select name="Pool.PoolGeometry" id="poolgeometry">

    Normally, "PoolGeometry" is a field in the db. If my dropdownlist has the same name, the selected value is the value of the field. I don't understand this automatic rename!

    EDIT: the name depends on how the EditorTemplate is called. If it is called like this:

        <%: Html.EditorFor(model => model.Pool, "SwimmingPool", "") %>

    the name of the dropdownlist is "PoolGeometry" and the selected values are OK. But if it is called like this:

        <%: Html.EditorFor(model => model.Pool, "SwimmingPool") %>

    the name of the dropdownlist is "Pool.PoolGeometry".
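
    This is the templating prefix at work rather than a rename: Html.EditorFor(model => model.Pool, "SwimmingPool") sets the template's HtmlFieldPrefix to "Pool", so every helper inside the template emits names like Pool.PoolGeometry, while the empty third argument in the first call overrides that prefix. The prefixed name is what lets the default model binder fill a Pool property on postback, so the usual pattern is to work with it rather than against it; a sketch, with hypothetical view model type names:

        <%-- inside the editor template, strongly typed to the pool view model --%>
        <%: Html.DropDownListFor(m => m.PoolGeometry, Model.selectVm.PoolGeometry,
                new { id = "poolgeometry" }) %>

        // controller action: the binder fills model.Pool.PoolGeometry from "Pool.PoolGeometry"
        [HttpPost]
        public ActionResult Save(SwimmingPoolPage model) { ... }

    If the unprefixed name really is required, passing "" as EditorFor's htmlFieldName argument (as in the first call) clears the prefix.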

  • WCF server component getting outdated user name

    - by JoelFan
    I am overriding System.IdentityModel.Policy.IAuthorizationPolicy.Evaluate as follows:

        public bool Evaluate(EvaluationContext evaluationContext, ref object state)
        {
            var ids = (IList<IIdentity>)evaluationContext.Properties["Identities"];
            var userName = ids[0].Name;
            // look up "userName" in a database to check for app. permissions
        }

    Recently one of the users had her user name changed in Active Directory. She is able to login to her Windows box fine with her new user name, but when she tries to run the client side of our application, the server gets her old user name in the "userName" variable in the code above, which messes up our authentication (since her old user name is no longer in our database). Another piece of info: this only happens when she connects to the server code on the production server. We have the same server code running on a QA server, and it does not have this issue (the QA server code gets her correct (new) user name). Any ideas what could be going on?

  • Get name and value from the input tag

    - by DroidIn.net
    Before you say "oh no, not again", here I'm stating my case: I'm parsing part of an HTML output, and the only thing I'm interested in is the name and value attributes of each <input/> tag. The HTML is actually an HTML fragment and may not be well-formed. I don't have a DOM or HTML parser, and I don't try to parse nested elements anyway. The problem is that I don't know the order or number of attributes, so it could be <input name="foo" value="boo"/> or <input type="hidden" name=foo> or <input id=blah value='boo' src="image.png" name="foo" type="img"/>. Is there a single regular expression that would get me the values of the name and value attributes in a predictable order? I wouldn't have asked the question if I could assume that the name attribute always precedes value, but unfortunately this is not the case.
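
    A practical way around the ordering problem is two passes rather than one expression: first capture each <input ...> tag, then pull the individual attributes out of the captured tag text, so their relative order stops mattering. A sketch of the two patterns (PCRE-style syntax; the alternation covers the double-quoted, single-quoted and bare values from the examples above):

        # pass 1: each input tag
        <input\b[^>]*>

        # pass 2: run once for name=, once for value=, against the tag text
        name\s*=\s*(?:"([^"]*)"|'([^']*)'|([^\s>/]+))
        value\s*=\s*(?:"([^"]*)"|'([^']*)'|([^\s>/]+))

    Whichever of the three capture groups is non-empty holds the attribute value. A single-pass expression is possible with lookaheads, but it is considerably harder to read and no more robust on malformed fragments.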

  • Find record whose field 'name' not contained within any other record

    - by charlie
    I have a model Foo with a String bar and a String name. Some records' bar contains the name of other records. This is intentional. I want to find the "root Foo" records - that is, the ones whose name does not appear in the bar of any other Foo record. Example:

        id: 1, name: 'foo1', bar: 'something something'
        id: 2, name: 'foo2', bar: 'foo1 something'
        id: 3, name: 'foo3', bar: 'foo1, foo4'

    My method root_foos would return foo2 and foo3, since their names do not appear in any bar string.

    edit: I don't want to use a relation or foreign key here - just this method.
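
    A straightforward version loads the records once and filters in Ruby; a sketch assuming ActiveRecord and the Foo model described above:

        class Foo < ActiveRecord::Base
          # Foos whose name appears in no other record's bar
          def self.root_foos
            foos = all.to_a
            foos.select do |foo|
              foos.none? { |other| other.id != foo.id && other.bar.include?(foo.name) }
            end
          end
        end

        Foo.root_foos.map(&:name)   # => ["foo2", "foo3"] for the example data

    This is O(n^2) string scans, fine for modest tables; for large ones, the same test can be pushed into SQL with a NOT EXISTS ... LIKE subquery at the cost of portability.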

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature.

    There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (end user's machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I have done to upload large files in chunks at high speed and save them as blocks in Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    Then, in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
        [HttpPost]
        public ActionResult About(HttpPostedFileBase file)
        {
            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(file.FileName);
            var blockDataList = new Dictionary<string, byte[]>();
            using (var stream = file.InputStream)
            {
                var blockSizeInKB = 1024;
                var offset = 0;
                var index = 0;
                while (offset < stream.Length)
                {
                    var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                    var blockData = new byte[readLength];
                    offset += stream.Read(blockData, 0, readLength);
                    blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);

                    index++;
                }
            }

            Parallel.ForEach(blockDataList, (bi) =>
            {
                blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
            });
            blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

            return RedirectToAction("About");
        }

    This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6GB HD movie, after uploading for a few minutes the page is shown as below and the upload is terminated. In ASP.NET there is a limitation on request length; the maximum request length is defined in the web.config file, and it is a number less than about 4GB. So if we want to upload a really big file, we cannot simply implement it in this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period; from my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits:

    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the size of the chunks is controlled by us, which means we should be able to make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until we got HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.

    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunk upload.
    We will use the File interface to represent the file the user selected in the browser and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10MB file with 512KB chunks, we can read it in 512KB blobs by using File.slice in a loop.

    Assume we have a web page as below: the user can select a file, an input box specifies the block size in KB, and a button starts the upload.

        <div>
            <input type="file" id="upload_files" name="files[]" /><br />
            Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
            <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
        </div>

    Then we can have the JavaScript function to upload the file in chunks when the user clicks the button.

        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                });
            });
        </script>

    Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined", the condition result will be "false", which means your browser doesn't support these premium features and it's time for you to get your browser updated. FormData is another new feature we are going to use later; it can generate a temporary form for us. We will use this interface to create a form with the chunk and associated metadata when invoking the service through ajax.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man plz update to a modern browser before you try this cool stuff out.");
                return;
            }
        });

    Each browser supports these interfaces in its own implementation; currently Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected, one by one, since in HTML5 the user can select multiple files in one file input box.

        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
        }

    Next, we calculate the start and end byte index for each chunk based on the size the user specified in the browser. We put them into an array with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }
            }
        });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; on the server side, when a chunk is received, it will be uploaded as a block into Blob Storage and finally committed with the index list through BlockBlobClient.PutBlockList. But since all these invocations are ajax calls, i.e. asynchronous calls, we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library too much in this post. We will put all the procedures we want to execute into a function array, and pass it into the proper function defined in async.js to let it help us control the execution sequence, in series or in parallel. Hence we define an array and push the chunk upload operations into this array.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...

                // define the function array and push all chunk upload operations into this array
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                    });
                });
            }
        });

    As you can see, I use the File.slice method to read each chunk based on the start and end byte index we calculated previously, and construct a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into this array
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load the blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });
            }
        });

    Then we invoke these functions one by one by using async.js. Once all functions have been executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into this array
                ... ...
                // invoke the functions one by one
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    That's all on the client side. The outline of our logic is:

    - Calculate the start and end byte index of each chunk based on the block size.
    - Define the functions that read the chunk from the file and upload the content to the backend service through ajax.
    - Execute the functions defined in the previous step with "async.js".
    - Finally, commit the chunks by invoking the backend service, which puts them into Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above, we finished the client-side JavaScript code that uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since the client side uploads chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller that only accepts HTTP POST requests.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Then I retrieve the file name, index and chunk content from the Request.Form object, which is passed from the client side.
    And then I use the Windows Azure SDK to create a blob container (in this case the container named "test") and create a blob reference with the blob name (same as the file name). The chunk is uploaded as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with it, so that finally we can put all blocks together as one blob by specifying their block ID list.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieve the blob name from the Request.Form. I also retrieve the chunk ID list, which is the block ID list, from the Request.Form in string format, split it into a list, then invoke the BlockBlob.PutBlockList method. After that our blob is shown in the container and ready to be downloaded.

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Now we have finished all the code we need, and the whole upload process works as described above. Below is the full client-side JavaScript code.
        <script type="text/javascript" src="~/Scripts/async.js"></script>
        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                    // assert the browser supports html5
                    if (window.File && window.Blob && window.FormData) {
                        alert("Your browser is awesome, let's rock!");
                    }
                    else {
                        alert("Oh man plz update to a modern browser before you try this cool stuff out.");
                        return;
                    }

                    // start to upload each file in chunks
                    var files = $("#upload_files")[0].files;
                    for (var i = 0; i < files.length; i++) {
                        var file = files[i];
                        var fileSize = file.size;
                        var fileName = file.name;

                        // calculate the start and end byte index for each block (chunk)
                        // with the index, file name and index list for future use
                        var blockSizeInKB = $("#block_size").val();
                        var blockSize = blockSizeInKB * 1024;
                        var blocks = [];
                        var offset = 0;
                        var index = 0;
                        var list = "";
                        while (offset < fileSize) {
                            var start = offset;
                            var end = Math.min(offset + blockSize, fileSize);

                            blocks.push({
                                name: fileName,
                                index: index,
                                start: start,
                                end: end
                            });
                            list += index + ",";

                            offset = end;
                            index++;
                        }

                        // define the function array and push all chunk upload operations into this array
                        var putBlocks = [];
                        blocks.forEach(function (block) {
                            putBlocks.push(function (callback) {
                                // load the blob based on the start and end index of each chunk
                                var blob = file.slice(block.start, block.end);
                                // put the file name, index and blob into a temporary form
                                var fd = new FormData();
                                fd.append("name", block.name);
                                fd.append("index", block.index);
                                fd.append("file", blob);
                                // post the form to the backend service (asp.net mvc controller action)
                                $.ajax({
                                    url: "/Home/UploadInFormData",
                                    data: fd,
                                    processData: false,
                                    contentType: "multipart/form-data",
                                    type: "POST",
                                    success: function (result) {
                                        if (!result.success) {
                                            alert(result.error);
                                        }
                                        callback(null, block.index);
                                    }
                                });
                            });
                        });

                        // invoke the functions one by one
                        // then invoke the commit ajax call to put the blocks into a blob in azure storage
                        async.series(putBlocks, function (error, result) {
                            var data = {
                                name: fileName,
                                list: list
                            };
                            $.post("/Home/Commit", data, function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                else {
                                    alert("done!");
                                }
                            });
                        });
                    }
                });
            });
        </script>

    And below is the full ASP.NET MVC controller code.
        public class HomeController : Controller
        {
            private CloudStorageAccount _account;
            private CloudBlobClient _client;

            public HomeController()
                : base()
            {
                _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
                _client = _account.CreateCloudBlobClient();
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

                return View();
            }

            [HttpPost]
            public JsonResult UploadInFormData()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var index = int.Parse(Request.Form["index"]);
                    var file = Request.Files[0];
                    var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlock(id, file.InputStream, null);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }

            [HttpPost]
            public JsonResult Commit()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var list = Request.Form["list"];
                    var ids = list
                        .Split(',')
                        .Where(id => !string.IsNullOrWhiteSpace(id))
                        .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                        .ToArray();

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlockList(ids);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }
        }

    If we select a file in the browser, we can see our application upload chunks of the size we specified to the server through ajax calls in the background, and then commit all chunks into one blob. Then we can find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solves the ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout, but it might introduce a performance problem, since we uploaded the chunks in sequence. In order to improve upload performance we can modify our client-side code a bit to make the upload operations run in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code that invokes the service to upload chunks utilized "async.series", which means all functions are executed in sequence. Now we change this code to "async.parallel", which invokes all functions in parallel.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into this array
                ... ...
                // invoke the functions in parallel
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.parallel(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    In this way all chunks will be uploaded to the server side at the same time to maximize bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into this array
                ... ...
                // invoke the functions, at most 4 in parallel
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.parallelLimit(putBlocks, 4, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    Summary

    In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:

    - File.slice: read part of the file by specifying the start and end byte index.
    - Blob: a file-like interface which contains part of the file content.
    - FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunk uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might crash the browser and server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.

    Regarding the chunk size and the parallel limit value, there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server-side (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores. I was using the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (single-core 1.7GHz CPU, 1.75GB memory, 100Mbps bandwidth).
    The test cases were:

    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binaries.
    - Target file: 100MB.
    - Each case was tested 3 times.

    Below is the test result chart. Some thoughts, but not guidance or best practice:

    - Parallel gets better performance than sequential.
    - There is no significant performance improvement between parallel with 4 threads and 8 threads.
    - Transferring binaries provides better performance than base64.
    - In all cases, a chunk size of 1MB - 2MB gets the better performance.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • export web page data to excel using javascript

    - by Sreevani sri
    I have created a web page using HTML. When I click the submit button, the data should be exported to Excel; using JavaScript I want to export that data to Excel. My HTML code is:

        1. Please give your Name: <input type="text" name="Name" /><br />
        2. Area where you reside: <input type="text" name="Res" /><br />
        3. Specify your age group<br />
        (a) 15-25 <input type="text" name="age" />
        (b) 26-35 <input type="text" name="age" />
        (c) 36-45 <input type="text" name="age" />
        (d) Above 46 <input type="text" name="age" /><br />
        4. Specify your occupation<br />
        (a) Student <input type="checkbox" name="occ" value="student" />
        (b) Home maker <input type="checkbox" name="occ" value="home" />
        (c) Employee <input type="checkbox" name="occ" value="emp" />
        (d) Businesswoman <input type="checkbox" name="occ" value="buss" />
        (e) Retired <input type="checkbox" name="occ" value="retired" />
        (f) others (please specify) <input type="text" name="others" /><br />
        5. Specify the nature of your family<br />
        (a) Joint family <input type="checkbox" name="family" value="jfamily" />
        (b) Nuclear family <input type="checkbox" name="family" value="nfamily" /><br />
        6. Please give the number of female members in your family and their average age approximately<br />
        Members Age 1 2 3 4 5<br />
        8. Please give your highest level of education
        (a) SSC or below <input type="checkbox" name="edu" value="ssc" />
        (b) Intermediate <input type="checkbox" name="edu" value="int" />
        (c) Diploma <input type="checkbox" name="edu" value="dip" />
        (d) UG degree <input type="checkbox" name="edu" value="deg" />
        (e) PG <input type="checkbox" name="edu" value="pg" />
        (g) Doctorial degree <input type="checkbox" name="edu" value="doc" /><br />
        9. Specify your monthly income approximately in RS <input type="text" name="income" /><br />
        10. Specify your time spent in making a purchase decision at the outlet<br />
        (a) 0-15 min <input type="checkbox" name="dis" value="0-15 min" />
        (b) 16-30 min <input type="checkbox" name="dis" value="16-30 min" />
        (c) 30-45 min <input type="checkbox" name="dis" value="30-45 min" />
        (d) 46-60 min <input type="checkbox" name="dis" value="46-60 min" /><br />
        <input type="submit" onclick="exportToExcel()" value="Submit" />
        </div>
        </form>
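
    The page never defines exportToExcel(), so nothing happens on submit. A minimal client-side sketch of one approach is to collect every named input's value into CSV text, which Excel opens directly, and hand it to the browser as a download; everything below is illustrative, assuming the form fields above:

        <script type="text/javascript">
            function exportToExcel() {
                var rows = ["question,answer"];
                var inputs = document.getElementsByTagName("input");
                for (var i = 0; i < inputs.length; i++) {
                    var el = inputs[i];
                    if (!el.name || el.type === "submit") continue;
                    // for checkboxes, export the value only when ticked
                    if (el.type === "checkbox" && !el.checked) continue;
                    rows.push('"' + el.name + '","' + el.value.replace(/"/g, '""') + '"');
                }
                // data: URL download; works in current browsers, not old IE
                window.location.href = "data:text/csv;charset=utf-8," +
                    encodeURIComponent(rows.join("\r\n"));
            }
        </script>

    Note that the repeated name="age" text boxes will all be exported under the same name; giving each option a distinct name (or using radio buttons) would make the spreadsheet easier to read.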

  • Is this over-abstraction? (And is there a name for it?)

    - by mwhite
    I work on a large Django application that uses CouchDB as a database and couchdbkit for mapping CouchDB documents to objects in Python, similar to Django's default ORM. It has dozens of model classes and a hundred or two CouchDB views.

    The application allows users to register a "domain", which gives them a unique URL containing the domain name and access to a project whose data has no overlap with the data of other domains. Each document that is part of a domain has its domain property set to that domain's name. As far as relationships between the documents go, all domains are effectively mutually exclusive subsets of the data, except for a few edge cases (some users can be members of more than one domain, there are some administrative reports that include all domains, etc.).

    The code is full of explicit references to the domain name, and I'm wondering if it would be worth the added complexity to abstract this out. I'd also like to know if there's a name for the sort of bound-property approach I'm taking here. Basically, I have something like this in mind:

    Before

    in models.py:

        class User(Document):
            domain = StringProperty()

        class Group(Document):
            domain = StringProperty()
            name = StringProperty()
            user_ids = StringListProperty()

            # method that returns a related document set
            def users(self):
                return [User.get(id) for id in self.user_ids]

            # method that queries a couch view optimized for a specific lookup
            @classmethod
            def by_name(cls, domain, name):
                # the view method is provided by couchdbkit and handles
                # wrapping json CouchDB results as Python objects, and
                # can take various parameters modifying behavior
                return cls.view('groups/by_name', key=[domain, name])

            # method that creates a related document
            def get_new_user(self):
                user = User(domain=self.domain)
                user.save()
                self.user_ids.append(user._id)
                return user

    in views.py:

        from models import User, Group

        # there are tons of views like this, (request, domain, ...)
        def create_new_user_in_group(request, domain, group_name):
            group = Group.by_name(domain, group_name)[0]
            user = User(domain=domain)
            user.save()
            group.user_ids.append(user._id)
            group.save()

    in group/by_name/map.js:

        function (doc) {
            if (doc.doc_type == "Group") {
                emit([doc.domain, doc.name], null);
            }
        }

    After

    in models.py:

        class DomainDocument(Document):
            domain = StringProperty()

            @classmethod
            def domain_view(cls, *args, **kwargs):
                kwargs['key'] = [cls.domain.default] + kwargs['key']
                return super(DomainDocument, cls).view(*args, **kwargs)

            @classmethod
            def get(cls, *args, **kwargs):
                # a keyword argument can't follow **kwargs in a signature,
                # so validate_domain is popped out instead
                validate_domain = kwargs.pop('validate_domain', True)
                ret = super(DomainDocument, cls).get(*args, **kwargs)
                if validate_domain and ret.domain != cls.domain.default:
                    raise Exception()
                return ret

            def models(self):
                # a mapping of all models in the application; accessing one
                # returns the equivalent of:
                #
                #     class BoundUser(User):
                #         domain = StringProperty(default=self.domain)
                pass

        class User(DomainDocument):
            pass

        class Group(DomainDocument):
            name = StringProperty()
            user_ids = StringListProperty()

            def users(self):
                return [self.models.User.get(id) for id in self.user_ids]

            @classmethod
            def by_name(cls, name):
                return cls.domain_view('groups/by_name', key=[name])

            def get_new_user(self):
                user = self.models.User()
                user.save()

    in views.py:

        # domain_view is a decorator that sets request.models to the same sort
        # of object that is returned by DomainDocument.models and removes the
        # domain argument from the URL router
        @domain_view
        def create_new_user_in_group(request, group_name):
            group = request.models.Group.by_name(group_name)
            user = request.models.User()
            user.save()
            group.user_ids.append(user._id)
            group.save()

    in group/by_name/map.js (might be better to leave the abstraction leaky here in order to avoid having to deal with a couchapp-style //! include of a wrapper for emit that prepends doc.domain to the key, or some other similar solution; a sketch of such a wrapper follows the pros and cons below):

        function (doc) {
            if (doc.doc_type == "Group") {
                emit([doc.name], null);
            }
        }

    Pros and Cons

    So what are the pros and cons of this?

    Pros:

    - DRYer
    - prevents you from creating related documents but forgetting to set the domain
    - prevents you from accidentally writing a Django view / couch view execution path that leads to a security breach
    - doesn't prevent you from accessing the underlying self.domain and the normal Document.view() method
    - potentially gets rid of the need for a lot of sanity checks verifying that two documents whose domains we expect to be equal actually are

    Cons:

    - adds some complexity
    - hides what's really happening
    - requires that no two model modules have classes with the same name, or you would need to add sub-attributes to self.models for modules. However, requiring project-wide unique class names for models should actually be fine, because they correspond to the doc_type property couchdbkit uses to decide which class to instantiate them as, which should be unique.
    - removes explicit dependency documentation (from group.models import Group)
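    For the leaky-abstraction point above, the couchapp-style include might look roughly like the following. This is a sketch only: it assumes couchapp's textual code-inclusion macro and a hypothetical domainEmit() helper; the names are illustrative, not part of the original question.

        // lib/domain_emit.js -- hypothetical shared helper
        function domainEmit(doc, key, value) {
            // prepend the document's domain so every view key is domain-scoped
            emit([doc.domain].concat(key), value);
        }

        // group/by_name/map.js
        function (doc) {
            // !code lib/domain_emit.js   (couchapp-style textual include)
            if (doc.doc_type == "Group") {
                domainEmit(doc, [doc.name], null);
            }
        }

    The view rows end up keyed by [domain, name] exactly as in the "Before" version, while the map function itself only mentions the name.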

    Read the article

  • What is my miniport's service name?

    - by Ian Boyd
    I am trying to query the physical sector size of my drive using fsutil:

        C:\Windows\system32>fsutil fsinfo ntfsinfo c:
        NTFS Volume Serial Number :       0x78cc11b2cc116c1e
        Version :                         3.1
        Number Sectors :                  0x000000003a382fff
        Total Clusters :                  0x00000000074705ff
        Free Clusters  :                  0x00000000022fc29b
        Total Reserved :                  0x00000000000007d0
        Bytes Per Sector  :               512
        Bytes Per Physical Sector :       <Not Supported>
        Bytes Per Cluster :               4096
        Bytes Per FileRecord Segment    : 1024
        Clusters Per FileRecord Segment : 0
        Mft Valid Data Length :           0x00000000305c0000
        Mft Start Lcn  :                  0x00000000000c0000
        Mft2 Start Lcn :                  0x0000000003a382ff
        Mft Zone Start :                  0x0000000006951940
        Mft Zone End   :                  0x0000000006951c80
        RM Identifier:        19B22CBE-570D-19DE-9C72-CD758F800DDC

    You can see that the Bytes Per Physical Sector value is Not Supported:

        Bytes Per Physical Sector :       <Not Supported>

    In the KB article "Microsoft support policy for 4K sector hard drives in Windows", Microsoft says:

        If fsutil.exe continues to display "Bytes Per Physical Sector : <Not Supported>" after you apply the latest storage driver and the required hotfixes, make sure that the following registry path exists:

        HKLM\CurrentControlSet\Services\<miniport's service name>\Parameters\Device\
        Name: EnableQueryAccessAlignment
        Type: REG_DWORD
        Value: 1: Enable

    The only thing I don't know is what my miniport's service name is. What is my miniport's service name? I know that my SATA drives are in AHCI mode, and AHCI uses the msahci driver service. Is that my miniport service, "MSAHCI"?

    See also:

    - Hitachi - Advanced Format Technology Brief
    - RMPrepUSB - Advanced Format (4K sector) hard disks
    - Microsoft support policy for 4K sector hard drives in Windows
    - OSR Online - Advanced Disk Format support in Storport Virtual Miniport driver
    - Default cluster size for NTFS, FAT, and exFAT
    - Wikipedia - Advanced Format
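    Assuming msahci really is the miniport service here (which is the open question), the KB's registry value could be created from an elevated prompt like this. A sketch only: the service name is an assumption, so substitute whatever miniport actually appears under HKLM\SYSTEM\CurrentControlSet\Services on the machine.

        :: hypothetical: replace "msahci" with the actual miniport service name
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\msahci\Parameters\Device" /v EnableQueryAccessAlignment /t REG_DWORD /d 1 /f

    A reboot would likely be needed before fsutil reports a physical sector value.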

    Read the article

  • Drop outs when accessing share by DFS name.

    - by Stephen Woolhead
    I have a strange problem, aren't they all! I have a DFS root \\domain\files\vms; it has a single target on a different server than the namespace.

    I can copy a test file set from the target directly via \\server\vms$\testfiles and all is well, the files copy fine. I have repeated these tests many times. If I try to copy the files from the DFS root, I get big pauses in the network traffic, about 50 seconds every couple of minutes; all the traffic just stops for the copy. If I start another copy between the same two machines during this pause, it starts copying fine, so I know it's not an issue with the disks on the server.

    Every once in a while the copy will fail: no errors, the progress bar will just zip all the way to 100% and the copy dialog will close. Checking the target folder shows that the copy is incomplete. I've moved the LUN to another server and had the same problem. The servers are all 2008 R2; the clients are Vista x64, Windows 7 x64 and 2008 R2, and all have the same problem.

    Anyone got any ideas? Cheers, Stephen

    More information: I've been running a NetMon trace on the connection when the file copy fails, and what seems to stand out is that when opening a file that the copy completes on, the SMB command looks like this:

        SMB2: C CREATE (0x5), Name=Training\PDC2008\BB34 Live Services Notifications, Awareness, and Communications.wmv@#422082, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 245376
        SMB2: R CREATE (0x5), Context=MxAc, Context=RqLs, Context=DHnQ, Context=QFid, FID=0xFFFFFFFF00000015, Mid = 245376

    But for the last file, when the copy dialog closes, it looks like this:

        SMB2: C CREATE (0x5), Name=gt\files\Media\Training\PDC2008\BB36 FAST Building Search-Driven Portals with Microsoft Office SharePoint Server 2007 and Microsoft Silverlight.wmv@#859374, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 77
        SMB2: R , Mid = 77 - NT Status: System - Error, Code = (58) STATUS_OBJECT_PATH_NOT_FOUND

    The main difference seems to be in the name: one is relative to the open file share, the other has gained the gt\files\Media prefix, which is the name of the DFS target. These failures are always preceded by a logoff and logon of the SMB target. Might have to bump this one to PSS.

    Read the article

  • nameserver spoiling avahi multicast name resolution of .local domain

    - by Doug Coburn
    After trying to ping a machine on my local network, I noticed that I was trying to hit address 66.152.109.24. This is an external public address; resolution should have occurred via avahi mDNS. I ran dig to see how the name resolution worked, and my Qwest/CenturyLink name server was returning results for my .local domain queries! I tried a random name and got the same IP address result.

        $ dig jakdafj.local

        ; <<>> DiG 9.8.1-P1-RedHat-9.8.1-3.P1.fc15 <<>> jakdafj.local
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58410
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;jakdafj.local.                 IN      A

        ;; ANSWER SECTION:
        jakdafj.local.          10      IN      A       66.152.109.24
        jakdafj.local.          10      IN      A       204.232.231.46

        ;; Query time: 104 msec
        ;; SERVER: 205.171.3.25#53(205.171.3.25)
        ;; WHEN: Sat Mar 24 20:40:17 2012
        ;; MSG SIZE  rcvd: 63

    Am I missing something, or is my DNS name server at 205.171.3.25 corrupted?
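    One thing worth keeping in mind when reading that output: dig always queries the configured DNS server directly and never consults avahi, so it will show the upstream answer regardless of how ping resolves names. What governs ping is the hosts line in /etc/nsswitch.conf. A typical mDNS-aware configuration looks like the sketch below; the [NOTFOUND=return] clause stops .local lookups from ever falling through to the ISP's name server, which here appears to be hijacking unresolvable names.

        # /etc/nsswitch.conf (hosts line only) -- sketch of the usual nss-mdns setup:
        # resolve .local via multicast DNS and stop there on a definite "not found"
        hosts: files mdns4_minimal [NOTFOUND=return] dns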

    Read the article
