Search Results

Search found 12950 results on 518 pages for 'field activities'.

Page 128/518 | < Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >

  • SEO Tools to Help You in Your Business

    SEO and other online strategies are being used more and more by businesses today. As the market develops and becomes more educated, it is increasingly important to ensure that you are on top of your game and understand the impact that SEO and other online tactics have on your website. This article discusses some of the things you need to watch out for in your online activities.

    Read the article

  • data validation on wpf passwordbox:type password, re-type password

    - by black sensei
    Hello experts! I've built a WPF Windows application in which there is a registration form. Using IDataErrorInfo I could successfully bind the field to a class property and, with the help of styles, display meaningful information about the error to users. For the submit button I use a MultiDataTrigger with conditions (based on a post here on Stack Overflow). All went well. Now I need to do the same for the PasswordBox, and apparently it's not as straightforward. I found an article on wpftutorial and gave it a try, but for some reason it wasn't working. I've tried another one from functionalfun, and in that case the properties (databind, databound) are not recognized as dependency properties even after I changed their names as suggested somewhere in the comments; plus I don't know whether it will work for a Windows application, since it's designed for the web. To give you an idea, here is some code for the textboxes:

        <Window.Resources>
            <data:General x:Key="recharge" />
            <Style x:Key="validButton" TargetType="{x:Type Button}" BasedOn="{StaticResource {x:Type Button}}">
                <Setter Property="IsEnabled" Value="False"/>
                <Style.Triggers>
                    <MultiDataTrigger>
                        <MultiDataTrigger.Conditions>
                            <Condition Binding="{Binding ElementName=txtRecharge, Path=(Validation.HasError)}" Value="false" />
                        </MultiDataTrigger.Conditions>
                        <Setter Property="IsEnabled" Value="True" />
                    </MultiDataTrigger>
                </Style.Triggers>
            </Style>
            <Style x:Key="txtboxerrors" TargetType="{x:Type TextBox}" BasedOn="{StaticResource {x:Type TextBox}}">
                <Style.Triggers>
                    <Trigger Property="Validation.HasError" Value="true">
                        <Setter Property="ToolTip" Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/>
                        <Setter Property="Validation.ErrorTemplate">
                            <Setter.Value>
                                <ControlTemplate>
                                    <DockPanel LastChildFill="True">
                                        <TextBlock DockPanel.Dock="Bottom" FontSize="8" FontWeight="ExtraBold" Foreground="red" Padding="5 0 0 0"
                                                   Text="{Binding ElementName=showerror, Path=AdornedElement.(Validation.Errors)[0].ErrorContent}"></TextBlock>
                                        <Border BorderBrush="Red" BorderThickness="2">
                                            <AdornedElementPlaceholder Name="showerror" />
                                        </Border>
                                    </DockPanel>
                                </ControlTemplate>
                            </Setter.Value>
                        </Setter>
                    </Trigger>
                </Style.Triggers>
            </Style>
        </Window.Resources>

        <TextBox Margin="12,69,12,70" Name="txtRecharge" Style="{StaticResource txtboxerrors}">
            <TextBox.Text>
                <Binding Path="Field" Source="{StaticResource recharge}" ValidatesOnDataErrors="True" UpdateSourceTrigger="PropertyChanged">
                    <Binding.ValidationRules>
                        <ExceptionValidationRule />
                    </Binding.ValidationRules>
                </Binding>
            </TextBox.Text>
        </TextBox>
        <Button Height="23" Margin="98,0,0,12" Name="btnRecharge" VerticalAlignment="Bottom" Click="btnRecharge_Click"
                HorizontalAlignment="Left" Width="75" Style="{StaticResource validButton}">Recharge</Button>

    Some C#:

        class General : IDataErrorInfo
        {
            private string _field;

            public string this[string columnName]
            {
                get
                {
                    string result = null;
                    if (columnName == "Field")
                    {
                        if (Util.NullOrEmtpyCheck(this._field))
                        {
                            result = "Field cannot be Empty";
                        }
                    }
                    return result;
                }
            }

            public string Error
            {
                get { return null; }
            }

            public string Field
            {
                get { return _field; }
                set { _field = value; }
            }
        }

    So what suggestions do you have for me? I mean, how would you go about this? How do you do this, given that the primary purpose of the data binding here is not to load data into the fields; they are (for now) only there for data validation. Thanks for reading this.
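    One way people commonly work around PasswordBox.Password not being a dependency property is an attached helper property that mirrors the password into a bindable property, so the same IDataErrorInfo plumbing used for txtRecharge can apply to the PasswordBox. Below is a minimal, untested sketch of that idea; the class, property and XAML names are illustrative and not taken from the wpftutorial or functionalfun articles.

        // Sketch only: exposes PasswordBox.Password as a bindable attached property.
        using System.Windows;
        using System.Windows.Controls;

        public static class PasswordBoxHelper
        {
            public static readonly DependencyProperty BoundPasswordProperty =
                DependencyProperty.RegisterAttached(
                    "BoundPassword", typeof(string), typeof(PasswordBoxHelper),
                    new FrameworkPropertyMetadata(string.Empty,
                        FrameworkPropertyMetadataOptions.BindsTwoWayByDefault,
                        OnBoundPasswordChanged));

            private static bool suppress; // prevents feedback between the two update paths

            public static string GetBoundPassword(DependencyObject d)
            {
                return (string)d.GetValue(BoundPasswordProperty);
            }

            public static void SetBoundPassword(DependencyObject d, string value)
            {
                d.SetValue(BoundPasswordProperty, value);
            }

            private static void OnBoundPasswordChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                PasswordBox box = d as PasswordBox;
                if (box == null) return;

                // (Re)subscribe, then push the bound value into the control.
                box.PasswordChanged -= BoxPasswordChanged;
                if (!suppress)
                {
                    box.Password = (string)(e.NewValue ?? string.Empty);
                }
                box.PasswordChanged += BoxPasswordChanged;
            }

            private static void BoxPasswordChanged(object sender, RoutedEventArgs e)
            {
                PasswordBox box = (PasswordBox)sender;
                suppress = true;
                SetBoundPassword(box, box.Password); // runs the binding, so IDataErrorInfo validates
                suppress = false;
            }
        }

        // XAML usage sketch (the local: namespace mapping for PasswordBoxHelper is assumed):
        // <PasswordBox local:PasswordBoxHelper.BoundPassword="{Binding Path=Field,
        //     Source={StaticResource recharge}, ValidatesOnDataErrors=True,
        //     UpdateSourceTrigger=PropertyChanged}" />

    A Validation.HasError style targeting PasswordBox (mirroring txtboxerrors) then provides the visuals, and the MultiDataTrigger condition can point at the PasswordBox the same way it points at txtRecharge. Note that full implementations usually add a second attached property to wire up PasswordChanged even when the initial bound value is empty.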

    Read the article

  • protobuf-net: Issues deserializing DataMember fields in lieu of read-only property

    - by Paul Smith
    I'm having issues deserializing certain properties of ORM-generated entities using protobuf-net. I suspect something in the way the ORM manages serialization attributes on read-only properties (uses public backing fields with DataMember attributes & [de]serializes) those instead of the corresponding read-only property, which has an IgnoreDataMember attribute). Guid properties might have issues of their own, but the field vs. property thing is my working theory now. Here's a simplified example of the code. Say I have a class, Account with an AccountID read-only guid, and an AccountName read-write string. I serialize & immediately deserialize a clone. In this scenario I get one of two results (depending on the entity, haven't isolated the specific commonality yet). The deserialized clone either: ...has a different AccountID from the original, or ...throws an Incorrect wire-type deserializing Guid exception while deserializing. Here's example usage... Account acct = new Account() { AccountName = "Bob's Checking" }; Debug.WriteLine(acct.AccountID.ToString()); using (MemoryStream ms = new MemoryStream()) { ProtoBuf.Serializer.Serialize<Account>(ms, acct); Debug.WriteLine(Encoding.UTF8.GetString(ms.GetBuffer())); ms.Position = 0; Account clone = ProtoBuf.Serializer.Deserialize<Account>(ms); Debug.WriteLine(clone.AccountID.ToString()); } And here's an example ORM'd class (simplified; hopefully haven't removed the cause of the issue in the process). Uses a shell game to deserialize read-only properties by exposing the backing field ("can't write" essentially becomes "shouldn't write," but we can scan code for instances of assigning to these fields, so the hack works for our purposes): [DataContract()] [Serializable()] public partial class Account { public Account() { _accountID = Guid.NewGuid(); } [XmlAttribute("AccountID")] [DataMember(Name = "AccountID", Order = 0)] public Guid _accountID; /// <summary> /// A read-only property; XML, JSON and DataContract serializers all seem /// to correctly recognize the public backing field when deserializing: /// </summary> [IgnoreDataMember] [XmlIgnore] public Guid AccountID { get { return this._accountID; } } [IgnoreDataMember] protected string _accountName; [DataMember(Name = "AccountName", Order = 1)] [XmlAttribute] public string AccountName { get { return this._accountName; } set { this._accountName = value; } } } XML, JSON and DataContract serializers all seem to serialize / deserialize matching object graphs here, so this attribute arrangement apparently causes those serializers to correctly assign to the public backing field when deserializing. I've tried protobuf-net with lists vs. single instances, different prefix styles, etc., but always either get the 'incorrect wire type ... Guid' exception, or the Guid property (field) not deserializing correctly. So the specific questions are, is there a quick workaround for this, and/or is there an explanation for both of outcomes 1 & 2 above, and/or can protobuf-net somehow be corralled into behaving like WCF in cases like this (i.e. follow the same DataMember/IgnoreDataMember semantics)? We hope not to have to create a protobuf dependency directly in the entity layer; if that's the case, we'll probably create proxy DTO entities with all public properties having protobuf attributes. (This is a subjective issue I have with all declarative serialization models; it's a ubiquitous pattern, but IMO, "normal" should be to have objects and serialization contracts decoupled.) Thanks!
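    As a point of comparison for the proxy-DTO fallback mentioned at the end, a minimal sketch might look like the code below. The DTO name, member numbers and the mapping helper are illustrative assumptions, not a statement about how protobuf-net should treat the ORM's DataMember/IgnoreDataMember arrangement.

        // Sketch: a plain contract type decorated for protobuf-net only,
        // keeping protobuf attributes out of the ORM-generated entity layer.
        using System;
        using System.IO;
        using ProtoBuf;

        [ProtoContract]
        public class AccountDto
        {
            [ProtoMember(1)] public Guid AccountID { get; set; }
            [ProtoMember(2)] public string AccountName { get; set; }

            public static AccountDto From(Account entity)
            {
                return new AccountDto
                {
                    AccountID = entity.AccountID,     // read-only on the entity, writable here
                    AccountName = entity.AccountName
                };
            }
        }

        public static class AccountDtoRoundtrip
        {
            public static AccountDto Clone(Account entity)
            {
                using (MemoryStream ms = new MemoryStream())
                {
                    Serializer.Serialize(ms, AccountDto.From(entity));
                    ms.Position = 0;
                    return Serializer.Deserialize<AccountDto>(ms);
                }
            }
        }

    Whether the Guid wire-type exception itself has a lighter fix is a separate question; this only shows the decoupled-contract shape hinted at above.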

    Read the article

  • ASP.NET MVC File Upload Error - "The input is not a valid Base-64 string"

    - by Justin
    Hey all, I'm trying to add a file upload control to my ASP.NET MVC 2 form but after I select a jpg and click Save, it gives the following error: The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or a non-white space character among the padding characters. Here's the view: <% using (Html.BeginForm("Save", "Developers", FormMethod.Post, new {enctype = "multipart/form-data"})) { %> <%: Html.ValidationSummary(true) %> <fieldset> <legend>Fields</legend> <div class="editor-label"> Login Name </div> <div class="editor-field"> <%: Html.TextBoxFor(model => model.LoginName) %> <%: Html.ValidationMessageFor(model => model.LoginName) %> </div> <div class="editor-label"> Password </div> <div class="editor-field"> <%: Html.Password("Password") %> <%: Html.ValidationMessageFor(model => model.Password) %> </div> <div class="editor-label"> First Name </div> <div class="editor-field"> <%: Html.TextBoxFor(model => model.FirstName) %> <%: Html.ValidationMessageFor(model => model.FirstName) %> </div> <div class="editor-label"> Last Name </div> <div class="editor-field"> <%: Html.TextBoxFor(model => model.LastName) %> <%: Html.ValidationMessageFor(model => model.LastName) %> </div> <div class="editor-label"> Photo </div> <div class="editor-field"> <input id="Photo" name="Photo" type="file" /> </div> <p> <%: Html.Hidden("DeveloperID") %> <%: Html.Hidden("CreateDate") %> <input type="submit" value="Save" /> </p> </fieldset> <% } %> And the controller: //POST: /Secure/Developers/Save/ [AcceptVerbs(HttpVerbs.Post)] public ActionResult Save(Developer developer) { //get profile photo. var upload = Request.Files["Photo"]; if (upload.ContentLength > 0) { string savedFileName = Path.Combine( ConfigurationManager.AppSettings["FileUploadDirectory"], "Developer_" + developer.FirstName + "_" + developer.LastName + ".jpg"); upload.SaveAs(savedFileName); } developer.UpdateDate = DateTime.Now; if (developer.DeveloperID == 0) {//inserting new developer. DataContext.DeveloperData.Insert(developer); } else {//attaching existing developer. DataContext.DeveloperData.Attach(developer); } //save changes. DataContext.SaveChanges(); //redirect to developer list. return RedirectToAction("Index"); } Thanks, Justin
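    A frequently suggested explanation for this particular error is that the default model binder is trying to bind the posted file input named "Photo" to a binary Photo property on the Developer entity, and fails because the raw upload is not base-64 text. Assuming that is the cause here (it can't be confirmed from the code above alone), a sketch of the usual workaround is to exclude that property from model binding and accept the file as a separate HttpPostedFileBase parameter, which MVC 2 binds by matching the input name:

        // Sketch of the Save action with the file taken as its own parameter.
        // Namespaces assumed: System.Web, System.Web.Mvc, System.IO, System.Configuration.
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Save([Bind(Exclude = "Photo")] Developer developer,
                                 HttpPostedFileBase photo)
        {
            if (photo != null && photo.ContentLength > 0)
            {
                string savedFileName = Path.Combine(
                    ConfigurationManager.AppSettings["FileUploadDirectory"],
                    "Developer_" + developer.FirstName + "_" + developer.LastName + ".jpg");
                photo.SaveAs(savedFileName);
            }

            // ...insert/attach the developer and save changes exactly as in the original action...
            return RedirectToAction("Index");
        }

    If Developer has no binary Photo property at all, the Bind(Exclude) part is unnecessary and the separate parameter alone is enough.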

    Read the article

  • Dojox grid having problem with Contentpane

    - by ice
    the grid appears properly on template's first loading. But when you click the paging button to load flooders.php thru list_result1() only the paging buttons will appear. I already tested the flooders.php outside the template and it works properly. what seems to be the problem? and what are the tools that i can use to see if the javascript is loading properly because i think the error console of ff browser which i use to track errors won't give you that much info when you are working with contentpane. thanks! ice note: below are the codes... ** from contentpane js function list_result1(){ args=""; uri = "flooders.php" + args; dojo.xhrGet( { url: uri, handleAs: "text", timeout: 500, // Time in milliseconds load: function(response, ioArgs) { //alert(response); dojo.byId("flooders_table").innerHTML = response; return response; }, // The ERROR function will be called in an error case. error: function(response, ioArgs) { console.error("HTTP status code: ", ioArgs.xhr.status); return response; } }); //end of dojo.xhrGet } **flooders.php starts here*** @import "js/dojo-0.9.0/dojo/resources/dojo.css"; @import "js/dojo-0.9.0/dijit/themes/tundra/tundra.css"; @import "js/dojo-0.9.0/dijit/themes/tundra/tundra_rtl.css"; @import "css/ash.css"; @import "js/dojo-0.9.0/dojox/grid/resources/Grid.css"; @import "js/dojo-0.9.0/dojox/grid/resources/tundraGrid.css"; @import "js/dojo-0.9.0/dojo/resources/dojo.css"; @import "js/dojo-0.9.0/dijit/tests/css/dijitTests.css"; .dojoxGridRowEditing td { background-color: #F4FFF4; } .dojoxGrid input, .dojoxGrid select, .dojoxGrid textarea { margin: 0; padding: 0; border-style: none; width: 100%; font-size: 100%; font-family: inherit; } .dojoxGrid input { } .dojoxGrid select { } .dojoxGrid textarea { } #controls { padding: 0px 0; } #controls button { margin-left: 10px; } .myGrid { width: 550px; height: 230px; margin-left: 20px; /* border: 1px solid silver; */ } echo " // it has script heading here (function(){ // some sample data // global var 'data' data = { identifier: 'id', label: 'id', items: [] }; data_list = [ $banlist ]; var rows = $listnum ; var x=1; for(var i=0, l=data_list.length; i // global var 'test_store' test_store = new dojo.data.ItemFileWriteStore({data: data}); })(); // it has ending here "; ?   -- here's the javascript dojo.require("dijit.TitlePane"); dojo.require("dijit.dijit"); dojo.require("dojox.grid.DataGrid"); dojo.require("dojo.data.ItemFileWriteStore"); dojo.require("dojo.parser"); // scan page for widgets and instantiate them dojo.require("dijit.layout.LayoutContainer"); dojo.require("dijit.layout.AccordionContainer"); dojo.require("dijit.layout.ContentPane"); dojo.require("dijit.layout.TabContainer"); dojo.require("dijit.Editor"); dojo.require("dijit._editor.plugins.AlwaysShowToolbar"); dojo.require("dijit._editor.plugins.LinkDialog"); //this must be inlcuded below function() selectCell = { styles: 'text-align: center;', type: dojox.grid.cells.Select }; gridLayout = { defaultCell: { width: 5, styles: 'text-align: right;' }, rows: [ [ { name: 'Mark', width: 3, field: 'col1', editable: true, styles: 'text-align: center;', type: dojox.grid.cells.Bool }, { name: 'Id', width: 3, field: 'id' , editable: false }, { name: 'Username', field: 'col2', editable: false, styles: '', width: '70%' }, { name: 'Reason', field: 'col3', editable: false , styles: '', width: '100%' }, { name: 'Date Banned', field: 'col4', editable: false , styles: '', width: '70%' } ] ] };

    Read the article

  • How to overcome shortcomings in reporting from EAV database?

    - by David Archer
    The major shortcomings of Entity-Attribute-Value database designs in SQL all seem to be related to being able to query and report on the data efficiently and quickly. Most of the information I have read on the subject warns against implementing EAV due to these problems and to how common querying/reporting is in almost all applications. I am currently designing a system where almost all of the fields necessary for data storage are not known at design/compile time and are defined by the end user of the system. EAV seems like a good fit for this requirement, but due to the problems I've read about I am hesitant to implement it, as there are also some pretty heavy reporting requirements for this system. I think I've come up with a way around this but would like to pose the question to the SO community.

    Given that a typical normalized database (OLTP) still isn't always the best option for running reports, a common practice seems to be having a "reporting" database (OLAP) into which the data from the normalized database is copied, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design? The main downside I see is the increased complexity of transferring the data from the EAV database to the reporting database, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly impossible, and it seems an acceptable tradeoff for the increased flexibility given by the EAV design. This downside also exists if I use a non-SQL data store (e.g. CouchDB or similar) for the main data storage, since all the standard reporting tools expect a SQL backend to query against. Do the issues with EAV systems mostly go away if you have a separate reporting database for querying?

    EDIT: Thanks for the comments so far. One of the important things about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not for everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued, and I'm also required to track history for each. The normalized design for this ends up being one table per field, which makes querying it kind of painful anyway.
    Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on but I think it illustrates the point well):

    EAV Tables

        Person
        -------------------
        - Id  - Name      -
        -------------------
        - 123 - Joe Smith -
        -------------------

        Person_Value
        -------------------------------------------------------------------
        - PersonId - Source - Field       - Value         - EffectiveDate -
        -------------------------------------------------------------------
        - 123      - CIA    - HomeAddress - 123 Cherry Ln - 2010-03-26    -
        - 123      - DMV    - HomeAddress - 561 Stoney Rd - 2010-02-15    -
        - 123      - FBI    - HomeAddress - 676 Lancas Dr - 2010-03-01    -
        -------------------------------------------------------------------

    Reporting Table

        Person_Denormalized
        ----------------------------------------------------------------------------------------
        - Id  - Name      - HomeAddress   - HomeAddress_Confidence - HomeAddress_EffectiveDate -
        ----------------------------------------------------------------------------------------
        - 123 - Joe Smith - 123 Cherry Ln - 0.713                  - 2010-03-26                -
        ----------------------------------------------------------------------------------------

    Normalized Design

        Person
        -------------------
        - Id  - Name      -
        -------------------
        - 123 - Joe Smith -
        -------------------

        Person_HomeAddress
        ------------------------------------------------------
        - PersonId - Source - Value         - Effective Date -
        ------------------------------------------------------
        - 123      - CIA    - 123 Cherry Ln - 2010-03-26     -
        - 123      - DMV    - 561 Stoney Rd - 2010-02-15     -
        - 123      - FBI    - 676 Lancas Dr - 2010-03-01     -
        ------------------------------------------------------

    The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) using SQL so my most common operation besides inserting new values will be pulling ALL data about a person for all fields so I can generate the record for the reporting table. This is actually easier in the EAV model as I can do a single query. In the normalized design, I end up having to do 1 query per field to avoid a massive cartesian product from joining them all together.
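    Assuming the single-query EAV read described above, the in-memory "crunch" into a denormalized row can be sketched roughly as below; the row type and the confidence-picking delegate are placeholders rather than part of the design above.

        // Sketch: pivot one person's EAV rows (field/value pairs from all sources)
        // into one "best known" value per field before writing the reporting row.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class PersonValueRow
        {
            public int PersonId { get; set; }
            public string Source { get; set; }
            public string Field { get; set; }
            public string Value { get; set; }
            public DateTime EffectiveDate { get; set; }
        }

        public static class EavPivot
        {
            public static Dictionary<string, PersonValueRow> PickBestValues(
                IEnumerable<PersonValueRow> rowsForOnePerson,
                Func<IEnumerable<PersonValueRow>, PersonValueRow> chooseBest)
            {
                // One entry per field name (e.g. "HomeAddress"), holding whichever
                // source the confidence logic considers the best known value.
                return rowsForOnePerson
                    .GroupBy(r => r.Field)
                    .ToDictionary(g => g.Key, g => chooseBest(g));
            }
        }

    The chooseBest delegate stands in for the confidence logic that can't easily be expressed in SQL; the same step can also emit the HomeAddress_Confidence and HomeAddress_EffectiveDate columns for the reporting table.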

    Read the article

  • Problem designing xsd schema - because of a variable element name

    - by ssaboum
    Hi everyone, i'm not the best at creating XSD schema as this is actually my first one, i would like to validate an xml that must look like this : <?xml version="1.0"?> <Data> <FIELD name='toto'> <META mono='false' dynamic='false'> <COLUMN1> <REFTABLE>table</REFTABLE> <REFCOLUMN>key_column</REFCOLUMN> <REFLABELCOLUMN>test_column</REFLABELCOLUMN> </COLUMN1> <COLUMN2> <REFTABLE>table</REFTABLE> <REFCOLUMN>key_column</REFCOLUMN> <REFLABELCOLUMN>test_column</REFLABELCOLUMN> </COLUMN2> </META> <VALUEs> <VALUE>...</VALUE> </VALUEs> </FIELD> My problem is that into the META block the tags "COLUMN1","COLUMN2" are always different, it may become COLUMNxxx. For now my schema is : <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <xsd:element name="Data"> <xsd:complexType> <xsd:sequence> <xsd:element name="FIELD" type="Field" /> </xsd:sequence> <xsd:attribute name="id" type="xsd:int" use="required" /> </xsd:complexType> </xsd:element> <xsd:complexType name="dataSourceDef"> <xsd:sequence> <xsd:element name="DSD_REFTABLE" type="xsd:string" /> <xsd:element name="DSD_REFCOLUMN" type="xsd:string" /> <xsd:element name="DSD_REFLABELCOLUMN" type="xsd:string" /> </xsd:sequence> </xsd:complexType> <xsd:complexType name="MetaTag"> <xsd:sequence> <xsd:any processContents="lax" /> </xsd:sequence> <xsd:attribute name="mono" type="xsd:string" use="required" /> <xsd:attribute name="dynamic" type="xsd:string" use="required"/> </xsd:complexType> <xsd:complexType name="Field"> <xsd:sequence> <xsd:element name="META" type="MetaTag" minOccurs="1" /> <xsd:element name="VALUEs"> <xsd:complexType> <xsd:sequence> <xsd:any processContents="lax" /> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> <xsd:attribute name="name" type="xsd:string" use="required"/> </xsd:complexType> </xsd:schema> And i just can't get it to work, i don't know how to handle the fact that a precise level of my nodes isn't clear, and the rest is. Would you help me please ? thx
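    Not an answer to the schema design itself, but two notes that may help. For variable element names like COLUMN1, COLUMN2, ... the usual XSD 1.0 options are a lax xsd:any with maxOccurs="unbounded" (as already attempted in the MetaTag type) or renaming the elements to a single repeated COLUMN element with an index attribute; XSD cannot match a name pattern like "COLUMN followed by a number". While iterating on candidates, a small validation harness makes it easy to see every error at once; the sketch below uses the standard .NET XML APIs and placeholder file names.

        // Sketch: validate sample.xml against candidate.xsd and print all validation errors.
        using System;
        using System.Xml;
        using System.Xml.Schema;

        class XsdCheck
        {
            static void Main()
            {
                XmlReaderSettings settings = new XmlReaderSettings();
                settings.ValidationType = ValidationType.Schema;
                settings.Schemas.Add(null, "candidate.xsd");   // the schema being designed
                settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
                {
                    Console.WriteLine("{0}: {1}", e.Severity, e.Message);
                };

                using (XmlReader reader = XmlReader.Create("sample.xml", settings))
                {
                    while (reader.Read()) { }                  // reading drives validation
                }
            }
        }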

    Read the article

  • asp .net MVC 2.0 xval Validation element

    - by ANDyW
    I got one problem with showing error message to element. Is there any option to turn on messages on place where is Html.ValidationMessageFor(model = model.ConfirmPassword). Becsoue for me it isn’t show up. I would like to have summary and near field information too not only red border. Any one know how to do it? using (Ajax.BeginForm("CreateValidForm", "Test", new AjaxOptions { HttpMethod = "Post" })) {%> <div id="validationSummary1"> <%= Html.ValidationSummary(true)%> </div> <fieldset> <legend>Fields</legend> <div class="editor-label"> <%= Html.LabelFor(model => model.Name)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Name)%> <%= Html.ValidationMessageFor(model => model.Name)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Email)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Email)%> <%= Html.ValidationMessageFor(model => model.Email)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Password)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Password)%> <%= Html.ValidationMessageFor(model => model.Password)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.ConfirmPassword)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.ConfirmPassword)%> <%= Html.ValidationMessageFor(model => model.ConfirmPassword)%> </div> <p> <input type="submit" value="Create" /> </p> </fieldset> <% } %> <%= Html.ClientSideValidation<ValidModel>() .UseValidationSummary("validationSummary1", "Please fix the following problems:") %> Here is link for sample project http://www.sendspace.com/file/m9gl54 .

    Read the article

  • Problem with NHibernate

    - by Bernard Larouche
    I am trying to get a list of Products that share the Category. NHibernate returns no product which is wrong. Here is my Criteria API method : public IList<Product> GetProductForCategory(string name) { return _session.CreateCriteria(typeof(Product)) .CreateCriteria("Categories") .Add(Restrictions.Eq("Name", name)) .List<Product>(); } Here is my HQL method : public IList<Product> GetProductForCategory(string name) { return _session.CreateQuery("select from Product p, p.Categories.elements c where c.Name = :name").SetString("name",name).List<Product>(); } Both methods return no product when they should return 2 products. Here is the Mapping for the Product class : <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="CBL.CoderForTraders.DomainModel" namespace="CBL.CoderForTraders.DomainModel"> <class name="Product" table="Products" > <id name="_persistenceId" column="ProductId" type="Guid" access="field" unsaved-value="00000000-0000-0000-0000-000000000000"> <generator class="assigned" /> </id> <version name="_persistenceVersion" column="RowVersion" access="field" type="int" unsaved-value="0" /> <property name="Name" column="ProductName" type="String" not-null="true"/> <property name="Price" column="BasePrice" type="Decimal" not-null="true" /> <property name="IsTaxable" column="IsTaxable" type="Boolean" not-null="true" /> <property name="DefaultImage" column="DefaultImageFile" type="String"/> <bag name="Descriptors" table="ProductDescriptors"> <key column="ProductId" foreign-key="FK_Product_Descriptors"/> <one-to-many class="Descriptor"/> </bag> <bag name="Categories" table="Categories_Products" > <key column="ProductId" foreign-key="FK_Products_Categories"/> <many-to-many class="Category" column="CategoryId"></many-to-many> </bag> <bag name="Orders" generic="true" table="OrderProduct" > <key column="ProductId" foreign-key="FK_Products_Orders"/> <many-to-many column="OrderId" class="Order" /> </bag> </class> </hibernate-mapping> And finally the mapping for the Category class : <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="CBL.CoderForTraders.DomainModel" namespace="CBL.CoderForTraders.DomainModel" default-access="field.camelcase-underscore" default-lazy="true"> <class name="Category" table="Categories" > <id name="_persistenceId" column="CategoryId" type="Guid" access="field" unsaved-value="00000000-0000-0000-0000-000000000000"> <generator class="assigned" /> </id> <version name="_persistenceVersion" column="RowVersion" access="field" type="int" unsaved-value="0" /> <property name="Name" column="Name" type="String" not-null="true"/> <property name="IsDefault" column="IsDefault" type="Boolean" not-null="true" /> <property name="Description" column="Description" type="String" not-null="true" /> <many-to-one name="Parent" column="ParentID"></many-to-one> <bag name="SubCategories" inverse="true"> <key column="ParentID" foreign-key="FK_Category_ParentCategory" /> <one-to-many class="Category"/> </bag> <bag name="Products" table="Categories_Products"> <key column="CategoryId" foreign-key="FK_Categories_Products" /> <many-to-many column="ProductId" class="Product"></many-to-many> </bag> </class> </hibernate-mapping> Can you see what could be the problem ?
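    For comparison only, and purely as a sketch rather than a diagnosis of the mappings, the same query can be written with a criteria alias or with an explicit HQL join; if these also return nothing, the problem is more likely in the mapping or the data than in the query syntax. (Restrictions lives in NHibernate.Criterion; both methods are meant as drop-in alternatives inside the same repository class.)

        // Criteria version using an alias onto the Categories collection.
        public IList<Product> GetProductForCategoryAlias(string name)
        {
            return _session.CreateCriteria(typeof(Product))
                .CreateAlias("Categories", "cat")
                .Add(Restrictions.Eq("cat.Name", name))
                .List<Product>();
        }

        // HQL version with an explicit join instead of elements().
        public IList<Product> GetProductForCategoryHql(string name)
        {
            return _session.CreateQuery(
                    "select p from Product p join p.Categories c where c.Name = :name")
                .SetString("name", name)
                .List<Product>();
        }

    Turning on show_sql (or NHibernate's SQL log) and checking the generated join against Categories_Products usually makes a mapping mismatch obvious.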

    Read the article

  • BB Code Parser (in formatting phase) with jQuery jammed due to messed up loops most likely

    - by Oskar
    Greetings everyone, I'm making a BB Code Parser but I'm stuck on the JavaScript front. I'm using jQuery and the caret library for noting selections in a text field. When someone selects a piece of text a div with formatting options will appear. I have two issues. Issue 1. How can I make this work for multiple textfields? I'm drawing a blank as it currently will detect the textfield correctly until it enters the $("#BBtoolBox a").mousedown(function() { } loop. After entering it will start listing one field after another in a random pattern in my eyes. !!! MAIN Issue 2. I'm guessing this is the main reason for issue 1 as well. When I press a formatting option it will work on the first action but not the ones afterwards. It keeps duplicating the variable parsed. (if I only keep to one field it will never print in the second) Issue 3 If you find anything especially ugly in the code, please tell me how to improve myself. I appriciate all help I can get. Thanks in advance $(document).ready(function() { BBCP(); }); function BBCP(el) { if(!el) { el = "textarea"; } // Stores the cursor position of selection start $(el).mousedown(function(e) { coordX = e.pageX; coordY = e.pageY; // Event of selection finish by using keyboard }).keyup(function() { BBtoolBox(this, coordX, coordY); // Event of selection finish by using mouse }).mouseup(function() { BBtoolBox(this, coordX, coordY); // Event of field unfocus }).blur(function() { $("#BBtoolBox").hide(); }); } function BBtoolBox(el, coordX, coordY) { // Variable containing the selected text by Caret selection = $(el).caret().text; // Ignore the request if no text is selected if(selection.length == 0) { $("#BBtoolBox").hide(); return; } // Print the toolbox if(!document.getElementById("BBtoolBox")) { $(el).before("<div id=\"BBtoolBox\" style=\"left: "+ ( coordX + 5 ) +"px; top: "+ ( coordY - 30 ) +"px;\"></div>"); // List of actions $("#BBtoolBox").append("<a href=\"#\" onclick=\"return false\"><img src=\"./icons/text_bold.png\" alt=\"B\" title=\"Bold\" /></a>"); $("#BBtoolBox").append("<a href=\"#\" onclick=\"return false\"><img src=\"./icons/text_italic.png\" alt=\"I\" title=\"Italic\" /></a>"); } else { $("#BBtoolBox").css({'left': (coordX + 3) +'px', 'top': (coordY - 30) +'px'}).show(); } // Parse the text according to the action requsted $("#BBtoolBox a").mousedown(function() { switch($(this).children(":first").attr("alt")) { case "B": // bold parsed = "[b]"+ selection +"[/b]"; break; case "I": // italic parsed = "[i]"+ selection +"[/i]"; break; } // Changes the field value by replacing the selection with the variable parsed $(el).val($(el).caret().replace(parsed)); $("#BBtoolBox").hide(); return false; }); }

    Read the article

  • What is the best free or low-cost Java reporting library (e.g. BIRT, JasperReports, etc.) for making

    - by Max3000
    I want to print, email and write to PDF very simple reports. The reports are basically a list of items, divided in various sections/columns. The sections are not necessarily identical. Think newspaper. I just wasted a solid 2 days of work trying to make this kind of reports using JasperReports. I find that Jasper is great for outputing "normalized" data. The kind that would come out of a database for instance, each row neatly describing an item and each item printed on a line. I'm simplifying a bit but that's the idea. However, given what I want to do I always ended up completely lost. Data not being displayed for no apparent reason, columns of texts never the correct size, column positioning always ending up incorrect, pagination not sanely possible (I was never able to figure it out; the FAQ gives an obscure workaround), etc. I came to the conclusion that Jasper is really not built to make the kind of reports I want. Am I missing something? I'm ready to pay for a tool, as long as the price is reasonable. By reasonable I mean a few $100s. Thanks. EDIT: To answer cetus, here is more information about the report I made in Jasper. What I want is something like this: text text text text ------------------- text | text text |---------- text | text text | text --------| text text |---------- text | text What I made in jasper is this: (detail band) subreport | subreport ------------------------------------ subreport | subreport ------------------------------------ subreport | subreport The subreports are all the same actual report. This report has one field (called "field") and basically just prints this field in a detail band. Hence, running a single subreport simply lists all items from the datasource. The datasource itself is a simple custom JRDatasource containing a collection of strings in the field "field". The datasource iterates over the collection until there are no more strings. Each subreport has its own datasource. I tried many different variations of the above, with all sorts of different properties for the report, subreports, etc. IMO, this is fairly simple stuff. However, the problems I encounter are as follows: Subreports starting from the 3rd don't show up when their position type is 'float'. They do show up when they have 'fix relative to top'. However, I don't want to do this because the first two subreports can be of any length. I can't make each subreport to stretch according to its own length. Instead, they either don't stretch at all (which is not desirable because they have different lenghts) or they stretch according to the longest subreport. This makes a weird layout for sure. Pagination doesn't happen. If some subreports fall outside the page, they simple don't show. One alternative is to increase the 'page height' considerably and the 'detail band height' accordingly. However, in this case it is not really possibly to know the total height in advance. So I'm stuck with calculating/guessing it myself, before the report is even generated. More importantly, long reports end up on one page and this is not acceptable (the printout text is too small, it's ugly/non-professional to have different reports with different PDF page lengths, etc.). BTW, I used iReport so it's possibly limitations of iReport I'm listing here and not of Jasper itself. That's one of the things I'm trying to find out asking this question here. One alternative would be to generate the jrxml myself with just static text but I'm afraid I'll encounter the very same limitations. 
Anyway, I just generally wasted so much time getting anything done with Jasper that I can't help thinking it's not the right tool for the job. (Not to say that Jasper doesn't excel at what it's good at.)

    Read the article

  • CakePHP access indirectly related model - beginner's question

    - by user325077
    Hi everyone, I am writing a CakePHP application to log the work I do for various clients, but after trying for days I seem unable to get it to do what I want. I have read most of the book CakePHP's website. and googled for all I'm worth, so I presume I am missing something obvious! Every 'log item' belongs to a 'sub-project, which in turn belongs to a 'project', which in turn belongs to a 'sub-client' which finally belongs to a client. These are the 5 MySQL tables I am using: mysql> DESCRIBE log_items; +-----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | date | date | NO | | NULL | | | time | time | NO | | NULL | | | time_spent | int(11) | NO | | NULL | | | sub_projects_id | int(11) | NO | MUL | NULL | | | title | varchar(100) | NO | | NULL | | | description | text | YES | | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +-----------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE sub_projects; +-------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | projects_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +-------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE projects; +----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | sub_clients_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +----------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE sub_clients; +------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | clients_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE clients; +----------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +----------+--------------+------+-----+---------+----------------+ I have set up the following associations in CakePHP: LogItem belongsTo SubProjects SubProject belongsTo Projects Project belongsTo SubClients SubClient belongsTo Clients Client hasMany SubClients SubClient hasMany Projects Project hasMany SubProjects SubProject hasMany LogItems Using 'cake bake' I have created the models, controllers (index, view add, edit and delete) and views, and things seem to function - as in 
I am able to perform simple CRUD operations successfully. The Question When editing a 'log item' at www.mydomain/log_items/edit I am presented with the view you would all suspect; namely the columns of the log_items table with the appropriate textfields/select boxes etc. I would also like to incorporate select boxes to choose the client, sub-client, project and sub-project in the 'log_items' edit view. Ideally the 'sub-client' select box should populate itself depending upon the 'client' chosen, the 'project' select box should also populate itself depending on the 'sub-client' selected etc, etc. I guess the way to go about populating the select boxes with relevant options is Ajax, but I am unsure of how to go about actually accessing a model from the child view of a indirectly related model, for example how to create a 'sub-client' select box in the 'log_items' edit view. I have have found this example: http://forum.phpsitesolutions.com/php-frameworks/cakephp/ajax-cakephp-dynamically-populate-html-select-dropdown-box-t29.html where someone achieves something similar for US states, counties and cities. However, I noticed in the database schema - which is downloadable from the site above link - that the database tables don't have any foreign keys, so now I'm wondering if I'm going about things in the correct manner. Any pointers and advice would be very much appreciated. Kind regards, Chris

    Read the article

  • How do you get Windows 7 to show time remaining in the battery meter?

    - by MrDaniel
    Running Microsoft Windows 7 Home Premium on an HP laptop. The battery meter in the system tray never shows the time remaining; it only ever shows a percentage-remaining number, as pictured. The Windows help documentation on the battery meter seems to indicate that it should display a time-remaining indicator. Is this accurate? The relevant help text reads:

    How accurate is the battery meter? The accuracy of what the battery meter reports—what percentage of a full charge remains and how long you can use your laptop before you must plug it in—depends on several factors. Most of these factors fall into the following two categories: What you use the laptop for. Because some activities drain the battery faster than others (for example, watching a DVD consumes more power than reading and writing e-mail), alternating between activities that have significantly different power requirements changes the rate at which your laptop uses battery power. This can vary the estimate of how much battery charge remains. Battery hardware and sensor circuitry. Newer, "smart" batteries are equipped with circuitry that calculates the measurements of charge remaining and reports the information to the battery meter. Older batteries use less sophisticated circuitry and might be less accurate.

    Read the article

  • Mail.app doesn't detect sender in Address Book

    - by CoreSandello
    Hi there. I don't understand how 'smart addresses' in Mail.app work. Recently I noticed that for some emails I don't see the person's full name in the 'From' column. I started to dig into this behavior and found out that I have a few contacts in my Address Book that are not recognized by Mail.app. Here is how it looks: I have a person in the Address Book with the email entry filled in and the first/last name filled in (localized). I have an incoming email from that person (from the email address specified in the Address Book), but the first/last name in the email itself doesn't match the one specified in the Address Book (e.g. the 'From' field in the email looks like 'John [work] <[email protected]>' while the Address Book entry is 'John Smith' (localized, in Russian)). And Mail.app doesn't recognize that this mail originates from that person in the Address Book: if I click on the 'From' field, it suggests adding the sender to the Address Book, while for other emails I get the 'Show in Address Book' menu entry (especially for ones with a full localized name in the 'From' field). I'm wondering: is that behavior correct, or am I missing something? I'm using Snow Leopard & Mail 4.0; my system language is set to English, if that matters. I'd like some clarification on this Mail.app behavior: whether it is fixable or not (and if it's fixable, I'd like to see a fix). By the way, is it possible to match the sender's address against an Address Book entry in filter rules or not? It would be great if I could create rules like 'move all mail from that person to that folder' without specifying the exact source address. Thanks, Ivan.

    Read the article

  • Need help tuning MySQL and Linux server

    - by Newtonx
    We have a multi-user application (like MailChimp or Constant Contact). Each of our customers has its own contact list (from 5 to 100,000 contacts). Everything is stored in one BIG database (currently 25 GB). Since we released our product we have accumulated 5 years of data history:

    - users/customers (200+)
    - contacts (40 million records)
    - campaigns
    - campaign_deliveries (73,843,764 records)
    - campaign_queue (8 million records currently)

    As we get more users and the tables grow, our system/web app is getting slower and slower. Some queries take too long to execute.

    SCHEMA

    Table contacts

        +--------------+------------------+------+-----+---------+----------------+
        | Field        | Type             | Null | Key | Default | Extra          |
        +--------------+------------------+------+-----+---------+----------------+
        | contact_id   | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | client_id    | int(10) unsigned | YES  |     | NULL    |                |
        | name         | varchar(60)      | YES  |     | NULL    |                |
        | mail         | varchar(60)      | YES  | MUL | NULL    |                |
        | verified     | int(1)           | YES  |     | 0       |                |
        | owner        | int(10) unsigned | NO   | MUL | 0       |                |
        | date_created | date             | YES  | MUL | NULL    |                |
        | geolocation  | varchar(100)     | YES  |     | NULL    |                |
        | ip           | varchar(20)      | YES  | MUL | NULL    |                |
        +--------------+------------------+------+-----+---------+----------------+

    Table campaign_deliveries

        +---------------+------------------+------+-----+---------+----------------+
        | Field         | Type             | Null | Key | Default | Extra          |
        +---------------+------------------+------+-----+---------+----------------+
        | id            | int(11)          | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | contact_id    | int(10) unsigned | NO   | MUL | 0       |                |
        | sent_date     | date             | YES  | MUL | NULL    |                |
        | sent_time     | time             | YES  | MUL | NULL    |                |
        | smtp_server   | varchar(20)      | YES  |     | NULL    |                |
        | owner         | int(5)           | YES  | MUL | NULL    |                |
        | ip            | varchar(20)      | YES  | MUL | NULL    |                |
        +---------------+------------------+------+-----+---------+----------------+

    Table campaign_queue

        +---------------+------------------+------+-----+---------+----------------+
        | Field         | Type             | Null | Key | Default | Extra          |
        +---------------+------------------+------+-----+---------+----------------+
        | queue_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | owner         | int(10) unsigned | NO   | MUL | 0       |                |
        | date_to_send  | date             | YES  |     | NULL    |                |
        | contact_id    | int(11)          | NO   | MUL | NULL    |                |
        | date_created  | date             | YES  |     | NULL    |                |
        +---------------+------------------+------+-----+---------+----------------+

    Slow queries LOG

        Query_time: 350  Lock_time: 1  Rows_sent: 1  Rows_examined: 971004
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 70 AND contacts.verified = 1);

        Query_time: 235  Lock_time: 1  Rows_sent: 1  Rows_examined: 4455209
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2);

    How can we optimize this? Queries should take no more than 30 seconds to execute. Can we optimize it and keep all the data in one BIG database, or should we change the app's structure and give each user a separate database? Thanks

    Read the article

  • How does QuickBooks handle IIF imports?

    - by dwwilson66
    I've received a 'template' for an IIF file for QuickBooks transactions, and there are like seventy-bazillion fields in there, lots of which I never even use. It's a tab-delimited file with the following lines: field headers for transactions and the respective splits for those transactions, followed by an end-of-transaction marker.

        !TRNS    FIELD1       FIELD2       FIELD3       ...  FIELD48
        !SPL     FIELD1       FIELD2       FIELD3       ...  FIELD48
        !ENDTRNS
        TRNS     FIELD1_DATA  FIELD2_DATA  FIELD3_DATA  ...  FIELD48_DATA
        SPL      FIELD1_DATA  FIELD2_DATA  FIELD3_DATA  ...  FIELD48_DATA
        ENDTRNS
        ...

    What drives data to a particular field? Is it the field header with corresponding data, or is it the tabular position relative to the head of the line? E.g., let's say all I have to import is the data in FIELD1, FIELD3 and FIELD5. Would I need, by header:

        !TRNS  FIELD1  FIELD3  FIELD5
        !SPL   FIELD1  FIELD3  FIELD5
        !ENDTRNS
        TRNS   FIELD1  FIELD3  FIELD5
        SPL    FIELD1  FIELD3  FIELD5
        ENDTRNS

    or, by tabular position:

        !TRNS  FIELD1       FIELD2        FIELD3       FIELD4        FIELD5
        !SPL   FIELD1       FIELD2        FIELD3       FIELD4        FIELD5
        !ENDTRNS
        TRNS   FIELD1_DATA  FIELD2_BLANK  FIELD3_DATA  FIELD4_BLANK  FIELD5_DATA
        SPL    FIELD1_DATA  FIELD2_BLANK  FIELD3_DATA  FIELD4_BLANK  FIELD5_DATA
        ENDTRNS

    Alternately, if it were comma-delimited input, would I need:

        DATA1,DATA3,DATA5

    or

        DATA1,,DATA3,,DATA5

    Anyone have experience with what QuickBooks is doing?
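    For what it's worth, most IIF references describe the mapping as positional: each ! header row names the columns for its row type, and every following data row of that type is matched to those names strictly by tab position, so a shortened header listing only FIELD1, FIELD3 and FIELD5 would go with three-value data rows. I haven't verified this against QuickBooks itself, so treat it as an assumption; the small reader below (plain C#, placeholder file name) just shows how a given template maps under that interpretation.

        // Sketch: pair IIF data rows with the column names from the matching ! header row.
        using System;
        using System.Collections.Generic;
        using System.IO;

        class IifPeek
        {
            static void Main()
            {
                // Row type (e.g. "TRNS") -> column names taken from its ! header row.
                var headers = new Dictionary<string, string[]>();

                foreach (string line in File.ReadAllLines("template.iif"))
                {
                    string[] cells = line.Split('\t');
                    if (cells.Length == 0 || cells[0].Length == 0) continue;

                    if (cells[0].StartsWith("!"))
                    {
                        headers[cells[0].Substring(1)] = cells;   // remember the layout
                        continue;
                    }

                    string[] cols;
                    if (!headers.TryGetValue(cells[0], out cols)) continue;

                    for (int i = 1; i < cells.Length && i < cols.Length; i++)
                    {
                        if (cells[i].Length > 0)
                            Console.WriteLine("{0}.{1} = {2}", cells[0], cols[i], cells[i]);
                    }
                }
            }
        }

    Under that reading, keeping the full 48-field header means supplying (possibly empty) tab positions up to the last field you actually use, analogous to the DATA1,,DATA3,,DATA5 case for a comma-delimited file.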

    Read the article

  • ADSL to T1, Is it worth it for us?

    - by Jack Hickerson
    The company I work for has roughly 45-55 simultaneous users (local and remote/VPN) logged in at a given time. We currently subscribe to an ADSL connection, but we have been experiencing slower upload/download speeds as our number of users increases. So I have a few questions with regard to upgrading our connection to a T1 line. I am aware that the number of channels on a T1 line is much greater than that of our current ADSL connection, but I have heard that the number of active users on a T1 line should be no greater than ~30 for optimal performance. I would think this depends on what each user is using the connection for and could change with that variable. That being said, I have tried to break down how the line would be used in our organization based on our major departments:

    - Sales (~60% of total users): everyday surfing, email, research, occasional streaming media
    - Marketing (~15% of total users): heavy reliance on uploading/downloading, streaming media, file sharing
    - Other (~25% of total users): email, rare use of any connection-intensive activities

    I have considered keeping the ADSL for our local users and dedicating the T1 to our remote users (or vice versa), but the cost is significantly higher than what we had hoped for. All factors being equal (number of users, frequency of downloads/uploads from our current activities), would you expect a significant performance increase from transitioning from our current ADSL line to a T1 line? What are your thoughts or recommendations?

    Read the article

  • SQL Query to update parent record with child record values

    - by Wells
    I need to create a Trigger that fires when a child record (Codes) is added, updated or deleted. The Trigger stuffs a string of comma separated Code values from all child records (Codes) into a single field in the parent record (Projects) of the added, updated or deleted child record. I am stuck on writing a correct query to retrieve the Code values from just those child records that are the children of a single parent record. -- Create the test tables CREATE TABLE projects ( ProjectId varchar(16) PRIMARY KEY, ProjectName varchar(100), Codestring nvarchar(100) ) GO CREATE TABLE prcodes ( CodeId varchar(16) PRIMARY KEY, Code varchar (4), ProjectId varchar(16) ) GO -- Add sample data to tables: Two projects records, one with 3 child records, the other with 2. INSERT INTO projects (ProjectId, ProjectName) SELECT '101','Smith' UNION ALL SELECT '102','Jones' GO INSERT INTO prcodes (CodeId, Code, ProjectId) SELECT 'A1','Blue', '101' UNION ALL SELECT 'A2','Pink', '101' UNION ALL SELECT 'A3','Gray', '101' UNION ALL SELECT 'A4','Blue', '102' UNION ALL SELECT 'A5','Gray', '102' GO I am stuck on how to create a correct Update query. Can you help fix this query? -- Partially working, but stuffs all values, not just values from chile (prcodes) records of parent (projects) UPDATE proj SET proj.Codestring = (SELECT STUFF((SELECT ',' + prc.Code FROM projects proj INNER JOIN prcodes prc ON proj.ProjectId = prc.ProjectId ORDER BY 1 ASC FOR XML PATH('')),1, 1, '')) The result I get for the Codestring field in Projects is: ProjectId ProjectName Codestring 101 Smith Blue,Blue,Gray,Gray,Pink ... But the result I need for the Codestring field in Projects is: ProjectId ProjectName Codestring 101 Smith Blue,Pink,Gray ... Here is my start on the Trigger. The Update query, above, will be added to this Trigger. Can you help me complete the Trigger creation query? CREATE TRIGGER Update_Codestring ON prcodes AFTER INSERT, UPDATE, DELETE AS WITH CTE AS ( select ProjectId from inserted union select ProjectId from deleted )

    Read the article

  • Why is my DSDT table different from what I found online?

    - by Hao Shen
    I have found the field in the DSDT table that I want to modify, following this guide: http://www.ztex.de/misc/c2ctl.e.html. Generally, I want to modify the _PSS field for the processor so that I can have more frequency levels available in the cpufreq driver interface. I tried these commands to disassemble the DSDT table on my desktop (Linux 2.6.29, Intel Core 2):

        cat /proc/acpi/dsdt > dsdt.aml
        iasl -d dsdt.aml

    This gives me a file dsdt.dsl like the following (very long, so I just show the beginning of the file):

        /*
         * Intel ACPI Component Architecture
         * AML Disassembler version 20090123
         *
         * Disassembly of dsdt.aml, Mon May 6 20:41:40 2013
         *
         *
         * Original Table Header:
         *     Signature        "DSDT"
         *     Length           0x00003794 (14228)
         *     Revision         0x01 **** ACPI 1.0, no 64-bit math support
         *     Checksum         0x46
         *     OEM ID           "DELL"
         *     OEM Table ID     "dt_ex"
         *     OEM Revision     0x00001000 (4096)
         *     Compiler ID      "INTL"
         *     Compiler Version 0x20050624 (537200164)
         */
        DefinitionBlock ("dsdt.aml", "DSDT", 1, "DELL", "dt_ex", 0x00001000)
        {
            Method (DBIN, 0, NotSerialized)
            {
                Noop
            }

            Scope (\)
            {
                Device (_SB.VBTN)
        ...................

    But I cannot find the _PSS field shown on the website linked above, and I do not know why. I am sure the current cpufreq driver shows 4 frequency levels available, so at least there should be something in the table showing this, right? Has anybody here played with the DSDT table before? Thanks,

    Read the article

  • OpenOffice Calc: How can I count the number of different items with data pilot?

    - by manu
    Hi all, I have a rather long spreadsheet with historical information about issues solved by users in a collaborative environment. The spreadsheet has the following (relevant) columns: date, week no., project, author id, etc. The week no. is calculated from the date; it is basically the year concatenated with the week number within that year. For instance, both 2009-02-18 and 2009-02-20 yield the week number 200908 (the 8th week of year 2009), and 2009-02-23 yields 200909 (the 9th week of year 2009). I need to count how many different users (given by author id) contributed to each project, on a weekly basis. I have set up a data pilot with the week as Row Field, the project as the Column Field, and count-author as the Data Field. However, this counts every author id occurrence, which is not what I need. I need to count how many different users contributed to each project on a weekly basis. I expect to get something like:

        projects:  Project1  Project2  Project3
        200901     10  2
        200902     2   7

    with each inner cell containing how many different users contributed. With the count-author configuration, what I get is how many contributions (total) the project got in that week. Is there a way to tell OpenOffice Calc to do what I want?

    Read the article

  • Trying to send email from nagios

    - by batman
    I'm very new to Nagios. I'm trying to send email alerts, but that doesn't seem to be working. In my Nagios log I can see this:

        SERVICE ALERT: Appserver;Tmp directory;CRITICAL;HARD;1;

    Host notifications are delivered via email; only the service alerts are not working. And when I look at the sendEmail log I can see this:

        Sep 14 12:38:39 x.x.x.x. sendEmail[23005]: ERROR => You must specify a 'from' field! Try --help.
        Sep 14 12:39:39 x.x.x.x.x. sendEmail[23129]: ERROR => You must specify a 'from' field! Try --help.
        Sep 14 12:40:39 x-x-x-x-x sendEmail[23233]: ERROR => You must specify a 'from' field! Try --help.

    Where am I making a mistake? Thanks in advance.

    Read the article

  • NTFS 'Owner' missing when accessing hard disk from external USB adapter

    - by trismarck
    I have a hard drive with Windows XP SP3 installed on it. When the drive is connected through the standard SATA connector inside the laptop, everything works as expected. However, when I remove the drive from the laptop and connect it to an external USB adapter, almost all files/folders lose the contents of the 'Owner' field. I was wondering why that could be. I've tried two USB adapters and this happens with each. I could take ownership of all of the files, but this would overwrite the Owner value (the value that is present when the drive is accessed through the standard SATA connector in the laptop).

    //edit: if the hard drive is used through the USB adapter, I can't access most of the files, at least until I take ownership of the files/folders. This is what it looks like: HDD inside USB adapter: HDD inside laptop: (note the Owner column)

    //edit: some of the files in the first screenshot have the Owner field filled in. That's because I took ownership of those files/folders to be able to access the files on the hard drive.

    //edit2: also, if the hard drive is connected through the USB adapter and I've taken ownership of some files as the 'ddd' user, then if I log in as a different user (say the 'eee' user), the Owner field is _still_ empty: ddd user: eee user: The 'eee' user can't access the 'ddd' folder. Both users have Administrator privileges.
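    One way to take Explorer out of the picture is to ask NTFS directly what owner each file reports while the disk is on the USB adapter; that distinguishes "no owner readable" from "owner SID present but not resolvable on this machine" (an unresolved SID prints as a raw S-1-5-... string). A small sketch using the standard .NET security APIs, with the drive letter as a placeholder:

        // Sketch: print the NTFS owner of each file in the root of drive E:.
        using System;
        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Principal;

        class OwnerDump
        {
            static void Main()
            {
                foreach (string path in Directory.GetFiles(@"E:\"))
                {
                    FileSecurity acl = File.GetAccessControl(path);
                    IdentityReference sid = acl.GetOwner(typeof(SecurityIdentifier));

                    string owner = "(no owner)";
                    if (sid != null)
                    {
                        try
                        {
                            owner = sid.Translate(typeof(NTAccount)).Value; // DOMAIN\user if resolvable
                        }
                        catch (IdentityNotMappedException)
                        {
                            owner = sid.Value + " (unresolved SID)";
                        }
                    }
                    Console.WriteLine("{0} -> {1}", path, owner);
                }
            }
        }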

    Read the article

  • Excel or Access: how to group several lines in a table and insert contents in columns? ("split column")

    - by Martin
    I have a table containing data about sold products (shown in the example on the left). Columns:

    - Number of the order
    - Product Name
    - Attribute - specifies what is given in the following field "Value", e.g. Customer Name or Product Variant
    - Value - the value of the Attribute
    - Count - the number of products of this variant sold in the order

    That means: Product B has 2 variants, "c" and "d". Note that in Order 1 Product B was sold in Variant d only, because the letter "N" in field "D4" means "none". Note that in OrderNo 3 Product B was sold only in Variant c, because for Variant d field "D9" is "N"! This is confusing, but it is the structure of the original data (which I cannot change).

    I need a way to convert the table on the left into a table like the one on the right:

    - one line for each product type
    - Order Number
    - Product Name
    - Customer Name
    - Count (number of products sold in this order)
    - Variant - this is the problem, as it has to be filled with the

    So all rows with the same OrderNo and the same product have to be grouped into one; I hope it is clear what I need. I tried to do it with Pivot Tables, but that fails, as the Count is always on each line, no matter whether it has Value "N" or not, and for products without variants there is only one line for each order, while for products with variants there are several... So how could I create the right table with a VBA macro in MS Excel, or is there maybe a trick in MS Access to do it directly or with an SQL query?

    Read the article
