Search Results



  • Free space on SSD (over provisioning) per disk or per partition?

    - by Horst Walter
    It is recommended to keep some percentage of an SSD free for relocation (see: Is free space required on an SSD for performance?). However, is this rule meant per partition or per disk (the whole SSD)? So, if I want to keep 20% free for performance reasons, is it acceptable if one partition is 95% full while another is almost empty, as long as the overall free disk space is still 20%? Or does each partition have to fulfill the 20% free-space rule on its own?


  • How can I get Solr listening on 0.0.0.0 instead of just localhost?

    - by Neil
    I'm trying to get Solr to listen on 0.0.0.0 instead of just localhost, and it doesn't seem to be picking up the configuration options. I downloaded apache-solr-1.4.1 from the Solr website, and I'm running:

        user@:apache-solr-1.4.1/example $ java -jar start.jar

    With these configuration options:

        <Call name="addConnector">
          <Arg>
            <New class="org.mortbay.jetty.bio.SocketConnector">
              <Set name="host"><SystemProperty name="jetty.host" default="0.0.0.0" /></Set>
              <Set name="port"><SystemProperty name="jetty.port" default="8983" /></Set>
              <Set name="maxIdleTime">50000</Set>
              <Set name="lowResourceMaxIdleTime">1500</Set>
            </New>
          </Arg>
        </Call>

    Where the only line changed from the default is this one:

        <Set name="host"><SystemProperty name="jetty.host" default="0.0.0.0" /></Set>

    And when I check netstat, I see this:

        $ netstat -an | egrep 'Proto|\b8983\b'
        Proto Recv-Q Send-Q Local Address           Foreign Address         State
        tcp        0      0 127.0.0.1:8983          0.0.0.0:*               LISTEN
        tcp6       0      0 ::1:8983                :::*                    LISTEN

    Where Local Address should be 0.0.0.0:8983 instead of 127.0.0.1:8983. Does anyone know why this might not be working?
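
    One thing worth checking (an assumption, since the excerpt doesn't show how the JVM is launched elsewhere): the default attribute of <SystemProperty> only applies when the jetty.host system property is not already set, so a value supplied somewhere else silently wins. Passing the property explicitly on the command line sidesteps that question entirely:

        # Force the bind address regardless of what jetty.xml defaults to
        java -Djetty.host=0.0.0.0 -jar start.jar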


  • The instruction at “0x7c910a19” referenced memory at “0xffffffff”. The memory could not be “read”

    - by ClareBear
    Hello guys/girls, I have a small issue: I receive the above error just before the .vbs terminates, and I don't know why it is thrown. Below is the process of the .vbs file:

        Call ImportTransactions()
        Call UpdateTransactions()

        Function ImportTransactions()
            Dim objConnection, objCommand, objRecordset, strOracle
            Dim strSQL, objRecordsetInsert
            Set objConnection = CreateObject("ADODB.Connection")
            objConnection.Open "DSN=*****;UID=*****;PWD==*****;"
            Set objCommand = CreateObject("ADODB.Command")
            Set objRecordset = CreateObject("ADODB.Recordset")
            strOracle = "SELECT query here from Oracle database"
            objCommand.CommandText = strOracle
            objCommand.CommandType = 1
            objCommand.CommandTimeout = 0
            Set objCommand.ActiveConnection = objConnection
            objRecordset.cursorType = 0
            objRecordset.cursorlocation = 3
            objRecordset.Open objCommand, , 1, 3
            If objRecordset.EOF = False Then
                Do Until objRecordset.EOF = True
                    strSQL = "INSERT query here into SQL database"
                    strSQL = Query(strSQL)
                    Call RunSQL(strSQL, objRecordsetInsert, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
                    objRecordset.MoveNext
                Loop
            End If
            objRecordset.Close()
            Set objRecordset = Nothing
            Set objRecordsetInsert = Nothing
        End Function

        Function UpdateTransactions()
            Dim strSQLUpdateVAT, strSQLUpdateCodes
            Dim objRecordsetVAT, objRecordsetUpdateCodes
            strSQLUpdateVAT = "UPDATE query here SET [value:costing output] = ([value:costing output] * -1)"
            Call RunSQL(strSQLUpdateVAT, objRecordsetVAT, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
            strSQLUpdateCodes = "UPDATE query here SET [value:costing output] = ([value:costing output] * -1) different WHERE clause"
            Call RunSQL(strSQLUpdateCodes, objRecordsetUpdateCodes, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
            Set objRecordsetVAT = Nothing
            Set objRecordsetUpdateCodes = Nothing
        End Function

    It does both the import and the update and seems to throw this error afterwards. If I comment out ImportTransactions it doesn't throw an error; however, I have produced similar code for another .vbs file and that does not throw any errors. Thanks in advance for any help, Clare
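
    One detail that stands out (a guess, not a confirmed diagnosis): ImportTransactions opens an ADODB.Connection but never closes or releases it, and crashes referencing 0xffffffff as a script terminates are commonly caused by COM objects being torn down in the wrong order. A minimal sketch of explicit cleanup, using the same object names as above:

        ' At the end of ImportTransactions, after closing the recordset:
        Set objCommand.ActiveConnection = Nothing   ' detach the command first
        Set objCommand = Nothing
        objConnection.Close                         ' close the connection explicitly
        Set objConnection = Nothing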


  • Xml failing to deserialise

    - by Carnotaurus
    I call a method to get my pages [see GetPages(String xmlFullFilePath)]. The FromXElement method is supposed to deserialise the LitePropertyData elements to strongly typed LitePropertyData objects. Instead it fails on the following line:

        return (T)xmlSerializer.Deserialize(memoryStream);

    and gives the following error:

        <LitePropertyData xmlns=''> was not expected.

    What am I doing wrong? I have included the methods that I call and the xml data:

        public static T FromXElement<T>(this XElement xElement)
        {
            using (var memoryStream = new MemoryStream(Encoding.ASCII.GetBytes(xElement.ToString())))
            {
                var xmlSerializer = new XmlSerializer(typeof(T));
                return (T)xmlSerializer.Deserialize(memoryStream);
            }
        }

        public static List<LitePageData> GetPages(String xmlFullFilePath)
        {
            XDocument document = XDocument.Load(xmlFullFilePath);
            List<LitePageData> results = (from record in document.Descendants("row")
                                          select new LitePageData
                                          {
                                              Guid = IsValid(record, "Guid") ? record.Element("Guid").Value : null,
                                              ParentID = IsValid(record, "ParentID") ? Convert.ToInt32(record.Element("ParentID").Value) : (Int32?)null,
                                              Created = Convert.ToDateTime(record.Element("Created").Value),
                                              Changed = Convert.ToDateTime(record.Element("Changed").Value),
                                              Name = record.Element("Name").Value,
                                              ID = Convert.ToInt32(record.Element("ID").Value),
                                              LitePageTypeID = IsValid(record, "ParentID") ? Convert.ToInt32(record.Element("ParentID").Value) : (Int32?)null,
                                              Html = record.Element("Html").Value,
                                              FriendlyName = record.Element("FriendlyName").Value,
                                              Properties = record.Element("Properties") != null ? record.Element("Properties").Element("LitePropertyData").FromXElement<List<LitePropertyData>>() : new List<LitePropertyData>()
                                          }).ToList();
            return results;
        }

    Here is the xml:

        <?xml version="1.0" encoding="utf-8"?>
        <root>
          <rows>
            <row>
              <ID>1</ID>
              <ImageUrl></ImageUrl>
              <Html>Home page</Html>
              <Created>01-01-2012</Created>
              <Changed>01-01-2012</Changed>
              <Name>Home page</Name>
              <FriendlyName>home-page</FriendlyName>
            </row>
            <row xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
              <Guid>edeaf468-f490-4271-bf4d-be145bc6a1fd</Guid>
              <ID>8</ID>
              <Name>Unused</Name>
              <ParentID>1</ParentID>
              <Created>2006-03-25T10:57:17</Created>
              <Changed>2012-07-17T12:24:30.0984747+01:00</Changed>
              <ChangedBy />
              <LitePageTypeID xsi:nil="true" />
              <Html>
                What is the purpose of this option? This option checks the current document for accessibility issues. It uses Bobby to provide details of whether the current web page conforms to W3C's WCAG criteria for web content accessibility. Issues with Bobby and Cynthia Bobby and Cynthia are free services that supposedly allow a user to expose web page accessibility barriers. It is something of a guide but perhaps a blunt instrument. I tested a few of the webpages that I have designed. Sure enough, my pages fall short and for good reason. I am not about to claim that Bobby and Cynthia are useless. Although it is useful and commendable tool, it project appears to be overly ambitious. Nevertheless, let me explain my issues with Bobby and Cynthia: First, certain W3C standards for designing web documents are often too strict and unworkable. For instance, in some versions W3C standards for HTML, certain tags should not include a particular attribute, whereas in others they are requisite if the document is to be "well-formed". The standard that a designer chooses is determined usually by the requirements specification document. This specifies which browsers and versions of those browsers that the web page is expected to correctly display. Forcing a hypertext document to conform strictly to a specific W3C standard for HTML is often no simple task. In the worst case, it cannot conform without losing some aesthetics or accessibility functionality. Second, the case of HTML documents is not an isolated case. Standards for XML, XSL, JavaScript, VBScript, are analogous. Therefore, you might imagine the problems when you begin to combine these languages and formats in an HTML document. Third, there is always more than one way to skin a cat. For example, Bobby and Cynthia may flag those IMG tags that do not contain a TITLE attribute. There might be good reason that a web developer chooses not to include the title attribute. The title attribute has a limited numbers of characters and does not support carriage returns. This is a major defect in the design of this tag. In fact, before the TITLE attribute was supported, there was the ALT attribute. Most browsers support both, yet they both perform a similar function. However, both attributes share the same deficiencies. In practice, there are instances where neither attribute would be used. Instead, for example, the developer would write some JavaScript or VBScript to circumvent these deficiencies. The concern is that Bobby and Cynthia would not notice this because it does not "understand" what the JavaScript does.
              </Html>
              <FriendlyName>unused</FriendlyName>
              <IsDeleted>false</IsDeleted>
              <Properties>
                <LitePropertyData>
                  <Description>Image for the page</Description>
                  <DisplayEditUI>true</DisplayEditUI>
                  <OwnerTab>1</OwnerTab>
                  <DisplayName>Image Url</DisplayName>
                  <FieldOrder>1</FieldOrder>
                  <IsRequired>false</IsRequired>
                  <Name>ImageUrl</Name>
                  <IsModified>false</IsModified>
                  <ParentPageID>3</ParentPageID>
                  <Type>String</Type>
                  <Value xsi:type="xsd:string">smarter.jpg</Value>
                </LitePropertyData>
                <LitePropertyData>
                  <Description>WebItemApplicationEnum</Description>
                  <DisplayEditUI>true</DisplayEditUI>
                  <OwnerTab>1</OwnerTab>
                  <DisplayName>WebItemApplicationEnum</DisplayName>
                  <FieldOrder>1</FieldOrder>
                  <IsRequired>false</IsRequired>
                  <Name>WebItemApplicationEnum</Name>
                  <IsModified>false</IsModified>
                  <ParentPageID>3</ParentPageID>
                  <Type>Number</Type>
                  <Value xsi:type="xsd:string">1</Value>
                </LitePropertyData>
              </Properties>
              <Seo>
                <Author>Phil Carney</Author>
                <Classification />
                <Copyright>Carnotaurus</Copyright>
                <Description>
                  What is the purpose of this option? This option checks the current document for accessibility issues. It uses Bobby to provide details of whether the current web page conforms to W3C's WCAG criteria for web content accessibility. Issues with Bobby and Cynthia Bobby and Cynthia are free services that supposedly allow a user to expose web page accessibility barriers. It is something of a guide but perhaps a blunt instrument. I tested a few of the webpages that I have designed. Sure enough, my pages fall short and for good reason. I am not about to claim that Bobby and Cynthia are useless. Although it is useful and commendable tool, it project appears to be overly ambitious. Nevertheless, let me explain my issues with Bobby and Cynthia: First, certain W3C standards for designing web documents are often too strict and unworkable. For instance, in some versions W3C standards for HTML, certain tags should not include a particular attribute, whereas in others they are requisite if the document is to be "well-formed". The standard that a designer chooses is determined usually by the requirements specification document. This specifies which browsers and versions of those browsers that the web page is expected to correctly display. Forcing a hypertext document to conform strictly to a specific W3C standard for HTML is often no simple task. In the worst case, it cannot conform without losing some aesthetics or accessibility functionality. Second, the case of HTML documents is not an isolated case. Standards for XML, XSL, JavaScript, VBScript, are analogous. Therefore, you might imagine the problems when you begin to combine these languages and formats in an HTML document. Third, there is always more than one way to skin a cat. For example, Bobby and Cynthia may flag those IMG tags that do not contain a TITLE attribute. There might be good reason that a web developer chooses not to include the title attribute. The title attribute has a limited numbers of characters and does not support carriage returns. This is a major defect in the design of this tag. In fact, before the TITLE attribute was supported, there was the ALT attribute. Most browsers support both, yet they both perform a similar function. However, both attributes share the same deficiencies. In practice, there are instances where neither attribute would be used. Instead, for example, the developer would write some JavaScript or VBScript to circumvent these deficiencies. The concern is that Bobby and Cynthia would not notice this because it does not "understand" what the JavaScript does.
                </Description>
                <Keywords>unused</Keywords>
                <Title>unused</Title>
              </Seo>
            </row>
          </rows>
        </root>

    EDIT Here are my entities:

        public class LitePropertyData
        {
            public virtual string Description { get; set; }
            public virtual bool DisplayEditUI { get; set; }
            public int OwnerTab { get; set; }
            public virtual string DisplayName { get; set; }
            public int FieldOrder { get; set; }
            public bool IsRequired { get; set; }
            public string Name { get; set; }
            public virtual bool IsModified { get; set; }
            public virtual int ParentPageID { get; set; }
            public LiteDataType Type { get; set; }
            public object Value { get; set; }
        }

        [Serializable]
        public class LitePageData
        {
            public String Guid { get; set; }
            public Int32 ID { get; set; }
            public String Name { get; set; }
            public Int32? ParentID { get; set; }
            public DateTime Created { get; set; }
            public String CreatedBy { get; set; }
            public DateTime Changed { get; set; }
            public String ChangedBy { get; set; }
            public Int32? LitePageTypeID { get; set; }
            public String Html { get; set; }
            public String FriendlyName { get; set; }
            public Boolean IsDeleted { get; set; }
            public List<LitePropertyData> Properties { get; set; }
            public LiteSeoPageData Seo { get; set; }

            /// <summary>
            /// Saves the specified XML full file path.
            /// </summary>
            /// <param name="xmlFullFilePath">The XML full file path.</param>
            public void Save(String xmlFullFilePath)
            {
                XDocument doc = XDocument.Load(xmlFullFilePath);
                XElement demoNode = this.ToXElement<LitePageData>();
                demoNode.Name = "row";
                doc.Descendants("rows").Single().Add(demoNode);
                doc.Save(xmlFullFilePath);
            }
        }
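
    For what it's worth, that error message is the classic symptom of a root-element mismatch: an XmlSerializer constructed for List<LitePropertyData> expects a document whose root element is <ArrayOfLitePropertyData>, but the code above hands it a single <LitePropertyData> element. A minimal sketch of one way around this, assuming the types above (and noting that the object-typed Value property with its xsi:type attribute may need extra handling of its own), is to deserialise each child element individually:

        // Hypothetical replacement for the Properties assignment inside GetPages;
        // each <LitePropertyData> element matches the default root name for the type.
        Properties = record.Element("Properties") != null
            ? record.Element("Properties")
                    .Elements("LitePropertyData")
                    .Select(p => p.FromXElement<LitePropertyData>())
                    .ToList()
            : new List<LitePropertyData>()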


  • help needed on deciphering the g++ vtable dumps

    - by Ganesh Kundapur
    Hi, for the following class hierarchy:

        class W {
        public:
            virtual void f() { cout << "W::f()" << endl; }
            virtual void g() { cout << "W::g()" << endl; }
        };

        class AW : public virtual W {
        public:
            void g() { cout << "AW::g()" << endl; }
        };

        class BW : public virtual W {
        public:
            void f() { cout << "BW::f()" << endl; }
        };

        class CW : public AW, public BW { };

    the output of g++ -fdump-class-hierarchy is:

        Vtable for W
        W::_ZTV1W: 4u entries
        0     (int ()(...))0
        4     (int ()(...))(& _ZTI1W)
        8     W::f
        12    W::g

        Class W
           size=4 align=4  base size=4 base align=4
        W (0xb6e3da50) 0 nearly-empty
            vptr=((& W::_ZTV1W) + 8u)

        Vtable for AW
        AW::_ZTV2AW: 7u entries
        0     0u
        4     0u
        8     0u
        12    (int ()(...))0
        16    (int ()(...))(& _ZTI2AW)
        20    W::f
        24    AW::g

        VTT for AW
        AW::_ZTT2AW: 2u entries
        0     ((& AW::_ZTV2AW) + 20u)
        4     ((& AW::_ZTV2AW) + 20u)

        Class AW
           size=4 align=4  base size=4 base align=4
        AW (0xb6dbf6c0) 0 nearly-empty
            vptridx=0u vptr=((& AW::_ZTV2AW) + 20u)
          W (0xb6e3da8c) 0 nearly-empty virtual primary-for AW (0xb6dbf6c0)
              vptridx=4u vbaseoffset=-0x00000000000000014

        Vtable for BW
        BW::_ZTV2BW: 7u entries
        0     0u
        4     0u
        8     0u
        12    (int ()(...))0
        16    (int ()(...))(& _ZTI2BW)
        20    BW::f
        24    W::g

        VTT for BW
        BW::_ZTT2BW: 2u entries
        0     ((& BW::_ZTV2BW) + 20u)
        4     ((& BW::_ZTV2BW) + 20u)

        Class BW
           size=4 align=4  base size=4 base align=4
        BW (0xb6dbf7c0) 0 nearly-empty
            vptridx=0u vptr=((& BW::_ZTV2BW) + 20u)
          W (0xb6e3dac8) 0 nearly-empty virtual primary-for BW (0xb6dbf7c0)
              vptridx=4u vbaseoffset=-0x00000000000000014

        Vtable for CW
        CW::_ZTV2CW: 14u entries
        0     0u
        4     0u
        8     4u
        12    (int ()(...))0
        16    (int ()(...))(& _ZTI2CW)
        20    BW::_ZTv0_n12_N2BW1fEv
        24    AW::g
        28    4294967292u
        32    4294967292u
        36    0u
        40    (int ()(...))-0x00000000000000004
        44    (int ()(...))(& _ZTI2CW)
        48    BW::f
        52    0u

        Construction vtable for AW (0xb6dbf8c0 instance) in CW
        CW::_ZTC2CW0_2AW: 7u entries
        0     0u
        4     0u
        8     0u
        12    (int ()(...))0
        16    (int ()(...))(& _ZTI2AW)
        20    W::f
        24    AW::g

        Construction vtable for BW (0xb6dbf900 instance) in CW
        CW::_ZTC2CW4_2BW: 13u entries
        0     4294967292u
        4     4294967292u
        8     0u
        12    (int ()(...))0
        16    (int ()(...))(& _ZTI2BW)
        20    BW::f
        24    0u
        28    0u
        32    4u
        36    (int ()(...))4
        40    (int ()(...))(& _ZTI2BW)
        44    BW::_ZTv0_n12_N2BW1fEv
        48    W::g

        VTT for CW
        CW::_ZTT2CW: 7u entries
        0     ((& CW::_ZTV2CW) + 20u)
        4     ((& CW::_ZTC2CW0_2AW) + 20u)
        8     ((& CW::_ZTC2CW0_2AW) + 20u)
        12    ((& CW::_ZTC2CW4_2BW) + 20u)
        16    ((& CW::_ZTC2CW4_2BW) + 44u)
        20    ((& CW::_ZTV2CW) + 20u)
        24    ((& CW::_ZTV2CW) + 48u)

        Class CW
           size=8 align=4  base size=8 base align=4
        CW (0xb6bea2d0) 0
            vptridx=0u vptr=((& CW::_ZTV2CW) + 20u)
          AW (0xb6dbf8c0) 0 nearly-empty primary-for CW (0xb6bea2d0)
              subvttidx=4u
            W (0xb6e3db04) 0 nearly-empty virtual primary-for AW (0xb6dbf8c0)
                vptridx=20u vbaseoffset=-0x00000000000000014
          BW (0xb6dbf900) 4 nearly-empty lost-primary
              subvttidx=12u vptridx=24u vptr=((& CW::_ZTV2CW) + 48u)
            W (0xb6e3db04) alternative-path

    What are each of the entries in the following?

        Vtable for AW
        AW::_ZTV2AW: 7u entries
        0     0u    // ?
        4     0u    // ?
        8     0u    // ?

        Vtable for CW
        CW::_ZTV2CW: 14u entries
        0     0u                                   // ?
        4     0u                                   // ?
        8     4u                                   // ?
        12    (int ()(...))0
        16    (int ()(...))(& _ZTI2CW)
        20    BW::_ZTv0_n12_N2BW1fEv               // ?
        24    AW::g
        28    4294967292u                          // ?
        32    4294967292u                          // ?
        36    0u                                   // ?
        40    (int ()(...))-0x00000000000000004    // some delta
        44    (int ()(...))(& _ZTI2CW)
        48    BW::f
        52    0u                                   // ?

    Thanks, Ganesh


  • EF Code first + Delete Child Object from Parent?

    - by ebb
    I have a one-to-many relationship between my table Case and my other table CaseReplies. I'm using EF Code First and now want to delete a CaseReply from a Case object; however, it seems impossible to do such a thing, because EF just tries to remove the CaseId from the specific CaseReply record and not the record itself. In short: Case just removes the relationship between itself and the CaseReply; it does not delete the CaseReply. My code:

        // Case.cs (Case object)
        public class Case
        {
            [Key]
            public int Id { get; set; }
            public string Topic { get; set; }
            public string Message { get; set; }
            public DateTime Date { get; set; }
            public Guid UserId { get; set; }
            public virtual User User { get; set; }
            public virtual ICollection<CaseReply> Replies { get; set; }
        }

        // CaseReply.cs (CaseReply object)
        public class CaseReply
        {
            [Key]
            public int Id { get; set; }
            public string Message { get; set; }
            public DateTime Date { get; set; }
            public int CaseId { get; set; }
            public Guid UserId { get; set; }
            public virtual User User { get; set; }
            public virtual Case Case { get; set; }
        }

        // RepositoryBase.cs
        public class RepositoryBase<T> : IRepository<T> where T : class
        {
            public IDbContext Context { get; private set; }
            public IDbSet<T> ObjectSet { get; private set; }

            public RepositoryBase(IDbContext context)
            {
                Contract.Requires(context != null);
                Context = context;
                if (context != null)
                {
                    ObjectSet = Context.CreateDbSet<T>();
                    if (ObjectSet == null)
                    {
                        throw new InvalidOperationException();
                    }
                }
            }

            public IRepository<T> Remove(T entity)
            {
                ObjectSet.Remove(entity);
                return this;
            }

            public IRepository<T> SaveChanges()
            {
                Context.SaveChanges();
                return this;
            }
        }

        // CaseRepository.cs
        public class CaseRepository : RepositoryBase<Case>, ICaseRepository
        {
            public CaseRepository(IDbContext context) : base(context)
            {
                Contract.Requires(context != null);
            }

            public bool RemoveCaseReplyFromCase(int caseId, int caseReplyId)
            {
                Case caseToRemoveReplyFrom = ObjectSet.Include(x => x.Replies).FirstOrDefault(x => x.Id == caseId);
                var delete = caseToRemoveReplyFrom.Replies.FirstOrDefault(x => x.Id == caseReplyId);
                caseToRemoveReplyFrom.Replies.Remove(delete);
                return Context.SaveChanges() >= 1;
            }
        }

    Thanks in advance.
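
    One way to get an actual DELETE here (a sketch, not the asker's confirmed solution) is to remove the child through its own set rather than through the parent's collection; removing an entity from a navigation collection only severs the association, which is why EF tries to null out CaseId. Assuming a repository built on the same RepositoryBase<CaseReply>:

        // Hypothetical method on a CaseReply repository:
        public bool DeleteCaseReply(int caseReplyId)
        {
            // Removing the entity from its own DbSet marks it Deleted, so
            // SaveChanges issues a DELETE instead of trying to clear the FK.
            var reply = ObjectSet.FirstOrDefault(r => r.Id == caseReplyId);
            if (reply == null)
                return false;
            ObjectSet.Remove(reply);
            return Context.SaveChanges() >= 1;
        }

    Alternatively, mapping the relationship as identifying (making CaseId part of the child's composite key) makes EF treat removal from the parent's collection as a delete.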


  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post that talks about the advantages and disadvantages of shrinking and why one should not shrink a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article in itself. For everybody to read his wonderful explanation, I am posting it in this blog post. Thanks Imran!

    Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.

    Comments from Imran -

    Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files.

    When we create a new database inside SQL Server, it is typical that SQL Server creates two physical files in the Operating System: one with the .MDF extension, and another with the .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transactional Log File. If you add one or more data files to a database, the physical file that will be created in the Operating System will have the extension .NDF, which is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same extension, .LDF.

    The questions now are: "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and "Why is the .MDF file called a primary data file?"

    Answers:

    Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment.

    A data file with an .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is "Data about Data". An example of Meta Data includes the system objects that store information about other objects, except the data stored by the users: sysobjects stores information about all objects in that database; sysindexes stores information about all indexes and rows of every table in that database; syscolumns stores information about all columns that each table has in that database; sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it's system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (tables, indexes etc.) in the Primary Data File (this file is present in the Primary File Group), by mentioning that you want to create the object under the Primary File Group.

    Any additional data file that you add to the database will have only transactional data but no Meta Data, so that's why it is called a Secondary Data File. It is given the extension .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File. There are many advantages to storing data in different files under different file groups: you can put your read-only tables in one file (file group) and read-write tables in another file (file group), and take a backup of only the file group that has the read-write data, so that you can avoid backing up read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance.

    A real-time scenario where we use files could be this one: let's say you have created a database called MYDB on the D: drive, which has 50 GB of space. You also have one database file (.MDF) and one log file on the D: drive, and suppose that all of that 50 GB of space has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database and create this new data file on the new hard disk, then move some of the objects from one file to another, and set the file group under which you added the new file as the default file group, so that any new object that is created goes into the new files, unless specified otherwise.

    Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic, Shrinking.

    First of all, I disagree with the Microsoft terminology for naming this feature "Shrinking". Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from the database files and releasing the empty space either to the Operating System or to SQL Server.

    Let's examine this through an example. Say you have a database, MYDB, with a size of 50 GB that has about 20 GB of free space, which means 30 GB of the database is filled with data and 20 GB of space is free in the database because it is not currently utilized by SQL Server (the database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System, then (MIND YOU) you can only shrink the database down to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data.

    So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to set the Auto Growth option ON), you CANNOT issue the SHRINK Database/File command, for two reasons:

    1. There is no empty space to be released, because the Shrink command does not compress the database; it only removes the empty space from the database files, and here there is no empty space.
    2. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink the database.

    Now answering your questions:

    (1) Q: What are the USEDPAGES and ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)?

    A: According to Books Online (for SQL Server 2000): UsedPages is the number of 8-KB pages currently used by the file; EstimatedPages is the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important note: before asking any question, make sure you go through Books Online or search Google first. Doing so has many advantages: 1. If someone else has already had the same question, the chances that it is already answered are more than 50%. 2. It reduces your waiting time for the answer.

    (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment?

    A: As far as my knowledge goes, there is no difference; both work the same way. One advantage of using the command from Query Analyzer is that your console won't freeze; you can carry on with your regular activities using Enterprise Manager.

    (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end users, DBAs or the SERVER/SYSTEM itself?

    A: An .NDF file is a secondary data file. You have never heard of it because, when a database is created, SQL Server creates the database by default with only one data file (.MDF) and one log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database using the CREATE DATABASE command. Unless you have added an extra data file, you will not see one. This file is used by SQL Server to store data saved by the users.

    Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
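
    To make the commands discussed above concrete, here is a minimal T-SQL sketch; the 10 is a target free-space percentage, and the logical file name NorthWind_Data is an assumption for illustration. As the post stresses, shrinking fragments indexes and should be a rare, deliberate operation:

        -- Shrink the whole database, aiming to leave 10% free space:
        DBCC SHRINKDATABASE (NorthWind, 10);

        -- Or shrink a single data file to a target size in MB
        -- (assumes the data file's logical name is NorthWind_Data):
        USE NorthWind;
        DBCC SHRINKFILE (NorthWind_Data, 100);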


  • Menu widget - no jQuery nor Javascript required - pure CSS

    - by Renso
    Goal: Create a menu widget that does not require any javascript, extremely lightweight, very fast, soley based on CSS, compatible with FireFox and Chrome. Issues: May have some rendering issues in some versions of IE, sorry :-) Instruments: css file html with specific menu format jQuery-ui library - optional if you want to use your own images/colors Implementation Details: HTML: <div id="header">   <div id="header_Menubar">     <ul class="linkList0 ui-tabs-nav ui-helper-reset ui-helper-clearfix ui-widget-header ui-corner-all">         <li class="first more ui-state-default ui-corner-top ui-tabs-selected"><a title="Home" href="/Home">Home</a>             <ul class="linkList01 ui-tabs-nav ui-helper-reset ui-helper-clearfix ui-widget-header ui-corner-all">                 <li class="ifirst ui-state-default ui-corner-top"><abbr title="Go Home"></abbr><a title="Home" href="/Home">Home</a></li>             </ul>         </li>         <li class="more ui-state-default ui-corner-top ui-tabs-selected"><a title="Menu 2" href="/Menu2a">Menu 2</a>             <ul class="linkList01 ui-tabs-nav ui-helper-reset ui-helper-clearfix ui-widget-header ui-corner-all">                 <li class="ifirst ui-state-default ui-corner-top"><abbr title="Menu 2 a"></abbr><a title="Menu 2 a" href="/Menu2a">Menu 2 a</a></li>                 <li class="ilast ui-state-default ui-corner-top"><abbr title="Menu 2 b"></abbr><a title="Menu 2 b" href="/Menu2b">Menu 2 b</a></li>             </ul>         </li>         <li class="more red ui-state-default ui-corner-top ui-tabs-selected"><a title="Menu 3" href="/Menu3 d">Menu 3</a>             <ul class="linkList01 ui-tabs-nav ui-helper-reset ui-helper-clearfix ui-widget-header ui-corner-all">                 <li class="ifirst ui-state-default ui-corner-top"><abbr title="Menu 3 a"><a title="Menu 3 a" href="/Menu3a">Menu 3 a</a></abbr></li>                 <li class="ui-state-default ui-corner-top"><abbr title="Menu 3 b"><a title="Menu 3 b" href="/Menu3b">Menu 3 b</a></abbr></li>                 <li class="ui-state-default ui-corner-top"><abbr title="Menu 3 c"><a title="Menu 3 c" href="/Menu3c">Menu 3 c</a></abbr></li>                 <li class="ilast ui-state-default ui-corner-top"><abbr title="Menu 3 d"><a title="Menu 3 d" href="/Menu3d">Menu 3 d</a></abbr></li>             </ul>         </li>     </ul>     </div> </div> CSS: /*    =Menu     -----------------------------------------------------------------------------------------    */ #header #header_Menubar {     margin: 0;     padding: 0;     border: 0;     width: 100%;     height: 22px; } #header {     background-color: #99cccc;     background-color: #aaccee;     background-color: #5BA3E0;     background-color: #006cb1; } /* Set menu bar background color     */ #header #header_Menubar {     background-attachment: scroll;     background-position: left center;     background-repeat: repeat-x; } /*    Set main (horizontal) menu typology    */ #header .linkList0 {     padding: 0 0 1em 0;     margin-bottom: 1em;     font-family: 'Trebuchet MS', 'Lucida Grande',           Verdana, Lucida, Geneva, Helvetica,           Arial, sans-serif;     font-weight: bold;     font-size: 1.085em;     font-size: 1em; } /*    Set all ul properties    */ #header .linkList0, #header .linkList0 ul {     list-style: none;     margin: 0;     padding: 0;     list-style-position: outside; } /*    Set all li properties    */ #header .linkList0 > li {     float: left;     position: relative;     font-size: 90%;     margin: 0 0 -1px;     width: 9.7em;     
padding-right: 2em;     z-index: 100;    /*IE7:    Fix for IE7 hiding drop down list behind some other page elements    */ } /*    Set all li properties    */ #header .linkList01 > li {     width: 190px; } #header .linkList0 .linkList01 li {     margin-left: 0px; } /*    Set all list background image properties    */ /*#header .linkList0 li a {     background-position: left center;     background-image: url(  '../Content/Images/VerticalButtonBarGradientFade.png' );     background-repeat: repeat-x;     background-attachment: scroll; }*/ /*    Set all A ancor properties    */ #header .linkList0 li a {     display: block;     text-decoration: none;     line-height: 22px; } /*    IE7: Fix for a bug in IE7 where the margins between list items is doubled - need to set height explicitly    */ *+html #header .linkList0 ul li {     height: auto;     margin-bottom: -.3em; } /*    Menu:    Set different borders for different nested level lists     --------------------------------------------------------------    */ #header .linkList0 > li a {     border-left: 10px solid Transparent;     border-right: none; } #header .linkList0 > li a {     border-left: 0px;     margin-left: 0px;     border-right: none; } #header .linkList0 .linkList01 > li a {     border-left: 8px solid #336699;     border-right: none;     border: 1px solid Transparent;     -moz-border-radius: 5px 5px 5px 5px;     -moz-box-shadow: 3px 3px 4px #696969; } #header .linkList0 .linkList01 .linkList001 > li a {     border-left: 6px solid #336699;     border-right: none;     border: 1px solid Transparent;     -moz-border-radius: 5px 5px 5px 5px;     -moz-box-shadow: 3px 3px 4px #696969; } #header .linkList0 .linkList01 .linkList001 .linkList0001 > li a {     border-left: 4px solid #336699;     border-right: none;     border: 1px solid Transparent;     -moz-border-radius: 5px 5px 5px 5px;     -moz-box-shadow: 3px 3px 4px #696969; }     /*    Link and Visited pseudo-class settings for all lists (ul)    */ #header .linkList0 a:link, #header .linkList0 a:visited {     display: block;     text-decoration: none;     padding-left: 1em; } /*    Hide all the nested/sub menu items    */ #header .linkList0 ul {     display: none;     padding: 0;     position: absolute;    /*Important: must not impede on other page elements when drop down opens up    */ } /*    Hide all detail popups    */ #header .detailPopup {     display: none; } /*    Set the typology of all sub-menu list items li    */ /*#header .linkList0 ul li {     background-color: #AACCEE;     background-position: left center;     background-image: url(  '../Content/Images/VerticalButtonBarGradientFade.png' );     background-repeat: repeat-x;     background-attachment: scroll; }*/ #header .linkList0 ul li.more {     background: Transparent url('../Content/Images/ArrowRight.gif') no-repeat right center; } /*    Header list's margin and padding for all list items    */ #header .linkList0 ul li {     margin: 0 0 0 1em;     padding: 0; } #header .linkList01 ul li {     margin: 0;     padding: 0;     width: 189px; } /*    Set margins for the third li sibling (Plan a Call) to display to the right of the parent menu     to avoid the sub-menu overlaying the menu items below    */ #header .linkList0 li.more .linkList01 li.more > ul.linkList001 {     margin: -1.7em 0 0 13.2em;    /*Important, must be careful, if tbe EM since gap increases too much bewteen nested lists the gap will make the nested-list collapse prematurely    */ } /*    Set right hand arrow for list items with sub-menus (class-more)    
*/ #header li.more {     background: Transparent url('../Content/Images/ArrowRight.gif') no-repeat right center;     padding-right: 48px; } /*    Menu:    Dynamic Behavior of menu items (hover, visted, etc)     -----------------------------------------------------------    */ #header .linkList0 li a:link, #header .linkList01 li a:link {     display: block; } #header .linkList0 li a:visited, #header .linkList01 li a:visited {     display: block; } #header .linkList0 > li:hover { } #header .linkList01 > li:hover a ,#header .linkList001 > li:hover a {     text-decoration: underline; } #header .linkList0 > li abbr:hover span.detailPopup {     display: block;     position: absolute;     top: 1em;     left: 17em;     border: double 1px #696969;     border-style: outset;     width: 120%;     height: auto;     padding: 5px;     font-weight: 100; } #header .linkList0 > li:hover ,#header .linkList0 .linkList01 > li:hover { } #header .linkList0 .linkList01 .linkList001 > li:hover { } #header .linkList0 .linkList01 .linkList001 .linkList0001 > li:hover { } /*    Display the hidden sub menu when hovering over the parent ul's li    */ #header .linkList0 li:hover > ul {     display: block; } /*    Display the hidden sub menu when hovering over the parent ul's li    */ #header .linkList0 .linkList01 li:hover > ul {     display: block;         background: -moz-linear-gradient(top, #1E83CC, #619FCD);     /* Chrome, Safari:*/     background: -webkit-gradient(linear,                 center top, center bottom, from(#1E83CC), to(#619FCD)); } /*    Display the hidden sub menu when hovering over the parent ul's li    */ #header .linkList0 .linkList01 .linkList001 li:hover > ul {     display: block; } /*    Set right hand arrow for list items with sub-menus (class-more) on hover    */ #header li.more:hover { } Also some CSS for global settings that will affect this menu, you of course will have some other styling, but included it here so you can see how/why some css properties were set here: /* Neutralize styling:    Elements we want to clean out entirely: */ html, body {     margin: 0;     padding: 0;     font: 62.5%/120% Verdana, Arial, Helvetica, sans-serif; } /* Neutralize styling:    Elements with a vertical margin: */ h1, h2, h3, h4, h5, h6, p, pre, blockquote, ul, ol, dl, address {     margin: 0;    /*    most browsers set some default value that is not shared by all browsers    */     padding: 0;        /*    some borowsers default padding, set to 0 for all    */ } /* Apply left margin:    Only to the few elements that need it: */ li, dd, blockquote {     margin-left: 1em; }


  • Procedural... house with rooms generator

    - by pek
    I've been looking at some algorithms and articles about procedurally generating a dungeon. The problem is, I'm trying to generate a house with rooms, and they don't seem to fit my requirements. For one, dungeons have corridors, where houses have halls. And while initially they might seem the same, a hall is nothing more than the area that isn't a room, whereas a corridor is specifically designed to connect one area to another. Another important difference with a house is that you have a specific width and height, and you have to fill the entire thing with rooms and halls, whereas with a dungeon there is empty space. I think a hall in a house is something in between a dungeon corridor (it gets you to other rooms) and empty space in a dungeon (it's not explicitly defined in code). More specifically, the requirements are:

    - There is a set of predefined rooms. I cannot create walls and doors on the fly.
    - Rooms can be rotated but not resized. Again, because I have a predefined set of rooms, I can only rotate them, not resize them.
    - The house dimensions are set and the house has to be entirely filled with rooms (or halls). I.e. I want to fill a 14x20 house with the available rooms, making sure there is no empty space.

    Here are some images to make this a little more clear: As you can see, in the house, the "empty space" is still walkable and it gets you from one room to another. So, having said all this, maybe a house is just a really, really tightly packed dungeon with corridors. Or it's something easier than a dungeon. Maybe there is something out there and I haven't found it because I don't really know what to search for. This is where I'd like your help: could you give me pointers on how to design this algorithm? Any thoughts on what steps it will take? If you have created a dungeon generator, how would you modify it to fit my requirements? You can be as specific or as generic as you like. I'm looking to pick your brains, really.


  • Your personal backlog

    - by johndoucette
    Whenever I start a new project or come in during a hectic time to help salvage a deliverable, there is always a backlog. Generating the backlog can be a daunting exercise, but it is worth the effort. Once I have a backlog, I feel in control and the chaos begins to quell. In your everyday life, you too should keep a backlog. Here is how I do it:

    1. Always carry a notebook.
    2. Start each day by marking a new page with today's date.
    3. Flip to yesterday's notes and copy every task with an empty checkbox next to it to the new empty page (today).
    4. As the day progresses and you go to meetings, do your work, or get interrupted to do something, jot it down on today's page and put an empty checkbox next to it. If you get it done during the day, awesome. Mark it complete.

    Keep carrying and writing every task to each new day until it is complete. Maybe one day you will have an empty backlog and your sprint will be complete!


  • Syntax logic suggestions

    - by Anna
    This syntax will be used inside HTML attributes. Here are a few examples of what I have so far:

        <input name="a" conditions="!b, c" />
        <input name="b" />
        <input name="c" />

    This will make input "a" do something if b is not checked and c is checked (b and c are assumed to be checkboxes if they don't have a :value defined).

        <input name="a" conditions="!b:foo|bar, c:foo" />
        <input name="b" />
        <input name="c" />

    This will make input "a" do something if b doesn't have the foo or bar value, and if c has the foo value.

        <input name="a" conditions="!b:EMPTY" />
        <input name="b" />

    Makes input "a" do something if b has a value assigned (i.e. is not empty). So, essentially, "," acts as logical AND, ":" as equals (=), "!" as NOT, and "|" as OR. The "|" (OR) is only needed between values (at least I think so), and AND is not needed between values for obvious reasons :) EMPTY means an empty value, like <input value="" />. Do you have any suggestions on improving this syntax, like making it more human friendly? For example, I think the "EMPTY" keyword is not really appropriate and should be replaced with a character, but I don't know which one to choose.


  • One of my most frequently used commands

    - by Kevin Smith
    On a Linux or UNIX server this is one of my most frequently used commands:

        find . -name "*.htm" -exec grep -iH "alter session" {} \;

    It is an easy way to find a string you know is in a group of files, but don't know or can't remember which file it is in. For the example above, I knew that WebCenter Content sends a bunch of alter session commands to the database when it opens a new database connection. I wanted to find where these were defined and what all the alter session commands were. So, I ran these commands:

        cd /opt/oracle/middleware/Oracle_ECM1/ucm/idc/resources/core
        find . -name "*.htm" -exec grep -iH "alter session" {} \;

    And the results were:

        ./tables/query.htm: ALTER SESSION SET optimizer_mode = ?
        ./tables/query.htm: ALTER SESSION SET NLS_LENGTH_SEMANTICS = ?
        ./tables/query.htm: ALTER SESSION SET NLS_SORT = ?
        ./tables/query.htm: ALTER SESSION SET NLS_COMP = ?
        ./tables/query.htm: ALTER SESSION SET CURSOR_SHARING = ?
        ./tables/query.htm: ALTER SESSION SET EVENTS '30579 trace name context forever, level 2'
        ./tables/query.htm: ALTER SESSION SET NLS_DATE_FORMAT = ?
        ./tables/query.htm: alter session set events '30579 trace name context forever, level 2'

    I could then go edit the query.htm file and find the include that contained all the ALTER SESSION commands.
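
    If GNU grep is available (an assumption; --include is a GNU extension rather than POSIX), the same search can be written without find:

        # -r recurses, -i ignores case, -H prints file names,
        # --include restricts the search to *.htm files
        grep -riH --include="*.htm" "alter session" .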


  • Spring + JSR 303 Validation group is ignored [closed]

    - by nsideras
    We have a simple bean with JSR-303 annotations:

        public class CustomerDTO {

            private static final long serialVersionUID = 1L;

            private Integer id;

            @NotEmpty(message = "{customer.firstname.empty}")
            private String firstName;

            @NotEmpty(message = "{customer.lastname.empty}")
            private String lastName;

            @NotEmpty(groups = { PasswordChange.class }, message = "{password.empty}")
            private String password;

            @NotEmpty(groups = { PasswordChange.class }, message = "{confirmation.password.empty}")
            private String password2;
        }

    and we have a Spring controller:

        @RequestMapping(value = "/changePassword", method = RequestMethod.POST)
        public String changePassword(@Validated({ PasswordChange.class }) @ModelAttribute("customerdto") CustomerDTO customerDTO,
                                     BindingResult result, Locale locale) {
            logger.debug("Change Password was submitted with information: " + customerDTO.toString());
            try {
                passwordStrengthPolicy.checkPasswordStrength(locale, customerDTO.getPassword());
                if (result.hasErrors()) {
                    return "changePassword";
                }
                logger.debug("Calling customer service changePassword: " + customerDTO);
                customerOnlineNewService.changePassword(customerDTO);
            } catch (PasswordNotChangedException e) {
                logger.error("Could not change password PasswordNotChangedException: " + customerDTO.toString());
                return "changePassword";
            } catch (PasswordNotSecureException e) {
                return "changePassword";
            }
            return createRedirectViewPath("changePassword");
        }

    Our problem is that when changePassword is invoked, the validator ignores the group (PasswordChange.class) and validates only firstName and lastName, which are not in the group. Any idea? Thank you very much for your time.


  • How to rename an alias in PowerShell?

    - by jwfearn
    I want to make my own versions of some of the built-in PowerShell aliases. Rather than completely removing the overridden aliases, I'd like to rename them so I can still use them if I want to. For example, maybe I'll rename set to orig_set and then add my own new definition for set. This is what I've tried so far:

        PS> alias *set*

        CommandType     Name     Definition
        -----------     ----     ----------
        Alias           set      Set-Variable

        PS> function Rename-Alias( $s0, $s1 ) { Rename-Item Alias:\$s0 $s1 -Force }
        PS> Rename-Alias set orig_set
        PS> alias *set*

        CommandType     Name     Definition
        -----------     ----     ----------
        Alias           set      Set-Variable

    Any ideas as to why this isn't working?
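
    One plausible explanation (an assumption, not verified here): the function body runs in a child scope, and built-in aliases such as set are defined with the AllScope option, so the rename lands on the scope-local copy and disappears when the function returns. A sketch that writes the new alias explicitly at global scope instead:

        function Rename-Alias( $s0, $s1 ) {
            # Copy the old definition to the new name at global scope...
            Set-Alias -Name $s1 -Value (Get-Alias $s0).Definition -Scope Global -Force
            # ...then remove the original so it can be redefined.
            Remove-Item "Alias:\$s0" -Force
        }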


  • Appengine BulkExport via Batch File

    - by Chris M
    I've created a batch file to run a bulk export on App Engine to a dated file:

        @echo off
        FOR /F "TOKENS=1* DELIMS= " %%A IN ('DATE/T') DO SET CDATE=%%B
        FOR /F "TOKENS=1,2 eol=/ DELIMS=/ " %%A IN ('DATE/T') DO SET mm=%%B
        FOR /F "TOKENS=1,2 DELIMS=/ eol=/" %%A IN ('echo %CDATE%') DO SET dd=%%B
        FOR /F "TOKENS=2,3 DELIMS=/ " %%A IN ('echo %CDATE%') DO SET yyyy=%%B
        SET date=%yyyy%%mm%%dd%
        FOR /f "tokens=1" %%u IN ('TIME /t') DO SET t=%%u
        IF "%t:~1,1%"==":" SET t=0%t%
        @REM set timestr=%d:~6,4%%d:~3,2%%d:~0,2%%t:~0,2%%t:~3,2%
        set time=%t:~0,2%%t:~3,2%
        @echo on
        "c:\Program Files\Google\google_appengine\appcfg.py" download_data --config_file=E:\FEEDSYSTEMS\TRACKER\TRACKER\tracker-export.py --filename=%date%data_archive.csv --batch_size=100 --kind="SearchRec" ./TRACKER

    I can't work out how to get it to authenticate with Google automatically; at the moment I get asked for the user/pass every time, which means I have to run it manually. Any ideas?
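
    For what it's worth (hedged, as it depends on the SDK version), appcfg.py accepts an --email option and a --passin flag that reads the password from stdin, which lets the batch file supply credentials non-interactively; the obvious trade-off is a plain-text password sitting in the script. The account and password below are placeholders:

        @REM Hypothetical change: pipe the password in and name the account
        echo MY_PASSWORD| "c:\Program Files\Google\google_appengine\appcfg.py" --email=me@example.com --passin download_data --config_file=E:\FEEDSYSTEMS\TRACKER\TRACKER\tracker-export.py --filename=%date%data_archive.csv --batch_size=100 --kind="SearchRec" ./TRACKER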


  • How to update records based on the sum of a field, then use the sum to calculate a new value in SQL

    - by Casey
    Below is what I'm trying to do by iterating through the records. I would like a more elegant solution if possible, since I'm sure this is not the best way to do it in SQL.

        set @counter = 1
        declare @totalhrs dec(9,3), @lastemp char(7), @othrs dec(9,3)
        while @counter <= @maxrecs
        begin
            if exists(select emp_num from #tt_trans where id = @counter)
            begin
                set @nhrs = 0
                set @othrs = 0
                select @empnum = emp_num, @nhrs = n_hrs, @othrs = ot_hrs
                    from #tt_trans where id = @counter
                if @empnum = @lastemp
                begin
                    set @totalhrs = @totalhrs + @nhrs
                    if @totalhrs > 40
                    begin
                        set @othrs = @othrs + @totalhrs - 40
                        set @nhrs = @nhrs - (@totalhrs - 40)
                        set @totalhrs = 40
                    end
                end
                else
                begin
                    set @totalhrs = @nhrs
                    set @lastemp = @empnum
                end
                update #tt_trans
                    set n_hrs = @nhrs, ot_hrs = @othrs
                    where id = @counter and can_have_ot = 1
            end
            set @counter = @counter + 1
        end

    Thx
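
    A set-based sketch of the same logic, under a few assumptions: SQL Server 2012 or later (for the windowed running total), id defines the intended order within each employee, and, as in the loop above, only rows with can_have_ot = 1 are updated:

        ;WITH running AS (
            SELECT id, n_hrs,
                   -- running total of hours per employee, in id order
                   SUM(n_hrs) OVER (PARTITION BY emp_num ORDER BY id
                                    ROWS UNBOUNDED PRECEDING) AS run_hrs
            FROM #tt_trans
        )
        UPDATE t
        SET t.n_hrs  = CASE WHEN r.run_hrs <= 40 THEN r.n_hrs              -- all regular time
                            WHEN r.run_hrs - r.n_hrs >= 40 THEN 0          -- already past 40: all OT
                            ELSE 40 - (r.run_hrs - r.n_hrs) END,           -- row straddles the 40-hour mark
            t.ot_hrs = t.ot_hrs + CASE WHEN r.run_hrs <= 40 THEN 0
                                       WHEN r.run_hrs - r.n_hrs >= 40 THEN r.n_hrs
                                       ELSE r.run_hrs - 40 END
        FROM #tt_trans t
        INNER JOIN running r ON r.id = t.id
        WHERE t.can_have_ot = 1;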


  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard. So, let's see what the code looks like, using the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object:

        TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs"));
        server.EnsureAuthenticated();
        IBuildServer buildServer = (IBuildServer)server.GetService(typeof(IBuildServer));

    General

    First we create an IBuildDefinition object for the team project and set a name and description for it:

        var buildDefinition = buildServer.CreateBuildDefinition(teamProject);
        buildDefinition.Name = "TestBuild";
        buildDefinition.Description = "description here...";

    Trigger

    Next up, we set the trigger type. Here we set it to Individual, which corresponds to the "Continuous Integration - Build each check-in" trigger option:

        buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual;

    Workspace

    For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build:

        buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map);
        buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak);

    Build Defaults

    In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name:

        buildDefinition.BuildController = buildServer.GetBuildController(buildController);
        buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild";

    Process

    So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .xaml file called a Build Process Template. By default, every new team project contains two build process templates, called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project:

        // Get the default template
        var defaultTemplate = buildServer.QueryProcessTemplates(teamProject)
                                         .Where(p => p.TemplateType == ProcessTemplateType.Default)
                                         .First();
        buildDefinition.Process = defaultTemplate;

    There are several process parameters that can be set for the default build process template. Only one of them is required: the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (the parameter name) to a value, which can be any kind of object. This is rather messy, but fortunately there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace that simplifies working with this persistence format a bit.
    The following code shows how to set the BuildSettings information for a build definition:

        // Set process parameters
        var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters);

        // Set BuildSettings properties
        BuildSettings settings = new BuildSettings();
        settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln");
        settings.PlatformConfigurations = new PlatformConfigurationList();
        settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug"));
        process.Add("BuildSettings", settings);
        buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process);

    The other build process parameters of a build definition can be set using the same approach.

    Retention Policy

    This one is easy; we just clear the default settings and set our own:

        buildDefinition.RetentionPolicyList.Clear();
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All);

    Save It!

    And we're done, so let's save the build definition:

        buildDefinition.Save();

    That's it!
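
    As a quick usage sketch (QueueBuild details may differ between TFS versions), the new definition can be exercised straight away by queuing a build against the same IBuildServer:

        // Queue a build of the newly created definition
        IQueuedBuild queuedBuild = buildServer.QueueBuild(buildDefinition);
        Console.WriteLine("Queued build id: " + queuedBuild.Id);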


  • Property overwrite behaviour

    - by jeremyj
    I thought it worth sharing something about property overwrite behaviour, because I found it confusing at first, in the hope of preventing some learning pain for the uninitiated with MSBuild :-)

    The confusion for me came from the redundancy of using a Condition statement in a _project_ level property to test that a property has not been previously set. What I mean is that the following two statements are always identical in behaviour, regardless of whether the property has been supplied on the command line:

        <PropertyGroup>
          <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
        </PropertyGroup>

    has the same behaviour, regardless of a command-line override, as:

        <PropertyGroup>
          <PropA>PropA set at project level</PropA>
        </PropertyGroup>

    i.e. the two above property declarations have the same result whether the property is overridden on the command line or not.

    To prove this, experiment with the following .proj file:

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" >
          <PropertyGroup>
            <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
          </PropertyGroup>
          <Target Name="Target1">
            <Message Text="PropA: $(PropA)"/>
          </Target>
          <Target Name="Target2">
            <PropertyGroup>
              <PropA>PropA set in Target2</PropA>
            </PropertyGroup>
            <Message Text="PropA: $(PropA)"/>
          </Target>
          <Target Name="Target3">
            <PropertyGroup>
              <PropA Condition=" '$(PropA)' == '' ">PropA set in Target3</PropA>
            </PropertyGroup>
            <Message Text="PropA: $(PropA)"/>
          </Target>
          <Target Name="Target4">
            <PropertyGroup>
              <PropA Condition=" '$(PropA)' != '' ">PropA set in Target4</PropA>
            </PropertyGroup>
            <Message Text="PropA: $(PropA)"/>
          </Target>
        </Project>

    Try invoking it using both of the following invocations and observe the output:

        1) msbuild blog.proj /t:Target1;Target2;Target3;Target4
        2) msbuild blog.proj /t:Target1;Target2;Target3;Target4 "/p:PropA=PropA set on command line"

    Then try those two invocations with the following three variations of specifying PropA at the project level:

        1) <PropertyGroup>
             <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
           </PropertyGroup>

        2) <PropertyGroup>
             <PropA>PropA set at project level</PropA>
           </PropertyGroup>

        3) <PropertyGroup>
             <PropA Condition=" '$(PropA)' != '' ">PropA set at project level</PropA>
           </PropertyGroup>


  • The Execute SQL Task

    In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008). We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows:

    - A tour of the Task.
    - The properties of the Task.

    After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task:

    - Returning a single value from a SQL query with two input parameters.
    - Returning a rowset from a SQL query.
    - Executing a stored procedure and retrieving a rowset, a return value, an output parameter value and passing in an input parameter.
    - Passing in the SQL Statement from a variable.
    - Passing in the SQL Statement from a file.

    Tour Of The Task

    Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package, expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image, we have a validation error appear telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. There are certain types of connection manager that are compatible with this task, so we cannot just create any connection manager; these are detailed a few graphics further on. Double click on the task itself to take a look at the custom user interface provided to us for this task. The task will open on the General tab as shown below. Take a bit of time to have a look around here, as throughout this article we will be revisiting this page many times. Whilst on the General tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the ResultSet property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property, you will see the ways in which you can pass a SQL Statement into the task itself. We will have examples of each of these later on, but certainly when we saw these for the first time we were very excited. Next to the SQLStatement property, if you click in the empty box next to it you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task, you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the General tab, move on to the next tab, which is the Parameter Mapping tab. We shall, again, be visiting this tab throughout the article, but to give you an initial heads-up, this is where you define the input, output and return values from your task. Note this is not where you specify the result set. If, however, you now move on to the ResultSet tab, this is where you define what variable will receive the output from your SQL Statement in whatever form that is.
Property Expressions are one of the most amazing things to happen in SSIS, and they will not be covered here as they deserve a whole article to themselves. Watch out for them, as their usefulness will astound you. For a more detailed discussion of what the parameter markers in the SQL statements on the General tab should be, and how to map them to variables on the Parameter Mapping tab, see Working with Parameters and Return Codes in the Execute SQL Task.

Task Properties

There are two places where you can specify the properties for your task. One is in the task UI itself, and the other is in the property pane, which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later, so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks, as well as the package, because of everything's base structure, the Container.

BypassPrepare: Should the statement be prepared before sending to the connection manager destination (True/False)?
Connection: This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package.
DelayValidation: A really interesting property; it tells the task not to validate until it actually executes. A usage for this may be that you are operating on a table yet to be created, but at runtime you know the table will be there.
Description: Very simply, the description of your task.
Disable: Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself.
DisableEventHandlers: As a result of events that happen in the task, should the event handlers for the container fire?
ExecValueVariable: The variable assigned here will get or set the execution value of the task.
Expressions: Expressions, as we mentioned earlier, are a really powerful tool in SSIS, and the graphic below shows a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right, causing the value to be dynamically changed at runtime. One of the most obvious uses is that the property value can be built dynamically from within the package, allowing you a great deal of flexibility.
FailPackageOnFailure: If this task fails, does the package?
FailParentOnFailure: If this task fails, does the parent container? A task can be hosted inside another container, e.g. the For Each Loop Container, and this would then be the parent.
ForcedExecutionValue: This property allows you to hard code an execution value for the task.
ForcedExecutionValueType: What is the datatype of the ForcedExecutionValue?
ForceExecutionResult: Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion.
ForceExecutionValue: Should we force the execution result?
IsolationLevel: This is the transaction isolation level of the task.
IsStoredProcedure: Certain optimisations are made by the task if it knows that the query is a stored procedure invocation. The docs say this will always be false unless the connection is an ADO connection.
LocaleID: Gets or sets the LocaleID of the container.
LoggingMode: Should we log for this container, and what settings should we use?
The value choices are UseParentSetting, Enabled and Disabled.
MaximumErrorCount: How many times can the task fail before we call it a day?
Name: Very simply, the name of the task.
ResultSetType: How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML.
SqlStatementSource: Your query/SQL statement.
SqlStatementSourceType: The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables.
TimeOut: How long should the task wait to receive results?
TransactionOption: How should the task handle being asked to join a transaction?

Usage Examples

As we move through the examples we will only cover what we think you must know and what we think you should see. This means that some of the more elementary steps, like setting up variables, will be covered in the early examples but skipped and simply referred to in later ones. All these examples use the AdventureWorks database that comes with SQL Server 2005.

Returning a Single Value, Passing in Two Input Parameters

So the first thing we are going to do is add some variables to our package. The graphic below shows those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query, and EndDate and StartDate will be used as input parameters. As you can see, all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope, and remember a package is a container as well. Variable values of the parent container can be seen in child containers, but cannot be passed back up to the parent from a child.

Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLE DB connection manager to this task, "ExecuteSQL Task Connection". The next thing is we have made sure that the SQLSourceType property is set to Direct Input, as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The statement we typed in was:

SELECT COUNT(*) AS CountOfEmployees
FROM HumanResources.Employee
WHERE (HireDate BETWEEN ? AND ?)

Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name. As you can see, SSIS has preceded the variable with the word "User". This is a default namespace for variables, but you can create your own. When defining your variables, if you look at the variables window title bar you will see some icons. If you hover over the last one on the right, you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options, and one of them is Namespace. After checking this you will see where you can define your own namespace.

The next tab, Result Set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value, then, if you remember from earlier, we are allowed to assign a name to the result set, but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Development Studio being hosted by Visual Studio is that we get breakpoint support for free.
In our package we set a breakpoint so we can pause the package and look, in a watch window, at the variable values as they appear to our task, and at the value of our result-set variable after the task has done the assignment. Here's that window now. As you can see, the count of employees that matched the date range was 2.

Returning a Rowset

In this example we are going to return a result set back to a variable after the task has executed, not just a single value in a single row. There are no input parameters required, so the variables window is nice and straightforward: one variable of type Object. Here is the statement that will form the source for our result set:

SELECT p.ProductNumber, p.Name, pc.Name AS ProductCategoryName
FROM Production.ProductCategory pc
JOIN Production.ProductSubCategory psc
  ON pc.ProductCategoryID = psc.ProductCategoryID
JOIN Production.Product p
  ON psc.ProductSubCategoryID = p.ProductSubCategoryID

We need to make sure that we have selected Full result set as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the Parameter Mapping tab and move straight to the Result Set tab. Here we need to add our variable defined earlier and map it to the result name of 0 (remember, we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object, and in later articles that is exactly what we shall be doing.

Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure

This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a Full result set, so we will not cover it again here. For this example we are going to need four variables: one for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that, and we were amazed at how much easier it is than in DTS 2000.

Passing in the SQL Statement from a Variable

SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The Execute SQL Task is no different. It will allow us to pass in a string variable as the SQL statement. This variable value could have been set earlier on from inside the package, or it could have been populated from outside using a configuration. The ResultSet property is set to Single row, and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General tab again. Looking at the variables we have in this package, you can see we have only two: one for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result Name to our variable, and this can be a named result name (the column name or alias returned by the query) and not 0.
The expected result in our variable should be the number of rows in the Person.Contact table, and if we look in the watch window we see that it is.

Passing in the SQL Statement from a File

The final example we are going to show is a really interesting one. We are going to pass in the SQL statement to the task by using a file connection manager; the file itself contains the statement to run. The first thing we are going to need to do is create our file connection manager to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection". As you can see in the graphic below, we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up, we can now see in the connection manager tray our file connection manager sitting alongside the OLE DB connection we have been using for the rest of these examples. Now we can go back to the familiar General tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for the other examples, depending on the options chosen, so we will not cover them again here.

We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available, but once you have configured it a few times you learn what needs to go where. We hope you have found this article useful.
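A practical footnote from us on the "Returning a Rowset" example above; this sketch is our own and not part of the original article. The rowset lands in the Object-typed variable as an ADO recordset, and a common pattern is to shred it inside a Script Task using OleDbDataAdapter.Fill, which accepts exactly that shape. Note that C# Script Tasks require SSIS 2008 (in 2005 the Script Task is VB.NET only), and the variable name User::ResultSet is a hypothetical stand-in for whatever you mapped on the Result Set tab.

using System.Data;
using System.Data.OleDb;

// Body of the SSIS-generated ScriptMain class; User::ResultSet must be
// listed in the Script Task's ReadOnlyVariables.
public void Main()
{
    var table = new DataTable();

    // Fill(DataTable, object) understands ADO recordsets, which is the
    // shape a Full result set takes inside an Object-typed variable.
    new OleDbDataAdapter().Fill(table, Dts.Variables["User::ResultSet"].Value);

    bool fireAgain = true;
    foreach (DataRow row in table.Rows)
    {
        // Write each product number to the SSIS progress/log output.
        Dts.Events.FireInformation(0, "Shred", row["ProductNumber"].ToString(), string.Empty, 0, ref fireAgain);
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}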

    Read the article

  • Routing Issue in ASP.NET MVC 3 RC 2

    - by imran_ku07
     Introduction:

Two weeks ago, the ASP.NET MVC team shipped the ASP.NET MVC 3 RC 2 release. This release includes some new features and some performance optimisations. It also fixes most known bugs, but some minor issues are still present in this release. Some of these issues are already discussed by Scott Guthrie at Update on ASP.NET MVC 3 RC2 (and a workaround for a bug in it). In addition to those, I have found another issue in this release regarding routing. In this article, I will show you the issue and a simple workaround for it.

     Description:

The easiest way to understand an issue is to reproduce it in an application. So create an MVC 2 application and an MVC 3 RC 2 application. Then, in both applications, open the global.asax file and update the default route as below:

routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
    "Default", // Route name
    "{controller}/{action}/{id1}/{id2}", // URL with parameters
    new { controller = "Home", action = "Index", id1 = UrlParameter.Optional, id2 = UrlParameter.Optional } // Parameter defaults
);

Then open the Index view and add the following lines:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>
<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    Home Page
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <% Html.RenderAction("About"); %>
</asp:Content>

The above view will issue a child request to the About action method. Now run both applications. The ASP.NET MVC 2 application will run just fine, but the ASP.NET MVC 3 RC 2 application will throw an exception, as shown below.

You may think that this is a routing issue, but that is not the case here, as both applications (created above) are built with .NET Framework 4.0 and both use the same routing defined in System.Web. Something is wrong in ASP.NET MVC 3 RC 2. After digging into the ASP.NET MVC source code, I found that the UrlParameter class in ASP.NET MVC 3 RC 2 overrides the ToString method to simply return an empty string:

public sealed class UrlParameter
{
    public static readonly UrlParameter Optional = new UrlParameter();
    private UrlParameter() { }
    public override string ToString() { return string.Empty; }
}

In MVC 2 the ToString method was not overridden. So, to quickly fix the above problem, replace the UrlParameter.Optional default value with a value other than null or empty (for example, a single white space), or replace it with an object of a new class containing the same code as the UrlParameter class, except that the ToString method is not overridden (or whose overridden ToString returns a non-empty string). But by doing this you will lose the benefit of ASP.NET MVC 2's optional URL parameters. There may be many different ways to fix the above problem without losing the benefit of optional parameters. Here I will create a new class, MyUrlParameter, with the same code as the UrlParameter class, except that the ToString method is not overridden. Then I will create a base controller class with a constructor that removes all MyUrlParameter route data parameters, just as ASP.NET MVC does with UrlParameter route data parameters early in the request.
public class BaseController : Controller
{
    public BaseController()
    {
        if (System.Web.HttpContext.Current.CurrentHandler is MvcHandler)
        {
            RouteValueDictionary rvd = ((MvcHandler)System.Web.HttpContext.Current.CurrentHandler).RequestContext.RouteData.Values;
            string[] matchingKeys = (from entry in rvd where entry.Value == MyUrlParameter.Optional select entry.Key).ToArray();
            foreach (string key in matchingKeys)
            {
                rvd.Remove(key);
            }
        }
    }
}

public class HomeController : BaseController
{
    public ActionResult Index(string id1)
    {
        ViewBag.Message = "Welcome to ASP.NET MVC!";
        return View();
    }

    public ActionResult About()
    {
        return Content("Child Request Contents");
    }
}

public sealed class MyUrlParameter
{
    public static readonly MyUrlParameter Optional = new MyUrlParameter();
    private MyUrlParameter() { }
}

routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
    "Default", // Route name
    "{controller}/{action}/{id1}/{id2}", // URL with parameters
    new { controller = "Home", action = "Index", id1 = MyUrlParameter.Optional, id2 = MyUrlParameter.Optional } // Parameter defaults
);

MyUrlParameter is a copy of the UrlParameter class, except that it does not override the ToString method. Note that the default route has been modified to use MyUrlParameter.Optional instead of UrlParameter.Optional. Also note that the BaseController constructor removes MyUrlParameter parameters from the current request's route data, so that the model binder will not bind these parameters to action method parameters. Now run the ASP.NET MVC 3 RC 2 application again, and you will find that it runs just fine.

In case you are curious why the ASP.NET MVC 3 RC 2 application throws an exception when the UrlParameter class has a ToString method that returns an empty string, you need to know a little about how routing generates URLs. During URL generation, routing internally calls the ParsedRoute.Bind method. This method contains the logic to match the route and build the URL. While building the URL, ParsedRoute.Bind calls the ToString method of the route values (in our case, this calls UrlParameter.ToString) and appends the returned value to the URL. After appending, it also checks that two consecutive returned values are not both empty; if they are, the current route is not matched, since an incorrect URL would otherwise be generated. Here is a snippet from ParsedRoute.Bind that proves this:

if ((builder2.Length > 0) && (builder2[builder2.Length - 1] == '/'))
{
    return null;
}
builder2.Append("/");
// ... elided ...
if (RoutePartsEqual(obj3, obj4))
{
    builder2.Append(UrlEncode(Convert.ToString(obj3, CultureInfo.InvariantCulture)));
    continue;
}

In the above example, both the id1 and id2 parameters' default values are set to a UrlParameter object, and the UrlParameter class has a ToString method that returns an empty string. That's why the route is not matched.

     Summary:

In this article I showed you the routing issue and how to work around it. I explained the issue with an example, by creating an ASP.NET MVC 2 and an ASP.NET MVC 3 RC 2 application. Finally, I also explained the reason for the issue.
Hopefully you will enjoy this article too.

    Read the article

  • Do I need to store a generic rotation point/radius for rotating around a point other than the origin for object transforms?

    - by Casey
    I'm having trouble implementing rotation about a point other than the origin. I have a class Transform that stores each component separately in three 3D vectors for position, scale, and rotation. This is fine for local rotations about the center of the object. The issue is how to determine/concatenate non-origin rotations in addition to origin rotations. Normally this would be achieved as a translate-rotate-translate sequence for the center rotation, followed by another translate-rotate-translate for the non-origin point. The problem is that, because I am storing the individual components, the final transform matrix is not calculated until needed, by using the individual components to fill an appropriate matrix (see GetLocalTransform()). Do I need to store an additional rotation (and radius) for world rotations as well, or is there a method of implementation that works while only using the single rotation value?

Transform.h

#ifndef A2DE_CTRANSFORM_H
#define A2DE_CTRANSFORM_H

#include "../a2de_vals.h"
#include "CMatrix4x4.h"
#include "CVector3D.h"
#include <vector>

A2DE_BEGIN

class Transform {
public:
    Transform();
    Transform(Transform* parent);
    Transform(const Transform& other);
    Transform& operator=(const Transform& rhs);
    virtual ~Transform();

    void SetParent(Transform* parent);
    void AddChild(Transform* child);
    void RemoveChild(Transform* child);

    Transform* FirstChild();
    Transform* LastChild();
    Transform* NextChild();
    Transform* PreviousChild();
    Transform* GetChild(std::size_t index);
    std::size_t GetChildCount() const;
    std::size_t GetChildCount();

    void SetPosition(const a2de::Vector3D& position);
    const a2de::Vector3D& GetPosition() const;
    a2de::Vector3D& GetPosition();

    void SetRotation(const a2de::Vector3D& rotation);
    const a2de::Vector3D& GetRotation() const;
    a2de::Vector3D& GetRotation();

    void SetScale(const a2de::Vector3D& scale);
    const a2de::Vector3D& GetScale() const;
    a2de::Vector3D& GetScale();

    a2de::Matrix4x4 GetLocalTransform() const;
    a2de::Matrix4x4 GetLocalTransform();

protected:
private:
    a2de::Vector3D _position;
    a2de::Vector3D _scale;
    a2de::Vector3D _rotation;
    std::size_t _curChildIndex;
    Transform* _parent;
    std::vector<Transform*> _children;
};

A2DE_END

#endif

Transform.cpp

#include "CTransform.h"
#include "CVector2D.h"
#include "CVector4D.h"
#include <algorithm> // needed for std::remove

A2DE_BEGIN

Transform::Transform() : _position(), _scale(1.0, 1.0), _rotation(), _curChildIndex(0), _parent(nullptr), _children() { /* DO NOTHING */ }
Transform::Transform(Transform* parent) : _position(), _scale(1.0, 1.0), _rotation(), _curChildIndex(0), _parent(parent), _children() { /* DO NOTHING */ }
Transform::Transform(const Transform& other) : _position(other._position), _scale(other._scale), _rotation(other._rotation), _curChildIndex(0), _parent(other._parent), _children(other._children) { /* DO NOTHING */ }

Transform& Transform::operator=(const Transform& rhs) {
    if(this == &rhs) return *this;
    this->_position = rhs._position;
    this->_scale = rhs._scale;
    this->_rotation = rhs._rotation;
    this->_curChildIndex = 0;
    this->_parent = rhs._parent;
    this->_children = rhs._children;
    return *this;
}

Transform::~Transform() {
    _children.clear();
    _parent = nullptr;
}

void Transform::SetParent(Transform* parent) { _parent = parent; }

void Transform::AddChild(Transform* child) {
    if(child == nullptr) return;
    _children.push_back(child);
}

void Transform::RemoveChild(Transform* child) {
    if(_children.empty()) return;
    _children.erase(std::remove(_children.begin(), _children.end(), child), _children.end());
}

Transform* Transform::FirstChild() {
    if(_children.empty()) return nullptr;
    return _children.front();
}

Transform* Transform::LastChild() {
    if(_children.empty()) return nullptr;
    return _children.back(); // was *(_children.end()), which dereferences past-the-end
}

Transform* Transform::NextChild() {
    if(_children.empty()) return nullptr;
    std::size_t s(_children.size());
    if(_curChildIndex >= s) { _curChildIndex = s; return nullptr; }
    return _children[_curChildIndex++];
}

Transform* Transform::PreviousChild() {
    if(_children.empty()) return nullptr;
    if(_curChildIndex == 0) return nullptr;
    return _children[_curChildIndex--];
}

Transform* Transform::GetChild(std::size_t index) {
    if(_children.empty()) return nullptr;
    if(index >= _children.size()) return nullptr; // was >, an off-by-one
    return _children[index];
}

std::size_t Transform::GetChildCount() const {
    if(_children.empty()) return 0;
    return _children.size();
}
std::size_t Transform::GetChildCount() { return static_cast<const Transform&>(*this).GetChildCount(); }

void Transform::SetPosition(const a2de::Vector3D& position) { _position = position; }
const a2de::Vector3D& Transform::GetPosition() const { return _position; }
a2de::Vector3D& Transform::GetPosition() { return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetPosition()); }

void Transform::SetRotation(const a2de::Vector3D& rotation) { _rotation = rotation; }
const a2de::Vector3D& Transform::GetRotation() const { return _rotation; }
a2de::Vector3D& Transform::GetRotation() { return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetRotation()); }

void Transform::SetScale(const a2de::Vector3D& scale) { _scale = scale; }
const a2de::Vector3D& Transform::GetScale() const { return _scale; }
a2de::Vector3D& Transform::GetScale() { return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetScale()); }

a2de::Matrix4x4 Transform::GetLocalTransform() const {
    Matrix4x4 p((_parent ? _parent->GetLocalTransform() : a2de::Matrix4x4::GetIdentity()));
    Matrix4x4 t(a2de::Matrix4x4::GetTranslationMatrix(_position));
    Matrix4x4 r(a2de::Matrix4x4::GetRotationMatrix(_rotation));
    Matrix4x4 s(a2de::Matrix4x4::GetScaleMatrix(_scale));
    return (p * t * r * s);
}
a2de::Matrix4x4 Transform::GetLocalTransform() { return static_cast<const Transform&>(*this).GetLocalTransform(); }

A2DE_END
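(An editorial sketch, not part of the question: the standard answer is that no extra stored rotation or radius is needed. A rotation about an arbitrary pivot is T(pivot) * R * T(-pivot), which can be composed into the existing parent * translate * rotate * scale chain at the moment the matrix is built, with the pivot supplied as a parameter. The C# below is a self-contained stand-in, not the poster's a2de API; it expands the composition for a Z-axis rotation.)

using System;

static class PivotRotation
{
    // Builds the 4x4 homogeneous matrix for a rotation of 'angle' radians
    // about the Z axis, centred on the pivot (px, py, pz).
    public static double[,] AboutPivot(double angle, double px, double py, double pz)
    {
        double c = Math.Cos(angle), s = Math.Sin(angle);
        // Expanded form of T(p) * Rz(angle) * T(-p): the rotation block is
        // unchanged and the translation column becomes p - R * p.
        return new[,]
        {
            { c,   -s,  0.0, px - (c * px - s * py) },
            { s,    c,  0.0, py - (s * px + c * py) },
            { 0.0, 0.0, 1.0, 0.0 },
            { 0.0, 0.0, 0.0, 1.0 }
        };
    }
}

In GetLocalTransform() terms, this would mean returning p * t * AboutPivot(...) * s when a pivot is in play instead of p * t * r * s, so the single stored rotation value still suffices.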

    Read the article

  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the value of the fields "System.CreatedDate" and "System.ChangedDate" on a work item. Richard Hundhausen has a great blog post with ample reasons why you may, or may not, want to set the values of these fields to historic dates. In this blog post I'll show you how to:

- Create a PBI work item linked to a Task work item, pre-setting the value of the field 'System.ChangedDate' to a historic date
- Change the value of the field 'System.CreatedDate' to a historic date
- Simulate the historic burn down of a task type work item in a sprint
- Explain the impact of updating the values of the fields CreatedDate and ChangedDate on the sprint burn down chart

Rules of Play

1. You need to be a member of the Project Collection Service Accounts group.

2. You need to use 'WorkItemStoreFlags.BypassRules' when you instantiate the WorkItemStore service:

// Instantiate the Work Item Store with the BypassRules flag
_wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

3. You cannot set the ChangedDate
   - to less than the changed date of the previous revision, or
   - to greater than the current date.

Walkthrough

The walkthrough contains five parts:

00 – Required References
01 – Connect to TFS Programmatically
02 – Create a Work Item Programmatically
03 – Set the values of the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates
04 – Results of our experiment

Let's get started.

00 – Required References

Microsoft.TeamFoundation.dll
Microsoft.TeamFoundation.Client.dll
Microsoft.TeamFoundation.Common.dll
Microsoft.TeamFoundation.WorkItemTracking.Client.dll

01 – Connect to TFS Programmatically

I have an in-depth blog post on how to connect to TFS programmatically, in case you are interested. However, the code snippet below will let you connect to TFS using the Team Project Picker.

// Services I need access to globally
private static TfsTeamProjectCollection _tfs;
private static ProjectInfo _selectedTeamProject;
private static WorkItemStore _wis;

// Connect to TFS using the Team Project Picker
public static bool ConnectToTfs()
{
    var isSelected = false;
    // The user is allowed to select only one project
    var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
    tfsPp.ShowDialog();
    // The TFS project collection
    _tfs = tfsPp.SelectedTeamProjectCollection;
    if (tfsPp.SelectedProjects.Any())
    {
        // The selected Team Project
        _selectedTeamProject = tfsPp.SelectedProjects[0];
        isSelected = true;
    }
    return isSelected;
}

02 – Create a Work Item Programmatically

In the code snippet below I create a Product Backlog Item and a Task type work item, then link them together as parent and child. Note that you will have to set the ChangedDate to a historic date when you create the work item. Remember, if you try to set the ChangedDate to a value earlier than its last assigned value, you will receive the following exception:

TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator.

If you notice below, I have added a few seconds each time I have modified the 'ChangedDate', just to avoid running into the exception listed above.
// Create linked work items and return their ids
private static List<int> CreateWorkItemsProgrammatically()
{
    // Instantiate the Work Item Store with the BypassRules flag
    _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

    // List of work items to return
    var listOfWorkItems = new List<int>();

    // Create a new Product Backlog Item
    var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]);
    p.Title = "This is a new PBI";
    p.Description = "Description";
    p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
    p.AreaPath = _selectedTeamProject.Name;
    p["Effort"] = 10;

    // Just double checking that BypassRules is set to true
    if (_wis.BypassRules)
    {
        p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
    }

    if (p.Validate().Count == 0)
    {
        p.Save();
        listOfWorkItems.Add(p.Id);
    }
    else
    {
        Console.WriteLine(">> Following exception(s) encountered during work item save: ");
        foreach (var e in p.Validate())
        {
            Console.WriteLine(" - '{0}' ", e);
        }
    }

    // Create a new Task
    var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]);
    t.Title = "This is a task";
    t.Description = "Task Description";
    t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
    t.AreaPath = _selectedTeamProject.Name;
    t["Remaining Work"] = 10;

    if (_wis.BypassRules)
    {
        t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
    }

    if (t.Validate().Count == 0)
    {
        t.Save();
        listOfWorkItems.Add(t.Id);
    }
    else
    {
        Console.WriteLine(">> Following exception(s) encountered during work item save: ");
        foreach (var e in t.Validate())
        {
            Console.WriteLine(" - '{0}' ", e);
        }
    }

    // Link the task to the PBI as a child
    var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"];
    p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) { ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20) });

    if (_wis.BypassRules)
    {
        p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20);
    }

    if (p.Validate().Count == 0)
    {
        p.Save();
    }
    else
    {
        Console.WriteLine(">> Following exception(s) encountered during work item save: ");
        foreach (var e in p.Validate())
        {
            Console.WriteLine(" - '{0}' ", e);
        }
    }

    return listOfWorkItems;
}

03 – Set the value of "Created Date" and change the value of "Changed Date" to historic dates

The CreatedDate can only be changed after a work item has been created. If you try to set the CreatedDate to a historic date at the time of creation of a work item, it will not work.
// Let's do a work item effort burn down simulation by updating the ChangedDate & CreatedDate to historic values
private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems)
{
    foreach (var id in listOfWorkItems)
    {
        var wi = _wis.GetWorkItem(id);
        switch (wi.Type.Name)
        {
            case "ProductBacklogItem":
                if (wi.State.ToLower() == "new") wi.State = "Approved";
                // Advance the changed date by a few seconds
                wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                // Set the CreatedDate to the ChangedDate
                wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                wi.Save();
                break;
            case "Task":
                // Advance the changed date by a few seconds
                wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                // Set the CreatedDate to the ChangedDate
                wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                wi.Save();
                break;
        }
    }

    // A mock sprint start date
    var sprintStart = DateTime.Today.AddDays(-5);
    // A mock sprint end date
    var sprintEnd = DateTime.Today.AddDays(5);
    // The total sprint duration
    var totalSprintDuration = (sprintEnd - sprintStart).Days;
    // How much of the sprint we have already covered
    var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days;
    // The effort assigned to our tasks
    var totalEffortRemaining = QueryTaskTotalEfforRemaining(listOfWorkItems);
    // How much effort to burn every day
    decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1 ? 1 : totalEffortRemaining / totalSprintDuration;
    // We have just created one task
    var totalNoOfTasks = 1;

    var simulation = sprintStart;
    var currentDate = DateTime.Today.Date;

    // Carry on until effort has been burned down from sprint start to today
    while (simulation.Date != currentDate.Date)
    {
        var dailyBurnRate1 = dailyBurnRate;
        // A fixed amount needs to be burned down each day
        while (dailyBurnRate1 > 0)
        {
            // Burn down bit by bit from all unfinished task type work items
            foreach (var id in listOfWorkItems)
            {
                var wi = _wis.GetWorkItem(id);
                var isDirty = false;
                // Set the status to in progress
                if (wi.State.ToLower() == "to do")
                {
                    wi.State = "In Progress";
                    isDirty = true;
                }
                // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate
                if (QueryTaskTotalEfforRemaining(listOfWorkItems) > dailyBurnRate1)
                {
                    // If there is less than one unit of effort left in the task, burn it all
                    if (Convert.ToDecimal(wi["Remaining Work"]) <= 1)
                    {
                        // Capture the remaining effort before zeroing it, so the
                        // daily rate is reduced by what was actually burned
                        var burned = Convert.ToDecimal(wi["Remaining Work"]);
                        wi["Remaining Work"] = 0;
                        dailyBurnRate1 = dailyBurnRate1 - burned;
                        isDirty = true;
                    }
                    else
                    {
                        // How much to burn from each task?
                        var toBurn = (dailyBurnRate / totalNoOfTasks) < 1 ? 1 : (dailyBurnRate / totalNoOfTasks);
                        // Check that the task has enough effort left to burn toBurn
                        if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn)
                        {
                            wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn;
                            dailyBurnRate1 = dailyBurnRate1 - toBurn;
                            isDirty = true;
                        }
                        else
                        {
                            var burned = Convert.ToDecimal(wi["Remaining Work"]);
                            wi["Remaining Work"] = 0;
                            dailyBurnRate1 = dailyBurnRate1 - burned;
                            isDirty = true;
                        }
                    }
                }
                else
                {
                    dailyBurnRate1 = 0;
                }

                if (isDirty)
                {
                    if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date)
                    {
                        wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20);
                    }
                    else
                    {
                        wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20);
                    }
                    wi.Save();
                }
            }
        }
        // Advance the date by one day to perform the day-by-day burn down
        simulation = Convert.ToDateTime(simulation).AddDays(1);
    }
}

// Get the total effort remaining in the current sprint
private static decimal QueryTaskTotalEfforRemaining(List<int> listOfWorkItems)
{
    // QueryAndGuid is defined elsewhere in the original post
    var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition(
        new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value));
    var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } };
    var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters);
    var results = q.RunLinkQuery();
    var wis = new List<WorkItem>();
    foreach (var result in results)
    {
        var _wi = _wis.GetWorkItem(result.TargetId);
        if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id))
            wis.Add(_wi);
    }
    return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"]));
}

04 – The Results

If you are still reading, the results are beautiful!

Image 1 – Create a work item with the ChangedDate pre-set to a historic date.
Image 2 – Set the CreatedDate to a historic date (the same as the ChangedDate).
Image 3 – Simulation of the effort burn down on a task via the TFS API.
Image 4 – The history of changes on the Task. So, essentially, this task has burned 1 hour per day.

Sprint Burn Down Chart – What's Not Possible?

The sprint burn down chart is calculated from System.AuthorizedDate, not from System.ChangedDate or System.CreatedDate. So, although you can change System.ChangedDate and System.CreatedDate to historic dates, you will not be able to synthesize the sprint burn down chart.

Image 1 – By changing the CreatedDate and ChangedDate to '18/Oct/2012' you would have expected the burn down to be impacted, but it won't be, because the sprint burn down chart uses the value of the field 'System.AuthorizedDate' to calculate the unfinished work points. The AsOf queries that calculate the unfinished work points use the value of the field 'System.AuthorizedDate'.

Image 2 – Using the above code I burned down 1 hour of effort per day over 5 days from the task work item. I would have expected the sprint burn down to show a constant burn down; instead it shows the effort exhausted on the 24th itself, simply because the burn down is calculated using 'System.AuthorizedDate'.

Now you might ask, "Can I change the value of the field System.AuthorizedDate to a historic date?" Unfortunately, that's not possible!
You will run into the exception ValidationException: "TF26194: The value for field 'Authorized Date' cannot be changed."

Conclusion

- You need to be a member of the Project Collection Service Accounts group in order to set the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates.
- You need to instantiate the WorkItemStore using the flag BypassRules.
- The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate, and you cannot reset the ChangedDate to a date greater than the current date time.
- The System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can, however, reset the CreatedDate to a date earlier than the existing value.
- You will not be able to synthesize the sprint burn down chart by changing the values of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, and those queries internally use System.AuthorizedDate, NOT System.ChangedDate or System.CreatedDate (a sketch of such a query follows below).
- System.AuthorizedDate cannot be set to a historic date using the TFS API.

Read other posts on using the TFS API here… Enjoy!
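(A closing editorial sketch, not part of the original post: if you want to see the kind of AsOf query the burn down relies on, the snippet below runs one through the same WorkItemStore API used above. The collection URL, project name and date are placeholders; WorkItemStore.Query and the WIQL ASOF clause are real APIs.)

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class AsOfQueryDemo
{
    static void Main()
    {
        // Placeholder collection URL
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tfs.GetService<WorkItemStore>();

        // ASOF evaluates the query at a point in time, against the
        // revision history the burn down also relies on.
        var results = store.Query(
            "SELECT [System.Id], [System.Title] FROM WorkItems " +
            "WHERE [System.TeamProject] = 'MyProject' AND [System.WorkItemType] = 'Task' " +
            "ASOF '10/18/2012'");

        foreach (WorkItem wi in results)
            Console.WriteLine("{0}: {1}", wi.Id, wi.Title);
    }
}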

    Read the article

  • In China. Want to set up my own private proxy. Already have website/webhosting. Help please! n00b with respect to coding/programming, go easy on me [closed]

    - by user1725461
    I am in China and have used Freegate in the past (http://en.wikipedia.org/wiki/Freegate). Recently I've been having too many problems with it and with some other web-based proxies I usually use. I have a website hosted in the US which I can access from China. Is there an easy way for me to set up my own secure private proxy? I'm sick of all my internet problems and am looking for a workable new solution. Thank you! PS: I really hope this is the right place for such a question...

    Read the article

  • Send large JSON data to WCF Rest Service

    - by Christo Fur
    Hi, I have a client web page that is sending a large JSON object to a proxy service on the same domain as the web page. The proxy (an ashx handler) then forwards the request to a WCF REST service, using a WebClient object (the standard .NET object for making an HTTP request). The JSON successfully arrives at the proxy via a jQuery POST from the client web page. However, when the proxy forwards it to the WCF service, I get a Bad Request (Error 400). This doesn't happen when the JSON data is small. The WCF service contract looks like this:

[WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.Wrapped, RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
[OperationContract]
CarConfiguration CreateConfiguration(CarConfiguration configuration);

And the DataContract like this:

[DataContract(Namespace = "")]
public class CarConfiguration
{
    [DataMember(Order = 1)]
    public int CarConfigurationId { get; set; }
    [DataMember(Order = 2)]
    public int UserId { get; set; }
    [DataMember(Order = 3)]
    public string Model { get; set; }
    [DataMember(Order = 4)]
    public string Colour { get; set; }
    [DataMember(Order = 5)]
    public string Trim { get; set; }
    [DataMember(Order = 6)]
    public string ThumbnailByteData { get; set; }
    [DataMember(Order = 7)]
    public string Wheel { get; set; }
    [DataMember(Order = 8)]
    public DateTime Date { get; set; }
    [DataMember(Order = 9)]
    public List<string> Accessories { get; set; }
    [DataMember(Order = 10)]
    public string Vehicle { get; set; }
    [DataMember(Order = 11)]
    public Decimal Price { get; set; }
}

When the ThumbnailByteData field is small, all is OK. When it is large, I get the 400 error. What are my options here? I've tried increasing the maxReceivedMessageSize config setting, but that is not enough. Any ideas?
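(No answer was recorded here, but a plausible culprit, which is our assumption rather than something stated in the question, is the service-side webHttpBinding limits: maxReceivedMessageSize defaults to 65,536 bytes and the reader quota maxStringContentLength to 8,192 characters, both small enough to reject a large Base64 thumbnail with a 400. A minimal C# sketch of raising them programmatically follows; the 4 MB figure is illustrative, not a recommendation.)

using System.ServiceModel;

static class LargeJsonBinding
{
    // Builds a WebHttpBinding whose limits accommodate multi-megabyte JSON bodies.
    public static WebHttpBinding Create()
    {
        var binding = new WebHttpBinding
        {
            MaxReceivedMessageSize = 4 * 1024 * 1024, // default is 65,536 bytes
            MaxBufferSize = 4 * 1024 * 1024           // must match for buffered transfers
        };
        // Large string members such as ThumbnailByteData also need the
        // reader quota raised; the default is 8,192 characters.
        binding.ReaderQuotas.MaxStringContentLength = 4 * 1024 * 1024;
        return binding;
    }
}

The same limits can be raised declaratively in web.config via the maxReceivedMessageSize attribute and the nested readerQuotas element on the service's webHttpBinding configuration.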

    Read the article

< Previous Page | 250 251 252 253 254 255 256 257 258 259 260 261  | Next Page >