Search Results

Search found 6392 results on 256 pages for 'reduce duplicate'.

Page 242/256

  • Understanding CLR 2.0 Memory Model

    - by Eloff
    Joe Duffy gives 6 rules that describe the CLR 2.0+ memory model (its actual implementation, not any ECMA standard). I'm writing down my attempt at figuring this out, mostly as a way of rubber ducking, but if I make a mistake in my logic, at least someone here will be able to catch it before it causes me grief. Rule 1: Data dependence among loads and stores is never violated. Rule 2: All stores have release semantics, i.e. no load or store may move after one. Rule 3: All volatile loads are acquire, i.e. no load or store may move before one. Rule 4: No loads and stores may ever cross a full-barrier (e.g. Thread.MemoryBarrier, lock acquire, Interlocked.Exchange, Interlocked.CompareExchange, etc.). Rule 5: Loads and stores to the heap may never be introduced. Rule 6: Loads and stores may only be deleted when coalescing adjacent loads and stores from/to the same location. I'm attempting to understand these rules. x = y y = 0 // Cannot move before the previous line according to Rule 1. x = y z = 0 // equates to this sequence of loads and stores before possible re-ordering load y store x load 0 store z Looking at this, it appears that the load 0 can be moved up to before load y, but the stores may not be re-ordered at all. Therefore, if a thread sees z == 0, then it will also see x == y. If y was volatile, then load 0 could not move before load y, otherwise it may. Volatile stores don't seem to have any special properties; no stores can be re-ordered with respect to each other (which is a very strong guarantee!). Full barriers are like a line in the sand which loads and stores cannot be moved over. No idea what rule 5 means. I guess rule 6 means if you do: x = y x = z Then it is possible for the CLR to delete both the load of y and the first store to x. x = y z = y // equates to this sequence of loads and stores before possible re-ordering load y store x load y store z // could be re-ordered like this load y load y store x store z // rule 6 applied means this is possible? load y store x // but don't pop y from stack (or first duplicate item on top of stack) store z What if y was volatile? I don't see anything in the rules that prohibits the above optimization from being carried out. This does not violate double-checked locking, because the lock() between the two identical conditions prevents the loads from being moved into adjacent positions, and according to rule 6, that's the only time they can be eliminated. So I think I understand all but rule 5 here. Anyone want to enlighten me (or correct me, or add something to any of the above)?
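    For context, a minimal C# sketch of the double-checked locking pattern the poster refers to. The class name is made up for illustration, and the comments only restate the rules being relied on, so treat this as a sketch rather than a definitive statement of the memory model:

        public sealed class LazySingleton
        {
            private static readonly object _gate = new object();
            private static LazySingleton _instance; // often also declared volatile for portability to the weaker ECMA model

            public static LazySingleton Instance
            {
                get
                {
                    LazySingleton local = _instance;              // first load of _instance
                    if (local == null)
                    {
                        lock (_gate)                              // full barrier (rule 4): no load or store may cross it
                        {
                            if (_instance == null)                // second load; cannot be coalesced with the first (rule 6)
                                _instance = new LazySingleton();  // this store has release semantics (rule 2)
                            local = _instance;
                        }
                    }
                    return local;
                }
            }
        }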

    Read the article

  • Openfire and LDAP issues

    - by clsmith
    Thanks in advance for the help. Has anyone seen this issue with openfire? Currently I use Openfire on Fedora with auth using Windows 2003, and also use MySQL for the database. When I bring up two clients and they talk to each other, the time between messages is slow. Sometimes it can take between 5-15 minutes for something sent to get to the person (this is with only two people on the openfire server). I ran a tcp dump using port 389 and see that the machine is running thousands of queries against ldap. When I plug it into Wireshark I notice that it is transferring the entire contact list, or checking on the status of the entire contact list. When I run debug on openfire itself I am presented with only this small message in the log: 2010.06.08 07:01:17 LdapManager: Starting LDAP search... 2010.06.08 07:01:17 LdapManager: ... search finished 2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()... 2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context... 2010.06.08 07:01:17 LdapManager: ... context created successfully, returning. 2010.06.08 07:01:17 LdapManager: Trying to find a groups's DN based on it's groupname. cn: Spark agents CLT, Base DN: OU="Hidden",DC="Hidden",DC="net"... 2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()... 2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context... 2010.06.08 07:01:17 LdapManager: ... context created successfully, returning. 2010.06.08 07:01:17 LdapManager: Starting LDAP search... 2010.06.08 07:01:17 LdapManager: ... search finished 2010.06.08 07:01:17 LdapManager: Trying to find a groups's DN based on it's groupname. cn: Spark agents CLT, Base DN: OU="Hidden",DC="Hidden",DC="net"... 2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()... 2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context... 2010.06.08 07:01:17 LdapManager: ... context created successfully, returning. 2010.06.08 07:01:17 LdapManager: Starting LDAP search... 2010.06.08 07:01:17 LdapManager: ... search finished I thought this was a configuration issue on my end and started to look into the cache settings on the openfire webpages. I tweaked the settings as recommended by the pages and still get the same issues. It doesn't seem to cache the contact list, or this might be a feature never fixed or implemented. Has anyone gone through this before? I have searched online and I see others have great experience with openfire with no issues like I have, or is it because no one checked the queries? For the time being I created a new Domain Controller and moved openfire to that computer so it can run local queries. This seems to help reduce the delay a lot, but when I run the server performance manager tool I see that with only two people using that openfire server I run 593.7 requests per second. Thanks for your help; if I didn't provide enough data, please let me know what you need and I can find it.

    Read the article

  • Exception opening TAdoDataset: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another

    - by Dave Falkner
    I've been trying to debug the following problem for several weeks now - this method is called from several places within the same datamodule, but this exception (from the subject line of this post) only occurs when integers for a certain purpose (pickup orders vs. orders that we ship through a carrier) are used - and don't ask me how the application can tell the difference between one integer's purpose and another! Furthermore, I cannot duplicate this issue on my machine - the error occurs on a warehouse machine but not my own development machine, even when working with the same production database. I have suspected an MDAC version conflict between the two machines, but have run a version checker and confirmed that both machines are running 2.8, and additionally have confirmed this by logging the TAdoDataset's .Version property at runtime. function TdmESShip.SecondaryID(const PrimaryID : Integer ): String; begin try with qESPackage2 do begin if Active then Close; LogMessage('-----------------------------------'); LogMessage('Version: ' + FConnection.Version); LogMessage('DB Info: ' + FConnection.Properties['Initial Catalog'].Value + ' ' + FConnection.Properties['Data Source'].Value); LogMessage('Setting the parameter.'); Parameters.ParamByName('ParameterName').Value := PrimaryID; LogMessage('Done setting the parameter.'); Open; Ninety-nine times out of 100 this logging code logs a successful operation as follows: Version: 2.8 DB Info: (database name and instance) Setting the parameter. Done setting the parameter. Opened the dataset. But then whenever a "pickup" order is processed, this exception gets thrown whenever the dataset is opened: Version: 2.8 DB Info: (database name and instance) Setting the parameter. Done setting the parameter. GetESPackageID() threw an exception. Type: EOleException, Message: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another for packageID 10813711 I've tried eliminating the parameter and have built the commandtext for this dataset programmatically, suspecting that some part of the TParameter's configuration might be out of whack, but the same error occurs under the same circumstances. I've tried every combination of TParameter properties that I can think of - this is the millionth TParameter I've created for my millionth dataset, and I've never encountered this error. I've even created a second dataset from scratch and removed all references to the original dataset in case some property of the original dataset in the .dfm might be corrupted, but the same error occurs under the same circumstances. The commandtext for this dataset is a simple select ValueA from TableName where ValueB = @ParameterB I'm about ready to do something extreme, such as writing a web service to look these values up - it feels right now as though I could destroy my machine, rebuild it, rewrite this entire application from scratch, and the application would still know to throw an exception whenever I try to look up a secondary value from a primary value, but only for pickup orders, and only from the one machine in the warehouse, but I'm probably missing something simple. So, any help anyone could provide would be greatly appreciated.

    Read the article

  • Mapping two tables 0..n in Hibernate

    - by simon
    I have a table Users CREATE TABLE "USERS" ( "ID" NUMBER NOT NULL , "LOGINNAME" VARCHAR2 (150) NOT NULL ) and I have a second table SpecialUsers. No UserId can occur twice in the SpecialUsers table, and only a small subset of the ids of users in the Users table are contained in the SpecialUsers table. CREATE TABLE "SPECIALUSERS" ( "USERID" NUMBER NOT NULL, CONSTRAINT "PK_SPECIALUSERS" PRIMARY KEY ("USERID") ) ALTER TABLE "SPECIALUSERS" ADD CONSTRAINT "FK_SPECIALUSERS_USERID" FOREIGN KEY ("USERID") REFERENCES "USERS" ("ID") / Mapping the Users table in Hibernate works ok <hibernate-mapping package="com.initech.domain"> <class name="com.initech.User" table="USERS"> <id name="id" column="ID" type="java.lang.Long"> <meta attribute="use-in-tostring">true</meta> <generator class="sequence"> <param name="sequence">SEQ_USERS_ID</param> </generator> </id> <property name="loginName" column="LOGINNAME" type="java.lang.String" not-null="true"> <meta attribute="use-in-tostring">true</meta> </property> </class> </hibernate-mapping> But I'm struggling in creating the mapping for the SpecialUsers table. All the examples (e.g. in Hibernate documentation) in Internet I found don't have this type of self-reference. I tried a mapping like this: <hibernate-mapping package="com.initech.domain"> <class name="com.initech.User" table="SPECIALUSERS"> <id name="id" column="USERID"> <meta attribute="use-in-tostring">true</meta> <generator class="foreign"> <param name="property">user</param> </generator> </id> <one-to-one name="user" class="User"/> </class> </hibernate-mapping> but got the error Invocation of init method failed; nested exception is org.hibernate.DuplicateMappingException: Duplicate class/entity mapping com.initech.User How should I map the SpecialUsers table? What I need on the application level is a list of the User objects contained in the SpecialUsers table.

    Read the article

  • multiple definition of inline function

    - by K71993
    Hi, I have gone through some posts related to this topic but was not able to sort out my doubt completely. This might be a very naive question. Code Description I have a header file "inline.h" and two translation units "main.cpp" and "tran.cpp". Details of the code are below. inline.h file details #ifndef __HEADER__ #include <stdio.h> extern inline int func1(void) { return 5; } static inline int func2(void) { return 6; } inline int func3(void) { return 7; } #endif main.c file details are below #include <stdio.h> #include <inline.h> int main(int argc, char *argv[]) { printf("%d\n",func1()); printf("%d\n",func2()); printf("%d\n",func3()); return 0; } tran.cpp file details (Note that the functions are not inline here) #include <stdio.h> int func1(void) { return 500; } int func2(void) { return 600; } int func3(void) { return 700; } Question The above code does not compile with gcc whereas it compiles with g++ (assuming you make the gcc-related changes to the code, like changing the files to .c, not using any C++ header files, etc.). The error displayed is "duplicate definition of inline function - func3". Can you clarify why this difference is present across compilers? When you run the program (g++ compiled) by creating two separate compilation units (main.o and tran.o) and creating an executable a.out, the output obtained is 500 6 700 Why does the compiler pick up the definition of the function which is not inline? Actually, since #include is used to "add" the inline definition, I had expected 5, 6, 7 as the output. My understanding was that during compilation, since the inline definition is found, the function call would be "replaced" by the inline function definition. Can you please tell me in detailed steps the process of compilation and linking which leads to the 500, 6, 700 output? I can only understand the output 6. Thanks in advance for your valuable input.

    Read the article

  • In a DDD approach, is this example modelled correctly?

    - by Tag
    Just created an acc on SO to ask this :) Assuming this simplified example: building a web application to manage projects... The application has the following requirements/rules. 1) Users should be able to create projects by inserting the project name. 2) Project names cannot be empty. 3) Two projects can't have the same name. I'm using a 4-layered architecture (User Interface, Application, Domain, Infrastructure). On my Application Layer I have the following ProjectService.cs class: public class ProjectService { private IProjectRepository ProjectRepo { get; set; } public ProjectService(IProjectRepository projectRepo) { ProjectRepo = projectRepo; } public void CreateNewProject(string name) { IList<Project> projects = ProjectRepo.GetProjectsByName(name); if (projects.Count > 0) throw new Exception("Project name already exists."); Project project = new Project(name); ProjectRepo.InsertProject(project); } } On my Domain Layer, I have the Project.cs class and the IProjectRepository.cs interface: public class Project { public int ProjectID { get; private set; } public string Name { get; private set; } public Project(string name) { ValidateName(name); Name = name; } private void ValidateName(string name) { if (name == null || name.Equals(string.Empty)) { throw new Exception("Project name cannot be empty or null."); } } } public interface IProjectRepository { void InsertProject(Project project); IList<Project> GetProjectsByName(string projectName); } On my Infrastructure layer, I have the implementation of IProjectRepository which does the actual querying (the code is irrelevant). I don't like two things about this design: 1) I've read that the repository interfaces should be a part of the domain but the implementations should not. That makes no sense to me since I think the domain shouldn't call the repository methods (persistence ignorance); that should be a responsibility of the services in the application layer. (Something tells me I'm terribly wrong.) 2) The process of creating a new project involves two validations (not null and not duplicate). In my design above, those two validations are scattered in two different places, making it harder (imho) to see what's going on. So, my question is, from a DDD perspective, is this modelled correctly or would you do it in a different way?
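    One commonly suggested variation, sketched below purely as an illustration and reusing the question's own types (the ProjectUniquenessChecker name is made up): express the uniqueness rule as a small domain service that the application service invokes, so both validations are stated in the domain layer even though the repository implementation stays in the infrastructure layer.

        public class ProjectUniquenessChecker
        {
            private readonly IProjectRepository _repository;

            public ProjectUniquenessChecker(IProjectRepository repository)
            {
                _repository = repository;
            }

            // Domain rule: two projects can't have the same name.
            public bool IsNameTaken(string name)
            {
                return _repository.GetProjectsByName(name).Count > 0;
            }
        }

        public class ProjectService
        {
            private readonly IProjectRepository _projectRepo;
            private readonly ProjectUniquenessChecker _uniquenessChecker;

            public ProjectService(IProjectRepository projectRepo, ProjectUniquenessChecker uniquenessChecker)
            {
                _projectRepo = projectRepo;
                _uniquenessChecker = uniquenessChecker;
            }

            public void CreateNewProject(string name)
            {
                if (_uniquenessChecker.IsNameTaken(name))
                    throw new Exception("Project name already exists.");

                // The not-null/not-empty rule stays inside the Project entity itself.
                Project project = new Project(name);
                _projectRepo.InsertProject(project);
            }
        }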

    Read the article

  • Mocking objects with complex Lambda Expressions as parameters

    - by iCe
    Hi there, I'm encountering this problem trying to mock some objects that receive complex lambda expressions in my projects, mostly with proxy objects that receive this type of delegate: Func<Tobj, Func<TParam1, TParam2, TResult>> I have tried to use Moq as well as RhinoMocks to accomplish mocking those types of objects, however both fail. (Moq fails with NotSupportedException, and RhinoMocks simply does not satisfy the expectation.) This is a simplified example of what I'm trying to do: I have a Calculator object that does calculations: public class Calculator { public Calculator() { } public int Add(int x, int y) { var result = x + y; return result; } public int Substract(int x, int y) { var result = x - y; return result; } } I need to validate parameters on every method in the Calculator class, so to keep with the Single Responsibility principle, I create a validator class. I wire everything up using a Proxy class that prevents having duplicate code: public class CalculatorProxy : CalculatorExample.ICalculatorProxy { private ILimitsValidator _validator; public CalculatorProxy(Calculator _calc, ILimitsValidator _validator) { this.Calculator = _calc; this._validator = _validator; } public int Operation(Func<Calculator, Func<int, int, int>> operation, int x, int y) { _validator.ValidateArgs(x, y); var calcMethod = operation(this.Calculator); var result = calcMethod(x, y); _validator.ValidateResult(result); return result; } public Calculator Calculator { get; private set; } } Now, I'm testing a component that does use the CalculatorProxy, so I want to mock it, for example using Rhino Mocks: [TestMethod] public void ParserWorksWithCalcultaroProxy() { var calculatorProxyMock = MockRepository.GenerateMock<ICalculatorProxy>(); calculatorProxyMock.Expect(x => x.Calculator).Return(_calculator); calculatorProxyMock.Expect(x => x.Operation(c => c.Add, 2, 2)).Return(4); var mathParser = new MathParser(calculatorProxyMock); mathParser.ProcessExpression("2 + 2"); calculatorProxyMock.VerifyAllExpectations(); } However I cannot get it to work! Any ideas about how this can be done? Thanks a lot!
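    For what it's worth, a hedged sketch of the workaround most often used with Moq: don't put the c => c.Add lambda inside the expectation (Moq cannot translate it, hence the NotSupportedException); instead accept any delegate of the matching shape with It.IsAny and exercise whatever delegate the caller passes inside Returns. The fragment below belongs inside a test method, reuses the question's types, and is illustrative only:

        var calculator = new Calculator();
        var proxyMock = new Mock<ICalculatorProxy>();

        proxyMock.Setup(p => p.Calculator).Returns(calculator);
        proxyMock
            .Setup(p => p.Operation(It.IsAny<Func<Calculator, Func<int, int, int>>>(), 2, 2))
            .Returns((Func<Calculator, Func<int, int, int>> op, int x, int y) => op(calculator)(x, y));

        var mathParser = new MathParser(proxyMock.Object);
        mathParser.ProcessExpression("2 + 2");

        proxyMock.VerifyAll();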

    Read the article

  • Dynamically specify the type in C#

    - by Lirik
    I'm creating a custom DataSet and I'm under some constraints: I want the user to specify the type of the data which they want to store. I want to reduce type-casting because I think it will be VERY expensive. I will use the data VERY frequently in my application. I don't know what type of data will be stored in the DataSet, so my initial idea was to make it a List of objects, but I suspect that the frequent use of the data and the need to type-cast will be very expensive. The basic idea is this: class DataSet : IDataSet { private Dictionary<string, List<Object>> _data; /// <summary> /// Constructs the data set given the user-specified labels. /// </summary> /// <param name="labels"> /// The labels of each column in the data set. /// </param> public DataSet(List<string> labels) { _data = new Dictionary<string, List<object>>(); foreach (string label in labels) { _data.Add(label, new List<object>()); } } #region IDataSet Members public List<string> DataLabels { get { return _data.Keys.ToList(); } } public int Count { get { return _data[_data.Keys.First()].Count; } } public List<object> GetValues(string label) { return _data[label]; } public object GetValue(string label, int index) { return _data[label][index]; } public void InsertValue(string label, object value) { _data[label].Insert(0, value); } public void AddValue(string label, object value) { _data[label].Add(value); } #endregion } A concrete example where the DataSet will be used is to store data obtained from a CSV file where the first column contains the labels. When the data is being loaded from the CSV file I'd like to specify the type rather than casting to object. The data could contain columns such as dates, numbers, strings, etc. Here is what it could look like: "Date","Song","Rating","AvgRating","User" "02/03/2010","Code Monkey",4.6,4.1,"joe" "05/27/2009","Code Monkey",1.2,4.5,"jill" The data will be used in a Machine Learning/Artificial Intelligence algorithm, so it is essential that I make the reading of data very fast. I want to eliminate type-casting as much as possible, since I can't afford to cast from 'object' to whatever data type is needed on every read. I've seen applications that allow the user to pick the specific data type for each item in the csv file, so I'm trying to make a similar solution where a different type can be specified for each column. I want to create a generic solution so I don't have to return a List<object> but a List<DateTime> (if it's a DateTime column) or List<double> (if it's a column of doubles). Is there any way that this can be achieved? Perhaps my approach is wrong; is there a better approach to this problem?
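    A hedged sketch of one generic approach (class and member names below are illustrative): keep each column as a strongly typed List<T> stored behind the non-generic System.Collections.IList interface, so the caller states a column's element type once and reads values back without per-element casts from object.

        using System;
        using System.Collections;
        using System.Collections.Generic;

        public class TypedDataSet
        {
            private readonly Dictionary<string, IList> _columns = new Dictionary<string, IList>();

            // The caller declares each column's element type up front.
            public void AddColumn<T>(string label)
            {
                _columns.Add(label, new List<T>());
            }

            public void AddValue<T>(string label, T value)
            {
                ((List<T>)_columns[label]).Add(value);   // no boxing for value types
            }

            public List<T> GetValues<T>(string label)
            {
                return (List<T>)_columns[label];         // one cast per column, not one per read
            }
        }

        // Usage, assuming the CSV loader knows each column's type:
        // var ds = new TypedDataSet();
        // ds.AddColumn<DateTime>("Date");
        // ds.AddColumn<double>("Rating");
        // ds.AddValue("Date", DateTime.Parse("02/03/2010"));
        // List<double> ratings = ds.GetValues<double>("Rating");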

    Read the article

  • Wondering about a way to conserve memory in C# using List<> with structs

    - by Michael Ryan
    I'm not even sure how I should phrase this question. I'm passing some CustomStruct objects as parameters to a class method, and storing them in a List. What I'm wondering is if it's possible and more efficient to add multiple references to a particular instance of a CustomStruct if an equivalent instance is found. This is a dummy/example struct: public struct CustomStruct { readonly int _x; readonly int _y; readonly int _w; readonly int _h; readonly Enum _e; } Using the below method, you can pass one, two, or three CustomStruct objects as parameters. In the final method (that takes three parameters), it may be the case that the 3rd and possibly the 2nd will have the same value as the first. List<CustomStruct> _list; public void AddBackground(CustomStruct normal) { AddBackground(normal, normal, normal); } public void AddBackground(CustomStruct normal, CustomStruct hover) { AddBackground(normal, hover, hover); } public void AddBackground(CustomStruct normal, CustomStruct hover, CustomStruct active) { _list = new List<CustomStruct>(3); _list.Add(normal); _list.Add(hover); _list.Add(active); } As the method stands now, I believe it will create new instances of CustomStruct objects, and then add a reference to each of them to the List. It is my understanding that if I instead check for equality between normal and hover and (if equal) insert normal again in place of hover, when the method completes, hover will lose all references and eventually be garbage collected, whereas normal will have two references in the List. The same could be done for active. That would be better, right? The CustomStruct is a ValueType, and therefore one instance would remain on the Stack, and the three List references would just point to it. The overall List size is determined not by the object Type it contains, but by its Capacity. By eliminating the "duplicate" CustomStruct objects, you allow them to be cleaned up. When the CustomStruct objects are passed to these methods, new instances are created each time. When the structs are added to the List, is another copy made? For example, if I pass just one CustomStruct, AddBackground(normal) creates a copy of the original variable, and then passes it three times to AddBackground(normal, hover, active). In this method, three copies are made of the original copy. When the three local variables are added to the List using Add(), are additional copies created inside Add(), and does that defeat the purpose of any equality checks as previously mentioned? Am I missing anything here?
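    A small illustration of the copy semantics being asked about, using a simplified struct whose name and fields are made up: a List of a struct type stores copies in its internal array, so there are no shared references to keep or garbage-collect, and mutating the original after Add has no effect on the stored elements.

        using System;
        using System.Collections.Generic;

        struct SampleStruct
        {
            public int X;
        }

        static class CopySemanticsDemo
        {
            static void Main()
            {
                var original = new SampleStruct { X = 1 };
                var list = new List<SampleStruct>(3);

                list.Add(original);   // copies the value into the list's internal array
                list.Add(original);   // a second, independent copy
                original.X = 99;      // does not affect either stored element

                Console.WriteLine(list[0].X);   // prints 1
                Console.WriteLine(list[1].X);   // prints 1
            }
        }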

    Read the article

  • Moving MVC2 Helpers to MVC3 razor view engine

    - by Dai Bok
    Hi, in my MVC 2 site I have an html helper that I use to add javascripts for my pages. In my master page I have the main javascripts I want to include, and then in the aspx pages I include page-specific javascripts. So for example, my Site.Master has something like this: .... <head> <%=html.renderScripts() %> </head> ... //core scripts for main page <%html.AddScript("/scripts/jquery.js") %> <%html.AddScript("/scripts/myLib.js") %> .... Then in the child aspx page, I may also want to include other scripts. ... //the page specific script I want to use <% html.AddScript("/scripts/register.aspx.js") %> ... So when the full page gets rendered, the javascript files are all collected and rendered in the head by the SiteMaster placeholder function RenderScripts. This works fine. Now with MVC 3 and the Razor view engine, the layout pages behave differently, because now my page-level javascripts are not rendered/included. Now all I see is the LayoutMaster contents. How do I get the solution to work with MVC 3 and the Razor view engine? (The helper has already been re-written to return an HTMLString ;-)) For reference: my MasterLayout looks like this: ... ... <head> @{ Html.AddJavaScript("/Scripts/jQuery.js"); Html.AddJavaScript("/Scripts/myLib.js"); } //Render scripts @html.RenderScripts() </head> .... and the child page looks like this: @{ Layout = "~/Views/Shared/MasterLayout.cshtml"; ViewBag.Title = "Child Page"; Html.AddJavaScript("/Scripts/register.aspx.js"); } .... <div>some html </div> Thanks for your help. Edit: Just to explain, in case this question is not clear enough. When producing a "page" I collect all the javascript files the designers want to use, by using html.AddJavascript("filename.js"), and store these in a dictionary - (1) this stops people adding duplicate js files - then finally, when the page is ready to render, I write out all the javascript files neatly in the header. (2) - this helper helps keep JS in one place, and prevents designers from adding javascript files all over the place. This used to work fine with Master/SiteMaster pages in MVC 2, but how can I achieve this with Razor?
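    A hedged sketch of one way such a helper is often made layout-friendly in Razor (only the AddJavaScript/RenderScripts names follow the question; the implementation details are illustrative): collect the paths in HttpContext.Items, which the content view and the layout share for the duration of the request, so scripts added in a child .cshtml are still available when the layout's head is rendered.

        using System.Collections.Generic;
        using System.Text;
        using System.Web.Mvc;

        public static class ScriptHelperExtensions
        {
            private const string ItemsKey = "__CollectedScriptPaths";

            public static void AddJavaScript(this HtmlHelper html, string path)
            {
                var items = html.ViewContext.HttpContext.Items;
                var scripts = items[ItemsKey] as List<string>;
                if (scripts == null)
                {
                    scripts = new List<string>();
                    items[ItemsKey] = scripts;
                }
                if (!scripts.Contains(path))   // keeps the "no duplicates" behaviour
                    scripts.Add(path);
            }

            public static MvcHtmlString RenderScripts(this HtmlHelper html)
            {
                var scripts = html.ViewContext.HttpContext.Items[ItemsKey] as List<string>
                              ?? new List<string>();
                var sb = new StringBuilder();
                foreach (var path in scripts)
                    sb.AppendFormat("<script src=\"{0}\" type=\"text/javascript\"></script>\n", path);
                return MvcHtmlString.Create(sb.ToString());
            }
        }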

    Read the article

  • When should I use indexed arrays of OpenGL vertices?

    - by Tartley
    I'm trying to get a clear idea of when I should be using indexed arrays of OpenGL vertices, drawn with gl[Multi]DrawElements and the like, versus when I should simply use contiguous arrays of vertices, drawn with gl[Multi]DrawArrays. (Update: The consensus in the replies I got is that one should always be using indexed vertices.) I have gone back and forth on this issue several times, so I'm going to outline my current understanding, in the hopes someone can either tell me I'm now finally more or less correct, or else point out where my remaining misunderstandings are. Specifically, I have three conclusions, in bold. Please correct them if they are wrong. One simple case is if my geometry consists of meshes to form curved surfaces. In this case, the vertices in the middle of the mesh will have identical attributes (position, normal, color, texture coord, etc) for every triangle which uses the vertex. This leads me to conclude that: 1. For geometry with few seams, indexed arrays are a big win. Follow rule 1 always, except: For geometry that is very 'blocky', in which every edge represents a seam, the benefit of indexed arrays is less obvious. To take a simple cube as an example, although each vertex is used in three different faces, we can't share vertices between them, because for a single vertex, the surface normals (and possible other things, like color and texture co-ord) will differ on each face. Hence we need to explicitly introduce redundant vertex positions into our array, so that the same position can be used several times with different normals, etc. This means that indexed arrays are of less use. e.g. When rendering a single face of a cube: 0 1 o---o |\ | | \ | | \| o---o 3 2 (this can be considered in isolation, because the seams between this face and all adjacent faces mean than none of these vertices can be shared between faces) if rendering using GL_TRIANGLE_FAN (or _STRIP), then each face of the cube can be rendered thus: verts = [v0, v1, v2, v3] colors = [c0, c0, c0, c0] normal = [n0, n0, n0, n0] Adding indices does not allow us to simplify this. From this I conclude that: 2. When rendering geometry which is all seams or mostly seams, when using GL_TRIANGLE_STRIP or _FAN, then I should never use indexed arrays, and should instead always use gl[Multi]DrawArrays. (Update: Replies indicate that this conclusion is wrong. Even though indices don't allow us to reduce the size of the arrays here, they should still be used because of other performance benefits, as discussed in the comments) The only exception to rule 2 is: When using GL_TRIANGLES (instead of strips or fans), then half of the vertices can still be re-used twice, with identical normals and colors, etc, because each cube face is rendered as two separate triangles. Again, for the same single cube face: 0 1 o---o |\ | | \ | | \| o---o 3 2 Without indices, using GL_TRIANGLES, the arrays would be something like: verts = [v0, v1, v2, v2, v3, v0] normals = [n0, n0, n0, n0, n0, n0] colors = [c0, c0, c0, c0, c0, c0] Since a vertex and a normal are often 3 floats each, and a color is often 3 bytes, that gives, for each cube face, about: verts = 6 * 3 floats = 18 floats normals = 6 * 3 floats = 18 floats colors = 6 * 3 bytes = 18 bytes = 36 floats and 18 bytes per cube face. (I understand the number of bytes might change if different types are used, the exact figures are just for illustration.) 
With indices, we can simplify this a little, giving: verts = [v0, v1, v2, v3] (4 * 3 = 12 floats) normals = [n0, n0, n0, n0] (4 * 3 = 12 floats) colors = [c0, c0, c0, c0] (4 * 3 = 12 bytes) indices = [0, 1, 2, 2, 3, 0] (6 shorts) = 24 floats + 12 bytes, and maybe 6 shorts, per cube face. See how in the latter case, vertices 0 and 2 are used twice, but only represented once in each of the verts, normals and colors arrays. This sounds like a small win for using indices, even in the extreme case of every single geometry edge being a seam. This leads me to conclude that: 3. When using GL_TRIANGLES, one should always use indexed arrays, even for geometry which is all seams. Please correct my conclusions in bold if they are wrong.

    Read the article

  • Sorting based on existing elements in xslt

    - by Teelo
    Hi, I want to sort in XSLT based on an existing pattern. Let me explain with the code: <Types> <Type> <Names> <Name>Ryan</Name> </Names> <Address>2344</Address> </Type> <Type> <Names> <Name>Timber</Name> </Names> <Address>1234</Address> </Type> <Type> <Names> <Name>Bryan</Name> </Names> <Address>34</Address> </Type> </Types> Right now I'm just calling it and getting it like this (all hyperlinks): Ryan Timber Bryan Now I don't want sorting on name, but I have an existing pattern for how I want it to get displayed, like: Timber Bryan Ryan (Also I don't want to lose the url attached to my names earlier while doing this.) I was thinking of putting the earlier values in some array and sorting based on another array where I will store my existing pattern. But I am not sure how to achieve that. My XSLT looks like this now (there can be duplicate names also): <xsl:for-each select="/Types/Type/Names/Name/text()[generate-id()=generate-id(key('Name',.)[1])]"> <xsl:call-template name="typename"> </xsl:call-template> </xsl:for-each> <xsl:template name="typename"> <li> <a href="somelogicforurl"> <xsl:value-of select="."/> </a> </li> </xsl:template> I am using XSLT 1.0.

    Read the article

  • Python: Script works, but seems to deadlock after some time

    - by sberry2A
    I have the following script, which is working for the most part: Link to PasteBin The script's job is to start a number of threads which in turn each start a subprocess with Popen. The output from each subprocess is as follows: 1 2 3 . . . n Done Basically the subprocess is transferring 10M records from tables in one database to different tables in another db with a lot of data massaging/manipulation in between because of the different schemas. If the subprocess fails at any time in its execution (bad records, duplicate primary keys, etc), or it completes successfully, it will output "Done\n". If there are no more records to select against for transfer then it will output "NO DATA\n" My intent was to create my script "tableTransfer.py" which would spawn a number of these processes, read their output, and in turn output information such as number of updates completed, time remaining, time elapsed, and number of transfers per second. I started running the process last night and checked in this morning to see it had deadlocked. There were no subprocesses running, there were still records to be updated, and the script had not exited. It was simply sitting there, no longer outputting the current information, because no subprocesses were running to update the total number complete, which is what controls updates to the output. This is running on OS X. I am looking for three things: I would like to get rid of the possibility of this deadlock occurring so I don't need to check in on it as frequently. Is there some issue with locking? Am I doing this in a bad way (gThreading variable to control looping of spawning additional threads... etc.)? I would appreciate some suggestions for improving my overall methodology. How should I handle ctrl-c exit? Right now I need to kill the process, but I assume I should be able to use the signal module or other to catch the signal and kill the threads; is that right? I am not sure whether I should be pasting my entire script here, since I usually just paste snippets. Let me know if I should paste it here as well.

    Read the article

  • ScriptManager emits ScriptReferences after ClientScriptIncludes. Why?

    - by Chris F
    I have an application that uses a lot of javascript in a lot of different .js files. Each page can have any number and combination of different controls and each control can use a different set of js files. Instead of including all possible js files as <script src="xxx.js"> in the head of my master page, I thought I would reduce the amount of included files by getting each control to call ScriptManager.Scripts.Add(new ScriptReference("xxx.js")) - thus only the scripts that are actually needed by the page will be included. Well that bit works fine. But...the controls also use ScriptManager.RegisterClientScriptBlock fairly extensively too. (There are scenarios where the controls are inside update panels). I was disappointed to see that the ScriptManager emits the client script includes before the ScriptReferences. For example - see the test file and output at the end (this results in a JS error because $ is not defined). Am I being dumb or is this to be expected? I would have thought that a sensible thing for the ScriptManger to do would be to emit the ScriptReferences first and then the other stuff. Short of rolling my own ScriptManager like object to manage static JS file references, does anyone have any suggestions as to get the behaviour that I want from ScriptManager?? Thanks in advance. Example File <%@ Page Language="C#" AutoEventWireup="true" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { ScriptManager.RegisterClientScriptBlock(Page, Page.GetType(), "Test", "$(function() {alert('hello');});", true); } </script> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager runat="server" > <Scripts> <asp:ScriptReference Path="~/jquery/jquery.js" /> </Scripts> </asp:ScriptManager> </form> </body> </html> Output <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > ... boring stuff removed .... <script src="/js/WebResource.axd?d=-nOHbla bla " type="text/javascript"></script> <script type="text/javascript"> //<![CDATA[ $(function() {alert('hello');});//]]> </script> ... boring stuff removed .... <script src="/js/ScriptResource.axd?d=qgCbla bla bla" type="text/javascript"></script> <script src="jquery/jquery.js" type="text/javascript"></script> ... boring stuff removed .... </form> </body> </html>
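    For reference, a hedged sketch of the workaround most often suggested for this ordering problem: register the snippet with ScriptManager.RegisterStartupScript instead of RegisterClientScriptBlock. Startup scripts are emitted near the end of the form, after the ScriptReference includes, so jQuery is already loaded by the time the snippet runs (this assumes the snippet does not need to execute before the rest of the page is parsed):

        protected void Page_Load(object sender, EventArgs e)
        {
            ScriptManager.RegisterStartupScript(
                Page, Page.GetType(), "Test",
                "$(function() { alert('hello'); });", true);
        }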

    Read the article

  • get eigenvalue pca with java

    - by Muhamad Burhanudin
    I am trying to use PCA to reduce dimension, and I use Jama to help me with the matrices, but I have a problem getting the eigenvalues with Jama. For example, I have 2 images of dimension 100x100; then I create a single matrix of 2 images x (100x100), so there are 20,000 pixels. How do I get the reduction with the eigenvalues? This is a sample of my code: public static void main(String[] args) { BufferedImage bi; int[] rgb; int R, G, B; // int[] jum; double[][] gray = new double[500][500] ; String[] baris = new String[1000]; try { //bi = ImageIO.read(new File("D:\\c.jpg")); int[][] pixelData = new int[bi.getHeight() * bi.getWidth()][3]; int counter = 0; for (int i = 0; i < bi.getHeight(); i++) { for (int j = 0; j < bi.getWidth(); j++) { gray[i][j] = getPixelData(bi, i, j); // R = getR(bi, i, j); //G = getG(bi, i, j); //B = getB(bi, i, j); //jum = R + G + B; // gray[counter] = Double.toString(R + G + B / 3); // System.out.println("Gray " +gray); //for (int k = 0; k < rgb.length; k++) { // pixelData[counter][k] = rgb[k]; // } counter++; } } } catch (IOException e) { e.printStackTrace(); } Matrix matrix = new Matrix(gray); PCA pca = new PCA(matrix); pca.getEigenvalue(6); String n = pca.toString(); System.err.println("nilai n "+n); //double dete = pcadete(matrix,3600); } private static int getPixelData(BufferedImage bi, int x, int y) { int argb = bi.getRGB(y, x); int r, g, b; int gray; int rgb[] = new int[]{ (argb >> 16) & 0xff, //red (argb >> 8) & 0xff, //green (argb) & 0xff //blue }; r = rgb[0]; g = rgb[1]; b = rgb[2]; gray = (r + g + b) / 3; System.out.println("gray: " + gray); return gray; } When I show the eigenvalue with this code: PCA pca = new PCA(matrix); pca.getEigenvalue(6); String n = pca.toString(); System.err.println("nilai n "+n); the result is: nilai n PCA@c3e9e9 Can you tell me how to get the eigenvalues and do the dimension reduction?

    Read the article

  • "Unable to find any mappings for the given content, keyPath=null" RestKit 0.2

    - by abisson
    So I switched to using RestKit 0.2 and CoreData and I've been having a lot of trouble trying to get the mappings correct... I don't understand why. The JSON Response of my server is like this: { "meta": { "limit": 20, "next": null, "offset": 0, "previous": null, "total_count": 2 }, "objects": [{ "creation_date": "2012-10-15T20:16:47", "description": "", "id": 1, "last_modified": "2012-10-15T20:16:47", "order": 1, "other_names": "", "primary_name": "Mixing", "production_line": "/api/rest/productionlines/1/", "resource_uri": "/api/rest/cells/1/" }, { "creation_date": "2012-10-15T20:16:47", "description": "", "id": 2, "last_modified": "2012-10-15T20:16:47", "order": 2, "other_names": "", "primary_name": "Packaging", "production_line": "/api/rest/productionlines/1/", "resource_uri": "/api/rest/cells/2/" }] } Then in XCode I have: RKObjectManager *objectManager = [RKObjectManager sharedManager]; [AFNetworkActivityIndicatorManager sharedManager].enabled = YES; NSManagedObjectModel *managedObjectModel = [NSManagedObjectModel mergedModelFromBundles:nil]; RKManagedObjectStore *managedObjectStore = [[RKManagedObjectStore alloc] initWithManagedObjectModel:managedObjectModel]; objectManager.managedObjectStore = managedObjectStore; RKEntityMapping *cellMapping = [RKEntityMapping mappingForEntityForName:@"Cell" inManagedObjectStore:managedObjectStore]; cellMapping.primaryKeyAttribute = @"identifier"; [cellMapping addAttributeMappingsFromDictionary:@{ @"id": @"identifier", @"primary_name": @"primaryName", }]; RKResponseDescriptor *responseCell = [RKResponseDescriptor responseDescriptorWithMapping:cellMapping pathPattern:@"/api/rest/cells/?format=json" keyPath:@"objects" statusCodes:RKStatusCodeIndexSetForClass(RKStatusCodeClassSuccessful)]; [objectManager addResponseDescriptorsFromArray:@[responseCell, responseUser, responseCompany]]; [managedObjectStore createPersistentStoreCoordinator]; NSString *storePath = [RKApplicationDataDirectory() stringByAppendingPathComponent:@"AppDB.sqlite"]; NSString *seedPath = [[NSBundle mainBundle] pathForResource:@"SeedDatabase" ofType:@"sqlite"]; NSError *error; NSPersistentStore *persistentStore = [managedObjectStore addSQLitePersistentStoreAtPath:storePath fromSeedDatabaseAtPath:seedPath withConfiguration:nil options:nil error:&error]; NSAssert(persistentStore, @"Failed to add persistent store with error: %@", error); // Create the managed object contexts [managedObjectStore createManagedObjectContexts]; // Configure a managed object cache to ensure we do not create duplicate objects managedObjectStore.managedObjectCache = [[RKInMemoryManagedObjectCache alloc] initWithManagedObjectContext:managedObjectStore.persistentStoreManagedObjectContext]; My request is: [[RKObjectManager sharedManager] getObjectsAtPath:@"/api/rest/cells/?format=json" parameters:nil success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) { RKLogInfo(@"Load complete: Table should refresh..."); [[NSUserDefaults standardUserDefaults] setObject:[NSDate date] forKey:@"LastUpdatedAt"]; [[NSUserDefaults standardUserDefaults] synchronize]; } failure:^(RKObjectRequestOperation *operation, NSError *error) { RKLogError(@"Load failed with error: %@", error); }]; And I always get the following error: **Error Domain=org.restkit.RestKit.ErrorDomain Code=1001 "Unable to find any mappings for the given content" UserInfo=0x1102d500 {DetailedErrors=(), NSLocalizedDescription=Unable to find any mappings for the given content, keyPath=null}** Thanks a lot!

    Read the article

  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language. The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within the function: The generator expression at line 202 is neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) and the generator expression at line 204 is neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) The source code for this function of context provided below. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges. Its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node. Each of these lookups should be amortized constant time (i.e., O(1)), however, I am still paying a large penalty for the lookups. Is there some way which I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language? Below is the full source code for the function that provides the context; the vast majority of execution time is spent within this function. def calculate_node_z_prime( node, interaction_graph, selected_nodes ): """Calculates a z'-score for a given node. The z'-score is based on the z-scores (weights) of the neighbors of the given node, and proportional to the z-score (weight) of the given node. Specifically, we find the maximum z-score of all neighbors of the given node that are also members of the given set of selected nodes, multiply this z-score by the z-score of the given node, and return this value as the z'-score for the given node. If the given node has no neighbors in the interaction graph, the z'-score is defined as zero. Returns the z'-score as zero or a positive floating point value. :Parameters: - `node`: the node for which to compute the z-prime score - `interaction_graph`: graph containing the gene-gene or gene product-gene product interactions - `selected_nodes`: a `set` of nodes fitting some criterion of interest (e.g., annotated with a term of interest) """ node_neighbors = interaction_graph.neighbors_iter(node) neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) try: max_z_score = max(neighbor_z_scores) # max() throws a ValueError if its argument has no elements; in this # case, we need to set the max_z_score to zero except ValueError, e: # Check to make certain max() raised this error if 'max()' in e.args[0]: max_z_score = 0 else: raise e z_prime = interaction_graph.node[node]['weight'] * max_z_score return z_prime Here are the top couple of calls according to cProfiler, sorted by time. 
       ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
    156067701  352.313    0.000  642.072    0.000  bpln_contextual.py:204(<genexpr>)
    156067701  289.759    0.000  289.759    0.000  bpln_contextual.py:202(<genexpr>)
     13963893  174.047    0.000  816.119    0.000  {max}
     13963885   69.804    0.000  936.754    0.000  bpln_contextual.py:171(calculate_node_z_prime)
      7116883   61.982    0.000   61.982    0.000  {method 'update' of 'set' objects}

    Read the article

  • Spring MVC working with Web Designers

    - by jboyd
    When working with web-designers in a Spring-MVC and JSP environment, are there any tools or helpful patterns to reduce the pain of going back and forth from HTML to JSP and back? Project requirements dictate that views will change often, and it can be difficult to make changes efficiently because of the amount of Java code that leaks into the view layer. Ideally I'd like to remove almost all Java code from the view, but it does not seem like this works with the Spring/JSP philosophy where often the way to remove Java code is to replace that code with tag libraries, which will still have a similar problem. To provide a little clarity to my question I'm going to include some existing code (inherited by me) to show the kinds of problems I'm likely to face when change the look of our views: <%-- Begin looping through results --%> <% List memberList = memberSearchResults.getResults(); for(int i = start - 1; i < memberList.size() && i < end; i++) { Profile profile = (Profile)memberList.get(i); long profileId = profile.getProfileId(); String nickname = profile.getNickname(); String description = profile.getDescription(); Image image = profile.getAvatarImage(); String avatarImageSrc = null; int avatarImageWidthNum = 0; int avatarImageHeightNum = 0; if(null != image) { avatarImageSrc = image.getSrc(); avatarImageWidthNum = image.getWidth(); avatarImageHeightNum = image.getHeight(); } String bgColor = i % 2 == 1 ? "background-color:#FFF" : ""; %> <div style="float:left;clear:both;padding:5px 0 5px 5px;cursor:pointer;<%= bgColor %>" onclick='window.location="profile.sp?profileId=<%= profileId %>"'> <div style="float:right;clear:right;padding-left:10px;width:515px;font-size:10px;color:#7e7e7e"> <h6><%= nickname %></h6> <%= description %> </div> <img style="float:left;clear:left;" alt="Avatar Image" src="<%= null != avatarImageSrc && avatarImageSrc.trim().length() > 0 ? avatarImageSrc : "images/defaultUserImage.png" %>" <%= avatarImageWidthNum < avatarImageHeightNum ? "height='59'" : "width='92'" %> /> </div> <% } // End loop %> Now, ignoring some of the code smells there, it's obvious that if someone wants to change the look of that DIV it would be neccesary to re-place all the Java/JSP code into the new HTML given (the designers don't work in JSP files, they have their own HTML versions of the website). Which is tedious, and prone to errors.

    Read the article

  • C++0x Overload on reference, versus sole pass-by-value + std::move?

    - by dean
    It seems the main advice concerning C++0x's rvalues is to add move constructors and move operators to your classes, until compilers default-implement them. But waiting is a losing strategy if you use VC10, because automatic generation probably won't be here until VC10 SP1, or in worst case, VC11. Likely, the wait for this will be measured in years. Here lies my problem. Writing all this duplicate code is not fun. And it's unpleasant to look at. But this is a burden well received, for those classes deemed slow. Not so for the hundreds, if not thousands, of smaller classes. ::sighs:: C++0x was supposed to let me write less code, not more! And then I had a thought. Shared by many, I would guess. Why not just pass everything by value? Won't std::move + copy elision make this nearly optimal? Example 1 - Typical Pre-0x constructor OurClass::OurClass(const SomeClass& obj) : obj(obj) {} SomeClass o; OurClass(o); // single copy OurClass(std::move(o)); // single copy OurClass(SomeClass()); // single copy Cons: A wasted copy for rvalues. Example 2 - Recommended C++0x? OurClass::OurClass(const SomeClass& obj) : obj(obj) {} OurClass::OurClass(SomeClass&& obj) : obj(std::move(obj)) {} SomeClass o; OurClass(o); // single copy OurClass(std::move(o)); // zero copies, one move OurClass(SomeClass()); // zero copies, one move Pros: Presumably the fastest. Cons: Lots of code! Example 3 - Pass-by-value + std::move OurClass::OurClass(SomeClass obj) : obj(std::move(obj)) {} SomeClass o; OurClass(o); // single copy, one move OurClass(std::move(o)); // zero copies, two moves OurClass(SomeClass()); // zero copies, one move Pros: No additional code. Cons: A wasted move in cases 1 & 2. Performance will suffer greatly if SomeClass has no move constructor. What do you think? Is this correct? Is the incurred move a generally acceptable loss when compared to the benefit of code reduction?

    Read the article

  • How do I launch a WPF app from command.com. I'm getting a FontCache error.

    - by jttraino
    I know this is not ideal, but my constraint is that I have a legacy application written in Clipper. I want to launch a new, WinForms/WPF application from inside the application (to ease transition). This legacy application written in Clipper launches using: SwpRunCmd("C:\MyApp\MyBat.bat",0) The batch file contains something like this command: C:\PROGRA~1\INTERN~1\iexplore "http://QASVR/MyApp/AppWin/MyCompany.MyApp.AppWin.application#MyCompany.MyApp.AppWin.application" It is launching a WinForms/WPF app that is we deploy via ClickOnce. Everything has been going well until we introduced WPF into the application. We were able to easily launch from the legacy application. Since we have introduced WPF, however, we have the following behavior. If we launch via the Clipper application first, we get an exception when launching the application. The error text is: The type initializer for 'System.Windows.FrameworkElement' threw an exception. at System.Windows.FrameworkElement..ctor() at System.Windows.Controls.Panel..ctor() at System.Windows.Controls.DockPanel..ctor() at System.Windows.Forms.Integration.AvalonAdapter..ctor(ElementHost hostControl) at System.Windows.Forms.Integration.ElementHost..ctor() at MyCompany.MyApp.AppWin.Main.InitializeComponent() at MyCompany.MyApp.AppWin.Main..ctor(String[] args) at MyCompany.MyApp.AppWin.Program.Main(String[] args) The type initializer for 'System.Windows.Documents.TextElement' threw an exception. at System.Windows.FrameworkElement..cctor() The type initializer for 'System.Windows.Media.FontFamily' threw an exception. at System.Windows.Media.FontFamily..ctor(String familyName) at System.Windows.SystemFonts.get_MessageFontFamily() at System.Windows.Documents.TextElement..cctor() The type initializer for 'MS.Internal.FontCache.Util' threw an exception. at MS.Internal.FontCache.Util.get_WindowsFontsUriObject() at System.Windows.Media.FontFamily.PreCreateDefaultFamilyCollection() at System.Windows.Media.FontFamily..cctor() Invalid URI: The format of the URI could not be determined. at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind) at System.Uri..ctor(String uriString, UriKind uriKind) at MS.Internal.FontCache.Util..cctor() If we launch the application via the URL (in IE) or via the icon on the desktop first, we do not get the exception and application launches as expected. The neat thing is that whatever we launch with first determines whether the app will launch at all. So, if we launch with legacy first, it breaks right away and we can't get the app to run even if we launch with the otherwise successful URL or icon. To get it to work, we have to logout and log back in and start it from the URL or icon. If we first use the URL or the icon, we have no problem launching from the legacy application from that point forward (until we logout and come back in). One other piece of information is that we are able to simulate the problem in the following fashion. If we enter a command prompt using "cmd.exe" and execute a statement to launch from a URL, we are successful. If, however, we enter a command prompt using "command.com" and we execute that same statement, we experience the breaking behavior. We assume it is because the legacy application in Clipper uses the equivalent of command.com to create the shell to spawn the other app. We have tried a bunch of hacks like having command.com run cmd.exe or psexec and then executing, but nothing seems to work. 
    We have some ideas for workarounds (like making the app launch on startup so we force the successful launch from a URL, making all subsequent launches successful), but they are all sub-optimal even though we have a great deal of control over our workstations. To reduce the chance that this is related to permissions, we have given the launching account administrative rights (as well as non-administrative rights in case that made a difference). Any ideas would be greatly appreciated. Like I said, we have some workarounds, but I would love to avoid them. Thanks!

    Read the article

  • Who calls the Destructor of the class when operator delete is used in multiple inheritance.

    - by dicaprio-leonard
    This question may sound too silly; however, I can't find a concrete answer anywhere else. I have only a little knowledge of how late binding works and how the virtual keyword is used in inheritance. As in the code sample, in the case of inheritance where a base class pointer points to a derived class object created on the heap and the delete operator is used to deallocate the memory, the destructors of the derived and base classes will be called in order only when the base destructor is declared virtual. Now my questions are: 1) When the destructor of base is not virtual, why does the problem of not calling the derived dtor occur only when using the "delete" operator, and why not in the case given below: derived drvd; base *bPtr; bPtr = &drvd; //DTOR called in proper order when it goes out of scope. 2) When the "delete" operator is used, who is responsible for calling the destructor of the class? Will the operator delete have an implementation that calls the DTOR, or does the compiler write some extra stuff? If the operator has the implementation, then what does it look like? [I need sample code showing how this would have been implemented.] 3) If the virtual keyword is used in this example, how does operator delete now know which DTOR to call? Fundamentally, I want to know who calls the dtor of the class when delete is used. Sample Code class base { public: base() { cout<<"Base CTOR called"<<endl; } virtual ~base() { cout<<"Base DTOR called"<<endl; } }; class derived:public base { public: derived() { cout<<"Derived CTOR called"<<endl; } ~derived() { cout<<"Derived DTOR called"<<endl; } }; I'm not sure if this is a duplicate; I couldn't find one in search. int main() { base *bPtr = new derived(); delete bPtr;// only when you explicitly try to delete an object return 0; }

    Read the article

  • How to manage multiple versions of the same record

    - by Darvis Lombardo
    I am doing short-term contract work for a company that is trying to implement a check-in/check-out type of workflow for their database records. Here's how it should work... 1) A user creates a new entity within the application. There are about 20 related tables that will be populated in addition to the main entity table. 2) Once the entity is created the user will mark it as the master. 3) Another user can make changes to the master only by "checking out" the entity. Multiple users can checkout the entity at the same time. 4) Once the user has made all the necessary changes to the entity, they put it in a "needs approval" status. 5) After an authorized user reviews the entity, they can promote it to master which will put the original record in a tombstoned status. The way they are currently accomplishing the "check out" is by duplicating the entity records in all the tables. The primary keys include EntityID + EntityDate, so they duplicate the entity records in all related tables with the same EntityID and an updated EntityDate and give it a status of "checked out". When the record is put into the next state (needs approval), the duplication occurs again. Eventually it will be promoted to master at which time the final record is marked as master and the original master is marked as dead. This design seems hideous to me, but I understand why they've done it. When someone looks up an entity from within the application, they need to see all current versions of that entity. This was a very straightforward way for making that happen. But the fact that they are representing the same entity multiple times within the same table(s) doesn't sit well with me, nor does the fact that they are duplicating EVERY piece of data rather than only storing deltas. I would be interested in hearing your reaction to the design, whether positive or negative. I would also be grateful for any resoures you can point me to that might be useful for seeing how someone else has implemented such a mechanism. Thanks! Darvis

    Read the article

  • How and why do I set up a C# build machine?

    - by mmr
    Hi all, I'm working with a small (4 person) development team on a C# project. I've proposed setting up a build machine which will do nightly builds and tests of the project, because I understand that this is a Good Thing. Trouble is, we don't have a whole lot of budget here, so I have to justify the expense to the powers that be. So I want to know: What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts? What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests. What kind of hardware will I need for this? Once a build has been finished and tested, is it common practice to put that build up on an FTP site or have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to. How often should we make this kind of build? How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so? Is there anything else I'm not seeing here? I realize that this is a very large topic, and I'm just starting out. I couldn't find a duplicate of this question here, and if there's a book out there I should just get, please let me know. EDIT: I finally got it to work! Hudson is completely fantastic, and FxCop is showing that some features we thought were implemented were actually incomplete. We also had to change the installer type from Old-And-Busted vdproj to New Hotness WiX. Basically, for those who are paying attention, if you can run your build from the command line, then you can put it into Hudson. Making the build run from the command line via MSBuild is a useful exercise in itself, because it forces your tools to be current.

    Read the article

  • How to create 2 different scripts for 2 browsers (IE+all the rest)

    - by Barnabas Nagy
    I'm trying to solve an IE CSS issue by using conditional tags. My jQuery, which produces the tabbed box effect, calls an id that is inside the conditional tags. I've given IE new ids, so the jQuery no longer works in IE. I tried duplicating the script with the new id, but the 2 scripts seem to conflict with each other. My jQuery (in between the head tags) is (for all browsers except IE): <script> var currentTab = 0; // Set to a different number to start on a different tab. function openTab(clickedTab) { var thisTab = $(".tabbed-box .tabs a").index(clickedTab); $(".tabbed-box .tabs li a").removeClass("active"); $(".tabbed-box .tabs li a:eq("+thisTab+")").addClass("active"); $(".tabbed-box .tabbed-content").hide(); $(".tabbed-box .tabbed-content:eq("+thisTab+")").show(); currentTab = thisTab; } $(document).ready(function() { $(".tabs li:eq(0) a").css("border-left", "none"); $(".tabbed-box .tabs li a").click(function() { openTab($(this)); return false; }); $(".tabbed-box .tabs li a:eq("+currentTab+")").click(); }); </script> And I would need this for IE: <script> var currentTab = 0; // Set to a different number to start on a different tab. function openTab(clickedTab) { var thisTab = $(".tabbed-box .tabs a").index(clickedTab); $(".tabbed-box .ie-tabs li a").removeClass("active"); $(".tabbed-box .ie-tabs li a:eq("+thisTab+")").addClass("active"); $(".tabbed-box .tabbed-content").hide(); $(".tabbed-box .tabbed-content:eq("+thisTab+")").show(); currentTab = thisTab; } $(document).ready(function() { $(".ie-tabs li:eq(0) a").css("border-left", "none"); $(".tabbed-box .ie-tabs li a").click(function() { openTab($(this)); return false; }); $(".tabbed-box .ie-tabs li a:eq("+currentTab+")").click(); }); </script> Having the 2 scripts in the head causes them to conflict with each other. Maybe there is a way to combine them?

    Read the article

  • What are the weaknesses of this user authentication method?

    - by byronh
    I'm developing my own PHP framework. It seems all the security articles I have read use vastly different methods for user authentication than I do, so I could use some help in finding security holes. Some information that might be useful before I start: I use mod_rewrite for my MVC URLs. Passwords are hashed with sha1 and md5, using a 24-character salt unique to each user. mysql_real_escape_string and/or variable typecasting is applied to everything going in, and htmlspecialchars to everything coming out. Step-by-step process: Top of every page: session_start(); session_regenerate_id(); If the user logs in via the login form, generate a new random token and put it in the user's MySQL row. A hash is generated based on the user's salt (from when they first registered) and the new token. Store the hash and plaintext username in session variables, and duplicate them in cookies if 'Remember me' is checked. On every page, check for cookies. If cookies are set, copy their values into session variables. Then compare $_SESSION['name'] and $_SESSION['hash'] against the MySQL database. Destroy all cookies and session variables if they don't match so the user has to log in again. If the login is valid, some of the user's information from the MySQL database is stored in an array for easy access. So far, I've assumed that this array is clean, so when limiting user access I refer to user.rank and deny access if it's below what's required for that page. I've tried to test all the common attacks like XSS and CSRF, but maybe I'm just not good enough at hacking my own site! My system seems way too simple for it to actually be secure (the security code is only 100 lines long). What am I missing? I've also spent a lot of time searching for the vulnerabilities with mysql_real_escape_string but I haven't found any information that is up-to-date (everything is from several years ago at least and has apparently been fixed). All I know is that the problem was something to do with encoding. If that problem still exists today, how can I avoid it? Any help will be much appreciated.
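    As a language-neutral sketch of the verification step described above (written in C++ purely for illustration, with std::hash standing in for the real salted sha1/md5 -- it is NOT a cryptographic hash, and none of the names below come from the poster's code):

        #include <functional>
        #include <iostream>
        #include <random>
        #include <string>

        // Placeholder for the real salted sha1/md5 hashing; std::hash is used
        // only so the sketch is self-contained and compilable.
        std::string makeHash(const std::string& salt, const std::string& token) {
            return std::to_string(std::hash<std::string>{}(salt + token));
        }

        // On login: generate a fresh random token to store in the user's DB row.
        std::string newLoginToken() {
            std::mt19937_64 rng{std::random_device{}()};
            return std::to_string(rng());
        }

        // On every page: recompute hash(salt, token) from the database values
        // and compare it with the hash presented by the session or cookie.
        bool sessionIsValid(const std::string& userSalt,
                            const std::string& dbToken,
                            const std::string& presentedHash) {
            return makeHash(userSalt, dbToken) == presentedHash;
        }

        int main() {
            const std::string salt  = "24-char-salt-goes-here..";  // per-user salt
            const std::string token = newLoginToken();             // written to DB at login
            const std::string hash  = makeHash(salt, token);       // stored in session/cookie

            std::cout << std::boolalpha
                      << sessionIsValid(salt, token, hash)       << "\n"   // true
                      << sessionIsValid(salt, token, "tampered") << "\n";  // false
            return 0;
        }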

    Read the article
