Search Results

Search found 9492 results on 380 pages for 'logic unit'.


  • Deleting a node in a family tree

    - by user559142
    Hi, I'm trying to work out the best way to delete a node in a family tree. First, a little description of how the app works. My app makes the following assumptions: Any node can only have one partner. That means any child a node has is also the partner node's child, so step-relations, divorces etc. aren't catered for. A node always has two parents - a mother and father cannot be added separately. If the user doesn't know the details, the node's attributes are set to default values. Also, any node can add parents, siblings and children to itself, so in-law relationships can be added. I have the following classes:

        FamilyMember
            String fName;
            String lName;
            String dob;
            String gender;
            FamilyMember mother, father, partner;
            ArrayList<FamilyMember> children;
            int index;
            int generation;
            void linkParents();
            void linkPartner();
            void addChild();
            // gets & sets for fields

        Family
            ArrayList<FamilyMember> family;
            void addMember();
            void removeMember();
            FamilyMember getFamilyMember(index);
            ArrayList<FamilyMember> getFamilyMembers();

        FamilyTree
            Family family;
            void removeMember(); // need help
            void displayFamilyMembers();
            void addFamilyMember();
            void enterDetails();
            void displayAncestors();
            void displayDescendants();
            void printDescendants();
            FamilyMember findRootNode();
            void sortGenerations();
            void getRootGeneration();

    I am having trouble identifying the logic for removing a member; all the other functions work fine. Has anyone developed a family tree app before who knows how to deal with removing the various kinds of node in the family "tree"? E.g.:

    - removing a leaf
    - removing a leaf with a partner (what if the partner has parents, etc.)
    - removing a parent

    It seems to be another recursive problem, but my head is swelling from overthought.
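
    The case analysis I keep sketching looks something like this (C#-style pseudocode for the cases listed above; Detach and the other helper names are made up, not real code):

        // Rough sketch only - Detach(m) would remove m from the family
        // list and from its parents' child lists.
        void RemoveMember(FamilyMember m)
        {
            // Case 1: plain leaf - just detach it.
            if (m.Children.Count == 0 && m.Partner == null)
            {
                Detach(m);
                return;
            }

            // Case 2: leaf with a partner - unlink the partner first; if
            // the partner has no parents of its own, it now hangs loose,
            // so recurse to remove it too.
            if (m.Children.Count == 0 && m.Partner != null)
            {
                FamilyMember p = m.Partner;
                m.Partner = null;
                p.Partner = null;
                Detach(m);
                if (p.Mother == null && p.Father == null)
                    RemoveMember(p);   // partner was only here because of m
                return;
            }

            // Case 3: a parent - each child keeps its other parent, or
            // falls back to the default "unknown" parent attributes.
            foreach (FamilyMember c in new List<FamilyMember>(m.Children))
            {
                if (c.Mother == m) c.Mother = null;
                else c.Father = null;
            }
            Detach(m);
        }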

  • Objective-C memory management issue

    - by Toby Wilson
    I've created a graphing application that calls a web service. The user can zoom & move around the graph, and the program occasionally decides to call the web service for more data accordingly. This is achieved by the following process: the graph has a render loop which constantly renders the graph, and some decision logic which adds web service call information to a stack. A separate thread takes the most recent web service call information from the stack and uses it to make the web service call. The other objects on the stack get binned. The idea of this is to reduce the number of web service calls to only those appropriate, and only one at a time. Right, with the long story out of the way (for which I apologise), here is my memory management problem: The graph has persistent (and suitably locked) NSDate* objects for the currently displayed start & end times of the graph. These are passed into the initialisers for my web service request objects, which then retain the dates. After the web service calls have been made (or binned if they were out of date), they release the NSDate*. The graph itself releases and reallocates new NSDate*s on the 'touches ended' event. If there is only one web service call object on the stack when removeAllObjects is called, EXC_BAD_ACCESS occurs in the web service call object's dealloc method when it attempts to release the date objects (even though they appear to exist and are in scope in the debugger). If, however, I comment out the release messages from the dealloc, no memory leak occurs when one object on the stack is released, but memory leaks occur if there is more than one object on the stack. I have absolutely no idea what is going wrong. It doesn't make a difference what storage semantics I use for the web service call objects' dates, as they are assigned in the initialiser and then only read (so for correctness' sake they are set to readonly). It also doesn't seem to make a difference whether I retain or copy the dates in the initialiser (though anything else obviously falls out of scope or is unwantedly released elsewhere and causes a crash). I'm sorry this explanation is long-winded; I hope it's sufficiently clear, but I'm not gambling on that either, I'm afraid. Major big thanks to anyone that can help, or can even suggest anything I may have missed.

  • ANSI C as core of a C# project? Is this possible?

    - by Nektarios
    I'm writing a NON-GUI app which I want to be cross-platform between OS X and Windows. I'm looking at the following architecture, but I don't know if it will work on the Windows side:

        (Platform specific entry point)
          -> ANSI C main loop
          -> ANSI C model code doing data processing / logic
          -> (Platform specific helpers)

    So the core stuff I'm planning to write in regular ANSI C, because A) it should be platform independent, B) I'm extremely comfortable with C, and C) it can do the job and do it well. (Platform specific entry point) can be written in whatever is necessary to get the job done; this is a small amount of code, it doesn't matter to me. (Platform specific helpers) is the sticky thing. This is stuff like parsing XML, accessing databases, graphics toolkit stuff, whatever - things that aren't easy in C and that modern languages/frameworks will give you for free. On OS X this code will be written in Objective-C interfacing with Cocoa. On Windows I'm thinking my best bet is to use C#. So on Windows my architecture (simplified) looks like:

        (C# or C?) -> ANSI C -> C#

    Is this possible? Some thoughts/suggestions so far:

    1) Compile my C core as a .dll - this is fine, but it seems there's no way to call my C# helpers unless I can somehow get function pointers and pass them to my core, but that seems unlikely.
    2) Compile a C .exe and a C# .exe and have them talk via shared memory or some kind of IPC. I'm not entirely opposed to this, but it obviously introduces a lot of complexity, so it doesn't seem ideal.
    3) Instead of C#, use C++; it gets me some nice data management stuff and nice helper code, I can mix it pretty easily, and the work I do could probably easily port to Linux. But I really don't like C++, and I don't want this to turn into a third-party-library-fest. Not that it's a huge deal, but it's 2010... anything for basic data management should be built in. And targeting Linux is really not a priority.

    Note that no "total" alternatives are OK, as suggested in other similar questions on SO I've seen: Java, REALbasic, Mono... this is an extremely performance-intensive application doing soft realtime for game/simulation purposes; I need C & friends here to do it right (maybe you don't, but I do).
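
    Regarding option 1, what I meant by passing function pointers is something like the following - I just don't know whether it's actually viable ("core.dll" and every name below are made up for the example):

        // Hypothetical sketch: the C core exports setter functions, and
        // C# hands it delegates that the marshaler turns into C function
        // pointers, so the C main loop can call back into the helpers.
        using System;
        using System.Runtime.InteropServices;

        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        delegate void ParseXmlCallback(string path);

        static class Core
        {
            [DllImport("core.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern void core_set_xml_helper(ParseXmlCallback cb);

            [DllImport("core.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern void core_run();
        }

        class Program
        {
            // Kept in a static so the GC never collects the delegate
            // while native code still holds the function pointer.
            static readonly ParseXmlCallback xmlHelper = path =>
                Console.WriteLine("parse " + path);   // real helper work here

            static void Main()
            {
                Core.core_set_xml_helper(xmlHelper);
                Core.core_run();   // C main loop calls back into C# as needed
            }
        }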

  • How should I organize my C# classes? [closed]

    - by oscar.fimbres
    I'm creating an email generator system. I'm creating some classes and trying to do things right. So far I have created 5 classes (see the class diagram); I'll explain each one.

    - Person. Not a big deal: just two constructors, Person(fname, lname1, lname2) and Person(token, fname, lname1, lname2). Note that the email property stays without a value.
    - StringGenerator. A static class with one public function: Generate. The function receives a Person and returns a list of patterns for the email.
    - MySql. Contains everything necessary to connect to a database.
    - Database. Inherits from the MySql class and has functions particular to this database. It gets all the records from a table (function GetPeople) and returns a List; each person in the list contains all data except Email. It can also add records (a List, but each person must have an available email). An email is available when no other person already has it; for that reason I have a method named ExistsEmail.
    - Container. This is the class that is causing me problems. It's a temporary container: it's supposed to hold a people list from GetPeople (in the Database class), and for each person it adds, it must generate a list of possible emails (StringGenerator.Generate), select one from the list, and check whether it already exists in the database or in the same container. As I said, this is temporary - it may be that none of the possible emails is available, in which case the user can enter or modify a custom available email and update the list in this container. When every person's email is available, it sends the list to be added to the database; it must have a Flush method to insert all the people into the database.

    I'm trying to design the classes correctly. I need a little help to improve or edit them, because I want to separate the logic from the presentation, and to learn from you. I hope you've been able to understand me. Any question or doubt, please let me know. Anyway, I attached the solution here to better understand it: http://www.megaupload.com/?d=D94FH8GZ
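
    To make the Container part clearer, here is roughly what I have in mind (all the names are my own guesses, not final code):

        // Sketch of the temporary container: proposes emails, checks
        // availability against both the database and itself, and only
        // commits once everyone has a free address.
        public class Container
        {
            private readonly Database db;
            private readonly List<Person> pending = new List<Person>();

            public Container(Database db) { this.db = db; }

            public void Add(Person p)
            {
                // Take the first candidate free in both the db and here.
                foreach (string candidate in StringGenerator.Generate(p))
                {
                    if (!db.ExistsEmail(candidate) && !IsPendingEmail(candidate))
                    {
                        p.Email = candidate;
                        break;
                    }
                }
                pending.Add(p);   // p.Email may still be empty -> user must fix it
            }

            private bool IsPendingEmail(string email)
            {
                foreach (Person p in pending)
                    if (p.Email == email) return true;
                return false;
            }

            public void Flush()
            {
                // Only valid once every pending person has an available email.
                db.AddPeople(pending);   // hypothetical Database method
                pending.Clear();
            }
        }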

  • WCF App using Peer Chat app as example does not work.

    - by splate
    I converted a VB .NET 3.5 app to use peer-to-peer WCF, using Microsoft's available example of the Chat app. I made sure that I copied the app.config file from the sample (modified the names for my app) and added the appropriate references. I followed all the tutorials and added the appropriate tags and structure in my app code. Everything runs without any errors, but the clients only get messages from themselves and not from the other clients. The sample chat application does run just fine with multiple clients. The only difference I could find is that the server in the sample targets framework 2.0, but I assume that is wrong and it is building against at least 3.0, or the System.ServiceModel reference would break. Is there something that has to be registered that the sample is doing behind the scenes, or is the sample a special project type? I am confused. My next step is to copy all my classes and logic from my app into the sample app, but that is likely a lot of work. Here is my client App.config:

        <client>
          <endpoint name="thldmEndPoint"
                    address="net.p2p://thldmMesh/thldmServer"
                    binding="netPeerTcpBinding"
                    bindingConfiguration="PeerTcpConfig"
                    contract="THLDM_Client.IGameService"></endpoint>
        </client>
        <bindings>
          <netPeerTcpBinding>
            <binding name="PeerTcpConfig" port="0">
              <security mode="None"></security>
              <resolver mode="Custom">
                <custom address="net.tcp://localhost/thldmServer"
                        binding="netTcpBinding"
                        bindingConfiguration="TcpConfig"></custom>
              </resolver>
            </binding>
          </netPeerTcpBinding>
          <netTcpBinding>
            <binding name="TcpConfig">
              <security mode="None"></security>
            </binding>
          </netTcpBinding>
        </bindings>

    Here is my server App.config:

        <services>
          <service name="System.ServiceModel.PeerResolvers.CustomPeerResolverService">
            <host>
              <baseAddresses>
                <add baseAddress="net.tcp://localhost/thldmServer"/>
              </baseAddresses>
            </host>
            <endpoint address="net.tcp://localhost/thldmServer"
                      binding="netTcpBinding"
                      bindingConfiguration="TcpConfig"
                      contract="System.ServiceModel.PeerResolvers.IPeerResolverContract">
            </endpoint>
          </service>
        </services>
        <bindings>
          <netTcpBinding>
            <binding name="TcpConfig">
              <security mode="None"></security>
            </binding>
          </netTcpBinding>
        </bindings>

    Thanks ahead of time.

  • How to prevent mvn jetty:run from executing test phase?

    - by tputkonen
    We use MySQL in production and Derby for unit tests. Our pom.xml copies the Derby version of persistence.xml before tests, and replaces it with the MySQL version in the prepare-package phase:

        <plugin>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>1.3</version>
          <executions>
            <execution>
              <id>copy-test-persistence</id>
              <phase>process-test-resources</phase>
              <configuration>
                <tasks>
                  <!-- replace the "proper" persistence.xml with the "test" version -->
                  <copy file="${project.build.testOutputDirectory}/META-INF/persistence.xml.test"
                        tofile="${project.build.outputDirectory}/META-INF/persistence.xml"
                        overwrite="true" verbose="true" failonerror="true" />
                </tasks>
              </configuration>
              <goals>
                <goal>run</goal>
              </goals>
            </execution>
            <execution>
              <id>restore-persistence</id>
              <phase>prepare-package</phase>
              <configuration>
                <tasks>
                  <!-- restore the "proper" persistence.xml -->
                  <copy file="${project.build.outputDirectory}/META-INF/persistence.xml.production"
                        tofile="${project.build.outputDirectory}/META-INF/persistence.xml"
                        overwrite="true" verbose="true" failonerror="true" />
                </tasks>
              </configuration>
              <goals>
                <goal>run</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

    The problem is that if I execute mvn jetty:run, it will execute the test persistence.xml copy task before starting Jetty. I want it to run using the deployment version. How can I fix this?

  • Visual Studio 2010 colourizers, IntelliSense and the rest. Where to start?!

    - by Owen
    Ok, before I begin, I realize that there is a lot of documentation on this subject, but I have thus far failed to get even basic colourization working for VS2010. My goal is simply to get to a point where I can open a document and everything is coloured red; from there I can implement the relevant parsing logic. Here's what I have tried/found:

    1) Downloaded all the relevant SDKs and such. Found the Ook sample (http://code.msdn.microsoft.com/ookLanguage) - didn't build, didn't work.
    2) Knowing almost nothing about MEF, read through "Implementing a Language Service By Using the Managed Package Framework" - http://msdn.microsoft.com/en-us/library/bb166533(v=VS.100).aspx. This was pretty much a copy and paste of all the basic stuff there, plus updating some references which were out of date with the sample, see: http://social.msdn.microsoft.com/Forums/en-US/vsx/thread/a310fe67-afd2-4592-b295-3fc86fec7996

    Now, I have got to a point where, when running the package, MEF appears to have hooked up correctly (I know this because with the debugger open I can see that the package's initialize and FDoIdle methods are being hit). When I open a file of the extension I have registered with the ProvideLanguageExtensionAttribute, everything dies as if in an endless loop, yet no breakpoints are hit (though the symbols are loaded). Looking at the Ook sample and the MEF examples, they seem to be totally different approaches to the same problem. In the Ook sample there are notions of classifications and completion controllers which aren't mentioned in the MEF example. Also, they don't seem to create a Package or language service, so I have no idea how it should work. With the MEF example, my assumption is that I need to hook into IScanner.ScanTokenAndProvideInfoAboutIt to provide syntax highlighting? Which would be fine if I could ever hit this method. So my first question, I guess, is: which approach should I be taking here? Or do they both somehow tie together? My second question is: where can I find a basic, fully working project that implements bog-standard basic syntax highlighting and IntelliSense for VS2010? Thirdly, in the MEF example, when I created a Package there were a bunch of test projects created for me. It appears that the integration tests launch the VS2010 test rig somehow, but the test fails. It would be good to write my service with tests, but I have no idea what/how I can test each interaction, so any references on testing language services would be helpful. Finally, please throw any resource/book links my way that I may find useful. Cheers, Chris. N.B. Sorry, I realize this is part question, part rant, but I have never been so confused.
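
    For what it's worth, my current understanding of the MEF route is that "everything red" should only need something like the following (untested, and the "mylang" names are placeholders for whatever content type I end up registering):

        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using Microsoft.VisualStudio.Text;
        using Microsoft.VisualStudio.Text.Classification;
        using Microsoft.VisualStudio.Utilities;

        [Export(typeof(IClassifierProvider))]
        [ContentType("mylang")]                 // my registered content type
        internal class RedAllProvider : IClassifierProvider
        {
            [Import]
            internal IClassificationTypeRegistryService Registry = null;

            public IClassifier GetClassifier(ITextBuffer buffer)
            {
                return buffer.Properties.GetOrCreateSingletonProperty(
                    () => new RedAllClassifier(Registry));
            }
        }

        internal class RedAllClassifier : IClassifier
        {
            private readonly IClassificationType type;

            public RedAllClassifier(IClassificationTypeRegistryService registry)
            {
                // A classification type I'd define elsewhere and format red.
                type = registry.GetClassificationType("mylang.everything");
            }

            public IList<ClassificationSpan> GetClassificationSpans(SnapshotSpan span)
            {
                // Colour the whole requested span - the "everything red" start.
                return new List<ClassificationSpan>
                {
                    new ClassificationSpan(span, type)
                };
            }

            public event System.EventHandler<ClassificationChangedEventArgs>
                ClassificationChanged { add { } remove { } }
        }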

  • Separation of domain and UI layer in a composite

    - by hansmaad
    Hi all, I'm wondering if there is a pattern for separating the domain logic of a class from the UI responsibilities of the objects in the domain layer. Example:

        // Domain classes
        interface MachinePart {
            CalculateX(in, out)
            // Where do we put these:
            // Draw(Screen) ??
            // ShowProperties(View) ??
            // ...
        }
        class Assembly : MachinePart {
            CalculateX(in, out)
            subParts
        }
        class Pipe : MachinePart {
            CalculateX(in, out)
            length, diameter...
        }

    There is an application that calculates the value X for machines assembled from many machine parts. The assembly is loaded from a file representation and is designed as a composite. Each concrete part class stores some data and implements the CalculateX(in, out) method to simulate the behaviour of the whole assembly. The application runs well, but without a GUI. To increase usability, a GUI should be developed on top of the existing implementation (changes to the existing code are allowed). The GUI should show a schematic graphical representation of the assembly and provide part-specific dialogs to edit several parameters. To achieve these goals, the application needs new functionality for each machine part: drawing a schematic representation on the screen, showing a property dialog, and other things not related to the domain of machine simulation. I can think of several ways to implement a Draw(Screen) functionality for each part, but I am not happy with any of them. First, I could add a Draw(Screen) method to the MachinePart interface, but this would mix up domain code with UI code, and I'd have to add a lot of functionality to each machine part class, which makes my domain model hard to read and hard to understand. Another "simple" solution is to make all parts visitable and implement the UI code in visitors, but Visitor is not one of my favourite patterns. I could derive UI variants from each machine part class and add the UI implementation there, but I'd have to check that each part class is suited for inheritance, and I'd have to be careful about changes to the base classes. My current favourite design is to create a parallel composite hierarchy where each component stores data to define a machine part, implements the UI methods, and has a factory method which creates instances of the corresponding domain classes, so that I can "convert" a UI assembly to a domain assembly. But there are problems going back from the created domain hierarchy to the UI hierarchy, for example to show calculation results in the drawing (imagine some parts store values during the calculation that I want to show in the schematic representation after the simulation). Maybe there are some proven patterns for such problems?
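
    To make my current favourite more concrete, a minimal sketch of the parallel hierarchy with a back-link for results (written in C# syntax purely for illustration; the names mirror the pseudocode above):

        // UI-side part that owns presentation and builds its domain twin,
        // keeping a reference to it so calculation results can be read
        // back for the schematic drawing after the simulation.
        abstract class UiPart
        {
            public MachinePart Domain { get; private set; }  // back-link

            public MachinePart BuildDomainPart()
            {
                Domain = CreateDomainPart();  // factory method per subclass
                return Domain;
            }

            protected abstract MachinePart CreateDomainPart();
            public abstract void Draw(Screen screen);
        }

        class UiPipe : UiPart
        {
            public double Length;
            public double Diameter;

            protected override MachinePart CreateDomainPart()
            {
                return new Pipe(Length, Diameter);
            }

            public override void Draw(Screen screen)
            {
                // schematic drawing only; calculated values are read
                // through the Domain back-link after the simulation
            }
        }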

  • Issues querying Access '07 database in C#

    - by Kye
    I'm doing a .NET unit as part of my studies. I've only just started, with a lecturer that has kinda failed to give me the most solid foundation with .NET, so excuse the noobishness. I'm making a pretty simple and generic database-driven application. I'm using C# and I'm accessing a Microsoft Access 2007 database. I've put the database-ish stuff in its own class, with the methods just spitting out OleDbDataAdapters that I use for committing. I feed any methods which perform a query a DataSet object from the main program, which is where I'm keeping the data (multiple tables in the db). I've made a very generic private method that I use to perform SQL SELECT queries, and some public methods wrapping it to get products, orders, etc. (it's a generic retail database). The generic method uses a separate Connect method to actually make the connection, which is as follows:

        private static OleDbConnection Connect()
        {
            OleDbConnection conn = new OleDbConnection(
                @"Provider=Microsoft.ACE.OLEDB.12.0;
                Data Source=C:\Temp\db.accdb");
            return conn;
        }

    The generic method is as follows:

        private static OleDbDataAdapter GenericSelectQuery(
            DataSet ds, string namedTable, String selectString)
        {
            OleDbCommand oleCommand = new OleDbCommand();
            OleDbConnection conn = Connect();
            oleCommand.CommandText = selectString;
            oleCommand.Connection = conn;
            oleCommand.CommandType = CommandType.Text;
            OleDbDataAdapter adapter = new OleDbDataAdapter();
            adapter.SelectCommand = oleCommand;
            adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;
            adapter.Fill(ds, namedTable);
            return adapter;
        }

    The wrapper methods just pass along the DataSet they received from the main program; the namedTable string is the name of the table in the dataset, and you pass in the query you wish to make. It doesn't matter which query I give it (even something simple like SELECT * FROM TableName), I still get thrown an OleDbException stating that there was an error with the FROM clause of the query. I've resorted to building the queries with Access, but still no luck. Obviously there's something wrong with my code, which wouldn't actually surprise me. Here is one of the wrapper methods I'm using; they all look the same, it's just the SQL that changes.

        public static OleDbDataAdapter GetOrderLines(DataSet ds)
        {
            OleDbDataAdapter adapter = GenericSelectQuery(
                ds, "orderlines", "SELECT OrderLine.* FROM OrderLine;");
            return adapter;
        }
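
    For completeness, this is roughly how the main program consumes the adapters (simplified, and the column names here are placeholders, not my real schema):

        // Read via the filled DataSet, then commit changes back through
        // the same adapter (OleDbCommandBuilder generates the commands).
        DataSet ds = new DataSet();
        OleDbDataAdapter orderLines = DatabaseLayer.GetOrderLines(ds);

        foreach (DataRow row in ds.Tables["orderlines"].Rows)
            Console.WriteLine(row["Quantity"]);

        ds.Tables["orderlines"].Rows[0]["Quantity"] = 2;
        new OleDbCommandBuilder(orderLines);   // builds UPDATE/INSERT/DELETE
        orderLines.Update(ds, "orderlines");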

  • A few problems with Delphi involving Mail Merge, SQL + Databases.

    - by Daniel
    My first problem is with mail merge. I have created a Data File and a table, yet I am not able to fill my table with information from my Data File. The << >> merge field just seems to be inserted wherever the cursor is on the page, which is not where the table is. All that is entered into the actual table is a '59'. Therefore I think I either need to change the code or be able to move the cursor. Here is the code I am currently using:

        wrdDoc.Tables.Add(wrdSelection.Range, ADOTable1.FieldCount, 3);
        wrdDoc.Tables.Item(1).Columns.Item(1).SetWidth(51, wdAdjustNone);
        wrdDoc.Tables.Item(1).Columns.Item(2).SetWidth(20, wdAdjustNone);
        wrdDoc.Tables.Item(1).Columns.Item(3).SetWidth(100, wdAdjustNone);
        // Set the shading on the first row to light gray
        wrdDoc.Tables.Item(1).Rows.Item(1).Cells.Shading.BackgroundPatternColorIndex := wdGray25;
        // BOLD the first row
        wrdDoc.Tables.Item(1).Rows.Item(1).Range.Bold := True;
        // Center the text in Cell (1,1)
        wrdDoc.Tables.Item(1).Cell(1,1).Range.Paragraphs.Alignment := wdAlignParagraphCenter;
        // Fill each row of the table with data
        wrdDoc.Tables.Item(1).Cell(1, 1).Range.InsertAfter('Time');
        wrdDoc.Tables.Item(1).Cell(1, 2).Range.InsertAfter('');
        wrdDoc.Tables.Item(1).Cell(1, 3).Range.InsertAfter('Teacher');
        for Count := 1 to (ADOTable1.FieldCount - 1) do
        begin
          wrdDoc.Tables.Item(1).Cell((Count + 1), 1).Range.InsertAfter(wrdSelection.Range, 'Time' + IntToStr(Count));
          wrdDoc.Tables.Item(1).Cell((Count + 1), 2).Range.InsertAfter(wrdSelection.Range, 'THonorific' + IntToStr(Count));
          wrdDoc.Tables.Item(1).Cell((Count + 1), 3).Range.InsertAfter(wrdSelection.Range, 'TSurname' + IntToStr(Count));
        end;

    My second problem is that I do not know the correct SQL syntax for renaming a column in the database (I am using Delphi 7 and the Microsoft Jet engine, if that makes a difference). The third problem is that when I add a new column to my database manually (which I need to do), I get a 'violation' error in one of my units when I activate an ADOTable. This only happens in one unit, and it happens when I add a column with any name, anywhere in the table. I know that is vague, but I can't seem to narrow down the problem any further than that. If you could help me with any of those, it would be great. Thanks.

  • Writing a mini-language in Haskell: trouble with "while" statements and blocks { }

    - by Nibirue
    EDIT: problem partially solved, skip to the bottom for the update. I'm writing a small language using Haskell, and I've made a lot of progress, but I am having trouble implementing statements that use blocks, like "{ ... }". I've implemented support for if statements like so in my parser file:

        stmt = skip +++ ifstmt +++ assignment +++ whilestmt

        ifstmt = symbol "if" >> parens expr >>= \c ->
                 stmt >>= \t ->
                 symbol "else" >> stmt >>= \e ->
                 return $ If c t e

        whilestmt = symbol "while" >> parens expr >>= \c ->
                    symbol "\n" >> symbol "{" >> stmt >>= \t ->
                    symbol "}" >> return $ While c t

        expr = composite +++ atomic

    And in the syntax file:

        class PP a where
          pp :: Int -> a -> String

        instance PP Stmt where
          pp ind (If c t e) = indent ind ++ "if (" ++ show c ++ ") \n"
                              ++ pp (ind + 2) t
                              ++ indent ind ++ "else\n"
                              ++ pp (ind + 2) e
          pp ind (While c t) = indent ind ++ "while (" ++ show c ++ ") \n"
                               ++ "{" ++ pp (ind + 2) t ++ "}" ++ indent ind

    Something is wrong with the while statement, and I don't understand what. The logic seems correct, but when I run the code I get an error. EDIT: Fixed the first problem based on the first reply; now it is not recognizing my while statement, which I assume comes from this:

        exec :: Env -> Stmt -> Env
        exec env (If c t e) = exec env ( if eval env c == BoolLit True then t else e )
        exec env (While c t) = exec env ( if eval env c == BoolLit True then t )

    The file being read from looks like this:

        x = 1;
        c = 0;
        if (x < 2)
          c = c + 1;
        else
          ;

        -- SEPARATE FILES FOR EACH

        x = 1;
        c = 1;
        while (x < 10)
        {
          c = c * x;
          x = x + 1;
        }
        c

    I've tried to understand the error report, but nothing I've tried solves the problem.

  • Can I mix declarative and programmatic layout in GWT 2.0?

    - by stuff22
    I'm trying to redo an existing panel that I made before GWT 2.0 was released. The panel has a few text fields and a scrollable panel below, in a VerticalPanel. What I'd like to do is make the scrollable panel with UiBinder and then add that to a VerticalPanel. Below is an example I created to illustrate this:

        public class ScrollTablePanel extends ResizeComposite {
            interface Binder extends UiBinder<Widget, ScrollTablePanel> { }
            private static Binder uiBinder = GWT.create(Binder.class);

            @UiField FlexTable table1;
            @UiField FlexTable table2;

            public ScrollTablePanel() {
                initWidget(uiBinder.createAndBindUi(this));
                table1.setText(0, 0, "testing 1");
                table1.setText(0, 1, "testing 2");
                table1.setText(0, 2, "testing 3");
                table2.setText(0, 0, "testing 1");
                table2.setText(0, 1, "testing 2");
                table2.setText(0, 2, "testing 3");
                table2.setText(1, 0, "testing 4");
                table2.setText(1, 1, "testing 5");
                table2.setText(1, 2, "testing 6");
            }
        }

    then the xml:

        <ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder'
                     xmlns:g='urn:import:com.google.gwt.user.client.ui'
                     xmlns:mail='urn:import:com.test.scrollpaneltest'>
          <g:DockLayoutPanel unit='EM'>
            <g:north size="2">
              <g:FlexTable ui:field="table1"></g:FlexTable>
            </g:north>
            <g:center>
              <g:ScrollPanel>
                <g:FlexTable ui:field="table2"></g:FlexTable>
              </g:ScrollPanel>
            </g:center>
          </g:DockLayoutPanel>
        </ui:UiBinder>

    Then do something like this in the EntryPoint:

        public void onModuleLoad() {
            VerticalPanel vp = new VerticalPanel();
            vp.add(new ScrollTablePanel());
            vp.add(new Label("dummy label text"));
            vp.setWidth("100%");
            RootLayoutPanel.get().add(vp);
        }

    But when I add the ScrollTablePanel to the VerticalPanel, only the first FlexTable (table1) is visible on the page, not the whole ScrollTablePanel. Is there a way to make this work, so that it is possible to mix declarative and programmatic layout in GWT 2.0?

  • Twisted + SQLAlchemy and the best way to do it.

    - by Khorkrak
    So I'm writing yet another Twisted-based daemon. It'll have an XML-RPC interface as usual, so I can easily communicate with it and have other processes interchange data with it as needed. This daemon needs to access a database. We've been using SQLAlchemy in place of hard-coded SQL strings for our latest projects - those mostly done for web apps in Pylons. We'd like to do the same for this app and re-use library code that makes use of SQLAlchemy. So what to do? Well of course, since that library was written for use in a Pylons app, it's all the straightforward blocking-style code that everyone is accustomed to, and all of the non-blocking is magically handled by Pylons via threading, thread locals, scoped sessions and so on. So now for Twisted I guess I'm a bit stuck. I could:

    1. Just write the SQL I need directly, if it's minimal, and use the dbapi pool in Twisted to do runInteractions etc. when I need to hit the db.
    2. Use the objects and inherently blocking methods in our library, and just block now and then in my Twisted daemon. Bah.
    3. Use sAsync, which was last updated in 2008, and kind of reuse the models we have defined already - but not really - and it doesn't address code that needs to work in Pylons either. Does that even work with the latest version of SQLAlchemy? Who knows. That project looked great, though - why was it apparently abandoned?
    4. Spawn a separate subprocess and have it deal with the library code and all its blocking, with the results returned to my daemon when ready as objects marshalled via YAML over XML-RPC.
    5. Use deferToThread and then expunge the objects returned, having made sure to do eager loads so that I have all the stuff I might need. Seems kind of ugh to me.

    I'm also stuck using Python 2.5.4 atm, so no 2.6 yet, and I don't think I can just do a from __future__ import to get access to the cool new multiprocessing module stuff in there. That's OK, though, I guess, as we've got dealing with interprocess communication down pretty well. So I'm leaning towards option 4, mostly as that would avoid the mortal sin of logic duplication with option 1, while also staying the heck away from threads. Any better ideas?

  • Do you use the logical negation operator (!) in "if" statements, or check "== false"?

    - by Taras Terebkov
    Hello everyone, I just want to conduct a little survey about the code style developers prefer. For me, there are two ways to write "if" in such languages as Java, C#, C++, etc.

    (1) Logical negation operator:

        public void foo() {
            if (!SessionManager.getInstance().hasActiveSession()) {
                . . . . .
            }
        }

    (2) Check on "false":

        public void foo() {
            if (SessionManager.getInstance().hasActiveSession() == false) {
                . . . . .
            }
        }

    I have always believed that the first way is much worse than the second one, because usually you don't "read" the code, but "recognize" it in one brief look, and the exclamation symbol slips from your mind, just disturbing you somewhere at the bottom of your unconscious. Only while reading the "if" block below do you understand that the logic is the opposite - no sessions in "if". On the other hand, in the second way of writing, the eye immediately catches the words "SessionManager", "hasActiveSession" and "false". Also, for me the situation with "true" is different. In code like

        class SessionManager {
            private bool hasSession;

            public void foo() {
                if (hasSession == true) {
                    . . . . .
                } else {
                    . . . . .
                }
            }
        }

    I find "true" superfluous. Why repeat the sentence twice? The following is shorter and quicker to catch:

        class SessionManager {
            private bool hasSession;

            public void foo() {
                if (hasSession) {
                    . . . . .
                } else {
                    . . . . .
                }
            }
        }

    What do YOU think, guys?

  • Unable to find assembly, C#

    - by PlasmaCube
    So, here's the deal. I've got two ASP.NET applications, both of which use SQL Server session state management, and both of which use the same server. I've got a custom session class in an external DLL which fully implements serialization, and which both applications reference. Each application, in turn, has a class which inherits from the DLL class, and both applications use their own respective classes for their session state. Now, what I was trying to accomplish was that if you go to the other application, it can look in the session (they all use the same session key), treat the existing object there as the base type (the one from the DLL), extract whatever login info it needs, then overwrite the session object with its own. Unfortunately, when the second application attempts to read the session, it seems that it looks for the DLL of the first application, and when it can't find it, it throws an exception. Is there a flaw in my logic? Here's an example:

        // Global.asax of the 1st app
        protected void Session_Start(object sender, EventArgs e)
        {
            // FirstUserSession inherits from BaseUserSession
            Session.Add("UserSessionKey", new FirstUserSession());
        }

    The second application:

        // Global.asax of 2nd app
        protected void Session_Start(object sender, EventArgs e)
        {
            if (Session["UserSessionKey"] != null)
            {
                BaseUserSession existing = (BaseUserSession)Session["UserSessionKey"];
                // SecondUserSession also inherits from BaseUserSession
                SecondUserSession session = new SecondUserSession();
                session.Authenticated = existing.Authenticated;
                session.Id = existing.Id;
                session.Role = existing.Role;
                Session.Add("UserSessionKey", session);
            }
            else
            {
                Session.Add("UserSessionKey", new SecondUserSession());
            }
        }

    Here's the exception stack trace. In this case, "MyCBC" is the real name of the first app, and "ASPTesting" is the second app.

        [SerializationException: Unable to find assembly 'MyCBC, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.]
        System.Runtime.Serialization.Formatters.Binary.BinaryAssemblyInfo.GetAssembly() +1871092
        System.Runtime.Serialization.Formatters.Binary.ObjectReader.GetType(BinaryAssemblyInfo assemblyInfo, String name) +7545734
        System.Runtime.Serialization.Formatters.Binary.ObjectMap..ctor(String objectName, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable) +120
        System.Runtime.Serialization.Formatters.Binary.ObjectMap.Create(String name, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable) +52
        System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryObjectWithMapTyped record) +190
        System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryHeaderEnum binaryHeaderEnum) +61
        System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run() +253
        System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +168
        System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +203
        System.Web.Util.AltSerialization.ReadValueFromStream(BinaryReader reader) +788
        System.Web.SessionState.SessionStateItemCollection.ReadValueFromStreamWithAssert() +55
        System.Web.SessionState.SessionStateItemCollection.DeserializeItem(String name, Boolean check) +281
        System.Web.SessionState.SessionStateItemCollection.get_Item(String name) +19
        System.Web.SessionState.HttpSessionStateContainer.get_Item(String name) +13
        System.Web.SessionState.HttpSessionState.get_Item(String name) +13
        ASPTesting._Default.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\sarsstu\My Documents\Projects\Testing\ASPTesting\ASPTesting\Default.aspx.cs:20
        System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14
        System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35
        System.Web.UI.Control.OnLoad(EventArgs e) +99
        System.Web.UI.Control.LoadRecursive() +50
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627

    Thanks to everyone in advance.
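
    The only workaround I've thought of so far is forcing the formatter to resolve the shared base type no matter which assembly serialized it, something like the binder sketched below - but I don't see where to hook this into SQL Server session state's BinaryFormatter, so it's untested:

        using System;
        using System.Runtime.Serialization;

        // Sketch: map any *UserSession type back to the shared base class
        // that lives in the common DLL both apps reference.
        class SessionBinder : SerializationBinder
        {
            public override Type BindToType(string assemblyName, string typeName)
            {
                if (typeName.EndsWith("UserSession"))
                    return typeof(BaseUserSession);   // from the shared DLL

                // default behaviour for everything else
                return Type.GetType(typeName + ", " + assemblyName);
            }
        }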

  • PHP-based session variable not retaining value. Works on localhost, but not on server.

    - by Foo
    I've been trying to debug this problem for many hours, but to no avail. I've been using PHP for many years and got back into it after a long hiatus, so I'm still a bit rusty. Anyway, my $_SESSION vars are not retaining their value for some reason that I can't figure out. The site worked perfectly on localhost, but uploading it to the server seemed to break it. The first thing I checked was the PHP.ini server settings; everything seems fine. In fact, my login system is session-based and it works perfectly. So now that I know $_SESSIONs are working properly and retaining their value for my login, I'm presuming the server is set up correctly and the problem is in my script. Here's a stripped version of the code that's causing the problem. $type, $order and $style are not being retained after they are set via a GET variable. The user clicks a link, which sets a variable via GET, and this variable should be retained for the remainder of their session. Is there some problem with my logic that I'm not seeing?

        <?php
        require_once('includes/top.php'); // first line includes a call to session_start();
        require_once('includes/db.php');

        $type  = filter_input(INPUT_GET, 't', FILTER_VALIDATE_INT);
        $order = filter_input(INPUT_GET, 'o', FILTER_VALIDATE_INT);
        $style = filter_input(INPUT_GET, 's', FILTER_VALIDATE_INT);

        /* According to the documentation, filter_input returns NULL when
           a variable is undefined. So, if t, o, or s are not set via the
           URL, $type, $order and $style will be NULL. */

        print_r($_SESSION);
        /* All other sessions, such as the login session, etc. are displayed
           here. After the sessions are set below, they are displayed up here
           too... simply with no value. This leads me to believe the problem
           is with the code below, perhaps? */

        // If $type is not null (meaning it WAS set via the GET method above)
        // or it's false because validation failed for some reason, then set
        // the session to $type. I removed the false check for simplicity.
        if (!is_null($type)) {
            $_SESSION['type'] = $type;
        }
        if (!is_null($order)) {
            $_SESSION['order'] = $order;
        }
        if (!is_null($style)) {
            $_SESSION['style'] = $style;
        }

        $smarty->display($template);
        ?>

    If anyone can point me in the right direction, I'd greatly appreciate it. Thanks.

  • How to store array of NSManagedObjects in an NSManagedObject

    - by David Tay
    I am loading my app with a property list of data from a web site. This property list file contains an NSArray of NSDictionaries, which itself contains an NSArray of NSDictionaries. Basically, I'm trying to load a tableView of restaurant menu categories, each of which contains menu items. My property list file is fine. I am able to load the file, loop through the node structure creating NSEntityDescriptions, and save to Core Data. Everything works as expected, except that in my menu category managed object I have an NSArray of menu items for that category. Later on, when I fetch the categories, the pointers to the menu items in a category are lost and I get all the menu items. Am I supposed to be using predicates, or does Core Data keep track of my object graph for me? Can anyone look at how I am loading Core Data and point out the flaw in my logic? I'm pretty good with either SQL or OOP by themselves, but am a little bewildered by ORM. I thought I should just be able to use aggregation in my managed objects and the framework would keep track of the pointers for me, but apparently not.

        NSError *error;
        NSURL *url = [NSURL URLWithString:@"http://foo.com"];
        NSArray *categories = [[NSArray alloc] initWithContentsOfURL:url];
        NSMutableArray *menuCategories = [[NSMutableArray alloc] init];
        for (int i = 0; i < [categories count]; i++) {
            MenuCategory *menuCategory = [NSEntityDescription
                insertNewObjectForEntityForName:@"MenuCategory"
                inManagedObjectContext:[self managedObjectContext]];
            NSDictionary *category = [categories objectAtIndex:i];
            menuCategory.name = [category objectForKey:@"name"];
            NSArray *items = [category objectForKey:@"items"];
            NSMutableArray *menuItems = [[NSMutableArray alloc] init];
            for (int j = 0; j < [items count]; j++) {
                MenuItem *menuItem = [NSEntityDescription
                    insertNewObjectForEntityForName:@"MenuItem"
                    inManagedObjectContext:[self managedObjectContext]];
                NSDictionary *item = [items objectAtIndex:j];
                menuItem.name = [item objectForKey:@"name"];
                menuItem.price = [item objectForKey:@"price"];
                menuItem.image = [item objectForKey:@"image"];
                menuItem.details = [item objectForKey:@"details"];
                [menuItems addObject:menuItem];
            }
            [menuCategory setValue:menuItems forKey:@"menuItems"];
            [menuCategories addObject:menuCategory];
            [menuItems release];
        }
        if (![[self managedObjectContext] save:&error]) {
            NSLog(@"An error occurred: %@", [error localizedDescription]);
        }

  • Declare Locally or Globally in Delphi?

    - by lkessler
    I have a procedure my program calls tens of thousands of times that uses a generic structure like this:

        procedure PrintIndiEntry(JumpID: string);
        type
          TPeopleIncluded = record
            IndiPtr: pointer;
            Relationship: string;
          end;
        var
          PeopleIncluded: TList<TPeopleIncluded>;
          PI: TPeopleIncluded;
        begin { PrintIndiEntry }
          PeopleIncluded := TList<TPeopleIncluded>.Create;
          { A loop here that determines a small number (up to 100) people to process }
          while ... do
          begin
            PI.IndiPtr := ...;
            PI.Relationship := ...;
            PeopleIncluded.Add(PI);
          end;
          DoSomeProcess(PeopleIncluded);
          PeopleIncluded.Clear;
          PeopleIncluded.Free;
        end; { PrintIndiEntry }

    Alternatively, I can declare PeopleIncluded globally rather than locally, as follows:

        unit process;

        interface

        type
          TPeopleIncluded = record
            IndiPtr: pointer;
            Relationship: string;
          end;

        var
          PeopleIncluded: TList<TPeopleIncluded>;
          PI: TPeopleIncluded;

        procedure PrintIndiEntry(JumpID: string);
        begin { PrintIndiEntry }
          { A loop here that determines a small number (up to 100) people to process }
          while ... do
          begin
            PI.IndiPtr := ...;
            PI.Relationship := ...;
            PeopleIncluded.Add(PI);
          end;
          DoSomeProcess(PeopleIncluded);
          PeopleIncluded.Clear;
        end; { PrintIndiEntry }

        procedure InitializeProcessing;
        begin
          PeopleIncluded := TList<TPeopleIncluded>.Create;
        end;

        procedure FinalizeProcessing;
        begin
          PeopleIncluded.Free;
        end;

    My question is whether in this situation it is better to declare PeopleIncluded globally rather than locally. I know the theory is to declare locally whenever possible, but I would like to know if there are any issues to worry about with regard to doing tens of thousands of Creates and Frees; making the list global means only one Create and one Free. What is the recommended method to use in this case? If the recommendation is still to declare it locally, then I'm wondering if there are any situations where it is better to declare globally when declaring locally is still an option.

  • Achieving C# "readonly" behavior in C++

    - by Tommy Fisk
    Hi guys, this is my first question on Stack Overflow, so be gentle. Let me first explain the exact behavior I would like to see. If you are familiar with C#, then you know that declaring a variable as "readonly" allows a programmer to assign a value to that variable exactly once; further attempts to modify it result in a compile error. What I am after: I want to make sure that any and all singleton classes I define can be predictably instantiated exactly once in my program (more details at the bottom). My approach to realizing my goal is to use extern to declare a global reference to the singleton (which I will later instantiate at a time of my choosing). What I have looks sort of like this:

        namespace Global {
            extern Singleton& mainInstance; // not defined yet, but it will be later!
        }

        int main() {
            // now that the program has started, go ahead and create the singleton object
            Singleton& Global::mainInstance = Singleton::GetInstance(); // invalid use of qualified name
            Global::mainInstance = Singleton::GetInstance(); // doesn't work either :(
        }

        class Singleton {
            /* Some details omitted */
        public:
            Singleton& GetInstance() {
                static Singleton instance; // exists once for the whole program
                return instance;
            }
        };

    However this does not really work, and I don't know where to go from here. Some details about what I'm up against: I'm concerned about threading, as I am working on code that will deal with game logic while communicating with several third-party processes and other processes I will create. Eventually I will have to implement some kind of synchronization so multiple threads can access the information in the Singleton class without worry. Because I don't know what kinds of optimizations I might like to do, or exactly what threading entails (I've never done a real project using it), I was thinking that being able to predictably control when singletons are instantiated would be a Good Thing. Imagine if process A creates process B, where B contains several singletons distributed across multiple files and/or libraries. It could be a real nightmare if I cannot reliably ensure the order in which these singleton objects are instantiated (because they could depend on each other, and calling methods on a NULL object is generally a Bad Thing). If I were in C# I would just use the readonly keyword, but is there any way I can implement this (compiler-supported) behavior in C++? Is this even a good idea? Thanks for any feedback.
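
    For reference, the C# behaviour I'm trying to imitate is just this:

        class Config
        {
            private readonly string path;   // assignable only here or in a constructor

            public Config(string path)
            {
                this.path = path;           // first and only assignment
            }

            public void Change()
            {
                // this.path = "other";     // compile error: readonly field
            }
        }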

  • How do I know in the detail view which tableview cell was selected?

    - by Daniel Rotaru
    How can I know which tableview cell was selected (being in the detail view)? The problem is this. I have a table view controller. Entries parsed from the internet are added to the table, so it's a dynamic table view that loads from the internet. I will not know how many entries will be in the table, so I will not know which detail view to call when I click a row. So I have made one view. This view contains a calendar. On this calendar (which is the detail view) I will parse data from the internet depending on the selected row. For example, I have the table: entry 1, entry 2, entry 3, entry 4. When I click entry 2, I need to call a PHP script with the argument entry 2. The PHP script will know which entry in the table I selected and will generate the correct XML for me to parse. Here is my tableview didSelectRow function:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            // Navigation logic -- create and push a new view controller
            if (bdvController == nil)
                bdvController = [[BookDetailViewController alloc] initWithNibName:@"BookDetailView"
                                                                            bundle:[NSBundle mainBundle]];
            Villa *aVilla = [appDelegate.villas objectAtIndex:indexPath.row];
            [self.navigationController pushViewController:bdvController animated:YES];
        }

    And here is my loadView function in the detail view controller:

        - (void)loadView {
            [super loadView];
            self.title = @"Month";
            UIBarButtonItem *addButtonItem = [[UIBarButtonItem alloc] initWithTitle:@"ListView"
                style:UIBarButtonItemStyleDone target:self action:@selector(add:)];
            self.navigationItem.rightBarButtonItem = addButtonItem;
            calendarView = [[[KLCalendarView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 320.0f, 373.0f)
                delegate:self] autorelease];
            appDelegate1 = (XMLAppDelegate *)[[UIApplication sharedApplication] delegate];
            myTableView = [[UITableView alloc] initWithFrame:CGRectMake(0, 260, 320, 160)
                style:UITableViewStylePlain];
            myTableView.dataSource = self;
            myTableView.delegate = self;
            UIView *myHeaderView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, myTableView.frame.size.width, 2)];
            myHeaderView.backgroundColor = [UIColor grayColor];
            [myTableView setTableHeaderView:myHeaderView];
            [self.view addSubview:myTableView];
            [self.view addSubview:calendarView];
            [self.view bringSubviewToFront:myTableView];
        }

    I think that here in loadView I need to add the "if" logic: if indexPath.row == x, parse fisier.php?variabila=title_of_rowx. But the question is, how do I get at the indexPath variable?

  • How to get rid of memory allocations/deallocations in SWIG wrappers?

    - by Dmitriy Matveev
    I want to use SWIG to generate read-only wrappers for a complex object. The object I want to wrap will always exist while I read it, and I will only use my wrappers while that object exists, so I don't need any memory management from SWIG. For the following SWIG interface:

        %module test

        %immutable;

        %inline %{
        struct Foo {
            int a;
        };

        struct Bar {
            int b;
            Foo f;
        };
        %}

    I get wrappers with a lot of garbage in the generated interfaces, doing useless work that reduces performance in my case. The generated Java wrapper for the Bar class looks like this:

        public class Bar {
            private long swigCPtr;
            protected boolean swigCMemOwn;

            protected Bar(long cPtr, boolean cMemoryOwn) {
                swigCMemOwn = cMemoryOwn;
                swigCPtr = cPtr;
            }

            protected static long getCPtr(Bar obj) {
                return (obj == null) ? 0 : obj.swigCPtr;
            }

            protected void finalize() {
                delete();
            }

            public synchronized void delete() {
                if (swigCPtr != 0) {
                    if (swigCMemOwn) {
                        swigCMemOwn = false;
                        testJNI.delete_Bar(swigCPtr);
                    }
                    swigCPtr = 0;
                }
            }

            public int getB() {
                return testJNI.Bar_b_get(swigCPtr, this);
            }

            public Foo getF() {
                return new Foo(testJNI.Bar_f_get(swigCPtr, this), true);
            }

            public Bar() {
                this(testJNI.new_Bar(), true);
            }
        }

    I don't need the 'swigCMemOwn' field in my wrapper, since it will always be false, and all code related to this field is also useless. There is unnecessary logic in the native code too:

        SWIGEXPORT jlong JNICALL Java_some_testJNI_Bar_1f_1get(JNIEnv *jenv, jclass jcls, jlong jarg1, jobject jarg1_) {
            jlong jresult = 0;
            struct Bar *arg1 = (struct Bar *) 0;
            Foo result;
            (void)jenv;
            (void)jcls;
            (void)jarg1_;
            arg1 = *(struct Bar **)&jarg1;
            result = ((arg1)->f);
            {
                Foo *resultptr = (Foo *) malloc(sizeof(Foo));
                memmove(resultptr, &result, sizeof(Foo));
                *(Foo **)&jresult = resultptr;
            }
            return jresult;
        }

    I don't need these calls to malloc and memmove. I want to force SWIG to resolve both of these problems, but I don't know how. Is it possible?

  • Passing arguments between classes - use public properties or pass a properties class as argument?

    - by devoured elysium
    So let's assume I have a class named Abc that will have a list of Point objects. I need to implement some drawing logic with them. Each of those Point objects will have a Draw() method that will be called by the Abc class, and the Draw() method code will need info from the Abc class. I can only see two ways to give it this info:

    1. Have the Abc class expose some public properties that allow Draw() to make its decisions.
    2. Have the Abc class pass Draw() a class full of properties.

    The properties in both cases would be the same; my question is which is preferred here. Maybe the second approach is more flexible? Maybe not? I don't see a clear winner here, but that surely has more to do with my inexperience than anything else. If there are other good approaches, feel free to share them. Here are both cases:

        class Abc1 {
            public property a;
            public property b;
            public property c;
            ...
            public property z;

            public void method1();
            ...
            public void methodn();
        }

    and here is approach 2:

        class Abc2 {
            // here we take out all the properties
            public void method1();
            ...
            public void methodn();
        }

        class Abc2MethodArgs {
            // and we put them here. this class will be passed as argument to
            // Point's draw() method!
            public property a;
            public property b;
            public property c;
            ...
            public property z;
        }

    Also, if there are any "formal" names for these two approaches, I'd like to know them so I can better choose the tags/thread name, so it's more useful for searching purposes. That, or feel free to edit them.
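
    To make approach 2 concrete, this is the kind of thing I mean (all names invented for the example; I believe this style is sometimes called a "parameter object", but I'm not sure that's the formal name):

        using System.Collections.Generic;

        // Approach 2: bundle the drawing inputs into one object that Abc
        // hands to each Point, so Point never reaches back into Abc.
        class DrawContext
        {
            public double Scale;
            public int CanvasWidth;
            public int CanvasHeight;
            // ...whatever else Draw() needs from Abc
        }

        class Point
        {
            public void Draw(DrawContext ctx)
            {
                // decisions based only on ctx, not on Abc itself
            }
        }

        class Abc
        {
            private readonly List<Point> points = new List<Point>();

            public void DrawAll()
            {
                DrawContext ctx = new DrawContext
                {
                    Scale = 2.0,
                    CanvasWidth = 800,
                    CanvasHeight = 600
                };
                foreach (Point p in points)
                    p.Draw(ctx);
            }
        }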

  • Can I use WCF to replace my current web service and Windows service combination?

    - by gun_shy
    I need a little bit of advice regarding the situation I am faced with. The current arrangement I have been tasked with improving just doesn't sit well with me, and I feel like there is a better way to do it. The more I read about WCF, the more I get the feeling that it might be what I am looking for. Right now I have an ASP.NET client, a .NET web service, a Windows service, an MS SQL database, and a third-party application that is used for processing a group of 'project' files into a finalized file. Since the third-party application can only process one 'project' at a time, the combination of web service, Windows service and database has been arranged to create a job queue manager for the third-party application. The client sends a zip 'project' file containing multiple sub-files to the web service. The web service adds a new 'project' line to the database, generating a unique job id. The zip file is expanded to a folder location on the server, using the job id as the folder name. The web service then returns the job id to the client. The client uses this id to poll the web service for the status of the job submitted; when the job is complete, the client requests the processed file. The Windows service polls the database every x minutes. If a new job exists, the service pulls the oldest job and sends it to the third-party app for processing. If the processing succeeds, the Windows service updates the project line in the database, marking the job complete. The Windows service continues to process any incomplete jobs in the database until there are no more; when it stops finding any, it sleeps x minutes and then polls the database again. I do not like the fact that the Windows service has to poll the database. If only one job is submitted, the client has to wait for the Windows service to poll, and then wait while the 'project' is being processed. It seems like WCF could be used to combine the web and Windows services, using a combination of InstanceContextMode.Single and ConcurrencyMode.Multiple. So far, I have been unable to find any articles or examples that would point me in the right direction. Can WCF be utilized to accomplish the job queue logic of the current arrangement in a better way? As always, any help is more than appreciated.
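
    To make the question concrete, here is the kind of combined service I am imagining - a single WCF instance that both accepts jobs and feeds the third-party app, so nothing polls the database (all names invented; just a sketch, not working code):

        using System;
        using System.Collections.Generic;
        using System.ServiceModel;
        using System.Threading;

        [ServiceContract]
        public interface IJobQueue
        {
            [OperationContract] Guid Submit(byte[] projectZip);
            [OperationContract] string GetStatus(Guid jobId);
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class JobQueueService : IJobQueue
        {
            private readonly Queue<Guid> queue = new Queue<Guid>();
            private readonly object sync = new object();

            public JobQueueService()
            {
                new Thread(Worker) { IsBackground = true }.Start();
            }

            public Guid Submit(byte[] projectZip)
            {
                Guid id = Guid.NewGuid();
                // persist the zip and a status row here, then:
                lock (sync) { queue.Enqueue(id); Monitor.Pulse(sync); }
                return id;
            }

            public string GetStatus(Guid jobId)
            {
                // read the status row for jobId
                return "pending";
            }

            private void Worker()
            {
                while (true)
                {
                    Guid id;
                    lock (sync)
                    {
                        while (queue.Count == 0)
                            Monitor.Wait(sync);   // wakes on Submit - no polling
                        id = queue.Dequeue();
                    }
                    // run the third-party app for job 'id', one at a time,
                    // then mark it complete in the database
                }
            }
        }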

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind I need to describe this while abstracting away all possibly confidential info: I've been put in charge of a 10-year-old transactional system whose business logic is mostly implemented at the database level (triggers, stored procedures etc). Win2000 server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered :( The core process is a program that executes transactions - specifically, it executes a stored procedure with various parameters; let's call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL server (execution is pretty regular - from 0 to 60 times a minute, depending on what items the program instance is responsible for). Performance of the system has dropped considerably with 10 years of data growth. The reason is deadlocks, and specifically deadlock wait times. The deadlock is on the Employee table. I have discovered:

    - In sp_ProcessTrans' execution, it selects from the Employee table 7 times (don't ask)
    - The select is done on a field that is NOT the primary key
    - No index exists on this field, so a table scan is performed. 7 times. Per transaction.

    So the reason for the deadlocks is clear. I created a non-unique ordered clustered index on the field (the field looks good: almost unique, NUM(7), very rarely changes). Immediate improvement in the test environment. The problem is that I cannot simulate the deadlocks in a test environment (I'd need 30 workstations, and I'd need to simulate 'realistic' activity on those stations, so virtualization is out). I need to know if I must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements, extra wait time, etc.) in creating this index on the production database while transactions are still taking place? (Although I can pick a time when transactions are fairly quiet across the 30 stations.) Are there any hidden dangers I'm not seeing? (I'm not looking forward to needing to restore the DB if something goes wrong; restoring would take a lot of time with 10 years of data.)

  • Representing a DateTime in the remote user's local time

    - by TwoSecondsBefore
    Hello! I am confused by the problem of time zones. I am writing a web application that will contain news items with dates of publication, and I want the client to see the date of publication of each news item in the corresponding local time. However, I do not know which time zone the client is in. I have three questions.

    I have to ask just in case: does DateTimeOffset.UtcNow always return the correct UTC date and time, regardless of whether the server observes daylight saving time? For example, if I get the value of this property two minutes before the transition to daylight saving time (or before the transition from daylight saving time back) and a second time two minutes after the transition, will the values in all cases differ by only 4 minutes? Or is further logic required here? (Question #1)

    Please see the following example and tell me what you think. I post news on the site. I assume that DateTimeOffset.UtcNow takes into account the time zone of the server and daylight saving time, so I immediately get the correct UTC server time when the "Submit" button is pressed. I write this value to an MS SQL database in a field of type datetime2(0). Then a user opens a page with the news, no matter how long after publication - this may even be many years later. I do not ask him to enter his time zone. Instead, I get the offset of his current local time from UTC using the following javascript function:

        function GetUserTimezoneOffset() {
            var offset = new Date().getTimezoneOffset();
            return offset;
        }

    Next I calculate the date and time of publication that will be shown to the user:

        public static DateTime Get_Publication_Date_In_User_Local_DateTime(
            DateTime Publication_Utc_Date_Time_From_DataBase,
            int User_Time_Zone_Offset_Returned_by_Javascript)
        {
            int userTimezoneOffset = User_Time_Zone_Offset_Returned_by_Javascript;
            // For example, javascript returns a value equal to -300, which
            // means the current user's time differs from UTC by 300 minutes,
            // i.e. the offset is UTC+6. In this case, it may be the time zone
            // UTC+5 which currently observes summer time, or UTC+6 which
            // currently observes standard time.
            // Right? (Question #2)

            DateTimeOffset utcPublicationDateTime =
                new DateTimeOffset(Publication_Utc_Date_Time_From_DataBase,
                                   new TimeSpan(0));
            // get an instance of type DateTimeOffset for the date and time
            // of publication for further calculations

            DateTimeOffset publication_DateTime_In_User_Local_DateTime =
                utcPublicationDateTime.ToOffset(new TimeSpan(0, -userTimezoneOffset, 0));

            return publication_DateTime_In_User_Local_DateTime.DateTime; // return to user
        }

    Is the value obtained correct? Is this the right approach to solving this problem? (Question #3)
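
    To check my own understanding, here is a worked example of what I expect this code to do (made-up values):

        // Publication stored as UTC: 2010-06-15 14:30:00
        DateTime storedUtc = new DateTime(2010, 6, 15, 14, 30, 0);

        // Javascript reported getTimezoneOffset() == -300 minutes
        int jsOffset = -300;

        DateTime local = Get_Publication_Date_In_User_Local_DateTime(storedUtc, jsOffset);
        Console.WriteLine(local);
        // expected: 2010-06-15 19:30:00, i.e. five hours ahead of UTC,
        // because ToOffset is applied with -(-300) = +300 minutes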
