Search Results

Search found 8301 results on 333 pages for 'types'.


  • Keeping cached browser data inside ASP update panel textboxes/dropdowns for browser back click

    - by pmlevere
    I'm new to VB.NET/ASP.NET and am running a VB web application in a visual database program called IronSpeed Designer. I'm primarily using IronSpeed in this case for its login/role security features. I have a basic two-page setup for this app. The user logs in and is taken to AccountEntry.aspx, they enter data into textboxes and select some dropdown values that are linked to a SQL database, then they click "submit" to move to Results.aspx. On Results.aspx, the user can change data and then generate several types of reports (PDF, Excel, etc). I'm used to setting up ASP controls inside ASP content areas, and in these areas if a user performs a browser back click the previously entered data will still be on the page for potential user modification. However, in this web app IronSpeed is setting up the page and ASP controls inside an ASP UpdatePanel. It appears that inside an ASP UpdatePanel, cached values can't be seen on a browser back click. In this case, it's important for the user experience that the originally entered values still be there if the user advances to Results.aspx and then clicks browser back to modify a value on AccountEntry.aspx. If I have to, I'll set up session variables and disable browser navigation, but that is a last resort. Is there any way to save cached data inside an ASP UpdatePanel and have it there for a browser back click?

    Read the article

  • Is it possible to get the parsed text of a SqlCommand with SqlParameters?

    - by Burg
    What I am trying to do is create some arbitrary SQL command with parameters, set the values and types of the parameters, and then return the parsed SQL command - with parameters included. I will not be directly running this command against a SQL database, so no connection should be necessary. So if I ran the example program below, I would hope to see the following text (or something similar):

        WITH SomeTable (SomeColumn) AS
        (
            SELECT N':)'
            UNION ALL
            SELECT N'>:o'
            UNION ALL
            SELECT N'^_^'
        )
        SELECT SomeColumn FROM SomeTable

    And the sample program is:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        namespace DryEraseConsole
        {
            class Program
            {
                static void Main(string[] args)
                {
                    const string COMMAND_TEXT = @"
        WITH SomeTable (SomeColumn) AS
        (
            SELECT N':)'
            UNION ALL
            SELECT N'>:o'
            UNION ALL
            SELECT @Value
        )
        SELECT SomeColumn FROM SomeTable
        ";
                    SqlCommand cmd = new SqlCommand(COMMAND_TEXT);
                    cmd.CommandText = COMMAND_TEXT;
                    cmd.Parameters.Add(new SqlParameter
                    {
                        ParameterName = "@Value",
                        Size = 128,
                        SqlDbType = SqlDbType.NVarChar,
                        Value = "^_^"
                    });
                    Console.WriteLine(cmd.CommandText);
                    Console.ReadKey();
                }
            }
        }

    Is this something that is achievable using the .NET standard libraries? Initial searching says no, but I hope I'm wrong.
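
    For illustration only, a minimal sketch of the kind of string-substitution helper that is usually suggested for this (not part of the original question and not a real .NET API; the class and method names are hypothetical, and the result is only suitable for logging/debugging, not for execution, since it splices values in rather than binding real parameters):

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class SqlCommandDumper
        {
            // Roughly renders a command's text with parameter values spliced in.
            // Illustrative only: it does not cover every SqlDbType, escaping rule,
            // or parameter names that prefix each other (e.g. @Value and @Value2).
            public static string ToApproximateSql(SqlCommand cmd)
            {
                string text = cmd.CommandText;
                foreach (SqlParameter p in cmd.Parameters)
                {
                    string literal;
                    if (p.Value == null || p.Value == DBNull.Value)
                        literal = "NULL";
                    else if (p.SqlDbType == SqlDbType.NVarChar || p.SqlDbType == SqlDbType.VarChar)
                        literal = "N'" + p.Value.ToString().Replace("'", "''") + "'";
                    else
                        literal = p.Value.ToString();

                    text = text.Replace(p.ParameterName, literal);
                }
                return text;
            }
        }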

    Read the article

  • Detecting const-ness of nested type

    - by Channel72
    Normally, if I need to detect whether a type is const I just use boost::is_const. However, I ran into trouble when trying to detect the const-ness of a nested type. Consider the following traits template, which is specialized for const types:

        template <class T>
        struct traits
        {
            typedef T& reference;
        };

        template <class T>
        struct traits<const T>
        {
            typedef T const& reference;
        };

    The problem is that boost::is_const doesn't seem to detect that traits<const T>::reference is a const type. For example:

        std::cout << std::boolalpha;
        std::cout << boost::is_const<traits<int>::reference>::value << " ";
        std::cout << boost::is_const<traits<const int>::reference>::value << std::endl;

    This outputs:

        false false

    Why doesn't it output false true?

    Read the article

  • Comparing all properties of an object using expression trees

    - by stringargs
    Hi, I'm trying to write a simple generator that uses an expression tree to dynamically generate a method that compares all properties of an instance of a type to the properties of another instance of that type. This works fine for most properties, like int and string, but fails for DateTime? (and presumably other nullable value types). The method:

        static Delegate GenerateComparer(Type type)
        {
            var left = Expression.Parameter(type, "left");
            var right = Expression.Parameter(type, "right");
            Expression result = null;

            foreach (var p in type.GetProperties())
            {
                var leftProperty = Expression.Property(left, p.Name);
                var rightProperty = Expression.Property(right, p.Name);
                var equals = p.PropertyType.GetMethod("Equals", new[] { p.PropertyType });
                var callEqualsOnLeft = Expression.Call(leftProperty, equals, rightProperty);
                result = result != null
                    ? (Expression)Expression.And(result, callEqualsOnLeft)
                    : (Expression)callEqualsOnLeft;
            }

            var method = Expression.Lambda(result, left, right).Compile();
            return method;
        }

    On a DateTime? property it fails with:

        Expression of type 'System.Nullable`1[System.DateTime]' cannot be used for parameter of type 'System.Object' of method 'Boolean Equals(System.Object)'

    OK, so it finds an overload of Equals that expects object. So why can't I pass a DateTime? into that, as it's convertible to object? If I look at Nullable<T>, it indeed has an override of Equals(object o). PS: I realize that this isn't a proper generator yet as it can't deal with null values, but I'll get to that :)
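
    One commonly suggested variation, shown here as a hedged sketch rather than the poster's final code (the class name is hypothetical): build the comparison with Expression.Equal instead of calling Equals, which emits lifted equality for Nullable<T> and so sidesteps the Equals(object) boxing issue.

        using System;
        using System.Linq.Expressions;

        static class PropertyComparerSketch
        {
            // Expression.Equal uses a type's == operator where one exists (string,
            // DateTime?, int, ...). For reference types without an == overload it
            // falls back to reference equality, so this is not a drop-in replacement
            // in every case.
            public static Func<T, T, bool> Build<T>()
            {
                var left = Expression.Parameter(typeof(T), "left");
                var right = Expression.Parameter(typeof(T), "right");
                Expression body = Expression.Constant(true);

                foreach (var p in typeof(T).GetProperties())
                {
                    var test = Expression.Equal(Expression.Property(left, p),
                                                Expression.Property(right, p));
                    body = Expression.AndAlso(body, test);
                }

                return Expression.Lambda<Func<T, T, bool>>(body, left, right).Compile();
            }
        }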

    Read the article

  • Some questions about special operators I've never seen in C++ code.

    - by toto
    I have downloaded the Phoenix SDK June 2008 (Tools for compilers) and when I'm reading the code of the Hello sample, I really feel lost.

        public ref class Hello
        {
            //--------------------------------------------------------------------------
            //
            // Description:
            //
            //    Class Variables.
            //
            // Remarks:
            //
            //    A normal compiler would have more flexible means for holding
            //    on to all this information, but in our case it's simplest (if
            //    somewhat inelegant) if we just keep references to all the
            //    structures we'll need to access as class static variables.
            //
            //--------------------------------------------------------------------------

            static Phx::ModuleUnit ^ module;
            static Phx::Targets::Runtimes::Runtime ^ runtime;
            static Phx::Targets::Architectures::Architecture ^ architecture;
            static Phx::Lifetime ^ lifetime;
            static Phx::Types::Table ^ typeTable;
            static Phx::Symbols::Table ^ symbolTable;
            static Phx::Phases::PhaseConfiguration ^ phaseConfiguration;

        protected:
            virtual void Execute(Phx::Unit ^ unit) override;
        };

    Two questions:

    1. What's that ref keyword? What is that sign ^ ? What is it doing?
    2. Is override a C++ keyword too? It's colored as such in my Visual Studio.

    I really want to play with this framework, but this advanced C++ is really an obstacle right now. Thank you.

    Read the article

  • Problem converting column values from VARCHAR(n) to DECIMAL

    - by Kevin Babcock
    I have a SQL Server 2000 database with a column of type VARCHAR(255). All the data is either NULL, or numeric data with up to two points of precision (e.g. '11.85'). I tried to run the following T-SQL query but received the error 'Error converting data type varchar to numeric':

        SELECT CAST([MyColumn] AS DECIMAL) FROM [MyTable];

    I tried a more specific cast, which also failed.

        SELECT CAST([MyColumn] AS DECIMAL(6,2)) FROM [MyTable];

    I also tried the following to see if any data is non-numeric, and the only values returned were NULL.

        SELECT ISNUMERIC([MyColumn]), [MyColumn]
        FROM [MyTable]
        WHERE ISNUMERIC([MyColumn]) = 0;

    I tried to convert to other data types, such as FLOAT and MONEY, but only MONEY was successful. So I tried the following:

        SELECT CAST(CAST([MyColumn] AS MONEY) AS DECIMAL) FROM [MyTable];

    ...which worked just fine. Any ideas why the original query failed? Will there be a problem if I first convert to MONEY and then to DECIMAL? Thanks!

    Read the article

  • Constructor Overloading

    - by Mark Baker
    Normally when I want to create a class constructor that accepts different types of parameters, I'll use a kludgy overloading principle of not defining any args in the constructor definition. E.g. for an ECEF coordinate class constructor, I want it to accept either $x, $y and $z arguments, or a single array argument containing x, y and z values, or a single LatLong object. I'd create a constructor looking something like:

        function __construct()
        {
            // Identify if any arguments have been passed to the constructor
            if (func_num_args() > 0) {
                $args = func_get_args();
                // Identify the overload constructor required, based on the datatype of the first argument
                $argType = gettype($args[0]);
                switch($argType) {
                    case 'array' :
                        // Array of Cartesian co-ordinate values
                        $overloadConstructor = 'setCoordinatesFromArray';
                        break;
                    case 'object' :
                        // A LatLong object that needs converting to Cartesian co-ordinate values
                        $overloadConstructor = 'setCoordinatesFromLatLong';
                        break;
                    default :
                        // Individual Cartesian co-ordinate values
                        $overloadConstructor = 'setCoordinatesFromXYZ';
                        break;
                }
                // Call the appropriate overload constructor
                call_user_func_array(array($this,$overloadConstructor),$args);
            }
        } // function __construct()

    I'm looking at an alternative: provide a straight constructor with $x, $y and $z as defined arguments, and provide static methods createECEFfromArray() and createECEFfromLatLong() that handle all the necessary extraction of x, y and z, then create a new ECEF object using the standard constructor and return it. Which option is cleaner from an OO purist's perspective?

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins. Before executing a query, SQL Server generates an Execution Plan. The Execution Plan is basically an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server will choose which algorithm to use for each Join operation based on the expected number of rows in the inner and outer tables, what type of join we are doing (some algorithms don't support all types of joins), whether we need data ordered, and probably many other factors.

    Join algorithms:

    Nested Loops Join: best for small inputs, can be optimized with an ordered inner table.
    Merge Join: best for medium to large sorted inputs, or an output that needs to be ordered.
    Hash Join: best for medium to large inputs, can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable ()
                   join secondRow in secondTable.AsEnumerable ()
                       on firstRow.Field<object> (randomObject.Property)
                       equals secondRow.Field<object> (randomObject.Property)
                   select new {firstRow, secondRow};

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loops Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose its algorithm for joins, or does it always use one?
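
    For reference, LINQ to Objects' join is essentially a single hash-join-style algorithm: it builds a hash-based lookup over one sequence and streams the other against it. A rough, simplified sketch of that shape (illustrative only, not the BCL source):

        using System;
        using System.Collections.Generic;

        static class HashJoinSketch
        {
            public static IEnumerable<TResult> HashJoin<TOuter, TInner, TKey, TResult>(
                IEnumerable<TOuter> outer,
                IEnumerable<TInner> inner,
                Func<TOuter, TKey> outerKey,
                Func<TInner, TKey> innerKey,
                Func<TOuter, TInner, TResult> resultSelector)
            {
                // Build phase: hash the inner sequence once.
                var buckets = new Dictionary<TKey, List<TInner>>();
                foreach (var i in inner)
                {
                    var k = innerKey(i);
                    List<TInner> list;
                    if (!buckets.TryGetValue(k, out list))
                        buckets[k] = list = new List<TInner>();
                    list.Add(i);
                }

                // Probe phase: stream the outer sequence against the hash table.
                foreach (var o in outer)
                {
                    List<TInner> matches;
                    if (buckets.TryGetValue(outerKey(o), out matches))
                        foreach (var m in matches)
                            yield return resultSelector(o, m);
                }
            }
        }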

    Read the article

  • Java: Generics, Class.isAssignableFrom, and type casting

    - by bguiz
    This method uses method-level generics to parse the values from a custom POJO, JXlistOfKeyValuePairs (which is exactly that). The only thing is that both the keys and values in JXlistOfKeyValuePairs are Strings. This method wants to take in, in addition to the JXlistOfKeyValuePairs instance, a Class<T> that defines which data type to convert the values to (assume that only Boolean, Integer and Float are possible). It then outputs a HashMap with the specified type for the values in its entries. This is the code that I have got, and it is obviously broken.

        private <T extends Object> Map<String, T> fromListOfKeyValuePairs(JXlistOfKeyValuePairs jxval, Class<T> clasz)
        {
            Map<String, T> val = new HashMap<String, T>();
            List<Entry> jxents = jxval.getEntry();
            T value;
            String str;
            for (Entry jxent : jxents)
            {
                str = jxent.getValue();
                value = null;
                if (clasz.isAssignableFrom(Boolean.class))
                {
                    value = (T)(Boolean.parseBoolean(str));
                }
                else if (clasz.isAssignableFrom(Integer.class))
                {
                    value = (T)(Integer.parseInt(str));
                }
                else if (clasz.isAssignableFrom(Float.class))
                {
                    value = (T)(Float.parseFloat(str));
                }
                else
                {
                    logger.warn("Unsupporteded value type encountered in key-value pairs, continuing anyway: " + clasz.getName());
                }
                val.put(jxent.getKey(), value);
            }
            return val;
        }

    This is the bit that I want to solve:

        if (clasz.isAssignableFrom(Boolean.class))
        {
            value = (T)(Boolean.parseBoolean(str));
        }
        else if (clasz.isAssignableFrom(Integer.class))
        {
            value = (T)(Integer.parseInt(str));
        }

    I get:

        Inconvertible types
        required: T
        found: Boolean

    Also, if possible, I would like to be able to do this with more elegant code, avoiding Class#isAssignableFrom. Any suggestions? Sample method invocation:

        Map<String, Boolean> foo = fromListOfKeyValuePairs(bar, Boolean.class);

    Read the article

  • How can I have 2 ADO access methods use the same Transaction?

    - by KevinDeus
    I'm writing a test to see if my LINQ to Entities statement works. I'll be using this for others if I can get this concept going. My intention here is to INSERT a record with ADO, then verify it can be queried with LINQ, and then ROLLBACK the whole thing at the end. I'm using ADO to insert because I don't want to use the object or the entity model that I am testing; I figure that a plain ADO INSERT should do fine. The problem is they both use different types of connections. Is it possible to have these two different data access methods use the same TRANSACTION so I can roll it back?

        _conn = new SqlConnection(_connectionString);
        _conn.Open();
        _trans = _conn.BeginTransaction();

        var x = new SqlCommand("INSERT INTO Table1(ID, LastName, FirstName, DateOfBirth) values('127', 'test2', 'user', '2-12-1939');", _conn);
        x.ExecuteNonQuery();
        // So far, so good. Adding a record to the table.

        // At this point, we need to do **_trans.Commit()** here because our Entity code can't use the same connection.
        // Then I have to manually delete in the TestHarness.TearDown.. I'd like to eliminate this step.

        // (This code is in another object, I'll include it for brevity. Imagine that I passed the connection in.)
        // Check to see if it is there.
        using (var ctx = new XEntities(_conn)) // can't do this.. _conn is not an EntityConnection!
        {
            var retVal = (from m in ctx.Table1
                          where m.first_name == "test2"
                          where m.last_name == "user"
                          where m.Date_of_Birth == "2-12-1939"
                          where m.ID == 127
                          select m).FirstOrDefault();
            return (retVal != null);
        }

        // Do test..
        Assert.BlahBlah();

        _trans.Rollback();
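
    One common way to share a transaction across both access paths is an ambient System.Transactions.TransactionScope; the sketch below is illustrative, not the poster's code (XEntities is the poster's context, and because two connections enlist, the transaction may escalate to MSDTC on older SQL Server versions):

        using System.Data.SqlClient;
        using System.Transactions;

        public class TransactionScopeSketch
        {
            public void InsertAndQueryThenRollBack(string connectionString)
            {
                using (var scope = new TransactionScope())
                {
                    using (var conn = new SqlConnection(connectionString))
                    {
                        conn.Open(); // enlists in the ambient transaction
                        using (var cmd = new SqlCommand(
                            "INSERT INTO Table1(ID, LastName, FirstName, DateOfBirth) " +
                            "VALUES (127, 'test2', 'user', '2-12-1939');", conn))
                        {
                            cmd.ExecuteNonQuery();
                        }
                    }

                    // The entity context opens its own connection, which also
                    // enlists in the same ambient transaction.
                    using (var ctx = new XEntities())
                    {
                        // ... run the LINQ query and assert here ...
                    }

                    // No scope.Complete() call, so everything rolls back on Dispose.
                }
            }
        }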

    Read the article

  • C# struct with object as data member

    - by source-energy
    As we know, in C# structs are passed by value, not by reference. So if I have a struct with the following data members:

        private struct MessageBox
        {
            // data members
            private DateTime dm_DateTimeStamp;    // a struct type
            private TimeSpan dm_TimeSpanInterval; // also a struct
            private ulong dm_MessageID;           // System.UInt64 type, struct
            private String dm_strMessage;         // an object (hence a reference is stored here)

            // more methods, properties, etc ...
        }

    So when a MessageBox is passed as a parameter, a COPY is made on the stack, right? What does that mean in terms of how the data members are copied? The first two are struct types, so copies should be made of DateTime and TimeSpan. The third type is a primitive, so it's also copied. But what about dm_strMessage, which is a reference to an object? When it's copied, another reference to the same String is created, right? The object itself resides on the heap and is NOT copied (there is only one instance of it on the heap). So now we have two references to the same object of type String. If the two references are accessed from different threads, it's conceivable that the String object could be corrupted by being modified from two different directions simultaneously. The MSDN documentation says that System.String is thread safe. Does that mean that the String class has a built-in mechanism to prevent an object being corrupted in exactly the type of situation described here? I'm trying to figure out if my MessageBox struct has any potential flaws / pitfalls being a structure vs. a class. Thanks for any input. Source.Energy.
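
    A small illustration (not from the original question; type names are made up) of what copying the struct actually does with its string field: the reference is duplicated but the characters are not, and because String is immutable, "changing" the text just binds a new instance rather than mutating the shared one.

        using System;

        struct Message
        {
            public DateTime Stamp;
            public string Text; // only the reference is copied on assignment
        }

        static class StructCopyDemo
        {
            static void Main()
            {
                var a = new Message { Stamp = DateTime.UtcNow, Text = "hello" };
                var b = a; // value copy: Stamp is duplicated, Text copies the reference

                Console.WriteLine(ReferenceEquals(a.Text, b.Text)); // True: one string on the heap

                // This never mutates the shared instance; it points b.Text at a
                // brand-new string and leaves a.Text untouched.
                b.Text = b.Text + " world";
                Console.WriteLine(a.Text);                          // hello
                Console.WriteLine(ReferenceEquals(a.Text, b.Text)); // False
            }
        }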

    Read the article

  • map operator [] operands

    - by Jamie Cook
    Hi all, I have the following in a member function:

        int tt = 6;
        vector<set<int>>& temp = m_egressCandidatesByDestAndOtMode[tt];
        set<int>& egressCandidateStops = temp.at(dest);

    and the following declaration of a member variable:

        map<int, vector<set<int>>> m_egressCandidatesByDestAndOtMode;

    However I get an error when compiling (Intel Compiler 11.0):

        1>C:\projects\svn\bdk\Source\ZenithAssignment\src\Iteration\PtBranchAndBoundIterationOriginRunner.cpp(85): error: no operator "[]" matches these operands
        1>    operand types are: const std::map<int, std::vector<std::set<int, std::less<int>, std::allocator<int>>, std::allocator<std::set<int, std::less<int>, std::allocator<int>>>>, std::less<int>, std::allocator<std::pair<const int, std::vector<std::set<int, std::less<int>, std::allocator<int>>, std::allocator<std::set<int, std::less<int>, std::allocator<int>>>>>>> [ const int ]
        1>    vector<set<int>>& temp = m_egressCandidatesByDestAndOtMode[tt];
        1>                             ^

    I know it's got to be something silly but I can't see what I've done wrong.

    Read the article

  • How do I get Spotlight attributes to display in the get info window?

    - by Alexander Rauchfuss
    I have created a Spotlight importer for comic files. The attributes are successfully imported and searchable. The one thing that remains is getting the attributes to display in a file's Get Info window. It seems that this should be a simple matter of editing the schema.xml file so the attributes are nested inside displayattrs tags. Unfortunately this does not seem to be working. I simplified the plugin for testing. The following are all of the important files.

    schema.xml:

        <types>
            <type name="cx.c3.cbz-archive">
                <allattrs>
                    kMDItemTitle
                    kMDItemAuthors
                </allattrs>
                <displayattrs>
                    kMDItemTitle
                    kMDItemAuthors
                </displayattrs>
            </type>
            <type name="cx.c3.cbr-archive">
                <allattrs>
                    kMDItemTitle
                    kMDItemAuthors
                </allattrs>
                <displayattrs>
                    kMDItemTitle
                    kMDItemAuthors
                </displayattrs>
            </type>

    GetMetadataForFile.m:

        Boolean GetMetadataForFile(void* thisInterface,
                                   CFMutableDictionaryRef attributes,
                                   CFStringRef contentTypeUTI,
                                   CFStringRef pathToFile)
        {
            NSAutoreleasePool * pool = [NSAutoreleasePool new];
            NSString * file = (NSString *)pathToFile;

            NSArray * authors = [[UKXattrMetadataStore stringForKey: @"com_opencomics_authors"
                                                             atPath: file
                                                       traverseLink: NO] componentsSeparatedByString: @","];
            [(NSMutableDictionary *)attributes setObject: authors forKey: (id)kMDItemAuthors];

            NSString * title = [UKXattrMetadataStore stringForKey: @"com_opencomics_title"
                                                            atPath: file
                                                      traverseLink: NO];
            [(NSMutableDictionary *)attributes setObject: title forKey: (id)kMDItemTitle];

            [pool release];
            return true;
        }

    Read the article

  • Looking for a RESTful or SOAP pipeline between WordPress and InterWoven TeamSite

    - by deanpeters
    I've been Googling my brains out trying to see if there's a simple way to bridge content to and from WordPress and TeamSite. I'm coming at this from the perspective of a WordPress developer. I see in the book "The Definitive Guide to Interwoven TeamSite" (http://bit.ly/d3z4wI) mention of objects for the Interwoven LiveSite product:

        com.interwoven.livesite.external.impl.RSS
        com.interwoven.livesite.external.impl.SOAP

    If I understand the above objects correctly, these allow me to instantiate objects of these data types, which, after populating them via various method calls, allow me to render content using com.interwoven.livesite.external.ExternalCall ... but I'm not sure. Nor do I think this approach provides me the two-way street I seek. As it stands now, from my limited understanding, it appears that the path of least resistance is deploying Interwoven's LiveSite with the existing TeamSite implementation so content can be both consumed and rendered via RSS ... a channel which WordPress can produce and consume, the latter with plugins such as wp-o-matic and/or feedpress. So the question is, does anyone out there have experience with a SOAP or RESTful API approach to Interwoven's TeamSite? If so, can I get some direction on documentation? Or is the addition of LiveSite + RSS the most feasible two-way channel?

    Read the article

  • Problems installing PIL after OSX 10.9

    - by user2632417
    I installed Mac OS X 10.9 the day it came out. Afterwards I decided I needed to install PIL. I'd installed it before, but it appeared the update had broken that. When I try to use pip to install PIL, it fails when building _imaging. It appears the root cause is this:

        /usr/include/sys/cdefs.h:655:2: error: Unsupported architecture

    There's also a similar error here:

        /usr/include/machine/limits.h:8:2: error: architecture not supported

    and here:

        /usr/include/machine/_types.h:34:2: error: architecture not supported

    Then there's a whole list of missing types:

        /usr/include/sys/_types.h:94:9: error: unknown type name '__int64_t'
        typedef __int64_t __darwin_blkcnt_t; /* total blocks */
                ^
        /usr/include/sys/_types.h:95:9: error: unknown type name '__int32_t'
        typedef __int32_t __darwin_blksize_t; /* preferred block size */
                ^
        /usr/include/sys/_types.h:96:9: error: unknown type name '__int32_t'
        typedef __int32_t __darwin_dev_t; /* dev_t */
                ^
        /usr/include/sys/_types.h:99:9: error: unknown type name '__uint32_t'
        typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */
                ^
        /usr/include/sys/_types.h:100:9: error: unknown type name '__uint32_t'
        typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/
                ^
        /usr/include/sys/_types.h:101:9: error: unknown type name '__uint64_t'
        typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */

    Needless to say I don't know where to go from here. I've got a couple of guesses, but I don't even know how to check:

    Wrong include, probably as a result of a badly configured environment variable
    Problem with Xcode's installation / missing command line tools
    Messed up header files

    If anyone has any suggestions, either to check one of those possibilities or for one of their own, I'm all ears.

    Read the article

  • changing the serialization procedure for a graph of objects (.net framework)

    - by pierusch
    Hello, I'm developing a scientific application using the .NET framework. The application depends heavily upon a large data structure (a tree-like structure) that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:

        <Serializable()> Public Class BigObjet
            Inherits List(Of smallObject)
        End Class

        <Serializable()> Public Class smallObject
            Inherits List(Of otherSmallerObjects)
        End Class
        ...

    The BinaryFormatter object does a nice job but it's not optimized at all, and the entire data structure reaches around 100 MB on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core). I've found a nice .dll on CodeProject (see "optimizing serialization...") so I wrote a modified version of the classes above overriding the default serialization/deserialization procedure, reaching very good results. The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me proper and better advice!

    Use a "helper" graph of objects that takes care of the entire serialization/deserialization procedure, reading data in the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized so........ :(

    Modify the "old" graph to include a modified version of the serialization procedure... so I'll be able to deserialize old files and save them with the new format...... this doesn't sound too good imho.

    Well, any help will be highly appreciated :)
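
    A sketch of the one-time migration idea in C# (names are illustrative: BigObjet comes from the question, while GraphMigration and OptimizedSerializer.SaveOptimized are hypothetical placeholders for whatever the new, optimized serializer looks like):

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        public static class GraphMigration
        {
            public static void Migrate(string oldPath, string newPath)
            {
                BigObjet graph;

                // 1. Load with the legacy format. BinaryFormatter only needs the old
                //    [Serializable] classes to still exist under their original names.
                using (var input = File.OpenRead(oldPath))
                {
                    graph = (BigObjet)new BinaryFormatter().Deserialize(input);
                }

                // 2. Persist with the new scheme (hypothetical call standing in for
                //    the optimized serializer from the CodeProject library).
                using (var output = File.Create(newPath))
                {
                    OptimizedSerializer.SaveOptimized(graph, output);
                }
            }
        }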

    Read the article

  • Which fieldtype is best for storing PRICE values?

    - by BerggreenDK
    Hi there, I am wondering what the best "price field" in MSSQL is for a shop-like structure. Looking at this overview: http://www.teratrax.com/sql_guide/data_types/sql_server_data_types.html we have data types called money and smallmoney, then we have decimal/numeric, and lastly float and real. Name, memory/disk usage and value ranges:

        Money:      8 bytes (values: -922,337,203,685,477.5808 to +922,337,203,685,477.5807)
        Smallmoney: 4 bytes (values: -214,748.3648 to +214,748.3647)
        Decimal:    9 [default, min. 5] bytes (values: -10^38 +1 to 10^38 -1)
        Float:      8 bytes (values: -1.79E+308 to 1.79E+308)
        Real:       4 bytes (values: -3.40E+38 to 3.40E+38)

    My question is: is it really wise to store price values in those types? What about e.g. INT?

        Int: 4 bytes (values: -2,147,483,648 to 2,147,483,647)

    Let's say a shop uses dollars and they have cents, but I don't see prices being $49.2142342, so the use of a lot of decimals showing cents seems a waste of SQL bandwidth. Secondly, most shops wouldn't show any prices near 200,000,000 (not in normal webshops at least... unless someone is trying to sell me a famous tower in Paris). So why not go for an INT? An int is fast, it's only 4 bytes, and you can easily make decimals by saving values in cents instead of dollars and then dividing when you present the values. The other approach would be to use smallmoney, which is 4 bytes too, but this will require the math part of the CPU to do the calculation, whereas INT is integer power... on the downside you will need to divide every single outcome. Are there any "currency"-related problems with regional settings when using smallmoney/money fields? What will these map to in C#/.NET? Any pros/cons? Go for integer prices or smallmoney or something else? What does your experience tell you?
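
    For what it's worth, a tiny illustration (not from the original question) of the "store cents as an integer" idea: arithmetic stays exact in whole cents, and division to a currency amount happens only at display time. (On the .NET side, money and smallmoney columns surface as System.Decimal through ADO.NET.)

        using System;

        static class CentsDemo
        {
            static void Main()
            {
                int priceInCents = 4995;   // stored in an INT column as 4995
                int quantity = 3;

                int totalCents = priceInCents * quantity;   // exact integer math
                decimal totalDollars = totalCents / 100m;   // convert only for presentation

                Console.WriteLine(totalDollars.ToString("0.00")); // 149.85
            }
        }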

    Read the article

  • Will my LinqToSql execution be deferred if I filter with IEnumerable&lt;T&gt; instead of IQueryable&lt;T&gt;?

    - by cottsak
    I have been using these common EntityObjectFilters as a "pipes and filters" way to query a particular item with an ID from a collection:

        public static class EntityObjectFilters
        {
            public static T WithID<T>(this IQueryable<T> qry, int ID) where T : IEntityObject
            {
                return qry.SingleOrDefault<T>(item => item.ID == ID);
            }

            public static T WithID<T>(this IList<T> list, int ID) where T : IEntityObject
            {
                return list.SingleOrDefault<T>(item => item.ID == ID);
            }
        }

    ..but I wondered to myself: "can I make this simpler by just creating an extension for all IEnumerable<T> types"? So I came up with this:

        public static class EntityObjectFilters
        {
            public static T WithID<T>(this IEnumerable<T> qry, int ID) where T : IEntityObject
            {
                return qry.SingleOrDefault<T>(item => item.ID == ID);
            }
        }

    Now while this appears to yield the same result, I want to know: when applied to IQueryable<T>s, will the expression tree be passed to LINQ to SQL for evaluating as SQL code, or will my qry be evaluated in its entirety first and then iterated with Funcs? I'm suspecting that (as per Richard's answer) the latter will be true, which is obviously what I don't want. I want the same result, but the added benefit of the delayed SQL execution for IQueryable<T>s. Can someone confirm for me what will actually happen and provide a simple explanation as to how it would work?
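
    For reference, extension-method binding is decided at compile time from the static type of the source: an IEnumerable<T> extension routes the lambda to Enumerable.SingleOrDefault (a compiled Func<T, bool>, evaluated in memory), whereas an IQueryable<T> extension routes it to Queryable.SingleOrDefault (an Expression<Func<T, bool>> the provider can translate). A sketch keeping both overloads side by side (IEntityObject is the interface from the question; the class name is illustrative):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class EntityObjectFiltersSketch
        {
            // Keeps the work in the LINQ to SQL provider: the lambda stays an expression tree.
            public static T WithID<T>(this IQueryable<T> qry, int id) where T : IEntityObject
            {
                return qry.SingleOrDefault(item => item.ID == id);
            }

            // Falls back to LINQ to Objects: the source is enumerated and filtered in memory.
            public static T WithID<T>(this IEnumerable<T> source, int id) where T : IEntityObject
            {
                return source.SingleOrDefault(item => item.ID == id);
            }
        }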

    Read the article

  • Remove this URL string when login fails and simply show div error

    - by Anagio
    My developer built our registration page to display a div when logins fail, based on a string in the URL. When logins fail this is added to the URL: /login?msg=invalid. The PHP in my login.phtml which displays the error messages based on the msg= parameter is:

        <?php
        $msg = "";
        $msg = $_GET['msg'];
        if($msg==""){
            $showMsg = "";
        }
        elseif($msg=="invalid"){
            $showMsg = '
                <div class="alert alert-error">
                    <a class="close" data-dismiss="alert">×</a>
                    <strong>Error!</strong> Login or password is incorrect!
                </div>';
        }
        elseif($msg=="disabled"){
            $showMsg = "Your account has been disabled.";
        }
        elseif($msg==2){
            $showMsg = "Your account is not activated. Please check your email.";
        }
        ?>

    In the controller, the redirect to that URL is:

        else //email id does not exist in our database
        {
            //redirecting back with invalid email(invalid) msg=invalid.
            $this->_redirect($url."?msg=invalid");
        }

    I know there are a few other validation types for disabled accounts etc. I'm in the process of redesigning the entire interface and would like to get rid of this kind of validation, so that the div tags display when logins fail but no strings show in the URL. If it matters, the new div I want to display is:

        <div class="alert alert-error alert-login">
            Email or password incorrect
        </div>

    I'd like to replace the PHP myself in my login.phtml and controller, but I'm not a good programmer. What can I replace $this->_redirect($url."?msg=invalid"); with so that no strings are added to the URL and the appropriate div tags display? Thanks

    Read the article

  • check properties of two objects for changes

    - by k-hoffmann
    Hi, I have to develop a mechanism to check two objects' properties for changes. All properties which need to be checked are marked with an attribute. At the moment I:

    - read all properties from the actual object via LINQ
    - read the corresponding property from the old object
    - fill an own object with the two properties (old and new value)

    In code, the call to the worker class looks like this:

        public void CreateHistoryMap(BaseEntity actual, BaseEntity old)
        {
            CreateHistoryMap(actualEntity, oldEntity)
                .ForEach(mapEntry => CreateHistoryEntry(mapEntry),
                         mapEntry => IfChangesDetected(mapEntry));
        }

    CreateHistoryMap builds up the HistoryMapEntry which contains the two properties. CreateHistoryEntry builds up the object which is saved to the database, and IfChangesDetected checks the object for changes. I have to handle my own special application types to generate history values for the database (like concatenating list values and so on). My problem now is that I have to read the values of the properties twice - once for change detection and once for the concrete CreateHistoryEntry. How can I eliminate this problem, or how can I implement the change-tracking scenario with the nice C# 3.5 features? Thanks a lot.
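
    A rough sketch of reading each flagged property only once (type and member names are hypothetical, not the poster's): capture the old/new values into one pair object up front, then reuse that pair for both the change check and the history entry.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class PropertyChange
        {
            public string Name { get; set; }
            public object OldValue { get; set; }
            public object NewValue { get; set; }
            public bool HasChanged { get { return !Equals(OldValue, NewValue); } }
        }

        public static class ChangeTracker
        {
            // TAttribute marks the properties that participate in history tracking.
            public static List<PropertyChange> Compare<TAttribute>(object actual, object old)
                where TAttribute : Attribute
            {
                return actual.GetType()
                    .GetProperties()
                    .Where(p => p.IsDefined(typeof(TAttribute), true))
                    .Select(p => new PropertyChange
                    {
                        Name = p.Name,
                        NewValue = p.GetValue(actual, null), // each property read exactly once
                        OldValue = p.GetValue(old, null)
                    })
                    .Where(c => c.HasChanged)
                    .ToList();
            }
        }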

    Read the article

  • F# numeric associations

    - by b1g3ar5
    I have a numeric association for a custom type as follows:

        let DiffNumerics = {
            new INumeric<Diff> with
                member op.Zero = C 0.0
                member op.One = C 1.0
                member op.Add(a,b) = a + b
                member op.Subtract(a,b) = a - b
                member op.Multiply(a,b) = a * b
                member ops.Negate(a) = Diff.negate a
                member ops.Abs(a) = Diff.abs a
                member ops.Equals(a, b) = ((=) a b)
                member ops.Compare(a, b) = Diff.compare a b
                member ops.Sign(a) = int (Diff.sign a).Val
                member ops.ToString(x,fmt,fmtprovider) = failwith "not implemented"
                member ops.Parse(s,numstyle,fmtprovider) = failwith "not implemented"
            }
        GlobalAssociations.RegisterNumericAssociation(DiffNumerics)

    It works fine in F# Interactive, but crashes when I run, because .ElementOps is not filled correctly for a matrix of these types. Any ideas why this might be?

    EDIT: In fsi, the code

        let A = dmatrix [[Diff.C 1.;Diff.C 2.;Diff.C 3.];[Diff.C 4.;Diff.C 5.;Diff.C 6.]]
        let B = matrix [[1.;2.;3.];[4.;5.;6.]]

    gives:

        > A.ElementOps;;
        val it : INumeric<Diff> = FSI_0003.NewAD+DiffNumerics@258
        > B.ElementOps;;
        val it : INumeric<float> = Microsoft.FSharp.Math.Instances+FloatNumerics@115
        >

    In the debugger, A.ElementOps shows:

        '(A).ElementOps' threw an exception of type 'System.NotSupportedException'

    and, for the B matrix:

        Microsoft.FSharp.Math.Instances+FloatNumerics@115

    So somehow the DiffNumerics isn't making it to the compiled program.

    Read the article

  • ADO.NET DataTable/DataRow Thread Safety

    - by Allen E. Scharfenberg
    Introduction

    A user reported to me this morning that he was having an issue with inconsistent results (namely, column values sometimes coming out null when they should not be) from some parallel execution code that we provide as part of an internal framework. This code has worked fine in the past and has not been tampered with lately, but it got me to thinking about the following snippet:

    Code Sample

        lock (ResultTable)
        {
            newRow = ResultTable.NewRow();
        }

        newRow["Key"] = currentKey;

        foreach (KeyValuePair<string, object> output in outputs)
        {
            object resultValue = output.Value;
            newRow[output.Name] = resultValue != null ? resultValue : DBNull.Value;
        }

        lock (ResultTable)
        {
            ResultTable.Rows.Add(newRow);
        }

    (No guarantees that that compiles, hand-edited to mask proprietary information.)

    Explanation

    We have this cascading type of locking code in other places in our system, and it works fine, but this is the first instance of cascading locking code that I have come across that interacts with ADO.NET. As we all know, members of framework objects are usually not thread safe (which is the case in this situation), but the cascading locking should ensure that we are not reading and writing to ResultTable.Rows concurrently. We are safe, right?

    Hypothesis

    Well, the cascading lock code does not ensure that we are not reading from or writing to ResultTable.Rows at the same time that we are assigning values to columns in the new row. What if ADO.NET uses some kind of buffer for assigning column values that is not thread safe--even when different object types are involved (DataTable vs. DataRow)? Has anyone run into anything like this before? I thought I would ask here at StackOverflow before beating my head against this for hours on end :)

    Conclusion

    Well, the consensus appears to be that changing the cascading lock to a full lock has resolved the issue. That is not the result that I expected, but the full-lock version has not produced the issue after many, many, many tests. The lesson: be wary of cascading locks used on APIs that you do not control. Who knows what may be going on under the covers!
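
    A sketch of what the "full lock" shape described in the conclusion might look like (illustrative, not the author's actual code; note it uses KeyValuePair's Key property rather than the hand-edited output.Name): every touch of the shared DataTable/DataRow pair happens inside one lock, so column assignment can never interleave with another thread's NewRow or Rows.Add on the same table.

        using System;
        using System.Collections.Generic;
        using System.Data;

        static class ResultWriter
        {
            // All DataTable/DataRow access is serialized behind a single lock.
            public static void AddResultRow(DataTable resultTable, object currentKey,
                                            IEnumerable<KeyValuePair<string, object>> outputs)
            {
                lock (resultTable)
                {
                    DataRow newRow = resultTable.NewRow();
                    newRow["Key"] = currentKey;

                    foreach (KeyValuePair<string, object> output in outputs)
                    {
                        newRow[output.Key] = output.Value ?? DBNull.Value;
                    }

                    resultTable.Rows.Add(newRow);
                }
            }
        }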

    Read the article

  • C++: defining maximum/minimum limits for a class

    - by Luis
    Basically what the title says... I have created a class that models time slots in a variable-granularity daily schedule (where for example the first time slot is 30 minutes, but the second time slot can be 40 minutes); the first available slot starts at (a value comparable to) 1. What I want to do now is to define somehow the maximum and minimum allowable values that this class takes, and I have two practical questions in order to do so:

    1. Does it make sense to define an absolute minimum and maximum in such a way for a custom class? Or better, does it suffice that a value always compares as lower-than any other possible value of the type, given the class's defined relational operators, to be defined as the min? (and analogously for the max)

    2. Assuming the previous question has an answer modeled after "yes" (or "yes but ..."), how do I define such a max/min? I know that there is std::numeric_limits<>, but from what I read it is intended for "numeric types". Do I interpret that as meaning "represented as a number", or can I make a broader assumption like "represented with numbers" or "having a correspondence to integers"? After all, it would make sense to define the minimum and maximum for a date class, and maybe for a dictionary class, but numeric_limits may not be intended for those uses (I don't have much experience with it). Plus, numeric_limits has a lot of extra members and information that I don't know what to make of. If I don't use numeric_limits, what other well-known / widely-used mechanism does C++ offer to indicate the available range of values for a class?

    Read the article

  • Problem with sprintf function, last parameters are wrong when written

    - by Apoc
    So I use sprintf:

        sprintf(buffer,"%f|%f|%f|%f|%f|%f|%d|%f|%d", x, y, z, u, v, w, nID, dDistance, nConfig);

    But when I print the buffer I get the last 2 parameters wrong. They are supposed to be 35.0000 and 0, and in the string they are 0.00000 and 10332430; my buffer is long enough and all the other parameters are good in the string. Any idea? Is there a length limit to sprintf or something? I checked the types of all the numbers and they are right, but what seems to be the problem is the dDistance. When I remove it from the sprintf, nConfig gets the right value in the string, but when I remove nConfig, dDistance still doesn't get the right value. I checked and dDistance is a double. Any idea? Since people don't seem to believe me, I did this:

        char test[255]={0};
        int test1 = 2;
        double test2=35.00;
        int test3 = 0;
        sprintf(test,"%d|%f|%d",test1,test2,test3);

    and I get this in my string:

        2|0.000000|1078034432

    Read the article

  • Could I do this blind relative to absolute path conversion (for perforce depot paths) better?

    - by wonderfulthunk
    I need to "blindly" (i.e. without access to the filesystem, in this case the source control server) convert some relative paths to absolute paths. So I'm playing with dotdots and indices. For those that are curious, I have a log file produced by someone else's tool that sometimes outputs relative paths, and for performance reasons I don't want to access the source control server where the paths are located to check if they're valid and more easily convert them to their absolute path equivalents. I've gone through a number of (probably foolish) iterations trying to get it to work - mostly a few variations of iterating over the array of folders and trying delete_at(index) and delete_at(index-1), but my index kept incrementing while I was deleting elements of the array out from under myself, which didn't work for cases with multiple dotdots. Any tips on improving it in general, or specifically the lack of non-consecutive dotdot support, would be welcome. Currently this is working with my limited examples, but I think it could be improved. It can't handle non-consecutive '..' directories, and I am probably doing a lot of wasteful (and error-prone) things that I probably don't need to do because I'm a bit of a hack. I've found a lot of examples of converting other types of relative paths using other languages, but none of them seemed to fit my situation. These are my example paths that I need to convert, from:

        //depot/foo/../bar/single.c
        //depot/foo/docs/../../other/double.c
        //depot/foo/usr/bin/../../../else/more/triple.c

    to:

        //depot/bar/single.c
        //depot/other/double.c
        //depot/else/more/triple.c

    And my script:

        begin
          paths = File.open(ARGV[0]).readlines
          puts(paths)
          new_paths = Array.new
          paths.each { |path|
            folders = path.split('/')
            if ( folders.include?('..') )
              num_dotdots = 0
              first_dotdot = folders.index('..')
              last_dotdot = folders.rindex('..')
              folders.each { |item|
                if ( item == '..' )
                  num_dotdots += 1
                end
              }
              if ( first_dotdot and ( num_dotdots > 0 ) ) # this might be redundant?
                folders.slice!(first_dotdot - num_dotdots..last_dotdot) # dependent on consecutive dotdots only
              end
            end
            folders.map! { |elem|
              if ( elem !~ /\n/ )
                elem = elem + '/'
              else
                elem = elem
              end
            }
            new_paths << folders.to_s
          }
          puts(new_paths)
        end

    Read the article
