Search Results

Search found 11146 results on 446 pages for 'dynamic queries'.

Page 34/446 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • ASP.NET Dynamic Data field value disappears in the browser.

    - by ProfK
    I have an ASP.NET Dynamic Data web application, with an entity called ActivationResource. One of its properties is a CellPhone field. Whenever I open a List or Details view of one of these entities, the cell phone number displays for a moment and then disappears. Does anyone have any ideas as to the cause of this mysterious behavior?


  • How to set an onClick attribute that would load dynamic content?

    - by konzepz
    This code is supposed to add an onClick event to each of the a elements, so that clicking on an element loads the content of the page dynamically into a DIV. Now, I got this far - it will add the onClick event, but how can I load the dynamic content?

        $(document.body).ready(function () {
            $("li.cat-item a").each(function (i) {
                this.setAttribute('onclick', 'alert("[load content dynamically into #preview]")');
            });
        });

    Thank you.


  • What do you call the concept of dynamic data definition?

    - by DJTripleThreat
    Maybe this is simpler and more straightforward than what I'm thinking, but I can't seem to find this concept on Google anywhere. The concept is this: you have a table in a database, and the table has a specified number of columns. However, it has been asked of me by previous clients that there also be a set of dynamic, user-defined columns that can be added on the fly. What is this concept called, and is it considered a design pattern?
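    One widely used name for this is the Entity-Attribute-Value (EAV) model, where the user-defined fields live as rows in a key/value side table next to the fixed schema. A minimal SQL sketch, with illustrative table and column names:

        -- Fixed, schema-defined columns stay on the base table.
        CREATE TABLE entity (
            entity_id INT PRIMARY KEY,
            name      VARCHAR(100) NOT NULL
        );

        -- User-defined "columns" become rows in a key/value table.
        CREATE TABLE entity_attribute (
            entity_id  INT          NOT NULL,
            attr_name  VARCHAR(64)  NOT NULL,  -- the dynamic column's name
            attr_value VARCHAR(255),           -- its value, stored as text
            PRIMARY KEY (entity_id, attr_name),
            FOREIGN KEY (entity_id) REFERENCES entity (entity_id)
        );

        -- Adding a new "column" on the fly is just an INSERT, no ALTER TABLE.
        INSERT INTO entity_attribute (entity_id, attr_name, attr_value)
        VALUES (42, 'favorite_color', 'green');

    The trade-off is that querying EAV data means pivoting rows back into columns, which can get awkward for reporting.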


  • MySQL differences between two SELECT queries

    - by bpmccain
    I have two MySQL queries that each return a column of phone numbers. I am trying to end up with a list of phone numbers that are in one list but not in the other. The two queries I have are:

        SELECT phone
        FROM civicrm_phone phone
        LEFT JOIN civicrm_participant participant
            ON phone.contact_id = participant.contact_id
        WHERE phone.is_primary = 1 AND participant.id IS NULL

    and

        SELECT phone
        FROM civicrm_phone phone
        LEFT JOIN civicrm_participant participant
            ON phone.contact_id = participant.contact_id
        WHERE phone.is_primary = 1 AND participant.id IS NOT NULL

    And before anyone asks: the above two queries do not provide mutually exclusive results (despite the IS NULL / IS NOT NULL in the last WHERE condition), since we have related individuals in the database who use the same phone number but do not necessarily all have a participant.id. Thanks for any help.
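    A minimal sketch of one way to get the numbers that appear in the first result set but not the second: wrap each query as a derived table and anti-join on the phone column (names taken from the question, assuming the selected column is civicrm_phone.phone; untested against the real schema):

        SELECT a.phone
        FROM (
            SELECT phone.phone
            FROM civicrm_phone phone
            LEFT JOIN civicrm_participant participant
                ON phone.contact_id = participant.contact_id
            WHERE phone.is_primary = 1 AND participant.id IS NULL
        ) AS a
        LEFT JOIN (
            SELECT phone.phone
            FROM civicrm_phone phone
            LEFT JOIN civicrm_participant participant
                ON phone.contact_id = participant.contact_id
            WHERE phone.is_primary = 1 AND participant.id IS NOT NULL
        ) AS b ON a.phone = b.phone
        WHERE b.phone IS NULL;  -- keep only phones absent from the second list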


  • Get list of named queries in NHibernate

    - by Dan
    I have a dozen or so named queries in my NHibernate project, and I want to execute them against a test database in unit tests to make sure the syntax still matches the changing domain/database model. Currently I have a unit test for each named query where I get and execute the query, for example:

        IQuery query = session.GetNamedQuery("GetPersonSummaries");
        var personSummaryArray = query.List();
        Assert.That(personSummaryArray, Is.Not.Null);

    This works fine, but I would like to have one unit test that loops through all of the named queries and executes them. Is there a way to discover all of the available named queries? Thanks, Dan


  • How to combine two SQL queries?

    - by plasmuska
    Hi guys, I have a stock table and I would like to create a report that shows how often items were ordered.

    stock table:

        item_id | pcs  | operation
        --------+------+----------
        apples  |  100 | order
        oranges |   50 | order
        apples  | -100 | delivery
        pears   |  100 | order
        oranges |  -40 | delivery
        apples  |   50 | order
        apples  |   50 | delivery

    Basically I need to join these two queries together. A query which prints stock balances:

        SELECT stock.item_id, Sum(stock.pcs) AS stock_balance
        FROM stock
        GROUP BY stock.item_id;

    A query which prints sales statistics:

        SELECT stock.item_id, Sum(stock.pcs) AS pcs_ordered, Count(stock.item_id) AS number_of_orders
        FROM stock
        GROUP BY stock.item_id, stock.operation
        HAVING stock.operation="order";

    I think that some sort of JOIN would do the job, but I have no idea how to glue the queries together. Desired output:

        item_id | stock_balance | pcs_ordered | number_of_orders
        --------+---------------+-------------+-----------------
        apples  |             0 |         150 |                2
        oranges |            10 |          50 |                1
        pears   |           100 |         100 |                1

    This is just an example. Maybe I will need to add more conditions, because there are more columns. Is there a universal technique for combining multiple queries together?
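    A minimal sketch of one way to glue the two aggregates together: treat each query as a derived table and join on item_id (this assumes every item has at least one order row; otherwise the JOIN should become a LEFT JOIN):

        SELECT balances.item_id,
               balances.stock_balance,
               orders.pcs_ordered,
               orders.number_of_orders
        FROM (
            SELECT item_id, SUM(pcs) AS stock_balance
            FROM stock
            GROUP BY item_id
        ) AS balances
        JOIN (
            SELECT item_id, SUM(pcs) AS pcs_ordered, COUNT(*) AS number_of_orders
            FROM stock
            WHERE operation = 'order'   -- filter before grouping instead of HAVING
            GROUP BY item_id
        ) AS orders ON orders.item_id = balances.item_id;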


  • How to prove that using subselect queries in SQL is killing server performance

    - by adopilot
    One of my jobs is to maintain our database. We usually have trouble with poor performance when running reports and working with that database. When I look at the queries our ERP sends to the database, I see a lot of totally needless subselect queries inside the main queries. As I am not a member of the development team that created the program we use, they do not much like it when I criticize their code and work; let's say they do not take my reviews as serious statements. So I am asking you a few questions about subselects in SQL:

    1. Does a subselect take a lot more time than a LEFT OUTER JOIN?
    2. Is there any blog, article, or other source where subselects are recommended against?
    3. How can I prove that a query will be faster if we avoid the subselects?

    Our database server is MSSQL 2005.
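    A minimal sketch of how the difference could be measured on MSSQL 2005, using SET STATISTICS TIME/IO to compare a correlated subselect against the equivalent join; the tables and columns here are made-up stand-ins, not the ERP's real schema:

        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;

        -- Correlated subselect: the inner query may be evaluated per outer row.
        SELECT o.order_id,
               (SELECT c.name
                FROM customers c
                WHERE c.customer_id = o.customer_id) AS customer_name
        FROM orders o;

        -- Equivalent join: the optimizer can choose a single join strategy.
        SELECT o.order_id, c.name AS customer_name
        FROM orders o
        LEFT OUTER JOIN customers c ON c.customer_id = o.customer_id;

        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;

    Comparing the elapsed times and logical reads that SQL Server prints for each statement (or the actual execution plans) gives concrete numbers to show the developers, rather than an argument about style.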


  • Get the AVG of two MS Access SQL queries

    - by reggiereg
    Hi, I'm trying to get the AVERAGE of the results of two separate SQL queries built in MS Access. The first query pulls the largest record:

        SELECT DISTINCTROW Sheet1.Tx_Date, Sheet1.LName, Sheet1.Patient_Name, Sheet1.MRN,
               Max(Sheet1.FEV1_ACT) AS [Max Of FEV1_ACT],
               Max(Sheet1.FEF_25_75_ACT) AS [Max Of FEF_25_75_ACT]
        FROM Sheet1
        GROUP BY Sheet1.Tx_Date, Sheet1.LName, Sheet1.Patient_Name, Sheet1.MRN;

    The second query pulls the second-largest record:

        SELECT Sheet1.MRN, Sheet1.Patient_Name, Sheet1.Lname,
               Max(Sheet1.FEV1_ACT) AS 2ndLrgOfFEV1_ACT,
               Max(Sheet1.FEF_25_75_ACT) AS 2ndLrgOfFEF_25_75_ACT
        FROM Sheet1
        WHERE (((Sheet1.FEV1_ACT) < (SELECT MAX(FEV1_ACT) FROM Sheet1)))
        GROUP BY Sheet1.MRN, Sheet1.Patient_Name, Sheet1.Lname;

    These two queries work great; I just need some help pulling the AVERAGE of the results of these two queries into one. Thanks.
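    A minimal sketch of one way to average the two values per patient, assuming the queries above are saved in Access as qryLargest and qrySecondLargest (hypothetical names); if this version of Access rejects a UNION inside a derived table, the UNION can be saved as its own query first:

        SELECT u.MRN, Avg(u.fev1) AS AvgOfTopTwoFEV1
        FROM (
            SELECT MRN, [Max Of FEV1_ACT] AS fev1 FROM qryLargest
            UNION ALL
            SELECT MRN, [2ndLrgOfFEV1_ACT] AS fev1 FROM qrySecondLargest
        ) AS u
        GROUP BY u.MRN;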


  • NSArray vs. SQLite for Complex Queries on iPhone

    - by GingerBreadMane
    Developing for iPhone, I have a collection of points that I need to run complex queries on. For example: "How many points have a y-coordinate of 10?" and "Return all points with an x-coordinate between 3 and 5 and a y-coordinate of 7." Currently, I am just cycling through each element of an NSArray and checking whether it matches my query. It's a pain to write the queries, though. SQLite would be much nicer. I'm not sure which would be more efficient, though, since a SQLite database resides on disk and not in memory (to my understanding). Would SQLite be as efficient or more efficient here? Or is there a better way to do it, other than these methods, that I haven't thought of? I would need to perform the multiple queries, with multiple sets of points, thousands of times, so the best performance is important.
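    For comparison, a minimal sketch of what those queries might look like in SQLite, with an index to avoid full scans (illustrative schema; note also that SQLite can run entirely in memory via the special ':memory:' database name, which removes the on-disk concern):

        CREATE TABLE points (x INTEGER NOT NULL, y INTEGER NOT NULL);
        CREATE INDEX idx_points_y_x ON points (y, x);  -- serves both queries below

        -- "How many points have a y-coordinate of 10?"
        SELECT COUNT(*) FROM points WHERE y = 10;

        -- "Return all points with an x between 3 and 5 and a y of 7."
        SELECT x, y FROM points WHERE y = 7 AND x BETWEEN 3 AND 5;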


  • CakePHP repeats same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using Containable to be able to filter deeper in the associated models:

        $this->Category->find('all', array(
            'conditions' => array('Category.id' => $category_id),
            'contain' => array(
                'Product' => array(
                    'Stockitem' => array(
                        'conditions' => array('Stockitem.warehouse_id' => $warehouse_id),
                        'Warehouse',
                        'Manufacturer',
                    )
                )
            ),
        ));

    The data structure is returned just fine; however, I get multiple repeating queries like the following - sometimes hundreds of such queries in a row, depending on the dataset:

        SELECT `Warehouse`.`id`, `Warehouse`.`title`
        FROM `beta_warehouses` AS `Warehouse`
        WHERE `Warehouse`.`id` = 2

    Basically, when building the data structure Cake is fetching data from MySQL over and over again, for each row. We have datasets of several thousand rows, and I have a feeling that this is going to impact performance. Is it possible to make it cache results and not repeat the same queries?


  • Dynamic SQL queries in code possible?

    - by SeanD
    Instead of hard-coding SQL queries like

        Select * from users where user_id = 220202

    can these be made dynamic, like

        Select * from $users where $user_id = $input

    The reason I ask is that when changes are needed to table or column names, I could just update them in one place and wouldn't have to ask developers to go line by line to find all the references to update, which is very time-consuming. I also do not like the idea of exposing database details in the code. My major concern is load time. As with dynamic pages, where the database has to fetch the page content, with dynamic queries the system first has to look up the references and then execute the queries - does that impact load times? I am using the CodeIgniter PHP framework. If this is possible, then the next question is where to store all the references - in the app, in a file, in the DB - and how?
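    One database-side option worth noting: a view can serve as the single place where table and column names are mapped, so a rename only has to be fixed in the view definition rather than throughout the code. A minimal sketch with illustrative names:

        -- Application code always queries the view...
        CREATE VIEW app_users AS
        SELECT user_id, user_name
        FROM users;

        SELECT * FROM app_users WHERE user_id = 220202;

        -- ...so if the base table or its columns are renamed, only the
        -- view definition changes, e.g.:
        -- CREATE OR REPLACE VIEW app_users AS
        -- SELECT id AS user_id, name AS user_name FROM members;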


  • Joining two select queries and ordering results

    - by user1
    Basically I'm just unsure as to why this query is failing to execute:

        (SELECT replies.reply_post, replies.reply_content, replies.reply_date AS d, members.username
         FROM (replies) AS a
         INNER JOIN members ON replies.reply_by = members.id)
        UNION
        (SELECT posts.post_id, posts.post_title, posts.post_date AS d, members.username
         FROM (posts) as b
         WHERE posts.post_set = 0
         INNER JOIN members ON posts.post_by = members.id)
        ORDER BY d DESC
        LIMIT 5

    I'm getting this error:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'a INNER JOIN members ON replies.re' at line 2

    All I'm trying to do is select the 5 most recent rows (by date) from these two tables. I've tried JOIN, UNION, etc., and I've seen numerous queries where people have put another query after the FROM statement, which just makes no logical sense to me. Am I safe to say that you can join the same table from two different but unioned queries? Or am I taking completely the wrong approach? Frankly, I can't see how this query is failing, despite reading the error message. (The two queries on their own work fine.)
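    A minimal sketch of a corrected version: the (table) AS alias constructions are dropped (MySQL does not accept a bare parenthesized table name with an alias there, which is where the parser stops), and the WHERE clause is moved after the JOIN (untested against the real schema):

        (SELECT replies.reply_post, replies.reply_content, replies.reply_date AS d, members.username
         FROM replies
         INNER JOIN members ON replies.reply_by = members.id)
        UNION
        (SELECT posts.post_id, posts.post_title, posts.post_date AS d, members.username
         FROM posts
         INNER JOIN members ON posts.post_by = members.id
         WHERE posts.post_set = 0)
        ORDER BY d DESC
        LIMIT 5;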


  • Orange Brightbox and NO-IP.com

    - by JSweete
    Strange one here. I didn't know where to ask, and I know this is a developer resource, but I was hoping that with everyone's tech know-how someone might have a solution for my problem. I had an Orange Livebox before, and in the dynamic DNS settings it had no-ip.com as a drop-down option, with login variables to update my account with a dynamic IP address. This worked great for years. However, my Livebox died and I now have an Orange Brightbox, and this doesn't have no-ip.com as a login update option for dynamic DNS on the router. Does anyone have any idea how I can get my domain to point to my home server with a dynamic IP address, ideally for free? This is merely for testing and to have a backup server for my main remote server.


  • How to do inclusive range queries when only half-open range is supported (ala SortedMap.subMap)

    - by polygenelubricants
    On SortedMap.subMap

    This is the API for SortedMap<K,V>.subMap:

        SortedMap<K,V> subMap(K fromKey, K toKey)

    It returns a view of the portion of this map whose keys range from fromKey, inclusive, to toKey, exclusive. This inclusive lower bound, exclusive upper bound combo ("half-open range") is something that is prevalent in Java, and while it does have its benefits, it also has its quirks, as we shall soon see. The following snippet illustrates a simple usage of subMap:

        static <K,V> SortedMap<K,V> someSortOfSortedMap() {
            return Collections.synchronizedSortedMap(new TreeMap<K,V>());
        }
        //...
        SortedMap<Integer,String> map = someSortOfSortedMap();
        map.put(1, "One");
        map.put(3, "Three");
        map.put(5, "Five");
        map.put(7, "Seven");
        map.put(9, "Nine");
        System.out.println(map.subMap(0, 4));  // prints "{1=One, 3=Three}"
        System.out.println(map.subMap(3, 7));  // prints "{3=Three, 5=Five}"

    The last line is important: 7=Seven is excluded, due to the exclusive upper bound nature of subMap. Now suppose that we actually need an inclusive upper bound; then we could try to write a utility method like this:

        static <V> SortedMap<Integer,V> subMapInclusive(SortedMap<Integer,V> map, int from, int to) {
            return (to == Integer.MAX_VALUE)
                ? map.tailMap(from)
                : map.subMap(from, to + 1);
        }

    Then, continuing on with the above snippet, we get:

        System.out.println(subMapInclusive(map, 3, 7));  // prints "{3=Three, 5=Five, 7=Seven}"
        map.put(Integer.MAX_VALUE, "Infinity");
        System.out.println(subMapInclusive(map, 5, Integer.MAX_VALUE));
        // {5=Five, 7=Seven, 9=Nine, 2147483647=Infinity}

    A couple of key observations need to be made:

    - The good news is that we don't care about the type of the values, but...
    - subMapInclusive assumes Integer keys for to + 1 to work. A generic version that also takes e.g. Long keys is not possible (see related questions).
    - Not to mention that for Long, we would need to compare against Long.MAX_VALUE instead.
    - Overloads with the numeric primitive boxed types Byte, Character, etc. as keys must all be written individually.
    - A special check needs to be made for toInclusive == Integer.MAX_VALUE, because +1 would overflow, and subMap would throw IllegalArgumentException: fromKey > toKey.
    - This, generally speaking, is an overly ugly and overly specific solution.
    - What about String keys? Or some unknown type that may not even be Comparable<?>?

    So the question is: is it possible to write a general subMapInclusive method that takes a SortedMap<K,V>, a K fromKey, and a K toKey, and performs an inclusive-range subMap query?

    Related questions:

    - Are upper bounds of indexed ranges always assumed to be exclusive?
    - Is it possible to write a generic +1 method for numeric box types in Java?

    On NavigableMap

    It should be mentioned that there's a NavigableMap.subMap overload that takes two additional boolean arguments to signify whether each bound is inclusive or exclusive. Had this been made available in SortedMap, then none of the above would even have been asked. So working with a NavigableMap<K,V> for inclusive range queries would be ideal, but while Collections provides utility methods for SortedMap (among other things), we aren't afforded the same luxury with NavigableMap.

    Related questions:

    - Writing a synchronized thread-safety wrapper for NavigableMap

    On an API providing only exclusive upper bound range queries

    Does this highlight a problem with exclusive upper bound range queries? How were inclusive range queries done in the past, when an exclusive upper bound was the only available functionality?


  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use, code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.

    So hard linking the DLL can be problematic for a number of reasons, but the primary one is to not force loading of JSON.NET unless you actually use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.

    But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense.
    - The host application requires a reference to JSON.NET, or else it gets runtime errors.

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with it. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.

    So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load an assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
    Specifically, it doesn't deal with object activation, truly dynamic (string-based) member activation, or access to non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process: loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly first. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet, since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects assemblies are loaded just in time, only when they are accessed.

    Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

        /// <summary>
        /// Creates an instance of a type based on a string. Assumes that the type's
        /// </summary>
        /// <param name="typeName">Common name of the type</param>
        /// <param name="args">Any constructor parameters</param>
        /// <returns></returns>
        public static object CreateInstanceFromString(string typeName, params object[] args)
        {
            object instance = null;
            Type type = null;
            try
            {
                type = GetTypeFromName(typeName);
                if (type == null)
                    return null;
                instance = Activator.CreateInstance(type, args);
            }
            catch
            {
                return null;
            }
            return instance;
        }

        /// <summary>
        /// Helper routine that looks up a type name and tries to retrieve the
        /// full type reference in the actively executing assemblies.
        /// </summary>
        /// <param name="typeName"></param>
        /// <returns></returns>
        public static Type GetTypeFromName(string typeName)
        {
            Type type = null;

            // Let default name binding find it
            type = Type.GetType(typeName, false);
            if (type != null)
                return type;

            // look through assembly list
            var assemblies = AppDomain.CurrentDomain.GetAssemblies();

            // try to find manually
            foreach (Assembly asm in assemblies)
            {
                type = asm.GetType(typeName, false);
                if (type != null)
                    break;
            }
            return type;
        }

    To use this for loading JSON.NET, I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached).
    Here's what the factory function looks like in JsonSerializationUtils:

        /// <summary>
        /// Dynamically creates an instance of JSON.NET
        /// </summary>
        /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
        /// <returns>Dynamic JsonSerializer instance</returns>
        public static dynamic CreateJsonNet(bool throwExceptions = true)
        {
            if (JsonNet != null)
                return JsonNet;

            lock (SyncLock)
            {
                if (JsonNet != null)
                    return JsonNet;

                // Try to create instance
                dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

                if (json == null)
                {
                    try
                    {
                        var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                        json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                    }
                    catch (Exception ex)
                    {
                        if (throwExceptions)
                            throw;
                        return null;
                    }
                }

                if (json == null)
                    return null;

                json.ReferenceLoopHandling =
                    (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

                // Enums as strings in JSON
                dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
                json.Converters.Add(enumConverter);

                JsonNet = json;
            }

            return JsonNet;
        }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and, when that fails, tries to load the assembly and then re-tries loading.

    Once the instance is loaded, some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.

    Another feature I need is to have enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.

    As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file.

    Here's what the file serialization looks like:

        /// <summary>
        /// Serializes an object instance to a JSON file.
        /// </summary>
        /// <param name="value">the value to serialize</param>
        /// <param name="fileName">Full path to the file to write out with JSON.</param>
        /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
        /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
        /// <returns>true or false</returns>
        public static bool SerializeToFile(object value, string fileName,
                                           bool throwExceptions = false, bool formatJsonOutput = false)
        {
            dynamic writer = null;
            FileStream fs = null;
            try
            {
                Type type = value.GetType();

                var json = CreateJsonNet(throwExceptions);
                if (json == null)
                    return false;

                fs = new FileStream(fileName, FileMode.Create);
                var sw = new StreamWriter(fs, Encoding.UTF8);

                writer = Activator.CreateInstance(JsonTextWriterType, sw);

                if (formatJsonOutput)
                    writer.Formatting = (dynamic) Enum.Parse(FormattingType, "Indented");

                writer.QuoteChar = '"';
                json.Serialize(writer, value);
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return false;
            }
            finally
            {
                if (writer != null)
                    writer.Close();
                if (fs != null)
                    fs.Close();
            }
            return true;
        }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter, which writes the output to disk. Although this code is dynamic, it's still fairly short and readable.

    For full-circle operation, here's the DeserializeFromFile() version:

        /// <summary>
        /// Deserializes an object from file and returns a reference.
        /// </summary>
        /// <param name="fileName">name of the file to serialize to</param>
        /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
        /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param>
        /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
        /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
        public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
        {
            dynamic json = CreateJsonNet(throwExceptions);
            if (json == null)
                return null;

            object result = null;
            dynamic reader = null;
            FileStream fs = null;
            try
            {
                fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
                var sr = new StreamReader(fs, Encoding.UTF8);
                reader = Activator.CreateInstance(JsonTextReaderType, sr);
                result = json.Deserialize(reader, objectType);
                reader.Close();
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return null;
            }
            finally
            {
                if (reader != null)
                    reader.Close();
                if (fs != null)
                    fs.Close();
            }
            return result;
        }

    This code is a little more compact, since there are no prettifying options to set.
    Here JsonTextReader is created dynamically, and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.

    These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code to make this work now. The whole provider looks like this:

        /// <summary>
        /// Reads and Writes configuration settings in .NET config files and
        /// sections. Allows reading and writing to default or external files
        /// and specification of the configuration section that settings are
        /// applied to.
        /// </summary>
        public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
            where TAppConfiguration : AppConfiguration, new()
        {
            /// <summary>
            /// Optional - the Configuration file where configuration settings are
            /// stored in. If not specified uses the default Configuration Manager
            /// and its default store.
            /// </summary>
            public string JsonConfigurationFile
            {
                get { return _JsonConfigurationFile; }
                set { _JsonConfigurationFile = value; }
            }
            private string _JsonConfigurationFile = string.Empty;

            public override bool Read(AppConfiguration config)
            {
                var newConfig = JsonSerializationUtils.DeserializeFromFile(
                                    JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
                if (newConfig == null)
                {
                    if (Write(config))
                        return true;
                    return false;
                }

                DecryptFields(newConfig);
                DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
                return true;
            }

            /// <summary>
            /// Return
            /// </summary>
            /// <typeparam name="TAppConfig"></typeparam>
            /// <returns></returns>
            public override TAppConfig Read<TAppConfig>()
            {
                var result = JsonSerializationUtils.DeserializeFromFile(
                                 JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
                if (result != null)
                    DecryptFields(result);
                return result;
            }

            /// <summary>
            /// Write configuration to XmlConfigurationFile location
            /// </summary>
            /// <param name="config"></param>
            /// <returns></returns>
            public override bool Write(AppConfiguration config)
            {
                EncryptFields(config);

                bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

                // Have to decrypt again to make sure the properties are readable afterwards
                DecryptFields(config);

                return result;
            }
        }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing three methods will do in most cases.

    Note this code doesn't have any dynamic dependencies - all of that is abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.

    Already, there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
    And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right... In performing some informal testing, it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said, though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component, speed is not that important, because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. For operations that are not pivotal to a component or application and are only used under certain circumstances, however, dynamic loading can be beneficial, avoiding having to ship extra files and weigh down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful...

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET, C#


  • Resolving DNS queries for two disconnected, private, networks

    - by Mikeage
    I'm trying to set up two PCs (one Windows, one Linux, but my understanding is that this problem is more about DNS and less about the OS) as follows:

    Home network: 192.168.1.0/24
    VPN (via an OpenVPN server not within the home network): 192.168.2.0/24

    I would like a PC on both networks to be able to access three different types of site:

    1. Internet addresses
    2. Addresses on the home network
    3. Addresses on the VPN

    However, I'm not sure how/which DNS servers to use. If I prioritize my home DNS server, I can resolve (1) and (2), but not (3). If I prioritize my VPN DNS server, I can't resolve addresses of type (2). Of course, looking up addresses via nslookup and explicitly setting the correct server works, so I know my local DNS servers are OK. Is there any way I can set up my PCs to fall back on the second DNS server if there is no response? Alternatively, is there any way I can tell different queries to go to different servers [maybe by setting up different subdomains; foo.local.something vs. bar.vpn.something]? Thanks


  • Activity monitor is unable to execute queries against server

    - by mika
    SQL Server Activity Monitor fails with an error dialog:

        TITLE: Microsoft SQL Server Management Studio

        The Activity Monitor is unable to execute queries against server [SERVER].
        Activity Monitor for this instance will be placed into a paused state.
        Use the context menu in the overview pane to resume the Activity Monitor.

        ADDITIONAL INFORMATION:
        Unable to find SQL Server process ID [PID] on server [SERVER]
        (Microsoft.SqlServer.Management.ResourceMonitoring)

    I have this problem on SQL Server 2008 R2 x64 Developer Edition, but I think it is found on all 64-bit systems using SQL Server 2008, under some as-yet-unidentified conditions. There is a bug report on this in Microsoft Connect. It seems that the problem is not solved yet.


  • Simple SQL Server 2005 Replication - "D-1" server used for heavy queries/reports

    - by Ricardo Pardini
    Hello. We have two SQL 2005 machines. One is used for production data, and the other is used for running queries/reports. Every night, the production machine dumps (backs up) its database to disk, and the other one restores it. This is called the D-1 process. I think there must be a more efficient way of doing this, since SQL 2005 has many forms of replication. Some requirements:

    1) No need for instant replication; there can be some delay
    2) All changes (including schemas, data, constraints, indexes) need to be replicated without manual intervention
    3) It is used for a single database only
    4) There is a third server available if needed
    5) There is high bandwidth (gigabit ethernet) available between the servers
    6) There isn't shared storage (a SAN) available

    What would be a good alternative to this daily backup/restore routine? Thanks!


  • Simple queries occasionally running very slowly

    - by Johan
    I have some very simple queries that occasionally run very slowly. The table viewed_sites has about 10-20 rows. Running EXPLAIN ANALYZE always gives a runtime of less than 3 milliseconds. When the query is run automatically (every 10 seconds) it occasionally takes over a second to run.

    The query:

        INSERT INTO ga.viewed_sites (site_id) VALUES ('gop2')

    The table:

        CREATE TABLE viewed_sites (
            site_id character varying(4) NOT NULL,
            last_viewed timestamp with time zone DEFAULT now() NOT NULL
        );

    The (occasional) log result:

        2010-05-24 15:47:55 UTC LOG: duration: 1044.632 ms statement: INSERT INTO ga.viewed_sites (site_id) VALUES ('gop2')

    It's a horribly vague question, but what could be causing this? I suppose it comes down to CPU, RAM, HDD, or some combination of the above.

    PostgreSQL 8.3, Ubuntu 8.04
    Intel(R) Core(TM)2 Duo CPU E6750 @ 2.66GHz
    2 GiB RAM
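    One frequent cause of this pattern (a trivial statement that stalls every so often) is checkpoint activity flushing dirty buffers to disk. A minimal sketch of how to inspect the relevant settings and catch slow statements in the log; pg_settings is a standard system view, and the 500 ms threshold is just an illustrative choice:

        -- Show checkpoint-related settings; infrequent, large checkpoints
        -- can stall even tiny INSERTs while buffers are flushed.
        SELECT name, setting
        FROM pg_settings
        WHERE name LIKE 'checkpoint%';

        -- Log any statement slower than 500 ms for the current session,
        -- to correlate the stalls with checkpoint timing (needs privileges).
        SET log_min_duration_statement = 500;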

