Search Results

Search found 21301 results on 853 pages for 'duplicate values'.


  • how to architect this to make it unit testable

    - by SOfanatic
    I'm currently working on a project where I'm receiving an object via a web service (WSDL). The overall process is the following: receive the object, add/delete/update parts (or all) of it, and return the object with the changes made. The thing is that sometimes these changes are complicated and there is some logic involved, other databases, other web services, etc., so to facilitate this I'm creating a custom object that mimics the original one but has some enhanced functionality to make some things easier. So I'm trying to have this process: receive the original object, convert/copy it to the custom object, add/delete/update, convert/copy it back to the original object, and return the original object. Example:

        public class Row
        {
            public List<Field> Fields { get; set; }
            public string RowId { get; set; }

            public Row()
            {
                this.Fields = new List<Field>();
            }
        }

        public class Field
        {
            public string Number { get; set; }
            public string Value { get; set; }
        }

    So, for example, one of the "actions" to perform on this would be to find all Fields in a Row whose Value equals something and update them with some other value. I have a CustomRow class that represents the Row class; how can I make this class unit testable? Do I have to create an interface ICustomRow to mock it in the unit test? If one of the actions is to sum all of the Values in the Fields that have a Number equal to 10, like this function, how can I design the custom class to facilitate unit tests? Sample function:

        public int Sum(FieldNumber number)
        {
            return row.Fields.Where(x => x.FieldNumber.Equals(number)).Sum(x => x.FieldValue);
        }

    Am I approaching this the wrong way?
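
    One common way to make this kind of wrapper testable is to put its behaviour behind an interface and inject the underlying Row, so the logic can be exercised against hand-built data. A minimal sketch, assuming the Row/Field classes from the question (the ICustomRow interface and the parsing of Value as an int are assumptions for illustration, not the poster's actual design):

        // Minimal sketch only; assumes the Row and Field classes shown above.
        using System.Linq;

        public interface ICustomRow
        {
            int Sum(string fieldNumber);
        }

        public class CustomRow : ICustomRow
        {
            private readonly Row _row;

            // The wrapped Row is injected, so tests can pass an in-memory instance
            // instead of anything that came from the web service.
            public CustomRow(Row row)
            {
                _row = row;
            }

            public int Sum(string fieldNumber)
            {
                // Sum the numeric Values of all Fields whose Number matches.
                return _row.Fields
                           .Where(f => f.Number == fieldNumber)
                           .Sum(f => int.Parse(f.Value));
            }
        }

        // A unit test then needs no web service at all:
        // var row = new Row();
        // row.Fields.Add(new Field { Number = "10", Value = "4" });
        // Assert.AreEqual(4, new CustomRow(row).Sum("10"));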

    Read the article

  • Loading Entities Dynamically with Entity Framework

    - by Ricardo Peres
    Sometimes we may be faced with the need to load entities dynamically, that is, knowing their Type and the value(s) of the property(ies) representing the primary key. One way to achieve this is by using the following extension methods for ObjectContext (which can be obtained from a DbContext, of course):

        public static class ObjectContextExtensions
        {
            public static Object Load(this ObjectContext ctx, Type type, params Object[] ids)
            {
                Object p = null;

                EntityType ospaceType = ctx.MetadataWorkspace.GetItems<EntityType>(DataSpace.OSpace).SingleOrDefault(x => x.FullName == type.FullName);

                List<String> idProperties = ospaceType.KeyMembers.Select(k => k.Name).ToList();

                List<EntityKeyMember> members = new List<EntityKeyMember>();

                EntitySetBase collection = ctx.MetadataWorkspace.GetEntityContainer(ctx.DefaultContainerName, DataSpace.CSpace).BaseEntitySets.Where(x => x.ElementType.FullName == type.FullName).Single();

                for (Int32 i = 0; i < ids.Length; ++i)
                {
                    members.Add(new EntityKeyMember(idProperties[i], ids[i]));
                }

                EntityKey key = new EntityKey(String.Concat(ctx.DefaultContainerName, ".", collection.Name), members);

                if (ctx.TryGetObjectByKey(key, out p) == true)
                {
                    return (p);
                }

                return (p);
            }

            public static T Load<T>(this ObjectContext ctx, params Object[] ids)
            {
                return ((T)Load(ctx, typeof(T), ids));
            }
        }

    This will work both with single-property and with composite primary keys, but you will have to supply each of the corresponding values in the appropriate order. Hope you find this useful!
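
    For completeness, a hypothetical call site might look like this (the Product entity, the dbContext variable and the key value 42 are invented for illustration; the ObjectContext is obtained from a DbContext as the post mentions):

        // Hypothetical usage, assuming an entity type Product with a single integer key.
        ObjectContext ctx = ((IObjectContextAdapter)dbContext).ObjectContext;

        // Non-generic overload: the entity type is decided at runtime.
        Object entity = ctx.Load(typeof(Product), 42);

        // Generic overload: the same lookup, strongly typed.
        Product product = ctx.Load<Product>(42);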

    Read the article

  • Where should instantiated classes be stored?

    - by Eric C.
    I'm having a bit of a design dilemma here. I'm writing a library that consists of a bunch of template classes that are designed to be used as a base for creating content. For example:

        public class Template
        {
            public string Name { get; set; }
            public string Description { get; set; }
            public string Attribute1 { get; set; }
            public string Attribute2 { get; set; }

            public Template()
            {
                //constructor
            }

            public void DoSomething()
            {
                //does something
            }
            ...
        }

    The problem is, not only is the library providing the templates, it will also supply quite a few predefined templates which are instances of these template classes. The question is, where do I put these instances of the templates? The three solutions I've come up with so far are:

    1) Provide serialized instances of the templates as files. On the one hand, this solution would keep the instances separated from the library itself, which is nice, but it would also potentially add complexity for the user. Even if we provided methods for loading/deserializing the files, they'd still have to deal with a bunch of files, and some kind of config file so the app knows where to look for those files. Plus, creating the template files would probably require a separate app, so if the user wanted to stick with the files method of storing templates, we'd have to provide some kind of app for creating the template files. Also, this requires external dependencies for testing the templates in the user's code.

    2) Add read-only instances to the template class. Example:

        public class Template
        {
            public string Name { get; set; }
            public string Description { get; set; }
            public string Attribute1 { get; set; }
            public string Attribute2 { get; set; }

            public Template PredefinedTemplate
            {
                get
                {
                    Template templateInstance = new Template();
                    templateInstance.Name = "Some Name";
                    templateInstance.Description = "A description";
                    ...
                    return templateInstance;
                }
            }

            public Template()
            {
                //constructor
            }

            public void DoSomething()
            {
                //does something
            }
            ...
        }

    This method would be convenient for users, as they would be able to access the predefined templates in code directly, and would be able to unit test code that used them. The drawback here is that the predefined templates pollute the Template type namespace with a bunch of extra stuff. I suppose I could put the predefined templates in a different namespace to get around this drawback. The only other problem with this approach is that I'd have to basically duplicate all the namespaces in the library in the predefined namespace (e.g. Templates.SubTemplates and Predefined.Templates.SubTemplates), which would be a pain and would also make refactoring more difficult.

    3) Make the templates abstract classes and make the predefined templates inherit from those classes. For example:

        public abstract class Template
        {
            public string Name { get; set; }
            public string Description { get; set; }
            public string Attribute1 { get; set; }
            public string Attribute2 { get; set; }

            public Template()
            {
                //constructor
            }

            public void DoSomething()
            {
                //does something
            }
            ...
        }

    and

        public class PredefinedTemplate : Template
        {
            public PredefinedTemplate()
            {
                this.Name = "Some Name";
                this.Description = "A description";
                this.Attribute1 = "Some Value";
                ...
            }
        }

    This solution is pretty similar to #2, but it ends up creating a lot of classes that don't really do anything (none of our predefined templates currently override behavior) and don't have any methods, so I'm not sure how good a practice it is. Has anyone else had any experience with something like this?
    Is there a best practice of some kind, or a different/better approach that I haven't thought of? I'm kind of banging my head against a wall trying to figure out the best way to go. Thanks!
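
    A variant of option 2 that the question hints at (the separate namespace) is to keep Template itself clean and expose the instances from a dedicated static class; a rough sketch, with every name invented purely for illustration:

        // Illustrative sketch only: predefined instances live in their own static
        // class (and namespace), so the Template type itself stays uncluttered.
        // Assumes a using directive for the namespace that contains Template.
        namespace MyLibrary.Predefined
        {
            public static class PredefinedTemplates
            {
                // Each call returns a fresh copy, so callers cannot mutate a shared instance.
                public static Template SomeTemplate()
                {
                    return new Template
                    {
                        Name = "Some Name",
                        Description = "A description",
                        Attribute1 = "Some Value"
                    };
                }
            }
        }

        // Usage in client code (and in unit tests, with no files involved):
        // var template = MyLibrary.Predefined.PredefinedTemplates.SomeTemplate();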

    Read the article

  • Enterprise Portal Issue with the Ax Demo VPCs

    - by ssmantha
    Microsoft's Ax Demo VPC is configured for a static IP address, 192.168.0.1, because the VPC has a Domain Controller configured in it, which requires a static IP. When we put this VPC on a network with a different subnet and change the IP, you can observe that the sites http://sharepoint and http://sharepoint/EP cease to function and show "Page Not Found" errors in the browser. This is mainly due to the DNS configuration, which is not updated. Below is a screenshot of the changes that need to be made to get the site functioning properly. Change the following entries in the Forward Lookup Zones of DNS management: the websites default, SharePoint and projectserver are all mapped to a single port in IIS, i.e. port number 80, and are distinguished by host headers. These host headers end up with incorrect IP address entries in DNS when you change the IP address of the VPC. Just change these values to point to the local loopback adapter (127.0.0.1) and change the DNS to point to this address in the TCP/IP properties as shown below. This will resolve the issue with the website rendering. Initially you may get timeout errors while browsing these websites; be patient and try again, and it will work.

    Read the article

  • Why some recovery tools are still able to find deleted files after I purge Recycle Bin, defrag the disk and zero-fill free space?

    - by Ivan
    As far as I understand, when I delete a file (without using the Recycle Bin), its record is removed from the file system table of contents (FAT/MFT/etc.) but the values of the disk sectors which were occupied by the file remain intact until these sectors are reused to write something else. When I use some sort of erased-files recovery tool, it reads those sectors directly and tries to build up the original file. In this case, what I can't understand is why recovery tools are still able to find deleted files (with a reduced chance of rebuilding them, though) after I defragment the drive and overwrite all the free space with zeros. Can you explain this? I thought zero-overwritten deleted files could only be found by means of special forensic-lab magnetic scanning hardware, and that those complex wiping algorithms (overwriting free space multiple times with random and non-random patterns) only make sense to prevent such a physical scan from succeeding, but practically it seems that a plain zero-fill is not enough to wipe all the traces of deleted files. How can this be?

    Read the article

  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing design according to particular use cases (by the tool builder). Text editors are probably the most prominent example--a coder who works on Windows at work and codes in Haskell on a Mac at home values cross-platform support and compiler integration, and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg). I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counterexample--some task (that's relevant and arises in a programmer's usual workflow) that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data--not that git is better, etc. My guess is that such counterexamples exist, hence this question.

    Read the article

  • Modular Architecture for Processing Pipeline

    - by anjruu
    I am trying to design the architecture of a system that I will be implementing in C++, and I was wondering if people could think of a good approach, or critique the approach that I have designed so far. First of all, the general problem is an image processing pipeline. It contains several stages, and the goal is to design a highly modular solution, so that any of the stages can be easily swapped out and replaced with a piece of custom code (so that the user can get a speed increase if s/he knows that a certain stage is constrained in a certain way in his or her problem). The current thinking is something like this:

        struct output; /* Contains the output values from the pipeline. */

        class input_routines
        {
        public:
            virtual foo stage1(...) {...}
            virtual bar stage2(...) {...}
            virtual qux stage3(...) {...}
            ...
        };

        output pipeline(input_routines stages);

    This would allow people to subclass input_routines and override whichever stage they wanted. That said, I've worked in systems like this before, and I find that the subclassing and the default stuff tend to get messy and can be difficult to use, so I'm not giddy about writing one myself. I was also thinking about a more STLish approach, where the different stages (there are 6 or 7) would be defaulted template parameters. Can anyone offer a critique of the pattern above, thoughts on the template approach, or any other architecture that comes to mind?

    Read the article

  • Program to dump ID3 tag structure

    - by grawity
    Is there a program that would dump the complete structure of ID3v2 tags? Not just the frame names and values, but full information such as frame order, text encoding, description encoding (for TXXX frames), presence of unsynchronization, presence of multiple tags... Background: I'm rather curious why some files are incompatible with some programs. For example, some ID3v2.4 tags written by foobar2000 are not read by Winamp; editing with Mutagen fixes them, but editing with foobar2000 breaks them again. It's not the version or data encoding – most other v2.4 UTF-16 tags work fine... However, if I use foobar2000 to convert the tags to v2.3, then back to v2.4, they start working fine in Winamp – this last bit just does not make any sense. Edit: Linux and/or Windows.

    Read the article

  • Should `keepalive_timeout` be removed from Nginx config?

    - by Bryson
    Which is the better configuration/optimization: to explicitly limit keepalive_timeout, or to allow Nginx to kill keepalive connections on its own? I have seen two conflicting recommendations regarding the keepalive_timeout directive for Nginx. They are as follows:

        # How long to allow each connection to stay idle; longer values are better
        # for each individual client, particularly for SSL, but means that worker
        # connections are tied up longer. (Default: 65)
        keepalive_timeout 20;

    and

        # You should remove keepalive_timeout from your formula.
        # Nginx closes keepalive connections when the
        # worker_connections limit is reached.

    The Nginx documentation for keepalive_timeout makes no mention of the automatic killing, and I have only seen this recommendation once, but it intrigues me. This server serves exclusively TLS-secured connections, and all non-encrypted connections are immediately rerouted to the https:// version of the same URL.

    Read the article

  • Generating HTML Help files based on XML documentation

    - by geekrutherford
    Since discovering the XML commenting features built into .NET years ago, I have been using them to help make my code more readable and simpler for other developers to understand exactly what the code is doing. Entering /// before a line of code causes Visual Studio to insert "summary" tags. It also results in additional tags being generated if you are commenting a method with parameters and a return type. I already knew that IntelliSense would pick up these comments and display them when coding and selecting properties, methods, etc. from a class. I also knew that you could set Visual Studio to generate an XML file containing said comments. Only recently did I begin to wonder if I could generate some kind of readable help files based on these comments I so diligently added. After searching the web I came across NDoc, an open source project which creates documentation for you based on the XML files generated by Visual Studio. Unfortunately, NDoc has become stale and is no longer supported (the last release was back in 2005). Fortunately there is a little-known tool from Microsoft themselves called "Sandcastle Help File Builder". This nifty little tool gives you a graphical interface that allows you to specify multiple DLL and XML files from which to generate an MSDN-like HTML Help file for your own projects! You can check it out here: http://shfb.codeplex.com/ If you are curious how to set Visual Studio to generate the above-referenced XML documentation files, simply go to your project's property page and edit as shown below (my paths are specific, you can leave yours at the default values):
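
    As a quick illustration, the comments the article describes look like this in code (the method itself is just an invented example):

        /// <summary>
        /// Calculates the total price of an order, including tax.
        /// </summary>
        /// <param name="subtotal">The pre-tax order amount.</param>
        /// <param name="taxRate">The tax rate as a fraction, e.g. 0.07 for 7%.</param>
        /// <returns>The order total with tax applied.</returns>
        public decimal CalculateTotal(decimal subtotal, decimal taxRate)
        {
            return subtotal * (1 + taxRate);
        }

    With the project's XML documentation file option enabled, these tags end up in the generated .xml file that Sandcastle Help File Builder consumes alongside the compiled DLL.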

    Read the article

  • Is it possible to sync specific router settings across multiple routers?

    - by Betard Fooser
    I recently purchased a second Linksys wireless router, set up on the other side of my home, and I am wondering if it is possible to somehow sync particular settings between the two. For instance, what I am really after is the "MAC Filter List". I would "like" to be able to maintain the list on both routers without having to manually type in the field values. Maybe this isn't possible, or has an easy answer, but hopefully those of you who know will cut me a bit of slack. I tried to "google" the answer to this, of course, but it seems any searches with the words "sync" and "router" and/or "wifi" result in pages of people having issues with syncing their iOS devices over wifi. I would say that I have a decent amount of networking knowledge with regard to average home networks, and I imagine in larger businesses/corporations they must have a "simpler" way of maintaining things like this. Any insight to point me in the right direction will be much appreciated.

    Read the article

  • UTF-8 bit representation

    - by Yanick Rochon
    I'm learning about the UTF-8 standard and this is what I'm learning (definition and bytes used):

        UTF-8 binary representation              Meaning
        0xxxxxxx                                 1 byte for 1 to 7 bit chars
        110xxxxx 10xxxxxx                        2 bytes for 8 to 11 bit chars
        1110xxxx 10xxxxxx 10xxxxxx               3 bytes for 12 to 16 bit chars
        11110xxx 10xxxxxx 10xxxxxx 10xxxxxx      4 bytes for 17 to 21 bit chars

    And I'm wondering: why isn't the 2-byte UTF-8 code 10xxxxxx instead, thus gaining 1 bit all the way up to 22 bits with a 4-byte UTF-8 code? The way it is right now, 64 possible values are lost (from 10000000 to 10111111). I'm not trying to argue the standards, but I'm wondering why this is so?

    ** EDIT ** Even, why isn't it

        UTF-8 binary representation              Meaning
        0xxxxxxx                                 1 byte for 1 to 7 bit chars
        110xxxxx xxxxxxxx                        2 bytes for 8 to 13 bit chars
        1110xxxx xxxxxxxx xxxxxxxx               3 bytes for 14 to 20 bit chars
        11110xxx xxxxxxxx xxxxxxxx xxxxxxxx      4 bytes for 21 to 27 bit chars

    ...? Thanks!
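
    As a side note, a short C# snippet (purely illustrative, not part of the question) makes the lead byte and the 10xxxxxx continuation byte visible for a 2-byte character:

        using System;
        using System.Linq;
        using System.Text;

        class Utf8Demo
        {
            static void Main()
            {
                // "é" is U+00E9, an 8-bit code point, so it needs the 2-byte form 110xxxxx 10xxxxxx.
                byte[] bytes = Encoding.UTF8.GetBytes("é");

                // Prints: 11000011 10101001
                Console.WriteLine(string.Join(" ",
                    bytes.Select(b => Convert.ToString(b, 2).PadLeft(8, '0'))));
            }
        }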

    Read the article

  • Change XAMPP's htdocs web root folder to another one

    - by vitto
    I'm trying to change XAMPP's default web root directory from /opt/lampp/htdocs to another one, like /home/me/Dropbox/public_html, without success. I've edited the file /opt/lampp/etc/httpd.conf:

        # old line: DocumentRoot "/opt/lampp/htdocs"
        DocumentRoot "/home/me/Dropbox/public_html"
        #...etc...
        # old line: <Directory "/opt/lampp/htdocs">
        <Directory "/home/me/Dropbox/Work/public_html">
        #
        # Possible values for the Options directive are "None", "All",
        # or any combination of:
        #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
        # etc...

    I did this as described in this article: Using Ubuntu One to synchronise htdocs? Then I restarted Apache and got a 403 permission error on every page I requested with the web browser. So I changed the folder and file permissions to 755. I did this as described in this article: What file permissions should I set on web root? The problem remains the same: I get the 403 error on every page I try to reach with the web browser. I have the same problem on a Mac using XAMPP. Everything works fine if the folder remains the original /opt/lampp/htdocs. How can I change it correctly?

    Read the article

  • How can I regress a number series in Excel?

    - by jcollum
    I'd like to use these data to derive an equation using Excel:

        x    y
        300  13
        310  12.6
        320  12.2
        330  11.8
        340  11.4
        350  11
        360  10.8
        370  10.6
        380  10.4

    As x goes up, y goes down. Seems straightforward. But when I do a polynomial regression on these data, even though the trendline matches the data pretty well, the equation it generates doesn't work. The equation is the one displayed on the chart. When I plug x values into that equation, the numbers go up! So something is pretty wrong here. My steps:

    1. place both number series in Excel
    2. select the second set (13, 12.6, ...)
    3. plot a line graph
    4. set the first set as the x-axis labels
    5. select Series1 and add a polynomial (2) trendline, display equation, display R-squared

    That produces the displayed equation, with an R^2 value of .9955. But when I use that equation, it doesn't produce those outputs for those inputs. Clearly I'm doing something wrong.

    Read the article

  • GPO Startup Script can't modify HKU Registry?

    - by pepoluan
    I've been scratching my head over my current problem. You see, I have this startup script that I pushed via GPO. The problem is, although the script starts all right (I see the event it creates when starting in the event log), it always fails when trying to enumerate and/or modify registry settings under HKU. If I log in as administrator and execute the script manually, it works! If I start up a Command Prompt as SYSTEM (using the "at" workaround) and execute the script manually, it also works! If I reboot... the script always fails. Can anyone shed some light on my problem? Additional information: this script injects some registry values for the Local Administrator (i.e., S-1-5-21-etc etc etc-500), so I'm not sure that it's doable via GPP; besides, nearly all the workstations in my domain are still running XP, so there is no guarantee of GPP support.

    Read the article

  • How are objects modelled in a functional programming language?

    - by Giorgio
    In an answer to this question (written by Pete) there are some considerations about OOP versus FP. In particular, it is suggested that FP languages are not very suitable for modelling (persistent) objects that have an identity and a mutable state. I was wondering if this is true or, in other words, how one would model objects in a functional programming language. From my basic knowledge of Haskell I thought that one could use monads in some way, but I really do not know enough on this topic to come up with a clear answer. So, how are entities with an identity and a mutable persistent state normally modelled in a functional language? EDIT Here are some further details to clarify what I have in mind. Take a typical Java application in which I can (1) read a record from a database table into a Java object, (2) modify the object in different ways, (3) save the modified object to the database. How would this be implemented e.g. in Haskell? I would initially read the record into a record value (defined by a data definition), perform different transformations by applying functions to this initial value (each intermediate value is a new, modified copy of the original record) and then write the final record value to the database. Is this all there is to it? How can I ensure that at each moment in time only one copy of the record is valid / accessible? One does not want to have different immutable values representing different snapshots of the same object to be accessible at the same time.
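
    As an aside, the read-transform-write workflow described in the edit can be sketched in any language with immutable records; here is a rough C# analogue (all names invented for illustration) of what a Haskell version would do with record values:

        using System;

        // Illustrative only: an immutable record standing in for a database row.
        public record Customer(int Id, string Name, string Email);

        public static class Program
        {
            public static void Main()
            {
                // (1) "Read" the record (here just constructed in memory).
                var original = new Customer(1, "Ada", "ada@example.com");

                // (2) Each transformation yields a new copy; the original is untouched.
                var renamed = original with { Name = "Ada Lovelace" };
                var updated = renamed with { Email = "ada@newhost.example" };

                // (3) "Write back" only the final value; earlier snapshots simply go out of scope.
                Console.WriteLine(updated);
            }
        }

    Which snapshot counts as "current" is then a matter of scoping: only the value you thread forward to the write step is treated as live, while earlier copies simply become unreachable.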

    Read the article

  • DHCP server change fails after successful import to new machine

    - by Tathagata
    I transferred the configs of a DHCP server from one server to another, both running Windows Server 2003 R2, following http://support.microsoft.com/kb/325473. The new server has a statically configured IP (outside the scope) like the old one. I stopped the service on the old server and started it up on the new server (authorized, too), but when I run ipconfig /renew from a client, its network interface fails with all 0.0.0.0 (or 169...*). I read somewhere that I need to reconcile the scope to sync the new registry values (I'll try this tomorrow). What other troubleshooting steps can I take other than these (which didn't help)? Things work fine when the old server is brought back up and the new one is taken down. The new server showed there were no requests for an offer.

    Read the article

  • Intermittent sound on an Medion Akoya S5610

    - by ej159
    The sound on my machine (Medion Akoya S5610) works intermittently. If I reboot enough times I do get sound. This also happened before I upgraded, when running Oneiric. I have fiddled around with alsa-base.conf, putting in different values for model in "options snd-hda-intel model=...", but still the issue persists (although I get the impression that I am more likely to have sound on the next reboot if I have edited that file, although I can't be sure of this). Adding index=0 does not help the situation either. I have been thinking that this problem could be related somehow to the order in which driver modules are loaded. The snd-hda-intel module is also used for the sound card (ALC888) in my graphics card. Could it be that these are somehow competing? If so, how do I add a preference when they are using the same module? This is the result of lspci -nn | grep Audio (when sound was not working):

        00:1b.0 Audio device [0403]: Intel Corporation 82801I (ICH9 Family) HD Audio Controller [8086:293e] (rev 03)
        01:00.1 Audio device [0403]: Advanced Micro Devices [AMD] nee ATI RV620 HDMI Audio [Radeon HD 3400 Series] [1002:aa28]

    I've been wrestling with this problem for ages and have spent days looking for answers on forums, but to no avail, so I would appreciate any help you can give. Many thanks

    Read the article

  • How to automatically set default quota limits for users on XFS filesystem, when the new account is created

    - by acidburn2k
    I guess the title explains the problem pretty well. Do you have an idea for a mechanism that will automatically assign default quota values to every new account created (sort of like the way the skel scheme works, but for quotas)? I am looking for a generic, clean solution, not some ugly cron-based scripts or wrapper scripts for creating users. I would also like to avoid any external, unmaintained stuff (like forgotten PAM modules and such). Anything that could lead to overhead and extra work in the future isn't really a solution, nor is checking for new accounts every minute.

    Read the article

  • Access Control Service: Programmatically Accessing Identity Provider Information and Redirect URLs

    - by Your DisplayName here!
    In my last post I showed you that different redirect URLs trigger different response behaviors in ACS. Where did I actually get these URLs from? The answer is simple – I asked ACS ;) ACS publishes a JSON encoded feed that contains information about all registered identity providers, their display names, logos and URLs. With that information you can easily write a discovery client which, at the very heart, does this:

        public void GetAsync(string protocol)
        {
            var url = string.Format(
                "https://{0}.{1}/v2/metadata/IdentityProviders.js?protocol={2}&realm={3}&version=1.0",
                AcsNamespace,
                "accesscontrol.windows.net",
                protocol,
                Realm);

            _client.DownloadStringAsync(new Uri(url));
        }

    The protocol can be one of these two values: wsfederation or javascriptnotify. Based on that value, the returned JSON will contain the URLs for either the redirect or notify method. Now with the help of some JSON serializer you can turn that information into CLR objects and display them in some sort of selection dialog. The next post will have a demo and source code.
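
    The deserialization step the post alludes to could look roughly like this with DataContractJsonSerializer; the property names below are assumptions about the feed's fields, not something the post specifies:

        // Minimal sketch of turning the JSON feed into CLR objects.
        // The property names are assumptions about the feed format.
        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Text;

        [DataContract]
        public class IdentityProviderInfo
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public string LoginUrl { get; set; }
            [DataMember] public string ImageUrl { get; set; }
        }

        public static class IdentityProviderParser
        {
            public static List<IdentityProviderInfo> Parse(string json)
            {
                var serializer = new DataContractJsonSerializer(typeof(List<IdentityProviderInfo>));
                using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
                {
                    return (List<IdentityProviderInfo>)serializer.ReadObject(stream);
                }
            }
        }

    The JSON string handed to Parse would be whatever arrives in the DownloadStringCompleted handler of the client used in GetAsync above.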

    Read the article

  • How to copy/paste LARGE amounts of text in Windows

    - by Johnson
    I am not sure if this is more suited for Superuser or Stackoverflow, but here goes... A little bit of background: I'm learning SQL and was trying to make a very large table which I could use for optimization tests. Something generic with random values. I created a little Java program to do just that, and was able to put out a text file with 100,000 lines, each line being an SQL INSERT statement for a new random record. However, with anything much bigger than 100,000 lines, I had problems either opening/using the text file in any text editor, or copying/pasting the text to the windows clipboard and then into SQL Developer so I could execute it as a script. I'm probably overlooking something really obvious, or doing something really stupid. There has got to be a better way to do this, but I couldn't find anything through Google or Stackoverflow or Superuser. Thanks!

    Read the article

  • MS Word 2007 Mail Merge fails on ZIP codes with leading Zeros (eg. 01234)

    - by Pretzel
    I have an Excel spreadsheet with a ZIP code column. For some dumb reason the original spreadsheet I got had all the ZIP codes stored as numbers, so a ZIP code like 01234 was stored as 1234. Easy to fix with "Format Column" as "Special = ZIP Code": all values like 1234 now show up as 01234. Great! When I import it into Word via Mail Merge (to print address labels), the ZIP codes on all the addresses starting with a leading zero (like 01234) revert to their old form (1234). How do I fix this?

    Read the article

  • VirtualBox 4.0.10 is now available for download

    - by user12611829
    VirtualBox 4.0.10 has been released and is now available for download. You can get binaries for Windows, OS X (Intel Mac), Linux and Solaris hosts at http://www.virtualbox.org/wiki/Downloads The full changelog can be found here. The high points for the 4.0.10 maintenance release include:

        GUI: fixed disappearing settings widgets on KDE hosts (bug #6809)
        Storage: fixed hang under rare circumstances with flat VMDK images
        Storage: a saved VM could not be restored under certain circumstances after the host kernel was updated
        Storage: refuse to create a medium with an invalid variant
        Snapshots: none of the hard disk attachments must be attached to another VM in normal mode when creating a snapshot
        USB: fixed occasional VM hangs with SMP guests
        USB: proper device detection on RHEL/OEL/CentOS 5 guests
        ACPI: force the ACPI timer to return monotonic values for improved behavior with SMP Linux guests
        RDP: fixed screen corruption under rare circumstances
        rdesktop-vrdp: updated to version 1.7.0
        OVF: under rare circumstances some data at the end of a VMDK file was not written during export
        Mac OS X hosts: Lion fixes
        Mac OS X hosts: GNOME 3 fix
        Linux hosts: fixed VT-x detection on Linux 3.0 hosts
        Linux hosts: fixed Python 2.7 bindings in the universal Linux binaries
        Windows hosts: fixed leak of thread and process handles
        Windows Additions: fixed bug when determining the extended version of the Guest Additions
        Solaris Additions: fixed installation to 64-bit Solaris 10u9 guests
        Linux Additions: RHEL6.1/OL6.1 compile fix
        Linux Additions: fixed a memory leak during VBoxManage guestcontrol execute

    Technorati Tags: Sun Virtualization VirtualBox

    Read the article

  • Apache mod_header rule to change all cookies to secure

    - by Supowski
    I would like to change all cookies to be secure and HTTP-only. It works fine for one cookie, but doesn't work when multiple cookies are set in the response. The Apache mod_headers rule should change cookies from:

        Set-Cookie cookie1=value; Path=/somePath
        Set-Cookie cookie2=value; Path=/somePath

    to

        Set-Cookie cookie1=value; Path=/somePath; Secure; HttpOnly
        Set-Cookie cookie2=value; Path=/somePath; Secure; HttpOnly

    I use mod_headers for it with the following rule:

        Header edit Set-Cookie ^(.*)$ $1;Secure;HttpOnly

    It works fine when only one cookie is set, but if there is more than one, it just removes all the following ones and they are not set at all. Any help with how to write a mod_headers rule for multiple values? Or is the problem in something else?

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined the question to focus on the core issue.

    Context: I have historical data about property (house) sales collected from various sources in a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.

    Example queries:

    Simple: for a given XYZ post code, what is the average price for a 3-bedroom house?
    Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historical data filtered by various characteristics of the house: post code, number of bedrooms, total area, and other deeper insights like building type, year built, and features)?

    In addition to the average price, the application should support other property info such as maximum or minimum price, etc., and a trend (graph) of a selected property attribute over a period of time. Hence, the queries should not force the search to be based on a primary key or a few fixed fields. In other words, queries can be: what is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for X price (irrespective of location or house type)?

    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interface, DW related, or something else) this problem (dynamic queries on historical data) belongs to, so that I can do further exploration.

    My findings so far (I could be wrong on the following, so please correct me if you think so):

    I briefly read about BI/data analytics - I think it is a heavyweight solution for my problem and has scalability issues.
    DB design - as I understand it, an RDBMS works well if you know the data model at design time. I am expecting the attributes about a property or other entity (user) that I am going to bring in to evolve quickly, hence maintenance would be an issue. As I am going to have multiple users executing queries at the same time, performance would be a bottleneck.
    Other options like graph DBs (http://www.tinkerpop.com/) seem to be a bit complex (they are good, but using tools meant for generic purposes makes me feel like I'm doing assembly programming to solve my problem).
    BigData-related solutions are for analysing data from multiple unrelated domains.

    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience with the back-end of a property listing or similar portal.)

    Read the article
