Search Results

Search found 16 results on 1 page for 'glenatron'.

  • Physical effects of long-term keyboard use - what does the science say and what factors affect it?

    - by glenatron
    This question asks about the ergonomics of a particular keyboard for long programming hours; what I would like to know about is the ergonomics of using a keyboard in general. What are the most significant risks associated with it and how can they best be mitigated? Do "ergonomic" keyboard designs make a difference and, if so, which design is most effective? If not, do other factors such as wrist rests, regular exercise or having a suitable height of chair or desk make a difference? Do you have any direct experience of problems deriving from keyboard use and, if so, how did you resolve them? Is there any good science on this and, if so, what does it indicate? Edited to add: Wikipedia suggests that there are no proven advantages to "ergonomic" keyboards, but their citation seems pretty old - is that still the current state of play?

    Read the article

  • Finding a way to simplify complex queries on a legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySql, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here except in that they give context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:
        Project
            has_many Nodes
            has_many GlobalConditions
        Node
            has_one Parent
            has_many Nodes
            has_many WeightingFactors through NodeFactors
            has_many Tags through NodeTags
        GlobalCondition
            has_many Nodes ( referenced by Id, rather than replicating tree )
        WeightingFactor
            has_many Nodes through NodeFactors
        Tag
            has_many Nodes through NodeTags
    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .Net, if I was in a similar situation there I would look at building up a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user's perception of performance. However, if sections of the data appeared after page load that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later, but given the ubiquity of complex trees in day-to-day programming life I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.
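
    To make that last idea concrete, this is roughly the shape of the "one flat query per project, then stitch the tree in memory" approach I am imagining - sketched in C# simply because that is the background I'm coming from; the record shape and the single per-project query are illustrative assumptions rather than the real models:

        // Rough sketch of the "one flat query per project, then stitch the
        // tree in memory" idea. NodeRecord and the flat row source are
        // illustrative assumptions, not the application's real model.
        using System.Collections.Generic;
        using System.Linq;

        public class NodeRecord
        {
            public int Id;
            public int? ParentId;
            public List<NodeRecord> Children = new List<NodeRecord>();
        }

        public static class ProjectTreeBuilder
        {
            // Input: every node row for one project, fetched in a single
            // "WHERE project_id = ?" style query. Output: the root nodes,
            // with parent/child links wired up without any further queries.
            public static List<NodeRecord> BuildTree(IEnumerable<NodeRecord> flatRows)
            {
                Dictionary<int, NodeRecord> byId = flatRows.ToDictionary(n => n.Id);
                List<NodeRecord> roots = new List<NodeRecord>();

                foreach (NodeRecord node in byId.Values)
                {
                    NodeRecord parent;
                    if (node.ParentId.HasValue && byId.TryGetValue(node.ParentId.Value, out parent))
                        parent.Children.Add(node);   // attach to its parent in memory
                    else
                        roots.Add(node);             // no parent in this project: treat as a root
                }
                return roots;
            }
        }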

    Read the article

  • How would you want to see software intellectual property protected?

    - by glenatron
    Reading answers to this question - and many other discussions of software patents - it seems that most of us as programmers feel that software patents are a bad idea. At the same time, we are in the group most likely to lose out if our work is copied or stolen. So what level of intellectual property protection do code and software need? Is copyright sufficient? Are patents necessary? As software is neither a physical object nor simple text, should we be thinking of a third path that falls somewhere between the two? Do we need any protection at all? If you had the facility to set up the law for this, what would you choose?

    Read the article

  • How do you manage a complexity jump?

    - by glenatron
    It seems an infrequent but common experience that sometimes you're working on a project and suddenly something turns up unexpectedly, throws a massive spanner in the works and ramps up the complexity a whole lot. For example, I was working on an application that talked to SOAP services on various other machines. I whipped up a prototype that worked fine, then went on to develop a regular front end and generally get everything up and running in a nice, fairly simple and easy-to-follow fashion. It worked great until we started testing across a wider network, and suddenly pages started timing out as the latency of the connections and the time required to perform calculations on remote machines resulted in timed-out requests to the SOAP services. It turned out that we needed to change the architecture to spin requests out onto their own threads and cache the returned data so it could be updated progressively in the background, rather than performing calculations on a request-by-request basis. The details of that scenario are not too important - indeed it's not a great example, as it was quite foreseeable and people who have written a lot of apps of this type for this kind of environment might have anticipated it - except that it illustrates a way that one can start with a simple premise and model and suddenly face an escalation of complexity well into the development of the project. What strategies do you have for dealing with these types of functional changes whose need arises - often as a result of environmental factors rather than specification change - later on in the development process or as a result of testing? How do you balance the premature optimisation / YAGNI / overengineering risks of designing a solution that mitigates against possible but not necessarily probable issues, against developing a simpler and easier solution that is likely to be as effective but doesn't incorporate preparedness for every possible eventuality?

    Read the article

  • A better alternative to incompatible implementations for the same interface?

    - by glenatron
    I am working on a piece of code which performs a set task in several parallel environments where the behaviour of the different components in the task is similar but quite different. This means that my implementations are quite different, but they are all based on the relationships between the same interfaces, something like this:
        IDataReader -> ContinuousDataReader -> ChunkedDataReader
        IDataProcessor -> ContinuousDataProcessor -> ChunkedDataProcessor
        IDataWriter -> ContinuousDataWriter -> ChunkedDataWriter
    So in any given environment we have an IDataReader, IDataProcessor and IDataWriter, and we can then use Dependency Injection to ensure that we have the correct one of each for the current environment: if we are working with data in chunks we use the ChunkedDataReader, ChunkedDataProcessor and ChunkedDataWriter, and if we have continuous data we use the continuous versions. However, the behaviour of these classes is quite different internally, and one could certainly not swap a ContinuousDataReader for a ChunkedDataReader even though they are both IDataReaders. This feels to me as though it is incorrect ( possibly an LSP violation? ) and certainly not a theoretically correct way of working. It is almost as though the "real" interface here is the combination of all three classes. Unfortunately, with the deadlines we are working to on this project we're pretty much stuck with this design, but if we had a little more elbow room, what would be a better design approach in this kind of scenario?
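
    For illustration only, one shape that would make the "the real interface is the combination of all three" observation explicit is an abstract factory per environment, so that a reader, processor and writer can only ever be resolved as a matching set - the names below are invented for the sketch rather than taken from the project:

        // Sketch: bundle the three related abstractions behind one factory
        // per environment so that mismatched combinations cannot be composed.
        // All type names are illustrative.
        public interface IDataReader { string ReadBlock(); }
        public interface IDataProcessor { string Process(string block); }
        public interface IDataWriter { void Write(string block); }

        public class ChunkedDataReader : IDataReader
        {
            public string ReadBlock() { return "next chunk"; }
        }
        public class ChunkedDataProcessor : IDataProcessor
        {
            public string Process(string block) { return block.ToUpperInvariant(); }
        }
        public class ChunkedDataWriter : IDataWriter
        {
            public void Write(string block) { System.Console.WriteLine(block); }
        }

        // The "combined" interface: one environment's coherent set of
        // collaborators. A ContinuousPipelineFactory would mirror this for
        // the continuous case, and the DI container registers exactly one
        // factory per environment.
        public interface IDataPipelineFactory
        {
            IDataReader CreateReader();
            IDataProcessor CreateProcessor();
            IDataWriter CreateWriter();
        }

        public class ChunkedPipelineFactory : IDataPipelineFactory
        {
            public IDataReader CreateReader() { return new ChunkedDataReader(); }
            public IDataProcessor CreateProcessor() { return new ChunkedDataProcessor(); }
            public IDataWriter CreateWriter() { return new ChunkedDataWriter(); }
        }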

    Read the article

  • How to tell Wine that I have changed CDs when mounting them virtually on a netbook with no CD drive?

    - by glenatron
    I have been trying to use Wine to catch up with some of the old games from way back that are about right for my little Aspire One netbook. I've run into a problem with Baldur's Gate, however, which is that I can't change CD. Obviously I don't have a CD drive, so I have copied the CD content onto an external hard drive and I'm using a mount command with the loopback option to persuade the game that the CD is present. This allowed installation to work correctly and works fine to run the game and to play the content from the first CD. However, when the game asks for CD2, I'm stuck. If I mount the CD2 ISO to the CD-ROM path it doesn't appear to respond, whether or not I have first unmounted CD1. When I ask Wine to show me the CD drive it contains the right data, but it seems as though whatever signal would be interpreted by Windows to mean the CD drive has been closed isn't being sent. Does anyone know of a way to do this, or am I barking up the wrong tree and there is something else I need to do?

    Read the article

  • VMWare ESXi, change the default path for a VM

    - by glenatron
    For some reason VMWare ESXi has decided that one of my VMs is on a completely different path to the path it is actually on. So my VM is on /vmfs/volumes/long-guid-here/my-vm-name, but when I try to open it I get the message "File <unspecified filename> was not found." Which is not really surprising, as unspecified filenames are quite difficult to locate. I thought it was just the swap file, which was down in the .vmx file as /vmfs/volumes/long-guid-here/old-vm-name/old-vm-name.vmsd, but when I changed that in the vmx it made no difference. What I can't figure out is where VMWare is getting the old-vm-name from - when I look in the "Settings" pane it believes the working file location to be "[datastore-name] old-vm-name\" and I can't find anywhere to change it. The files themselves are all named for old-vm-name - so the path is /my-vm-name/old-vm-name.vmx and so on. Is this the cause of my problems, or is there some arcane configuration option elsewhere around the VMWare machine that I need to be tinkering with?

    Read the article

  • MOSS2007 tries to use ActiveDirectory when I have configured an alternative membership provider

    - by glenatron
    I've got a MOSS site that I am trying to configure using Forms authentication and absolutely any kind of membership provider whatsoever. Thus far ActiveDirectory has proved obstructively difficult, so I've just whipped up a simple stub membership provider and put it in the GAC. It's a very basic and simple provider, but it works fine with an ASP.Net site; I just can't make it work with Sharepoint. On Sharepoint I get the following error when I look for StubProvider:Bob ( or anything else for that matter ) from the "Policy For Web Application" people picker:
        Error in searching user 'StubProvider:bob' : System.ComponentModel.Win32Exception: Unable to contact the global catalog server
          at Microsoft.SharePoint.Utilities.SPActiveDirectoryDomain.GetDirectorySearcher()
          at Microsoft.SharePoint.WebControls.PeopleEditor.SearchFromGC(SPActiveDirectoryDomain domain, String strFilter, String[] rgstrProp, Int32 nTimeout, Int32 nSizeLimit, SPUserCollection spUsers, ArrayList& rgResults)
          at Microsoft.SharePoint.Utilities.SPUserUtility.SearchAgainstAD(String input, SPActiveDirectoryDomain domainController, SPPrincipalType scopes, SPUserCollection usersContainer, Int32 maxCount, String customQuery, TimeSpan searchTimeout, Boolean& reachMaxCount)
          at Microsoft.SharePoint.Utilities.SPActiveDirectoryPrincipalResolver.SearchPrincipals(String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount)
          at Microsoft.SharePoint.Utilities.SPUtility.SearchPrincipalFromResolvers(List`1 resolvers, String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount, Dictionary`2 usersDict)
    The provider is named as Authentication Provider for the Site Collection in question. As far as I can tell this is because Sharepoint is still trying to access ActiveDirectory rather than talking to the provider I'm asking it to use. My Sharepoint Central Administration section includes this:
        <membership>
          <providers>
            <add name="StubProvider" type="StubMembershipProvider.Provider, StubMembershipProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5bd7e2498c3e1a03" />
          </providers>
        </membership>
    And also:
        <PeoplePickerWildcards>
          <clear />
          <add key="StubProvider" value="%" />
        </PeoplePickerWildcards>
    Is there a clear reason why this would not be accessible from the PeoplePicker, or why it is still trying to use ActiveDirectory? I've made sure I reset IIS and even restarted the server to see if either of those helped, but they made no difference.

    Read the article

  • Does Powershell have an "eval" equivalent? Is there a better way to see a list of properties and values?

    - by glenatron
    I'm doing a bit of Powershell scripting ( for the first time ) to look at some stuff in a Sharepoint site, and what I would like to be able to do is to go through a list of properties of an object and just output their values in a "property-name = value" kind of format. Now I can find the list of elements using this:
        $myObject | get-member -membertype property
    which will return a list of all the properties in a very clear and readable fashion. But what I need is to find a value for those properties. In some scripting languages I could have a kind of eval( "$myObject.$propertyName" ) call - where I have extracted $propertyName from the get-member output - and have it evaluate the string as code, which for the kind of quick-and-dirty solution I need would be fine. Does this exist in Powershell, or is there a more convenient way to do it? Should I be using reflection instead?
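
    For what it's worth, the reflection fall-back mentioned at the end is only a few lines of ordinary .NET - PowerShell can call the same API directly - so this C# sketch is just the "property-name = value" dump described above, nothing SharePoint-specific, and the type and method names are illustrative:

        // Minimal reflection sketch of the "property-name = value" dump
        // described above; works on any object, nothing SharePoint-specific.
        using System;
        using System.Reflection;

        public static class PropertyDumper
        {
            public static void Dump(object target)
            {
                PropertyInfo[] props = target.GetType()
                    .GetProperties(BindingFlags.Public | BindingFlags.Instance);

                foreach (PropertyInfo prop in props)
                {
                    if (!prop.CanRead) continue;          // skip write-only properties

                    object value;
                    try { value = prop.GetValue(target, null); }
                    catch (Exception ex) { value = "<getter threw " + ex.GetType().Name + ">"; }

                    Console.WriteLine("{0} = {1}", prop.Name, value);
                }
            }
        }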

    Read the article

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files ( large here being potentially upwards of a gigabyte ) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this through the System.XML libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to update the files at all, just read them and query the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child type relationship - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterwards, although it seems a little convoluted. Alternatively, I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach I'm sure you folks can set me right...
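
    For the streaming side of that approach, the usual .NET pattern is XmlReader, which never holds the whole document in memory at the cost of giving up document-wide XPath; the sketch below is purely illustrative, with the <record> element name and the simple counting standing in for the real queries:

        // Streaming sketch with XmlReader: only one <record> subtree is ever
        // held in memory, and that small fragment can then be queried with
        // XPath. The element name "record" and the counting are assumptions
        // purely for illustration.
        using System;
        using System.Xml;

        public static class LargeXmlScan
        {
            public static void CountRecords(string path)
            {
                int count = 0;
                using (XmlReader reader = XmlReader.Create(path))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType == XmlNodeType.Element && reader.Name == "record")
                        {
                            // ReadSubtree() exposes just this element; the fragment is
                            // small enough to load and query with SelectSingleNode etc.
                            using (XmlReader subtree = reader.ReadSubtree())
                            {
                                XmlDocument fragment = new XmlDocument();
                                fragment.Load(subtree);
                                count++;
                            }
                        }
                    }
                }
                Console.WriteLine("records seen: " + count);
            }
        }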

    Read the article

  • Web Service in C# - "This web service is using http://tempuri.org/ as its default namespace."

    - by glenatron
    I've created a web service using Visual Studio ( 2005 - I know I'm old school ) and it all compiles fine, but when it opens I get warned thus: "This web service does not conform to WS-I Basic Profile v1.1." And furthermore: "This web service is using http://tempuri.org/ as its default namespace." Which would be fine, except my service begins thus:
        [WebService(Namespace = "http://totally-not-default-uri.com/servicename")]
    Searching the entire solution folder for "tempuri" returns nothing. I can't find it mentioned in any configuration page accessible from Visual Studio. And yet it's right there on the web service descriptor page when I view it through the browser, both in the xmlns:tns attribute of the wsdl:definitions tag and as the targetNamespace in the same tag. I'm viewing it using Visual Studio's "debug" mode with the built-in server. It seems like something has got cached somewhere, but I can't work out what and where - I've tried stopping and restarting the server, cleaning and rebuilding the service and going through the associated text config files with a text editor, but no dice. Any idea what is going on?
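
    For comparison, this is the minimal ASMX shape that the two warnings refer to - the namespace URI and class name here are placeholders rather than the real service:

        // Minimal ASMX shape that the two warnings relate to. The namespace
        // URI and class name are placeholders, not the real service.
        using System.Web.Services;

        [WebService(Namespace = "http://example.com/servicename")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        public class ExampleService : WebService
        {
            [WebMethod]
            public string Ping()
            {
                return "pong";
            }
        }

    If the page served in the browser still reports tempuri.org with attributes like these in place, that tends to suggest the copy of the code actually being compiled and served is not the one carrying the attribute.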

    Read the article

  • How to group items by date range in XSLT?

    - by glenatron
    I have a bunch of data that looks a little like this:
        <item>
          <colour>Red</colour>
          <date_created>2009-10-10 12:01:55</date_created>
          <date_sold>2009-10-20 22:32:12</date_sold>
        </item>
        <item>
          <colour>Blue</colour>
          <date_created>2009-11-01 13:21:00</date_created>
          <date_sold>2009-11-21 12:32:12</date_sold>
        </item>
        <item>
          <colour>Blue</colour>
          <date_created>2009-10-29 21:23:02</date_created>
          <date_sold>2009-10-20 02:02:22</date_sold>
        </item>
        <item>
          <colour>Red</colour>
          <date_created>2009-11-02 09:11:51</date_created>
          <date_sold>2009-11-20 09:15:53</date_sold>
        </item>
        <item>
          <colour>Red</colour>
          <date_created>2009-10-18 11:00:55</date_created>
          <date_sold>2009-10-20 11:12:22</date_sold>
        </item>
    Now what I would like to be able to do is to run that through an XSLT stylesheet such that I get output looking like this:
        Colour | In stock 1 week | In stock 2 weeks | In stock 3 weeks
        Red    | 1               | 3                | 2
        Blue   | 0               | 2                | 1
    Currently I have a stylesheet that uses basic Muenchian grouping to show that 30% of stock was Red and 70% Blue, but I can't see a way to find the number of nodes within a given date range. Is there a way to use keys to select a range? Do I need to create some kind of intermediate data node? Is there a different route that shows I'm barking up the wrong tree with both those suggestions? Is this even possible with XSLT, or do I need to find a way to change the data source?
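
    In case changing the data source turns out to be the easier route, the same bucketing is short in LINQ to XML; this is only a sketch of the week calculation behind the table above, with the file name and the "weeks in stock" definition ( days between created and sold, divided by seven and rounded up ) as assumptions:

        // LINQ to XML sketch of the grouping asked for above. The file name
        // and the weeks-in-stock definition are assumptions for illustration.
        using System;
        using System.Linq;
        using System.Xml.Linq;

        public static class StockReport
        {
            public static void Summarise(string path)
            {
                var rows =
                    from item in XDocument.Load(path).Descendants("item")
                    let created = DateTime.Parse((string)item.Element("date_created"))
                    let sold = DateTime.Parse((string)item.Element("date_sold"))
                    let weeks = (int)Math.Ceiling((sold - created).TotalDays / 7.0)
                    group weeks by (string)item.Element("colour") into g
                    select new
                    {
                        Colour = g.Key,
                        Week1 = g.Count(w => w <= 1),   // in stock one week or less
                        Week2 = g.Count(w => w == 2),
                        Week3 = g.Count(w => w >= 3)    // three weeks or more
                    };

                foreach (var row in rows)
                    Console.WriteLine("{0} | {1} | {2} | {3}",
                        row.Colour, row.Week1, row.Week2, row.Week3);
            }
        }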

    Read the article

  • How does SharePoint identify a new email to a Discussion Board as belonging to a thread?

    - by glenatron
    Pretty much as the title says: I want to be able to create a new thread from a reply to an old thread, perhaps by adding a "New Thread: " prefix or similar to the title of the message, but of course Sharepoint is using some other characteristics of the message to recognise messages as replies to other messages, and I can't find what those are. Does anyone know? Is it just using the In-Reply-To header? Alternatively, is there already a standard way to get the outcome I'm looking for: a new discussion thread regardless of whether the initial email is a reply to a current discussion or not?

    Read the article

  • How do I allow an email discussion list in MOSS to collect messages from any email sender

    - by glenatron
    I have a Sharepoint discussion list that belongs to an Exchange list, with the idea that it will be able to archive discussions on that list and make them generally accessible, searchable and so on. The problem is that although I have checked the "Accept e-mail messages from any sender" option on the discussion board, it still appears to only be seeing emails from members of the domain; nothing sent to the list from outside gets picked up by the Sharepoint site. Any suggestions as to what else I have to do?

    Read the article

  • ASMX Web Service - "This web service is using http://tempuri.org/ as its default namespace." message

    - by glenatron
    I've created a web service using Visual Studio ( 2005 - I know I'm old school ) and it all compiles fine, but when it opens I get warned thus: "This web service does not conform to WS-I Basic Profile v1.1." And furthermore: "This web service is using http://tempuri.org/ as its default namespace." Which would be fine, except my service begins thus:
        [WebService(Namespace = "http://totally-not-default-uri.com/servicename")]
    Searching the entire solution folder for "tempuri" returns nothing. I can't find it mentioned in any configuration page accessible from Visual Studio. And yet it's right there on the web service descriptor page when I view it through the browser, both in the xmlns:tns attribute of the wsdl:definitions tag and as the targetNamespace in the same tag. I'm viewing it using Visual Studio's "debug" mode with the built-in server. It seems like something has got cached somewhere, but I can't work out what and where - I've tried stopping and restarting the server, cleaning and rebuilding the service and going through the associated text config files with a text editor, but no dice. Any idea what is going on?

    Read the article
