Search Results

Search found 9663 results on 387 pages for 'peopletools strategy team'.

Page 33/387 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >

  • Using HTTP Vary header to decide on a strategy to process a request

    - by Jacques René Mesrine
    I have a specific REST endpoint that creates a topic in a forum, but I want to apply different strategies when processing the request: e.g. if client A makes the call, perform moderation; if client B makes the call, do something else. The easiest option would be to add a query param for differentiation:

        POST /resource?from=xyz

    Another idea is to use the Vary HTTP header:

        POST /resource
        Vary: xyz

    Any problems with this approach?
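
    One point worth knowing when weighing the second option: Vary is a response header (it tells caches which request headers a response depends on), so a request would normally carry the differentiator in a custom header instead. A minimal sketch of header-keyed strategy selection in C# (the header name, client ids, and strategy classes are illustrative assumptions, not from the question):

        using System.Collections.Generic;

        public interface ITopicStrategy
        {
            void Process(string topic);
        }

        public class ModeratedStrategy : ITopicStrategy
        {
            public void Process(string topic) { /* hold the topic for moderation */ }
        }

        public class DirectStrategy : ITopicStrategy
        {
            public void Process(string topic) { /* publish immediately */ }
        }

        public static class StrategySelector
        {
            static readonly Dictionary<string, ITopicStrategy> ByClient =
                new Dictionary<string, ITopicStrategy>
                {
                    { "clientA", new ModeratedStrategy() },
                    { "clientB", new DirectStrategy() },
                };

            // clientId would come from a custom request header (say, X-Client-Id)
            // or, more robustly, from the authenticated caller's identity.
            public static ITopicStrategy For(string clientId)
            {
                ITopicStrategy s;
                return ByClient.TryGetValue(clientId, out s) ? s : new DirectStrategy();
            }
        }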

    Read the article

  • Strategy for choosing the proper object and method

    - by zerkms
    In the code below, the first if block (there will be more conditions than just "worker", joined with else if) selects the proper filter_object. Then, in the same method, a second conditional block selects which filter the filter object should apply. This code is silly.

        public class Filter {
            public static List<data.Issue> fetch(string type, string filter) {
                Filter_Base filter_object = new Filter_Base(filter);
                if (type == "worker") {
                    filter_object = new Filter_Worker(filter);
                } else if (type == "dispatcher") {
                    filter_object = new Filter_Dispatcher(filter);
                }

                List<data.Issue> result = new List<data.Issue>();
                if (filter == "new") {
                    result = filter_object.new_issues();
                } else if (filter == "ended") {
                    result = filter_object.ended_issues();
                }
                return result;
            }
        }

        public class Filter_Base {
            protected string _filter;

            public Filter_Base(string filter) {
                _filter = filter;
            }

            public virtual List<data.Issue> new_issues() {
                return new List<data.Issue>();
            }

            public virtual List<data.Issue> ended_issues() {
                return new List<data.Issue>();
            }
        }

        public class Filter_Worker : Filter_Base {
            public Filter_Worker(string filter) : base(filter) { }

            public override List<data.Issue> new_issues() {
                return (from i in data.db.GetInstance().Issues
                        where (new int[] { 4, 5 }).Contains(i.RequestStatusId)
                        select i).Take(10).ToList();
            }
        }

        public class Filter_Dispatcher : Filter_Base {
            public Filter_Dispatcher(string filter) : base(filter) { }
        }

    It will be used along the lines of:

        Filter.fetch("worker", "new");

    This means that for a user in the "worker" role, only "new" issues will be fetched (this is a kind of small and simple CRM). Or:

        Filter.fetch("dispatcher", "ended"); // here we get finished issues for the dispatcher role

    Any proposals on how to improve it?
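
    One common way to flatten both conditional chains is a table-driven factory: map each role and filter string to a delegate, so adding a case becomes a dictionary entry instead of a new branch. A sketch against the classes above (FilterTable is a hypothetical name; error handling for unknown keys is omitted):

        using System;
        using System.Collections.Generic;

        public static class FilterTable
        {
            static readonly Dictionary<string, Func<string, Filter_Base>> byRole =
                new Dictionary<string, Func<string, Filter_Base>>
                {
                    { "worker",     f => new Filter_Worker(f) },
                    { "dispatcher", f => new Filter_Dispatcher(f) },
                };

            static readonly Dictionary<string, Func<Filter_Base, List<data.Issue>>> byFilter =
                new Dictionary<string, Func<Filter_Base, List<data.Issue>>>
                {
                    { "new",   fb => fb.new_issues() },
                    { "ended", fb => fb.ended_issues() },
                };

            public static List<data.Issue> fetch(string type, string filter)
            {
                // Build the role-specific filter object, then apply the chosen filter.
                return byFilter[filter](byRole[type](filter));
            }
        }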

    Read the article

  • WPF deployment strategy dilemma: ClickOnce (limited customization) + autoupdate vs installer (unlimited customization)

    - by black sensei
    Hello experts! I've been facing a deployment problem. I've built a WPF application with Visual Studio 2008 and created an installer (MSI), which works fine, but adding automatic updates to it is painful. I've seen an article at windowsclient.net that could have been perfect for me, but it seems pretty old. Then I looked at the .NET Application Updater Block v2.0, which uses Enterprise Library June 2005, and for some reason it won't install on my machine. I thought I would need a more recent Enterprise Library, so I installed and compiled Enterprise Library 4.1 (October 2008), but nothing improved. So I decided to give ClickOnce deployment a try. After struggling with it, it was almost perfect. While testing the ClickOnce updates on my XP machine I didn't notice that the SQLite DLL needs to be in the GAC; surely it was already there. When I moved to Vista I noticed there is a problem, and after checking the net I know it's impossible to add a DLL to the Global Assembly Cache with a ClickOnce deployment. Now I'm stuck; I think I've hit a wall. Can anyone share some of their experience? I'm willing to try the Updater Block if I can get help. Thanks for reading this!

    Read the article

  • SSIS Script Component Testing Strategy

    - by Paul Kohler
    This question is about the Script Component specifically; I am aware of ssisUnit etc. With simple SSIS Script Components it is sufficient to let basic testing flesh out issues, but I am working with a script that has grown in complexity over time. To make the functionality more testable, I am considering abstracting the script logic into a DLL that gets deployed with the package, and then calling that assembly from the script. The advantage is that the logic becomes unit-testable, but it is one more deployment artefact that needs to be managed. My question is: does anyone know of a better way to test such an SSIS script in a more isolated manner than running the whole package and examining the output?
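
    For what the DLL approach could look like, a minimal sketch (all names here are hypothetical): the logic lives in a plain class that ordinary unit tests can exercise, and the script component's row handler stays a one-line delegation, so the package never has to run just to test the transformation.

        // Deployed in the separate assembly; no SSIS types required.
        public class RowCleaner
        {
            // Pure function: trivially covered by a unit test.
            public string NormalizeCode(string raw)
            {
                if (string.IsNullOrEmpty(raw)) return null;
                return raw.Trim().ToUpperInvariant();
            }
        }

        // Inside the script component, the override just delegates:
        // public override void Input0_ProcessInputRow(Input0Buffer Row)
        // {
        //     Row.Code = new RowCleaner().NormalizeCode(Row.Code);
        // }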

    Read the article

  • Grand Central Strategy for Opening Multiple Files

    - by user276632
    I have a working implementation using Grand Central Dispatch queues that (1) opens a file and computes an OpenSSL DSA hash on "queue1", and (2) writes the hash out to a new "sidecar" file for later verification on "queue2". I would like to open multiple files at the same time, but with some logic that doesn't choke the OS by having hundreds of files open and exceeding the hard drive's sustainable throughput. Photo-browsing applications such as iPhoto or Aperture seem to open multiple files and display them, so I'm assuming this can be done. I'm assuming the biggest limitation will be disk I/O, as the application can (in theory) read and write multiple files simultaneously. Any suggestions? TIA
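
    The usual pattern for "run many, but not too many" is a counting semaphore; in GCD that is dispatch_semaphore_create/wait/signal wrapped around the per-file work. Purely to illustrate the pattern, here is the same idea sketched in C# rather than Objective-C (the cap of 4 and all names are made up):

        using System.Collections.Generic;
        using System.Threading;

        public static class BoundedHasher
        {
            // At most 4 files are open and being hashed at any moment.
            static readonly SemaphoreSlim Gate = new SemaphoreSlim(4);

            public static void HashAll(IEnumerable<string> paths)
            {
                var threads = new List<Thread>();
                foreach (var path in paths)
                {
                    var t = new Thread(p =>
                    {
                        Gate.Wait();              // blocks while 4 hashes are in flight
                        try { /* open (string)p, hash it, write the sidecar file */ }
                        finally { Gate.Release(); }
                    });
                    t.Start(path);                // pass the path in to avoid capture issues
                    threads.Add(t);
                }
                threads.ForEach(t => t.Join());
            }
        }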

    Read the article

  • Task management for a team

    - by Kartoch
    I'm looking for a web application to manage tasks (not necessarily programming-oriented) for a small team. It must be easy to set up and maintain, and it must offer file upload and mail users when something changes. There are hundreds of solutions available, but most are too complex for what we want or are not "stable" (not updated for a long time, not very well programmed). I was wondering if Stack Overflow folks have some recommendations...

    Read the article

  • How to install VS.NET 2010 without Team Explorer

    - by LexL
    Since we don't use and don't plan to use TFS, it would be nice not to install the Team Explorer VS.NET add-on and not see any references to it. However, there is no TFS option in the customized install. Is there some way to install plain-vanilla VS.NET 2010 without it? Or maybe there is some kind of TFS uninstaller?

    Read the article

  • Light and simple bugtracker for a small team

    - by Varnenchik
    I'm looking for a simple, lightweight tool for tracking ideas and bugs for a small physics-lab team (4 members). Bugzilla and Trac have too many fields and too much other stuff for us; I don't think we need more than issue descriptions and simple categorization. But we are not happy with the text and Excel files suggested here. Could you please recommend something, preferably easy to install?

    Read the article

  • HFT strategy coding on hardware

    - by bsobaid
    Hardware acceleration and embedded programming have so far mostly been used to parse data feeds and/or to route orders to exchanges. Have there been attempts to implement simpler HFT strategies, such as equity market-making, in hardware? Have they been successful? Which companies are doing this, and what kind of programming model is used?

    Read the article

  • Strategy for developing namespaced and non-namespaced versions of same PHP code

    - by porneL
    I'm maintaining a library written for PHP 5.2 and I'd like to create a PHP 5.3-namespaced version of it. However, I'd also like to keep the non-namespaced version up to date until PHP 5.3 becomes so old that even Debian stable ships it ;) I've got rather clean code: about 80 classes following a Project_Directory_Filename naming scheme (which I'd change to \Project\Directory\Filename, of course) and only a few functions and constants (also prefixed with the project name). The question is: what's the best way to develop the namespaced and non-namespaced versions in parallel? Should I just create a fork in the repository and keep merging changes between branches? Are there cases where backslash-sprinkled code becomes hard to merge? Should I write a script that converts the 5.2 version to 5.3, or vice versa? Should I use the PHP tokenizer? sed? The C preprocessor? Is there a better way to use namespaces where available and keep backwards compatibility with older PHP?
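
    The mechanical rename itself is the easy part, whichever direction the conversion script runs. A toy illustration of the name rewrite (in C# purely for illustration; a safe converter for real PHP source would walk PHP's token_get_all() output so strings and comments are left alone):

        using System;
        using System.Text.RegularExpressions;

        class NamespaceRewriter
        {
            static void Main()
            {
                string src = "new Project_Directory_Filename();";

                // Project_Directory_Filename -> \Project\Directory\Filename
                string rewritten = Regex.Replace(
                    src,
                    @"\bProject(_[A-Za-z0-9]+)+\b",
                    m => "\\" + m.Value.Replace("_", "\\"));

                Console.WriteLine(rewritten);  // prints: new \Project\Directory\Filename();
            }
        }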

    Read the article

  • Using protocol buffers for a comprehensive data strategy for Windows Mobile devices

    - by Steve
    I have started reading some of the posts related to protocol buffers. The serialization method seems very appropriate for transferring data to and from web servers. Has anyone considered using a method like this to save and retrieve data on the mobile device itself (i.e. as a replacement for a traditional database/ORM layer)? Where would the data be persisted? How would the data be queried? Would it make sense to store the data in a traditional database (SQL CE or SQLite) with a few "searchable" columns and then one column for the serialized data? Thoughts? Am I out on a limb here? Thank you!
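
    A minimal sketch of the "blob column plus searchable columns" idea, using protobuf-net (a common .NET protocol buffers implementation); the entity and the column layout are illustrative assumptions:

        using System.IO;
        using ProtoBuf;

        [ProtoContract]
        public class Note
        {
            [ProtoMember(1)] public int Id { get; set; }
            [ProtoMember(2)] public string Title { get; set; }  // also duplicated into a searchable column
            [ProtoMember(3)] public string Body { get; set; }
        }

        public static class NoteBlob
        {
            // The byte[] goes into a BLOB column next to the Id and Title columns,
            // which carry the indexes used for querying.
            public static byte[] ToBytes(Note n)
            {
                using (var ms = new MemoryStream())
                {
                    Serializer.Serialize(ms, n);
                    return ms.ToArray();
                }
            }

            public static Note FromBytes(byte[] data)
            {
                using (var ms = new MemoryStream(data))
                    return Serializer.Deserialize<Note>(ms);
            }
        }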

    Read the article

  • Partial Git deployment strategy?

    - by MatW
    I need to set up a Kohana dev environment that allows me to make full use of shared module/system classes across separate applications, each application typically belonging to a different client. I use Git for source control, but am struggling to come up with a clean deployment method that will let me pull only those parts of the dev environment specific to a client/app down into that client's production environment (assuming that the client's production environment will have Git installed).

    Dev environment:

        kohana/
            applications/
                clientapp1
                clientapp2
            modules/
            public_html/
                clientapp1
                clientapp2
            system/
                3.0.1
                3.0.5

    Client 1's production environment:

        /
            applications/
                clientapp1
            modules/
            public_html/
                clientapp1
            system/
                3.0.5

    Naturally, I want total control over each client "sub-repo" as if it were an independent repo (in terms of gitignore, etc.). I have seen topics that cover Git's sparse-checkout feature, but it seems like it may cause a few problems down the line from a maintenance point of view, and I don't like the idea of the entire repo's metadata existing in the client's production-environment repo. As you can probably tell, I'm not exactly a Git power user, so any suggestions / wisdom are very welcome!

    Read the article

  • Strategy for storing ad-hoc numbers/constants?

    - by Jiho Han
    I need to store a number of ad-hoc figures and constants for calculations. These numbers change periodically, and they are values of different types: one might be a balance, a money amount, another might be an interest rate, and yet another might be a ratio of some kind. These numbers are then used in a calculation involving other, more structured figures. I'm not certain what the best way to store them in a relational DB is (relational storage is a given for this app). One way, which I've done before, is to create a very generic table that stores the values as text. I might store the data type along with it, but the consumer knows what type it is, so in some situations I didn't even need to store the data type. This works fine, but I am not very fond of the solution. Should I instead break the numbers down into specific categories and create tables that way, for example a Rates table, a Balances table, etc.?
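
    One middle ground is to keep the single generic table but concentrate the "consumer knows the type" knowledge in one typed accessor, so the parsing rules live in exactly one place. A sketch (the table layout and all names are hypothetical):

        using System.Collections.Generic;
        using System.Globalization;

        public class ConstantsRepository
        {
            // Name -> Value text, loaded once from a generic Constants(Name, Value) table.
            readonly IDictionary<string, string> _raw;

            public ConstantsRepository(IDictionary<string, string> raw)
            {
                _raw = raw;
            }

            public decimal GetMoney(string name)   // balances, money amounts
            {
                return decimal.Parse(_raw[name], CultureInfo.InvariantCulture);
            }

            public double GetRate(string name)     // interest rates, ratios
            {
                return double.Parse(_raw[name], CultureInfo.InvariantCulture);
            }
        }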

    Read the article

  • Browser loading strategy, <head>...<body>

    - by Bin Chen
    I am inspecting some interesting browser behavior, and I don't know whether it is in the standard or not. If I put everything inside <head></head>, the browser only begins to render the page after all the resources in the head have been retrieved. So I am thinking that putting as little as possible in the head is one of the important website-optimization techniques; is that right? My question is: if I put a script or CSS in the body or other parts of the HTML, how can I know that the script has been loaded successfully, so that I won't be calling an undefined function?

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi. Sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure (in pseudo-SQL), I hope to explain it a bit better.

        TABLE StructureName {
            Id GUID PK,
            Name varchar(50) NOT NULL
        }

        TABLE Structure {
            Id GUID PK,
            ParentId GUID (FK to Structure),
            NameId GUID (FK to StructureName) NOT NULL
        }

        TABLE Something {
            Id GUID PK,
            RootStructureId GUID (FK to Structure) NOT NULL
        }

    As one can see, Structure is a simple tree structure (not worried about the ordering of children for this problem). StructureName is a simplification of a translation system. Finally, Something is simply something referencing the tree's root structure. This is just one of many tables that need to be versioned, but it serves as a good example for most cases. There is a requirement to version any changes to the name and/or the tree layout of the Structure table. Previous versions should always be available. There seem to be a few possible ways to tackle this issue, like copying the entire structure, but most approaches cause one to lose referential integrity: for example, if one followed that approach, one would have to duplicate the Something record, given that the root structure would be a new record with a new ID. Other avenues are to look at how wikis handle this, or to go a lot further and look at how proper version control systems work. Currently I feel a bit clueless about how to proceed in a generic way. Any ideas will be greatly appreciated. Thanks, leppie

    Read the article

  • Using Perforce with Team Foundation Server

    - by rgupta4
    Does Team Foundation Server 2008 (or the upcoming 2010) work with Perforce as the SCM tool? I haven't been able to find any documentation on the web indicating whether or not this configuration is supported. I apologize if this question has been answered elsewhere.

    Read the article

  • Mercurial merge strategy per file type

    - by dls
    All: I want to use kdiff3 to merge all files with a certain suffix (say *.c, *.h), and for all files with another suffix (say *.mdl) I want to do two things: turn off premerge and use internal:other. The purpose is to get a kind of 'clobber merge' for a specific file type (i.e. unmergeable files like configurations, auto-generated C, models, etc.). In my .hgrc I've tried:

        [merge-tools]
        kdiff3 =
        clobbermerge = internal:other
        clobbermerge.premerge = False

        [merge-patterns]
        **.c = kdiff3
        **.h = kdiff3
        **.mdl = clobbermerge

    but it still triggers kdiff3 for all files. Thoughts? An extension of this would be to perform a 'clobber merge' on an entire directory, but once the syntax is clear for a file suffix, the directory case should be easy.

    Read the article

  • Evaluation strategy examples

    - by Boontz
    Assuming the language supports these evaluation strategies, what would the result be under call by reference, call by name, and call by value?

        void swap(int a, int b) {
            int temp;
            temp = a;
            a = b;
            b = temp;
        }

        int i = 3;
        int A[5];
        A[3] = 4;
        swap(i, A[3]);

    Read the article

  • Strategy for developing a multi-function ASP.NET web application

    - by user247023
    I'm about to start a new project and want some advice on how to implement it. I need a web application containing a booking module for reserving timeslots and a time-management module that lets employees clock in/clock out. If I write an update to the time-management module, I don't want to disrupt the booking engine's availability by releasing a new solution containing both modules. To make things more difficult, there is some shared functionality, like common users, roles, and security. Here's a suggestion I've gotten, which sounds a bit cruddy but may be functional: write a 'container' web application consisting of basically a frame plus authentication/security features, with links that load the two independently built and released web applications into the frame. I can see that, say, if I wanted to update the time-management module, I would only need to build and release it separately, and the rest of the solution would be untouched. Any better alternatives?

    Read the article

  • Distributing iPhone App to developers in your team through iTunes

    - by Rudiger
    Hi everyone. I was wondering whether you can distribute your app to other developers on your team through iTunes. I guess you would upload the app as a beta version through iTunes Connect, and anyone with a provisioned iPhone would receive the update. I didn't think it was possible, but someone told me they were sure you could. If this is not possible, are there any other benefits of a standard company enrolment besides being able to add other people so they can get the same resources? Thanks

    Read the article

  • Licences for Team Foundation Server

    - by cpt.oneeye
    Hello. We are now 5 developers and want to use Team Foundation Server. What do we need: one Premium/Ultimate licence plus a Professional version for each developer? Or does Ultimate already contain more than one licence? Thx, cpt.oneeye

    Read the article

  • What's the best Drupal deployment strategy?

    - by Horace Ho
    I am working on my first Drupal project, on XAMPP on my MacBook. It's a prototype and has received positive feedback from my client. I am going to deploy the project on a Linux VPS two weeks from now. Is there a better way than redoing everything on the server from scratch:

        install Drupal
        download modules (CCK, Views, Date, Calendar)
        create the contents
        ...

    Thanks

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant; each customer has their own file of around 5 to 10 GB, and I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is otherwise read-only. C# is my favourite language and I want to use ASP.NET MVC. I have thought about the following options:

        use two big servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services;
        denormalize the database and use Redis on each web server, with BookSleeve as the Redis client;
        use a combination of SQL Server and Redis;
        use SQL Server 2012 together with Hadoop;
        use Hadoop without SQL Server.

    What is the best way for a read-only database to get the best performance without losing maintainability? Does map-reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially with Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table on which all queries can be performed, with all other information from the other tables reachable by a simple key lookup.
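
    As a sketch of what the "accelerator table plus key lookup" shape could look like at the web tier (the interface and all names are hypothetical; the store could be Redis via BookSleeve or a SQL primary-key read): queries run against the accelerator rows, and everything else is a single-key get. Because the data is read-only between nightly imports, no cache-invalidation logic is needed; the midnight import simply rebuilds the keys.

        public interface IKeyValueStore
        {
            // Backed by Redis, or by a SQL SELECT on a primary key.
            byte[] Get(string key);
        }

        public class TenantReader
        {
            readonly IKeyValueStore _store;

            public TenantReader(IKeyValueStore store)
            {
                _store = store;
            }

            public byte[] GetDetail(string tenant, string table, string id)
            {
                // One key per row, namespaced by tenant to keep the ~50 databases apart.
                return _store.Get(tenant + ":" + table + ":" + id);
            }
        }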

    Read the article
