Search Results

Search found 56 results on 3 pages for 'ewald peters'.

Page 1/3 | 1 2 3  | Next Page >

  • They Wrote The Book On It

    - by steve.diamond
    First of all, an apology to you all for my not posting this yesterday, when I should have. For those of you bloggers out there, you know the difference between "Save" and "Preview." But I temporarily forgot it. Nevertheless, while I'm not impressed with this mishap, I'm blown away by the initiative three of my colleagues have taken. Jeff Saenger, Tim Koehler, and Louis Peters recently wrote a book, "Oracle CRM On Demand Deployment Guide." Not only that, they got this book PUBLISHED. These guys know their stuff. They have worked in the CRM industry for many years. And trust me, they command a lot of respect inside this organization. In the words of Louis Peters (who posted this verbiage yesterday on LinkedIn), "We've assembled all the best practices and lessons learned over the past six years working with CRM On Demand. The book covers a range of topics - working with SaaS-based applications, planning and executing a successful rollout, designing elegant and high-performing applications, and working effectively with Oracle. We even included several sample designs based on successful real-world deployments. Our main target audience is the CRM On Demand project team - sponsors, project managers, administrators, developers - really anyone planning, implementing or maintaining the application." Now these guys don't know it, but I'll be interviewing one of them and including audio excerpts of that conversation right here next Wednesday. In the meantime, if you want to learn more about successful CRM deployments in general, and working with Oracle CRM On Demand in particular, you should check out this book.

    Read the article

  • What is Python's 20th and final guideline?

    - by Eric
    Sometimes when my life needs some guiding light, I read the Zen of Python by Tim Peters and usually manage to get back on track. Today was one of those days, so I checked it again. I've noticed that the "abstract" section says: Long time Pythoneer Tim Peters succinctly channels the BDFL's guiding principles for Python's design into 20 aphorisms, only 19 of which have been written down. What would this last unwritten aphorism be?

    Read the article

  • C++: How to deep copy a struct with an unknown datatype?

    - by Ewald Peters
    Hi, I have a "data provider" which stores its output in a struct of a certain type, for instance struct DATA_TYPE1{ std::string data_string; }; This struct then has to be cast to a general datatype; I thought about void * or char *, because the "intermediate" object that copies and stores it in its binary tree should be able to store many different types of such struct data. struct BINARY_TREE_ENTRY{ void * DATA; struct BINARY_TREE_ENTRY * next; }; This void * is later taken by another object that casts it back to (struct DATA_TYPE1 *) to get the original data. So the sender and the receiver know about the datatype DATA_TYPE1, but the copying object in between does not. How, then, can the intermediate object deep copy the contents of the different structs when it doesn't know the datatype, only void *, and has no method to copy the real contents? dynamic_cast doesn't work on void *. The "intermediate" object should do something like: void store_data(void * CASTED_DATA_STRUCT){ void * DATA_COPY = create_a_deepcopy_of(CASTED_DATA_STRUCT); push_into_bintree(DATA_COPY); } A simple solution would be for the sending object not to delete the sent data struct until the receiving object has picked it up, but the sending objects are dynamically created and deleted before the receiver gets the data from the intermediate object (the communication is asynchronous), which is why I want to copy it. Instead of converting to void *, I also tried converting to a pointer to a superclass which the intermediate copying object knows about, and which is inherited by all the different struct datatypes: struct DATA_BASE_OBJECT{ public: DATA_BASE_OBJECT(){} DATA_BASE_OBJECT(DATA_BASE_OBJECT * old_ptr){ std::cout << "this should be automatically overridden!" << std::endl; } virtual ~DATA_BASE_OBJECT(){} }; struct DATA_TYPE1 : public DATA_BASE_OBJECT { public: string str; DATA_TYPE1(){} ~DATA_TYPE1(){} DATA_TYPE1(DATA_TYPE1 * old_ptr){ str = old_ptr->str; } }; The corresponding binary tree entry would then be: struct BINARY_TREE_ENTRY{ struct DATA_BASE_OBJECT * DATA; struct BINARY_TREE_ENTRY * next; }; To copy the unknown datatype, I tried the following in the class that receives the unknown datatype as a struct DATA_BASE_OBJECT * (where it previously received the void *): void * copy_data(DATA_BASE_OBJECT * data_that_i_get_in_the_sub_struct){ struct DATA_BASE_OBJECT * copy_sub = new DATA_BASE_OBJECT(data_that_i_get_in_the_sub_struct); push_into_bintree(copy_sub); } I added a copy constructor to DATA_BASE_OBJECT, but if the struct DATA_TYPE1 is first cast to a DATA_BASE_OBJECT and then copied, the contained DATA_TYPE1 part is not copied along with it (the copy is sliced). I then thought about finding out the size of the actual object and simply memcpy-ing it, but the bytes are not stored in one contiguous block, and how would I find out the real size in memory of a struct DATA_TYPE1 that holds a std::string? Which other C++ techniques are available to deep copy an unknown datatype (and maybe to get at the type information some other way at runtime)? Thanks, Ewald
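
    The classic way out of this problem is to stop copying from the outside altogether and let every concrete type copy itself through a virtual clone/deep-copy method, so the intermediate container only ever deals with the base type. A minimal sketch of the pattern follows, shown in C# for illustration since the idea is language-independent (the C++ equivalent is a virtual clone() that returns a base-class pointer and is overridden in each derived struct); all names here are hypothetical:

    ```csharp
    using System;
    using System.Collections.Generic;

    // Base type: the only thing the intermediate container knows about.
    abstract class DataBaseObject
    {
        // Each concrete type knows how to deep copy itself.
        public abstract DataBaseObject DeepCopy();
    }

    class DataType1 : DataBaseObject
    {
        public string DataString;

        public override DataBaseObject DeepCopy()
        {
            // Copies the derived state; no slicing can occur because the
            // call dispatches to the most-derived override.
            return new DataType1 { DataString = this.DataString };
        }
    }

    // The "intermediate" object never needs to know the concrete type.
    class BinaryTreeStore
    {
        private readonly List<DataBaseObject> _entries = new List<DataBaseObject>();

        public void StoreData(DataBaseObject data)
        {
            _entries.Add(data.DeepCopy()); // virtual dispatch does the work
        }
    }
    ```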

    Read the article

  • Set a login with username/password for SQL Server 2008 Express

    - by Ewald Stieger
    I would like to set a username and password for connecting to a server with SQL Server Management Studio 2008. I set up SQL Server 2008 Express on a customer's computer to host a database used by an Access 2007 app. The customer should not be able to access the database or connect with Management Studio. How do I set up a login, and how do I remove any logins that allow a user to connect via Windows Authentication without entering a username and password? I don't have much experience with logins and controlling access.
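
    For reference, the usual recipe is to switch the instance to mixed-mode authentication (Server Properties > Security in Management Studio, then restart the service) and create a dedicated SQL login. A minimal sketch of the T-SQL, run from C# here purely for illustration; the login name, database name, and password are placeholders:

    ```csharp
    using System.Data.SqlClient;

    class CreateLogin
    {
        static void Main()
        {
            // Assumes you are connected as an administrator and mixed-mode
            // authentication is already enabled on the instance.
            const string ddl = @"
                CREATE LOGIN AccessAppUser WITH PASSWORD = 'Str0ng!Passw0rd';
                USE MyAppDb;
                CREATE USER AccessAppUser FOR LOGIN AccessAppUser;
                EXEC sp_addrolemember 'db_datareader', 'AccessAppUser';
                EXEC sp_addrolemember 'db_datawriter', 'AccessAppUser';";

            using (var conn = new SqlConnection(
                @"Data Source=.\SQLEXPRESS;Initial Catalog=master;Integrated Security=True"))
            using (var cmd = new SqlCommand(ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
            // Unwanted Windows logins can then be disabled under Security > Logins,
            // or with T-SQL such as: ALTER LOGIN [BUILTIN\Users] DISABLE;
        }
    }
    ```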

    Read the article

  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends…. If you are a small company that creates a finite number of internal projects, then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and Version Control. But what if you are not…

    Update 9th March 2010: Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald's company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered.

    What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer, or 10. This is the situation that most consultancies find themselves in, and thus they need a more sustainable and maintainable option. What I am advocating is that we should have one "Team Project" per customer, and use areas to create "sub projects" within that single "Team Project".

    "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server

    "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP

    "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP

    "@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations." - Ed Blankenship, Visual Studio ALM MVP

    This would mean that SSW would have a single Team Project called "SSW" that contains all of our internal projects, and consequently all of the Areas and Iterations move down one level in the hierarchy to accommodate this. Where we would have had "\SSW\Sprint 1" we now have "\SSW\SqlDeploy\Sprint 1", with "SqlDeploy" being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long-term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But there are implications, as TFS does not provide this model out-of-the-box. These implications stretch across Areas, Iterations, Queries, the Project Portal and Version Control.

    Michael made a good comment: "I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it'll either be easy to convince them or there is a valid reason for having a different template." - Michael Fourie, Visual Studio ALM MVP

    At SSW we have a standard template that we use, and this is applied across the board to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW, then this approach may benefit you as well.

    Implications around Areas

    Areas should be used for topological classification/isolation of work items. You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top-level item that represents the Project/Product that we want to chop our Team Project into.

    Figure: Creating a sub area to represent a product/project is easy.

    <teamproject>
    <teamproject>\<Functional Area/module whatever>

    Becomes:

    <teamproject>
    <teamproject>\<ProjectName>\
    <teamproject>\<ProjectName>\<Functional Area/module whatever>

    Implications around Iterations

    Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines, and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration, primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects.

    Figure: Adding the same Area value to Iteration as the top level item adds flexibility to Iteration.

    <teamproject>\Sprint 1
    Or
    <teamproject>\Release 1\Sprint 1

    Becomes:

    <teamproject>\<ProjectName>\Sprint 1
    Or
    <teamproject>\<ProjectName>\Release 1\Sprint 1

    Implications around Queries

    Queries are used to filter your work items based on a specified level of granularity. There are a number of queries built into a project created using the MSF Agile 5.0 template, but we now have multiple projects, and it would be a pain to have to edit all of the queries every time we changed project; that would also only allow one team to work on one project at a time.

    Figure: The Queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs.

    In order for project contributors to be able to query based on their project, we need a couple of things. The first thing I did was to create an "_Area Template" folder that has a copy of the project layout with all the queries set up to filter based on the "_Area Template" Area and the "_Sprint Template" you can see in the Area and Iteration views.

    Figure: The template is currently easy to drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool.

    I then created an "Areas" folder to hold all of the area-specific queries. So, when you go to create a new TFS sub-project you just drag "_Area Template" while holding "Ctrl" and drop it onto "Areas". There is a little setup here. That said, I managed it in around 10 minutes, which is not so bad, and I can imagine it being quite easy to build a tool to create these queries.

    Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well.

    Version Control

    What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products.

    Figure: Creating sub folders in source control is as easy as "Right click | Create new folder".

    <teamproject>\DEV\Main\

    Becomes:

    <teamproject>\<ProjectName>\DEV\Main\

    Conclusion

    I think it is up to each company to make a call on how you want to configure your Team Projects, and it depends completely on how many projects/products you are going to have for each customer, including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help.
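
    As a hedged illustration of what such a tool could automate, the sketch below uses the TFS 2010 client object model to pull the work items of one "sub project" purely via its Area and Iteration paths; the server URL, project name, and paths are placeholders:

    ```csharp
    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class SubProjectQuery
    {
        static void Main()
        {
            var collection = new TfsTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            // WIQL's UNDER operator is what makes the one-Team-Project-per-customer
            // model workable: each "sub project" is just an Area/Iteration subtree.
            const string wiql = @"
                SELECT [System.Id], [System.Title], [System.State]
                FROM WorkItems
                WHERE [System.TeamProject] = 'SSW'
                  AND [System.AreaPath] UNDER 'SSW\SqlDeploy'
                  AND [System.IterationPath] UNDER 'SSW\SqlDeploy\Sprint 1'";

            foreach (WorkItem wi in store.Query(wiql))
                Console.WriteLine("{0}: {1}", wi.Id, wi.Title);
        }
    }
    ```
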
    Pros

    - You only have one project to upgrade when a process template changes. After going through an upgrade of over 170 projects prior to the changes in the RC, I can tell you that many projects is no fun.
    - Standardises your Process Template. You will always have the same process implementation across projects/products, without exception.
    - You get tighter control over the permissions. Yes, you can do this on a standard Team Project, but it gets a lot easier with practice.
    - You can "move" work items from one "product" to another. Have we not always wanted to do that?
    - You can rename your projects. Wahoo: everyone wants to do this, now you can.
    - One set of Reporting Services reports to manage. You set an area and iteration to run reports anyway, so you may as well set both.
    - Simplified check-in policies. There is only one set of check-in policies per client, which simplifies the administration of policies.
    - Simplified alerts. As alerts are applied across multiple projects, this simplifies your alert rules per client.

    Cons

    All of these cons could be mitigated by a custom tool that helps automate the creation of "sub-projects" within Team Projects. This custom tool could create areas, iterations, permissions, SharePoint sites and queries. It just does not exist yet :)

    - You need to configure the Areas and Iterations.
    - You need to configure the permissions.
    - You may need to configure sub sites for SharePoint (depends on your requirements).
    - If you have two projects/products in the same Team Project, then you will not see the burn down for each one out-of-the-box, but rather a cumulative for the Team Project. This is not really that much of a problem, as you would have to configure your burndown graphs for your current iteration anyway. Note: when you create a sub site of a TFS-linked portal it will inherit the settings of its parent site :) This is fantastic, as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to the correct one.
    - Every team wants their own customization (via Ewald Hofman). Small teams of 2 people, teams of 30, or even outsourcing engagements need their own process; you cannot allow that, because everybody gets the same work item types. Note: luckily at SSW this is not a problem, as our template is standardised across all projects and customers.
    - Large list of builds (via Ewald Hofman). As the build list in Team Explorer is just a flat list, it can get very cluttered. Note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean up the list.

    Feedback

    Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to see support you?

    Technorati Tags: Visual Studio ALM, TFS Administration, TFS, Team Foundation Server, Project Planning, TFS Customisation

    Read the article

  • Incrementing Assembly Version in TFS Builds and its Effect on Other Build Definitions

    - by ssmantha
    A very common scenario in TFS builds is incrementing the version number of the assemblies. There are quite a few approaches, of which I would like to share two links: Ewald Hofman's approach: http://www.ewaldhofman.nl/post/2010/05/13/Customize-Team-Build-2010-e28093-Part-5-Increase-AssemblyVersion.aspx#id_02e7b082-ce95-49a9-92e9-7dc88887b377 Richard Banks' approach: http://www.richard-banks.org/2010/07/how-to-versioning-builds-with-tfs-2010.html   Both of these approaches work well; however, there are scenarios where editing and checking in the assembly version information can create problems for build definitions meant for continuous integration or gated check-ins. You can suppress continuous integration builds while checking in the AssemblyInfo file by putting the comment "***NO_CI***" in the check-in comment, as specified by Ewald in his blog. However, if you have gated check-in in place, this can turn out to be difficult to suppress; I tried to suppress the build trigger during the check-in process myself, but things didn't turn out well. That's where Richard's solution comes in handy. Both solutions have their own pros and cons, which I believe can only be discovered over a period of time. In the case of Richard's solution, I believe we don't have any history of the assembly version info file, and when you get the latest version of the solution the information will be lost. If you look closely, suppressing continuous integration (the NO_CI approach in check-in comments) is a workaround provided by Microsoft; however, I haven't found anything to suppress gated check-in so far. Suggestions or findings are most welcome.
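
    Whichever trigger strategy you choose, the core step both posts build on is the same: rewrite AssemblyInfo.cs with the new version before compilation. A minimal sketch of that step (paths and the version scheme are placeholders, not either author's exact code):

    ```csharp
    using System.IO;
    using System.Text.RegularExpressions;

    class VersionStamper
    {
        static void Main(string[] args)
        {
            string path = args[0];     // e.g. the workspace copy of AssemblyInfo.cs
            string version = args[1];  // e.g. "1.0.0." + the TFS build number

            string source = File.ReadAllText(path);
            string stamped = Regex.Replace(
                source,
                @"AssemblyVersion\(""\d+\.\d+\.\d+\.\d+""\)",
                string.Format("AssemblyVersion(\"{0}\")", version));

            // If the file is only rewritten locally (Richard's approach), nothing
            // is checked in, so no build trigger fires and no ***NO_CI*** comment
            // is needed; Ewald's approach would check this change in instead.
            File.WriteAllText(path, stamped);
        }
    }
    ```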

    Read the article

  • Managing a file-based public maven repository

    - by Roland Ewald
    I am looking for an easy way to manage a public file-based Maven repository. While we are using the open-source version of Artifactory internally, we now want to put a file-based repository of our published artifacts (and their dependencies) on a separate machine that is publicly available. There are several ways to do this, but none of them seems ideal:

    - Use the Maven Dependency plugin: if it is configured correctly and executed with the goal dependency:copy-dependencies for the release module of our project, it creates a local repository structure that is fine, but this structure contains neither the maven-metadata.xml files nor the checksums.
    - Use Artifactory to export the repo: AFAIK Artifactory only allows you to export a repository as a whole. This would include the non-published modules from our project (which would then need to be deleted manually). Also, all dependencies sit in another repository, so this needs to be done twice, and many dependencies are not even required by a published artifact (only by artifacts that are still for internal use only). Nevertheless, this method would include the maven-metadata.xml files and the checksums for all files.

    To set up an initial version of the repository, I used a mixture of both methods: I first created the Maven repository for all required dependencies via dependency:copy-dependencies and then wrote a script to cherry-pick the maven-metadata.xml files (etc.) from Artifactory. This is terribly cumbersome; isn't there a better way to solve it? Maybe there is another Maven 3 plugin that I am unaware of, or some other command-line tool that does the job? I basically just need a simple way to create a Maven repository that contains all artifacts a given artifact depends on (and no more), and also contains all the metadata expected in a remote repository. Any ideas?

    Read the article

  • Outlook ActiveSync not pushing changes to devices

    - by Ryan Peters
    I recently set up my outlook.com account and connected Outlook 2013 to it using ActiveSync. For a while, it was pushing changes I made, for example from the web client to my phone and my Outlook, when an email was deleted, moved, etc. The change was instant. Now all of a sudden I have to manually refresh to see changes on either device. What happened? I just set up my wife's email account and it works fine, though she has no emails in it yet. I have several hundred. Why is mine not pushing sync changes while hers is? Thanks.

    Read the article

  • samba4 dc "network location cannot be reached"

    - by mitchell babies peters
    To clear the air: CentOS 6.4 (maybe 6.3) as the server, running Samba 4.0.10, trying to add a Windows 7 client that has connectivity to the server. This is what Windows shouts at me as it mocks my dependence on network infrastructure: "The network location cannot be reached." I have access to the domain controller (DC). I'm using the DC as the domain name server (DNS) already, the name is correctly resolving, and it is correctly forwarding outbound traffic. I have nothing but self-taught experience with Active Directory (AD), so if I am missing something obvious, please shout it out, but keep the verbal abuse to a minimum. I searched for samba4 DC plus my error and found nothing relevant to my issue; if I missed something, please point me in that direction. The weekend is just starting as I write this, so I probably won't be back on to check this post for a day or three, but I might, because this mystery is killing me. I followed the samba4 as a DC guide here and supplemented gaps with this. I have tested Kerberos and NTP, set my DC as the clock to sync to in my Windows client, and it appears to be a very small fraction of a second off, so that shouldn't be it. Also, the firewall and SELinux are both off for testing. I have also tried disabling IPv6 and cleared the registry of IPv6 records (allegedly the default samba4 as a DC runs as Windows Server 2003, which allegedly does not support or tolerate the existence of IPv6; fair warning, I heard this on the internet, so it is probably a lie). I have tried a few other things that I have forgotten, because I have been at this for a day and a half now. Ideas welcome. Suggestions for alternatives are also welcome, as long as they are free: I was given a budget of $0 and told to implement Active Directory (with no prior knowledge of Active Directory at that point).

    Read the article

  • .bat file overriding answer

    - by John Peters
    When I run this batch file, typing y or n the first time works fine, but as soon as I choose n, every single time I try inputting something it opens up the 7000-song .wpl list, then closes it and replaces it with Rick Astley... HELP! @echo off :Ask echo Would you like to listen to the best songs out of the 7000 I have?(Y/N) set INPUT= set /P INPUT=Type input: %=% If /I "%INPUT%"=="y" goto yes If /I "%INPUT%"=="n" goto lolno echo Incorrect input & goto Ask :yes start c:/Users/MyName/Music/Playlists/"The Best of the 7000 songs that I have.wpl" :lolno start c:/Users/MyName/Music/Downloads/Music/"Various Artists"/"The Number One 80's Album Disc 2"/"06 Never Gonna Give You Up.mp3"

    Read the article

  • Using a KVM with a Sun Ultra 45

    - by Kit Peters
    I have a friend with two machines: a Mac and a Sun Ultra45. He wants to use a single monitor (connected via DVI) and USB keyboard / mouse with both machines. Try as I might, the Sun box will not bring up a console if the KVM is connected. However, if I connect a monitor, keyboard, and mouse directly to the Sun box, it will bring up a console that I can then (IIRC) connect to the KVM. Is there any way I can make the Sun box always bring up a console, regardless of whether it "sees" a monitor, keyboard, and mouse?

    Read the article

  • Disable zsh autocorrect entirely

    - by Collin Peters
    How do you disable zsh autocorrect entirely? I am aware of the 'nocorrect' option which only applies to certain commands. But I want it off entirely so that when I type 'lear' instead of 'clear', it won't prompt for a correction. I should note that 'unsetopt correctall' doesn't seem to do anything for me collin@mandalay ~ % unsetopt correctall collin@mandalay ~ % lear zsh: correct 'lear' to 'clear' [nyae]? n zsh: command not found: lear

    Read the article

  • Easy Listening = CRM On Demand Podcasts

    - by Anne
    OK, here's my NEW favorite resource for CRM On Demand info -- podcasts! Specifically, the CRM On Demand Podcast site -- signed, sealed, and delivered with humor and know-how. Yes, I admit, I know the cast of characters. But let's face it, sometimes dealing with software is just soooo dry! Not so when discussed by the two main commentators, Louis Peters and Robert Davidson, whom someone once referred to as CRM On Demand's "Click and Clack." (Thought that was too good not to pass along!) Anyhow, another huge plus about the site is the option to listen OR to read. Out walking my dog or doing the dishes? Just turn up the podcast. Listening to music or watching TV? I'll read Louis's entertaining write-ups to glean great info about CRM On Demand in a very short period of time. So that you get a better understanding of why I like this site so much, here's a sampling of what's discussed: Five Things about Books of Business As Louis Peters put it in his entry, when you see "Five Things" in the title, "you'll know you're going to get some concrete advice that you can put to work right away." Well, Louis and Robert do just that, pointing you in the right direction when using Books of Business to segment data. Moving to Indexed Fields - A Rough Guide (only an article, not a podcast) I've read all about performance and even helped develop material around it. But nowhere have I heard indexed custom fields referred to as "super heroes." Louis and Robert use imaginative language to describe the process for moving your data to indexed fields for optimal performance. Data Access QA from the Forums I think that everyone would admit that data access and visibility is the most difficult topic to understand in CRM On Demand. Following up on their previous podcast on the same topic, Louis and Robert answer a few key questions from the many postings on the Oracle CRM On Demand forums. And I bet that the scenarios match many companies' business requirements...maybe even yours! We Need to Talk About Adoption Another expert, Tim Koehler, joins Louis to talk about how to drive user adoption: aligning product usage with business results, communicating why and how to use the product, getting feedback on usability, and so on. Hope I've made my point -- turn to these podcasts to hear knowledgeable folks discuss CRM On Demand tips and tricks in entertaining ways. One podcast is even called "SaaS Talk"!

    Read the article

  • Make @JsonTypeInfo property optional

    - by Mark Peters
    I'm using @JsonTypeInfo to instruct Jackson to look in the @class property for concrete type information. However, sometimes I don't want to have to specify @class, particularly when the subtype can be inferred given the context. What's the best way to do that? Here's an example of the JSON: { "owner": {"name":"Dave"}, "residents":[ {"@class":"jacksonquestion.Dog","breed":"Greyhound"}, {"@class":"jacksonquestion.Human","name":"Cheryl"}, {"@class":"jacksonquestion.Human","name":"Timothy"} ] } and I'm trying to deserialize them into these classes (all in jacksonquestion.*): public class Household { private Human owner; private List<Animal> residents; public Human getOwner() { return owner; } public void setOwner(Human owner) { this.owner = owner; } public List<Animal> getResidents() { return residents; } public void setResidents(List<Animal> residents) { this.residents = residents; } } public class Animal {} public class Dog extends Animal { private String breed; public String getBreed() { return breed; } public void setBreed(String breed) { this.breed = breed; } } public class Human extends Animal { private String name; public String getName() { return name; } public void setName(String name) { this.name = name; } } using this config: @JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@class") private static class AnimalMixin { } //... ObjectMapper objectMapper = new ObjectMapper(); objectMapper.getDeserializationConfig().addMixInAnnotations(Animal.class, AnimalMixin.class); Household household = objectMapper.readValue(json, Household.class); System.out.println(household); As you can see, the owner is declared as a Human, not an Animal, so I want to be able to omit @class and have Jackson infer the type as it normally would. When I run this though, I get org.codehaus.jackson.map.JsonMappingException: Unexpected token (END_OBJECT), expected FIELD_NAME: missing property '@class' that is to contain type id (for class jacksonquestion.Human) Since "owner" doesn't specify @class. Any ideas? One initial thought I had was to use @JsonTypeInfo on the property rather than the type. However, this cannot be leveraged to annotate the element type of a list.

    Read the article

  • NHibernate Criteria Query with Join

    - by John Peters
    I am looking to do the following using an NHibernate criteria query. I have "Product"s, each of which has 0 to many "Media"s, and a product can be associated with 1 to many ProductCategories. These use a join table in the middle:

    ProductCategories: Id, Title
    ProductsProductCategories: ProductCategoryId, ProductId
    Products: Id, Title
    ProductMedias: ProductId, MediaId
    Medias: Id, MediaType

    I need to implement a criteria query to return all Products in a ProductCategory together with the top 1 associated Media, or no media if none exists. So although, for example, a "T Shirt" may have 10 Medias associated, my result should be something similar to this:

    Product.Id | Product.Title | MediaId
    1 | T Shirt | 21
    2 | Shoes | Null
    3 | Hat | 43

    I have tried the following solutions using JoinType.LeftOuterJoin: 1) productCriteria.SetResultTransformer(Transformers.DistinctRootEntity); This hasn't worked, as the transform is done code-side, and since I have .SetFirstResult() and .SetMaxResults() for paging purposes it won't work. 2) .SetProjection( Projections.Distinct( Projections.ProjectionList() .Add(Projections.Alias(Projections.Property("Id"), "Id")) ... .SetResultTransformer(Transformers.AliasToBean()); This hasn't worked, as I cannot seem to populate a value for Medias.Id in the projections. (Similar to http://stackoverflow.com/questions/1036116/nhibernate-criteria-api-projections) Any help would be greatly appreciated.

    Read the article

  • Privilege Elevation only when and if required.

    - by Cameron Peters
    My application only very occasionally requires privilege elevation... I need to reference some 3rd party COM components that only work correctly when run as administrator. I would like my application to request privilege elevation only when it needs it... Generally, I don't want my application to run as administrator unless I need to use the 3rd party COM components. I see that CoCreateAsAdmin could potentially solve the problem, but the component author doesn't set up the required registry entries, and I'm not sure how to use CoCreateAsAdmin in C# and in conjunction with the Runtime Callable Wrapper that is created by tlbimp. Another solution would be to spawn another process, but I have no experience with this yet... I don't want to create a completely separate application... I would be happy to create an assembly that runs in a separate elevated process if someone can show me how to make it work. Thanks...
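
    A hedged sketch of the spawn-another-process route: keep the main application unelevated, move the COM work into a small broker executable (the executable name here is hypothetical), and launch it with the "runas" verb so UAC prompts only when the feature is actually used:

    ```csharp
    using System.ComponentModel;
    using System.Diagnostics;

    class ElevationBroker
    {
        public static void RunComWork(string arguments)
        {
            var psi = new ProcessStartInfo
            {
                FileName = "ComWorkerHost.exe", // hypothetical helper exe hosting the COM calls
                Arguments = arguments,
                Verb = "runas",                 // triggers the UAC elevation prompt
                UseShellExecute = true          // required for Verb to take effect
            };

            try
            {
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                }
            }
            catch (Win32Exception)
            {
                // The user declined the UAC prompt; degrade gracefully.
            }
        }
    }
    ```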

    Read the article

  • C# Linq List Contains Similar Elements

    - by John Peters
    Hi all, I am looking for a LINQ query to see if a similar object already exists. I have an object graph as follows: Cart myCart = new Cart { List<CartProduct> myCartProduct = new List<CartProduct> { CartProduct cartProduct1 = new CartProduct { List<CartProductAttribute> a = new List<CartProductAttribute> { CartProductAttribute cpa1 = new CartProductAttribute{ title="red" }, CartProductAttribute cpa2 = new CartProductAttribute{ title="small" } } } CartProduct cartProduct2 = new CartProduct { List<CartProductAttribute> d = new List<CartProductAttribute> { CartProductAttribute cpa3 = new CartProductAttribute{ title="john" }, CartProductAttribute cpa4 = new CartProductAttribute{ title="mary" } } } } } I would like to get from the Cart a CartProduct that has exactly the same CartProductAttribute title values as a CartProduct that I need to compare: no more and no less. E.g. I need to find a similar CartProduct that has a CartProductAttribute with title="red" and a CartProductAttribute with title="small" in myCart (e.g. 'cartProduct1' in the example): CartProduct cartProductToCompare = new CartProduct { List<CartProductAttribute> cartProductToCompareAttributes = new List<CartProductAttribute> { CartProductAttribute cpa5 = new CartProductAttribute{ title="red" }, CartProductAttribute cpa6 = new CartProductAttribute{ title="small" } } } So from the object graph:

    myCart
      cartProduct1
        cpa1 (title=red)
        cpa2 (title=small)
      cartProduct2
        cpa3 (title=john)
        cpa4 (title=mary)

    a LINQ query looking for

    cartProductToCompare
      cpa5 (title=red)
      cpa6 (title=small)

    should find cartProduct1. Hope all this makes sense... Thanks
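
    One way to express "exactly the same titles, no more and no less" is set equality over the attribute titles. A minimal sketch follows; since the initializer syntax in the question is illustrative, the property names (Products, Attributes, Title) are assumptions, and it presumes titles are unique within a product:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    class CartProductAttribute { public string Title; }
    class CartProduct { public List<CartProductAttribute> Attributes = new List<CartProductAttribute>(); }
    class Cart { public List<CartProduct> Products = new List<CartProduct>(); }

    static class CartMatcher
    {
        public static CartProduct FindSimilar(Cart cart, CartProduct toCompare)
        {
            var wanted = new HashSet<string>(toCompare.Attributes.Select(a => a.Title));

            // SetEquals is true only when both sides contain exactly the same
            // titles, so "red"+"small" matches cartProduct1 and nothing else.
            return cart.Products.FirstOrDefault(
                p => wanted.SetEquals(p.Attributes.Select(a => a.Title)));
        }
    }
    ```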

    Read the article

  • ASP.NET MVC - using model property as form, how can I post to action?

    - by Ryan Peters
    Consider the following model: public class BandProfileModel { public BandModel Band { get; set; } public IEnumerable<Relationship> Requests { get; set; } } and the following form: <% using (Html.BeginForm()) { %> <%: Html.EditorFor(m => m.Band) %> <input type="submit" value="Save Band" /> <% } %> which posts to the following action: public ActionResult EditPost(BandProfileModel m, string band) { // stuff is done here, but m is null? return View(m); } Basically, I only have one property on my model that is used in the form. The other property in BandProfleModel is just used in the UI for other data. I'm trying to update just the Band property, but for each post, the argument "m" is always null (specifically, the .Band property is null). It's posting just fine to the action, so it isn't a problem with my route. Just the data is null. The ID and name attributes of the fields are BAND_whatever and Band.whatever (whatever being a property of Band), so it seems like it would work... What am I doing wrong? How can I use just one property as part of a form, post back, and have values populated via the model binder for my BandProfileModel property in the action? Thanks.
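
    Two common causes are worth ruling out here, offered as guesses rather than a diagnosis: the extra string band parameter shares its name with the Band property and can confuse value binding, and if the action takes the nested type directly, the default binder needs to be told about the "Band." prefix the EditorFor fields carry. A sketch of both options, using the BandModel/BandProfileModel types from the question:

    ```csharp
    using System.Web.Mvc;

    public class BandProfileController : Controller
    {
        // Option 1: bind the nested model directly, telling the default binder
        // that the form fields arrive named "Band.Whatever".
        [HttpPost]
        public ActionResult EditPost([Bind(Prefix = "Band")] BandModel band)
        {
            // band is populated from the Band.* form fields
            return View();
        }

        // Option 2: keep the wrapper model, but rename the extra parameter so
        // it no longer collides with the "Band" property name.
        [HttpPost]
        public ActionResult EditPost2(BandProfileModel m, string bandName)
        {
            return View(m);
        }
    }
    ```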

    Read the article

  • Expose URL to web service

    - by Patrick Peters
    In our project we want to query a document management system for a specific document or movie. The DMS returns a URL with the document location (for example: http://mydomain.myserver1.share/mypdf.pdf or http://mydomain.myserver2.share/mymovie.avi). We want to expose the document to internet users and intranet users. The requested file can be large (large video files). Our architecture is like this:

    request: webapp1 - webapp2 - webapp3 - DMS
    response: DMS - webapp3 - webapp2 - webapp1

    webapp1 could be on the internet. I have been thinking about how we can obfuscate the real URL from the DMS, due to security concerns. I have seen implementations in other web apps where the PDF URL was obfuscated by creating a temp file for the requested document that is specific to the session and user, so other users cannot easily guess the document names of other users. My question: is there a pattern that deals with exposing sensitive company/user data to the public? Our development is in C# 3.5.
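
    The temp-file trick you describe is usually generalised as a token (or claim-check) pattern: hand the client an opaque per-request token instead of the URL, and have a handler stream the file from the internal location. A hedged C# 3.5 sketch, with the token store and handler wiring simplified; a real implementation would add token expiry and per-user authorisation checks:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Net;
    using System.Web;

    public static class DocumentTokens
    {
        private static readonly Dictionary<Guid, string> Map = new Dictionary<Guid, string>();
        private static readonly object Gate = new object();

        // Called after the DMS returns the internal URL; only the token leaves the server.
        public static Guid Issue(string internalUrl)
        {
            Guid token = Guid.NewGuid();
            lock (Gate) { Map[token] = internalUrl; }
            return token;
        }

        public static bool TryResolve(Guid token, out string url)
        {
            lock (Gate) { return Map.TryGetValue(token, out url); }
        }
    }

    // e.g. mapped to GetDocument.ashx?token=... in webapp1
    public class DocumentHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string url;
            if (!DocumentTokens.TryResolve(new Guid(context.Request["token"]), out url))
            {
                context.Response.StatusCode = 404; // unknown or expired token
                return;
            }

            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            using (WebResponse response = request.GetResponse())
            using (Stream stream = response.GetResponseStream())
            {
                // Stream in chunks so large videos never sit in memory.
                byte[] buffer = new byte[64 * 1024];
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                    context.Response.OutputStream.Write(buffer, 0, read);
            }
        }

        public bool IsReusable { get { return true; } }
    }
    ```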

    Read the article

  • Displaying shifts on a timetable/calendar using C#

    - by Dave Peters
    I am very much a beginner and have experience of SQL and a tiny amount of VBA. What I am looking to do is create a tool to pull shift times from a database and to display them on a timetable/calendar. It will be part of a desktop application that the (very tech illiterate) end users will use to view and amend shift patterns. In essence it will be a grid with days on one axis and people on the other (I would however like to have the blocks proportional to shift length). In my mind it would potentially be a simple Gantt chart. All computing stuff I’ve learnt has been through trial and error and I want to use this project to get a much better understanding of C# as well as to get to the end product. I have been reading around for ways to tackle the problem and my issue is creating the timetable framework to which I will bind the data. I am using Visual Studio 2010 and SQL Server 2008 R2. Do you know of good resources which will either start me on the road to designing my own interface or give a basic framework I can adapt? All the resources I’ve found so far have been in different languages or for web based applications. Thank you for your time.
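
    To make the "blocks proportional to shift length" idea concrete, here is a minimal owner-drawn sketch, assuming WinForms (which pairs naturally with Visual Studio 2010 and a SQL Server back end); the shift shape, one-row-per-shift layout, and pixel scale are placeholders to adapt:

    ```csharp
    using System;
    using System.Drawing;
    using System.Windows.Forms;

    class Shift
    {
        public string Person;
        public DateTime Start;
        public DateTime End;
    }

    class ShiftPanel : Panel
    {
        public Shift[] Shifts = new Shift[0];
        private const int RowHeight = 30;
        private const float PixelsPerHour = 20f; // horizontal scale

        public ShiftPanel() { DoubleBuffered = true; }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            for (int i = 0; i < Shifts.Length; i++)
            {
                Shift s = Shifts[i];
                // Width is proportional to shift length; one row per shift here
                // for brevity (a real grid would map each person to a fixed row).
                float x = (float)s.Start.TimeOfDay.TotalHours * PixelsPerHour;
                float w = (float)(s.End - s.Start).TotalHours * PixelsPerHour;
                var rect = new RectangleF(x, i * RowHeight + 4, w, RowHeight - 8);

                e.Graphics.FillRectangle(Brushes.SteelBlue, rect);
                e.Graphics.DrawString(s.Person, Font, Brushes.White, rect.X + 2, rect.Y + 2);
            }
        }
    }
    ```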

    Read the article

  • Login Website, curious Cookie Problem

    - by Collin Peters
    Hello. Language: C#. Development environment: Visual Studio 2008. Sorry if the English is not perfect. I want to log in to a website and get some data from there. My problem is that the cookies do not work: every time, the website says that I should activate cookies, even though I enabled them through a CookieContainer. I sniffed the traffic several times during the login process and I see no problem there. I tried different methods to log in, and I have searched for others with this problem, but with no results... The login page is "www.uploaded.to". Here is my code to log in, in short form: private void login() { //Global CookieContainer for all the Cookies CookieContainer _cookieContainer = new CookieContainer(); //First Login to the Website HttpWebRequest _request1 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login"); _request1.Method = "POST"; _request1.CookieContainer = _cookieContainer; string _postData = "email=XXXXX&password=XXXXX"; byte[] _byteArray = Encoding.UTF8.GetBytes(_postData); Stream _reqStream = _request1.GetRequestStream(); _reqStream.Write(_byteArray, 0, _byteArray.Length); _reqStream.Close(); HttpWebResponse _response1 = (HttpWebResponse)_request1.GetResponse(); _response1.Close(); //######################## //Follow the Link from Request1 HttpWebRequest _request2 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login?coo=1"); _request2.Method = "GET"; _request2.CookieContainer = _cookieContainer; HttpWebResponse _response2 = (HttpWebResponse)_request2.GetResponse(); _response2.Close(); //####################### //Get the Data from the Page after Login HttpWebRequest _request3 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/home"); _request3.Method = "GET"; _request3.CookieContainer = _cookieContainer; HttpWebResponse _response3 = (HttpWebResponse)_request3.GetResponse(); _response3.Close(); }
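
    One thing worth checking, offered as a guess rather than a diagnosis: when HttpWebRequest follows the login redirect itself, cookies set on the intermediate 302 response are occasionally lost. Turning off auto-redirect lets you capture them explicitly and follow the redirect yourself; a sketch of that diagnostic step, meant to replace the first request inside the login() method above:

    ```csharp
    // Sketch: a non-redirecting version of the first request, so the
    // Set-Cookie headers of the 302 can be inspected and stored explicitly.
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login");
    request.Method = "POST";
    request.CookieContainer = _cookieContainer;
    request.AllowAutoRedirect = false; // stop and look at the 302 first
    // ... write the email/password post data exactly as before ...

    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        // Anything the server set on the redirect goes into the shared container.
        _cookieContainer.Add(response.Cookies);
        Console.WriteLine(response.Headers["Set-Cookie"]); // eyeball what came back
        string next = response.Headers["Location"];        // then follow it manually
        Console.WriteLine(next);
    }
    ```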

    Read the article

  • Where do Java Applets live?

    - by Wendi Peters
    I'm trying to figure out where Java applets that I run from the browser live. I'm using Firefox 3.0 on Windows XP with Java 1.6, if that makes any difference. From the Java Control Panel on the toolbar, I can access "Temporary Internet Files - Settings" to find the Java cache. From there I can show the resources and see a file called "dws2010066.dat". Does this resource correspond to a file on disk? I searched the Java cache and my whole computer and came up empty-handed.

    Read the article

  • Must web apps support the back button?

    - by Patrick Peters
    I did a system test on a new ASP.NET app and encountered several exceptions when using the BACK button in my browser (IE 7). I stated in a review record that the web app must support the use of the BACK button (or at least handle it gracefully, for example with session-timeout warnings). The team lead did not agree with me; he stated that a web app should not support working with the back button by default. Do you agree?
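
    Whatever the policy decision, the exceptions themselves are usually fixable: if the errors come from the browser re-showing a cached page whose server-side state has expired, disabling caching on those pages forces a fresh request the server can validate instead of a stale postback. A minimal sketch for an ASP.NET page (a hedged mitigation with placeholder page and session names, not a substitute for proper timeout handling):

    ```csharp
    using System;
    using System.Web;
    using System.Web.UI;

    public partial class CheckoutPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Tell the browser and proxies not to serve this page from cache,
            // so BACK triggers a round trip rather than replaying stale state.
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
            Response.Cache.SetNoStore();

            if (Session["Order"] == null)
            {
                // Session expired or flow restarted: redirect instead of throwing.
                Response.Redirect("~/SessionExpired.aspx");
            }
        }
    }
    ```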

    Read the article

1 2 3  | Next Page >