Search Results

Search found 29825 results on 1193 pages for 'business process'.


  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra.

    Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have confidence in it. Data should be a foundation upon which a business is built.

    In the past, data was stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but the idea of a large filing cabinet continues; it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, the problems are larger still, and the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets.

    You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle.

    By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, and who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad; it is simply a concern that some people may have.

    In new functionality that’s on its way, we see hybrid mechanisms that let people leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example: backups that have been encrypted and are therefore not readable by anyone (including administrators) who doesn’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of its benefits (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close.

    @rob_farley

    Read the article

  • Oracle Applications Cloud Release 8 Customization: Your User Interface, Your Text

    - by ultan o'broin
    Introducing the User Interface Text Editor

    In Oracle Applications Cloud Release 8, there’s an addition to the customization tool set, called the User Interface Text Editor (UITE). When signed in with an application administrator role, users launch this new editing feature from the Navigator's Tools > Customization > User Interface Text menu option. See how the editor is in there with other customization tools?

    [Screenshot: User Interface Text Editor is launched from the Navigator Customization menu]

    Applications customers need a way to make changes to the text that appears in the UI, without having to initiate an IT project. Business users can now easily change labels on fields, for example. Using a composer and an activated sandbox, these users can take advantage of the Oracle Metadata Services (MDS), add a key to a text resource bundle, and then type in their preferred label and its description (as a best practice for further work, I’d recommend always completing that description).

    [Screenshot: Changing a simplified UI field label using Oracle Composer]

    In Release 8, the UITE enables business users to easily change UI text on a much wider basis. As with composers, the UITE requires an activated sandbox where users can make their changes safely, before committing them for others to see. The UITE is used for editing UI text that comes from Oracle ADF resource bundles or from the Message Dictionary (or FND_MESSAGE_% tables, if you’re old enough to remember such things). Functionally, the Message Dictionary is used for the text that appears in business rule-type error, warning or information messages, or as a text source when ADF resource bundles cannot be used. In the UITE, these Message Dictionary texts are referred to as Multi-part Validation Messages.

    If the text comes from ADF resource bundles, then it’s categorized as User Interface Text in the UITE. This category refers to the text that appears in embedded help in the UI or in simple error, warning, confirmation, or information messages. The embedded help types used in the application are explained in an Oracle Fusion Applications User Experience (UX) design pattern set. The message types have a UX design pattern set too.

    Using the UITE

    The UITE enables users to search and replace text in UI strings using case-sensitive options, as well as by type. Users select singular and plural options for text changes, should they apply.

    [Screenshot: Searching and replacing text in the UITE]

    The UITE also provides users with a way to preview and manage changes on an exclusion basis, before committing to the final result. There might, for example, be situations where a phrase or word needs to remain different from how it’s generally used in the application, depending on the context.

    [Screenshot: Previewing replacement text changes. Changes can be excluded where required.]

    Multi-Part Messages

    The Message Dictionary table architecture has been inherited from Oracle E-Business Suite days. However, there are important differences in the Oracle Applications Cloud version, notably the additional message text components, as explained in the UX design patterns. Message Dictionary text has a broad range of uses as indicated, and it can also be reserved for internal application use, for use by PL/SQL and C programs, and so on. Message Dictionary text may even be concatenated together at run time, where required. The UITE handles the flexibility of such text architecture by enabling users to drill down on each message and see how it’s constructed in total. That way, users can ensure that any text changes being made are consistent throughout the different message parts.

    [Screenshot: Multi-part (Message Dictionary) message components in the UITE]

    Message Dictionary messages may also use supportability-related numbers, the ones that appear appended to the message text in the application’s UI. However, should you have the requirement to remove these numbers from users' view, the UITE is not the tool for the job. Instead, see my blog about using the Manage Messages UI.

    Read the article

  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 edition of the Oracle Information InDepth Newsletter, Oracle WebCenter Edition.

    Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences.

    A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions.

    Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives, plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention.

    The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user’s behavior, and explicit targeting that takes a user’s profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer.

    Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence, and to share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation.

    The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices that are in use today. Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized and relevant multichannel marketing initiatives, while minimizing the time and effort required to manage mobile sites.

    Are you curious to know how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.

    Read the article

  • Building Enterprise Smartphone App – Part 1: Why Build Smart Phone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Intro

    Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed by mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available, and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging that can be used in the future.

    Why Build Enterprise Smartphone Applications

    So what are some of the ways that you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms provide an advantage.

    Your mobile sales force is a key candidate for leveraging smartphone apps. They can visit clients in their retail location and place orders on site. It is a more personal approach which can gain you customer loyalty. A salesperson may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer.

    You may also need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices. This can save costs on both hardware and telecommunication contracts.

    Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones. This also reduces costs because the same device is used for communicating with the driver and other operations. Add to that the navigation capability of smartphones and you can see how the return on investment increases.

    Executives are always on the go. They spend most of their time in meetings and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports.

    I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks.

    These are all valid reasons to need smartphone applications. In the next installment I will discuss platforms and features.

    del.icio.us Tags: Smartphones, Enterprise Smartphone Apps, Architecture

    Read the article

  • Repository pattern with lazying loading using POCO

    - by Simon G
    Hi, I'm in the process of starting a new project and creating the business objects and data access etc. I'm just using plain old CLR objects rather than any ORMs. I've created two class libraries:

    1) Business Objects - holds all my business objects; all these objects are lightweight, with only properties and business rules.
    2) Repository - this is for all my data access.

    The majority of my objects will have child lists in them, and my question is what is the best way to lazy load these values, as I don't want to bring back unnecessary information if I don't need to. I've thought about checking, in the "get" of the child property, whether it is null, and if it is, calling my repository to get the child information. This has two problems from what I can see:

    1) The object "knows" how to get itself; I would rather no data access logic be held in the object.
    2) This requires both classes to reference each other, which in Visual Studio throws a circular dependency error.

    Does anyone have any suggestions on how to overcome this issue, or any recommendations on my project layout and where it can be improved? Thanks
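
    One pattern that addresses both objections is to hand the entity a loader delegate instead of a repository reference: the business assembly defines only the delegate signature, the repository supplies the implementation, and so the assembly reference runs one way only. A minimal sketch, assuming hypothetical Order/OrderLine classes:

        using System;
        using System.Collections.Generic;

        // Business Objects assembly: no reference to the repository layer.
        public class Order
        {
            private readonly Func<int, IList<OrderLine>> _lineLoader; // injected by whoever builds the object
            private IList<OrderLine> _lines;

            public Order(int id, Func<int, IList<OrderLine>> lineLoader)
            {
                Id = id;
                _lineLoader = lineLoader;
            }

            public int Id { get; private set; }

            // Child list is fetched on first access; the entity only knows a delegate, not the repository.
            public IList<OrderLine> Lines
            {
                get { return _lines ?? (_lines = _lineLoader(Id)); }
            }
        }

        public class OrderLine { public string Product { get; set; } }

        // Repository assembly: references the Business Objects assembly (one-way, no cycle).
        public class OrderRepository
        {
            public Order GetById(int id)
            {
                // load the scalar fields here, then hand the entity a delegate for its children
                return new Order(id, LoadLines);
            }

            private IList<OrderLine> LoadLines(int orderId)
            {
                // ... real data access goes here ...
                return new List<OrderLine>();
            }
        }

    Because the delegate points back into the repository, the entity holds no data-access logic, just a callback, and the circular dependency disappears.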

    Read the article

  • jQuery "Autocomplete" plugin is messing up the order of my data

    - by Max Williams
    I'm using Jorn Zaefferer's Autocomplete plugin on a couple of different pages. In both instances, the order of displayed strings is a little bit messed up.

    Example 1: array of strings: basically they are in alphabetical order except for General Knowledge, which has been pushed to the top:

    General Knowledge,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Displayed strings:

    General Knowledge,Geography,Art and Design,Business Studies,Citizenship,Design and Technology,English,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Note that Geography has been pushed up to be the second item, after General Knowledge. The rest are all fine.

    Example 2: array of strings: as above but with Cross-curricular instead of General Knowledge:

    Cross-curricular,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Displayed strings:

    Cross-curricular,Citizenship,Art and Design,Business Studies,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Here, Citizenship has been pushed to the number 2 position.

    I've experimented a little, and it seems like there's a bug saying "put things that start with the same letter as the first item after the first item and leave the rest alone". Kind of mystifying. I've tried a bit of debugging by triggering alerts inside the autocomplete plugin code, but everywhere I can see, it's using the correct order. It seems to be just when it's rendered out that it goes wrong. Any ideas anyone? max

    Read the article

  • Unit Testing in the real world

    - by Malfist
    I manage a rather large application (50k+ lines of code) by myself, and it manages some rather critical business actions. To describe the program simply: it's a fancy UI with the ability to display and change data from the database, and it manages around 1,000 rental units, about 3k tenants, and all the finances.

    When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test by going through the stuff I changed at the functional level (i.e. I run the program and work through the UI), but I can't test for every situation. That is why I want to get started with unit testing.

    However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done on events. To complicate things, everything is database driven, and I've not seen (so far) good suggestions on how to unit test database interactions.

    What would be a good way to get started with unit testing for this application? Keep in mind, I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?
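
    A gentler way in than a full rewrite: each time a feature is touched, pull its logic out of the event handler into a plain class and put a test on it. An illustrative sketch with a hypothetical late-fee rule and NUnit syntax (any test framework works the same way):

        using System;
        using NUnit.Framework;

        // Extracted from a button-click handler into a plain, testable class.
        public class LateFeeCalculator
        {
            public decimal Calculate(decimal rent, int daysLate)
            {
                if (daysLate <= 5) return 0m;                                   // grace period
                return Math.Min(rent * 0.01m * (daysLate - 5), rent * 0.10m);  // 1% per day, capped at 10%
            }
        }

        [TestFixture]
        public class LateFeeCalculatorTests
        {
            [Test]
            public void No_fee_inside_grace_period()
            {
                Assert.AreEqual(0m, new LateFeeCalculator().Calculate(1000m, 5));
            }

            [Test]
            public void Fee_is_capped_at_ten_percent()
            {
                Assert.AreEqual(100m, new LateFeeCalculator().Calculate(1000m, 60));
            }
        }

    The UI handler then shrinks to a call into the extracted class; database interactions can be hidden behind an interface the same way, so tests can substitute an in-memory fake.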

    Read the article

  • jquery toggle and fade in one function?

    - by tony noriega
    I was wondering if toggle() and fadeIn() could be used in one function... I got this to work, but it only fades in after the second click, not on the first click of the toggle.

    $(document).ready(function() {
        $('#business-blue').hide();

        $('a#biz-blue').click(function() {
            $('#business-blue').toggle().fadeIn('slow');
            return false;
        });

        // hides the slickbox on clicking the noted link
        $('a#biz-blue-hide').click(function() {
            $('#business-blue').hide('fast');
            return false;
        });
    });

    <a href="#" id="biz-blue">Learn More</a>

    <div id="business-blue" style="border:1px solid #00ff00; background:#c6c1b8; height:600px; width:600px; margin:0 auto; position:relative;">
        <p>stuff here</p>
    </div>

    Read the article

  • Flexible design - customizable entity model, UI and workflow

    - by Ngm
    Hi All, I want to achieve the following aspects in the software I am building:

    1. Customizable entity model
    2. Customizable UI
    3. Customizable workflow

    I have thought about an approach to achieve this; I want you to review it and make suggestions:

    - Entity objects should be plain objects and will hold just data.
    - Separate the entity model and DB schema by using a framework (like NHibernate?). This will allow easy modification of entity objects.
    - Business logic to fetch/modify entities has to be granular enough so that it can be invoked as part of the workflow.
    - Business objects should not hold any state, and hence will contain only static methods.
    - The workflow will decide, depending upon the "state" of an entity/entities, which methods on the business object/objects to invoke.
    - The workflow should obtain the results of the processing and then pass the business objects on to the appropriate UI screen.
    - The UI screen has to contain instructions about how to display a given entity/entities. Possibly the UI has to be generated dynamically based on a set of UI instructions (like XUL).

    What do you think about this approach? Suggest which existing frameworks (like NHibernate, Windows Workflow) fit into this model, so that I will not spend time on coding these frameworks. Also suggest: is there any asp.net framework that can generate dynamic asp.net ajax pages based on a set of UI instructions (like Mozilla XUL)?

    I have recently been exploring Apache Ofbiz and was impressed by its ability to customize most areas of the application: UI, workflow, entities. Is there any similar (not necessarily an ERP system) application developed in C#/.Net which offers a similar level of customization? I am looking for examples of applications developed in C# that are highly customizable in terms of UI, workflow and entity model.
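
    As a rough illustration of the stateless-business-logic and workflow points above, here is a minimal C# sketch with hypothetical Order types; the real mapping from state to action would presumably come from configuration rather than a hard-coded dictionary:

        using System;
        using System.Collections.Generic;

        public enum OrderState { Draft, Submitted, Approved }

        // Plain entity: data only, no behavior.
        public class Order
        {
            public OrderState State { get; set; }
            public decimal Total { get; set; }
        }

        // Stateless business logic: granular static methods the workflow can invoke.
        public static class OrderLogic
        {
            public static void Submit(Order o) { o.State = OrderState.Submitted; }
            public static void Approve(Order o) { o.State = OrderState.Approved; }
        }

        // A trivial workflow: maps the entity's current state to the next business action.
        public class OrderWorkflow
        {
            private static readonly Dictionary<OrderState, Action<Order>> Steps =
                new Dictionary<OrderState, Action<Order>>
                {
                    { OrderState.Draft, OrderLogic.Submit },
                    { OrderState.Submitted, OrderLogic.Approve },
                };

            public void Advance(Order o)
            {
                Action<Order> step;
                if (Steps.TryGetValue(o.State, out step)) step(o);
            }
        }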

    Read the article

  • Hibernate : Opinions in Composite PK vs Surrogate PK

    - by Albert Kam
    As I understand it, whenever I use @Id and @GeneratedValue on a Long field inside a JPA/Hibernate entity, I'm actually using a surrogate key, and I think this is a very nice way to define a primary key, considering my not-so-good experiences in using composite primary keys, where:

    - there is more than one business-value-column combination that becomes a unique PK
    - the composite PK values get duplicated across the detail tables
    - you cannot change the business value inside that composite PK

    I know Hibernate can support both types of PK, but I'm left wondering because of my previous chats with experienced colleagues, where they said that a composite PK is easier to deal with when doing complex SQL queries and stored procedure processes. They went on to say that using surrogate keys will complicate things when doing joins, and that there are several conditions where it's impossible to do some things when using surrogate keys. Although I'm sorry I can't explain the detail here, since I was not clear enough when they explained it. Maybe I'll put more details next time.

    I'm currently trying to do a project, and want to try out surrogate keys, since they don't get duplicated across tables, and we can change the business-column values. And when some business value combination needs to be unique, I can use something like:

    @Table(name="MY_TABLE", uniqueConstraints={
        @UniqueConstraint(columnNames={"FIRST_NAME", "LAST_NAME"}) // name + lastName combination must be unique
    })

    But I'm still in doubt because of the previous discussion about the composite key. Could you share your experiences in this matter? Thank you!

    Read the article

  • Aspect-Oriented Programming in OOP world - breaking rules ?

    - by Maksim Kondratyuk
    Hi all! When I worked on an asp.net mvc web site project, I investigated different approaches for validation. Some of them were DataAnnotation validation and the Validation Block. They use attributes for setting up validation rules, like this:

    [Required]
    public string Name { get; set; }

    I was confused how this approach squares with the SRP (single responsibility principle) from the OOP world. Also, I don't like any business logic in business objects; I prefer a "poor business objects" model. But when I decorate my business objects with validation attributes for real requirements, they become ugly (lots of attributes, localization logic, and so on). The idea behind attributes is really simple, but in my opinion the validation decoration should be separated from the object. I'm not sure whether the approach of separating validation rules into XML files or into other objects is a solution.

    Another bad side of AOP: problems with unit testing such code. When I decorated some controller actions with custom attributes, for example to import/export TempData between actions or to initialize some required services, I couldn't write proper unit tests for those actions.

    Do you think that attributes don't break SRP? Or do you just disregard this and think that it's the simplest, and not the worst, way?

    P.S. I read some similar articles and discussions and I just want to put things in proper order.
    P.P.S. Sorry for my "fluent" english :=)
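
    One way to keep the entity "poor" is to move the rules into a dedicated validator class that can be unit tested on its own, so no attributes remain on the object. A minimal hand-rolled sketch with a hypothetical Customer type (libraries such as FluentValidation are built around the same idea):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // The business object stays "poor": no validation attributes at all.
        public class Customer
        {
            public string Name { get; set; }
        }

        // Validation rules live in their own class, away from the entity.
        public class CustomerValidator
        {
            private readonly List<KeyValuePair<Func<Customer, bool>, string>> _rules =
                new List<KeyValuePair<Func<Customer, bool>, string>>();

            public CustomerValidator()
            {
                AddRule(c => !string.IsNullOrEmpty(c.Name), "Name is required");
                AddRule(c => c.Name == null || c.Name.Length <= 50, "Name is too long");
            }

            private void AddRule(Func<Customer, bool> rule, string message)
            {
                _rules.Add(new KeyValuePair<Func<Customer, bool>, string>(rule, message));
            }

            // Returns the messages of every rule the instance fails.
            public IEnumerable<string> Validate(Customer c)
            {
                return _rules.Where(r => !r.Key(c)).Select(r => r.Value);
            }
        }

    The validator is plain code, so both the rules and any code that consumes them can be exercised in ordinary unit tests, with no attribute scanning involved.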

    Read the article

  • Rails form with a better URL

    - by Sam
    Wow, switching to REST is a different paradigm for sure and is mainly a headache right now.

    view

    <% form_tag(businesses_path, :method => "get") do %>
      <%= select_tag :business_category_id,
            options_for_select(@business_categories.collect { |bc| [bc.name, bc.id] }.insert(0, ["All Containers", 0]),
                               which_business_category(@business_category)),
            { :onchange => "this.form.submit();" } %>
    <% end %>

    controller

    def index
      @business_categories = BusinessCategory.find(:all)
      if params[:business_category_id].to_i != 0
        @business_category = BusinessCategory.find(params[:business_category_id])
        @businesses = @business_category.businesses
      else
        @businesses = Business.all
      end

      respond_to do |format|
        format.html # index.html.erb
        format.xml  { render :xml => @businesses }
      end
    end

    routes

    map.resources

    What I want to do is get a better URL than what this form is producing, which is the following:

    http://localhost:3000/businesses?business_category_id=1

    Without REST I would have done something like http://localhost:3000/business/view/bbq (bbq as a permalink), or I would have done http://localhost:300/business_categories/view/bbq and fetched the businesses that are associated with the category, but I don't really know the best way of doing this. So the two questions are: what is the best logic for finding a business by its categories using the latter form, and, number two, how do I get that in a pretty URL, all through RESTful routes in Rails?

    Read the article

  • What is the best practice in regards to building composite dtos off of an aggregate root with domain

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object off of an aggregate root, and would love to hear people's thoughts on this.

    For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that has either attributes or full DTOs of its objects. What I'm struggling with is trying to figure out where that assembly should happen.

    I can see it going on the domain object of the aggregate root, as there is some business logic associated with it. The benefit of this approach, from what I've deduced thus far, is that it should stop the inevitable business logic from bleeding outside of the domain object. It also allows for private methods that take care of tasks that could become more complex for an external builder. The downsides are that the domain object becomes much more entrenched in the application's workflow and represents much more than just the domain object. It also could become very large in the scenario where you need multiple composite DTOs.

    Alternatively, I could also see it belonging to some form of transfer object assembler, where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto). Outside of that, the builder would handle the construction and deconstruction of composite DTOs. The downside is kind of mentioned above: I fear this will easily lead to developers unfamiliar with DDD bleeding a ton of business logic into the assembler, which is what I desperately want to avoid.

    Any thoughts would be greatly appreciated.
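
    For what it's worth, the assembler option might look like the sketch below (all names hypothetical). The discipline that keeps logic out of the assembler is a convention: it may only copy values and call domain methods, never branch on raw data itself.

        // Aggregate root exposes state and rules; it doesn't know the DTO exists.
        public class Order
        {
            public int Id { get; set; }
            public Customer Customer { get; set; }
            public decimal Total { get; set; }
            public bool IsOverdue() { /* domain rule lives here */ return Total > 0; }
        }

        public class Customer { public string Name { get; set; } }

        // Flat view DTO shaped for one specific screen.
        public class OrderSummaryDto
        {
            public int OrderId { get; set; }
            public string CustomerName { get; set; }
            public string Status { get; set; }
        }

        // Assembler: pure mapping; any decision it needs is asked of the domain object.
        public class OrderSummaryAssembler
        {
            public OrderSummaryDto Assemble(Order order)
            {
                return new OrderSummaryDto
                {
                    OrderId = order.Id,
                    CustomerName = order.Customer.Name,
                    Status = order.IsOverdue() ? "Overdue" : "Open" // the rule stays in the domain
                };
            }
        }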

    Read the article

  • Entity Framework POCO objects

    - by Dan
    I'm struggling with understanding Entity Framework and POCO objects. Here's what I'm trying to achieve:

    1) Separate the DAL from the business layer by having my business layer use an interface to my DAL. Maybe use Unity to create my context.
    2) Use Entity Framework inside my DAL.

    I have a domain model with objects that reside in my business layer. I also have a database full of tables that doesn't really represent my domain model. I set up Entity Framework and generated POCO objects by using the ADO.NET POCO Generator extension. This gave me an object for each table in my database.

    Now I want to be able to say context.GetAll<User>(); and have it return a list of my User objects. The User object is in my business layer. Is that possible? Does that make sense, or am I totally off and should I start over? I'm guessing I need to use the repository pattern to achieve this, but I'm not sure. Can anyone help?
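
    That mapping is possible, but EF4 won't do it by itself: the generated POCOs follow the table shapes, so a repository typically queries them and projects into the domain types. A rough sketch, with a hypothetical UserRow standing in for the generated POCO (the Unity wiring is omitted):

        using System.Collections.Generic;
        using System.Data.Objects;   // EF4 ObjectContext
        using System.Linq;

        // Business layer: the interface and the domain object.
        public interface IUserRepository
        {
            IList<User> GetAll();
        }

        public class User { public string Name { get; set; } }

        // DAL: EF4 POCO generated from the table (its shape follows the database, not the domain).
        public class UserRow { public string UserName { get; set; } }

        public class EfUserRepository : IUserRepository
        {
            private readonly ObjectContext _context;

            public EfUserRepository(ObjectContext context) { _context = context; }

            public IList<User> GetAll()
            {
                // Query the table-shaped POCO, then project into the domain object.
                return _context.CreateObjectSet<UserRow>()
                               .AsEnumerable()
                               .Select(r => new User { Name = r.UserName })
                               .ToList();
            }
        }

    The business layer only sees IUserRepository and User; the EF context and the table-shaped POCOs stay behind the interface.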

    Read the article

  • ASP.Net Session Storage provider in 3-layer architecture

    - by Tedd Hansen
    I'm implementing a custom session storage provider in ASP.NET. We have a strict 3-layer architecture and therefore the session storage needs to go through the business layer: Presentation - Business - Database. The business layer is accessed through WCF. The database is MSSQL.

    What I need is (in order of preference):

    1. A commercial/free/open source product that solves this.
    2. The source code of a SqlSessionStateStore (custom session store), not the ODBC sample on MSDN, that I can modify to use a middle layer. I've tried looking at the .NET source through Reflector, but the code is not usable.

    Note: I understand how to do this. I am looking for working samples, preferably ones that have been proven to work fine under heavy load. The ODBC sample on MSDN doesn't use the (new?) stored procs that the built-in SqlSessionStateStore uses; I'd like to use these if possible (decreases traffic).

    Edit1: To answer Simon's question with more info: the ASP.NET Session() object can be stored InProc, in the ASP.NET State Service, or in SQL Server. In a secure 3-layer model the presentation layer (web server) does not have direct/physical access to the database layer (SQL Server). And even without the physical limitations, from an architectural standpoint you may not want this. InProc and the ASP.NET State Service do not support load balancing and don't have fault tolerance. Therefore the only option is to access SQL through a webservice middle layer (business layer).
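
    No load-tested sample to hand, but the shape of any replacement is dictated by SessionStateStoreProviderBase, so here is a bare-bones sketch showing where the middle-layer calls slot in. ISessionService is a hypothetical stand-in for the WCF client; locking is not implemented, which a real store under load must handle:

        using System;
        using System.Collections.Specialized;
        using System.IO;
        using System.Web;
        using System.Web.SessionState;

        // Hypothetical business-layer proxy (e.g. a WCF client); not an existing API.
        public interface ISessionService
        {
            byte[] Get(string id);
            void Save(string id, byte[] data, int timeoutMinutes);
            void Remove(string id);
        }

        public class MiddleTierSessionStateStore : SessionStateStoreProviderBase
        {
            private ISessionService _service;
            private int _timeoutMinutes = 20; // read the real value from config in Initialize

            public override void Initialize(string name, NameValueCollection config)
            {
                base.Initialize(name ?? "MiddleTierSessionStateStore", config);
                // _service = ... build the WCF/webservice client from config here
            }

            public override SessionStateStoreData GetItem(HttpContext context, string id,
                out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
            {
                // Sketch only: no distributed locking; a real store must honour the lock contract.
                locked = false; lockAge = TimeSpan.Zero; lockId = null; actions = SessionStateActions.None;
                byte[] raw = _service.Get(id);
                return raw == null ? null : Deserialize(context, raw);
            }

            public override SessionStateStoreData GetItemExclusive(HttpContext context, string id,
                out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
            {
                return GetItem(context, id, out locked, out lockAge, out lockId, out actions);
            }

            public override void SetAndReleaseItemExclusive(HttpContext context, string id,
                SessionStateStoreData item, object lockId, bool newItem)
            {
                _service.Save(id, Serialize(item), item.Timeout);
            }

            public override SessionStateStoreData CreateNewStoreData(HttpContext context, int timeout)
            {
                return new SessionStateStoreData(new SessionStateItemCollection(),
                    SessionStateUtility.GetSessionStaticObjects(context), timeout);
            }

            public override void CreateUninitializedItem(HttpContext context, string id, int timeout)
            {
                _service.Save(id, Serialize(CreateNewStoreData(context, timeout)), timeout);
            }

            public override void RemoveItem(HttpContext context, string id, object lockId, SessionStateStoreData item)
            {
                _service.Remove(id);
            }

            // No-ops in this sketch:
            public override void ReleaseItemExclusive(HttpContext context, string id, object lockId) { }
            public override void ResetItemTimeout(HttpContext context, string id) { }
            public override bool SetItemExpireCallback(SessionStateItemExpireCallback expireCallback) { return false; }
            public override void InitializeRequest(HttpContext context) { }
            public override void EndRequest(HttpContext context) { }
            public override void Dispose() { }

            private static byte[] Serialize(SessionStateStoreData item)
            {
                using (var ms = new MemoryStream())
                using (var writer = new BinaryWriter(ms))
                {
                    // Items was created as a SessionStateItemCollection above, so the cast holds.
                    ((SessionStateItemCollection)item.Items).Serialize(writer);
                    writer.Flush();
                    return ms.ToArray();
                }
            }

            private SessionStateStoreData Deserialize(HttpContext context, byte[] raw)
            {
                using (var ms = new MemoryStream(raw))
                using (var reader = new BinaryReader(ms))
                {
                    return new SessionStateStoreData(SessionStateItemCollection.Deserialize(reader),
                        SessionStateUtility.GetSessionStaticObjects(context), _timeoutMinutes);
                }
            }
        }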

    Read the article

  • Creating Modules for VB Program (Similar to firefox add on)

    - by Assimilater
    Hi all. I have a contact manager program that I'd like to design for multiple network marketing companies. My desired structure would include a core program which covers basic contact management functions. After that, one could purchase a module for their specific network marketing company. This module would contain a variety of controls that the original program would need to be able to manipulate. Here is an example of what it would have to have:

    - A group box containing buttons that link to a genealogy view, and the option to import one's downline from the back office provided by a company.
    - A panel which is displayed on the contact page allowing the user to input business information, or which will be filled by importing a downline from the back office: i.e. business ID, qualification level, sponsor information, etc.
    - A panel displayed when one searches for contacts on the contact list page, which allows the user to sort based on information such as when they joined, what their business ID is, and so forth.
    - A panel which is displayed on a personal business overview page, which presents to an individual how many people in their downline are at each qualification level, and can develop a mailing list for individuals of a certain qualification level.

    I have the code developed to perform all these functions; I just wanted to give you an example of what needed to be done. I'm thinking that what I'm trying to create is a library that one can download and the program will recognize, but I'm not really sure where to go. What I'm really trying to do is figure out what kind of file I can make that will contain all this code and the GUI information that the program will recognize. Any ideas?

    With thanks, John
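
    A common .NET shape for this: define an interface in the core program (or a small shared "SDK" assembly), ship each company's module as a class library implementing it, and load the DLLs with reflection at startup. A C# sketch with hypothetical member names follows; the same approach works identically in VB.NET:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;
        using System.Windows.Forms;

        // Shipped with the core program so modules can implement it.
        public interface ICompanyModule
        {
            string CompanyName { get; }
            Control CreateBusinessInfoPanel();   // the panel shown on the contact page
            Control CreateGenealogyView();       // the downline/genealogy UI
        }

        public static class ModuleLoader
        {
            // Scans a Modules folder and instantiates every ICompanyModule found.
            public static List<ICompanyModule> LoadAll(string folder)
            {
                var modules = new List<ICompanyModule>();
                foreach (string dll in Directory.GetFiles(folder, "*.dll"))
                {
                    Assembly asm = Assembly.LoadFrom(dll);
                    foreach (Type type in asm.GetTypes())
                    {
                        if (typeof(ICompanyModule).IsAssignableFrom(type) && !type.IsAbstract)
                            modules.Add((ICompanyModule)Activator.CreateInstance(type));
                    }
                }
                return modules;
            }
        }

    So the "kind of file" is just a class library (.dll) dropped into a known folder; the core program discovers it by interface rather than by name.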

    Read the article

  • Cannot perform an ORDERBY against my EF4 data

    - by Jaxidian
    I have a query hitting EF4 using STEs and I'm having an issue with user-defined sorting. In debugging this, I have removed the dynamic sorting and am hard-coding it, and I still have the issue. If I swap/uncomment the var results = xxx lines in GetMyBusinesses(), my results are not sorted any differently: they are always sorted ascending. FYI, Name is a varchar(200) field in SQL 2008 on my Business table.

    private IQueryable<Business> GetMyBusinesses(MyDBContext CurrentContext)
    {
        var myBusinesses = from a in CurrentContext.A
                           join f in CurrentContext.F on a.FID equals f.id
                           join b in CurrentContext.Businesses on f.BID equals b.id
                           where a.PersonID == 52
                           select b;

        var results = from r in myBusinesses
                      orderby "Name" ascending
                      select r;

        //var results = from r in results
        //              orderby "Name" descending
        //              select r;

        return results;
    }

    private PartialEntitiesList<Business> DoStuff()
    {
        var myBusinesses = GetMyBusinesses();
        var myBusinessesCount = GetMyBusinesses().Count();
        Results = new PartialEntitiesList<Business>(
            myBusinesses.Skip((PageNumber - 1) * PageSize).Take(PageSize).ToList())
            { UnpartialTotalCount = myBusinessesCount };
        return Results;
    }

    public class PartialEntitiesList<T> : List<T>
    {
        public PartialEntitiesList() { }
        public PartialEntitiesList(int capacity) : base(capacity) { }
        public PartialEntitiesList(IEnumerable<T> collection) : base(collection) { }
        public int UnpartialTotalCount { get; set; }
    }
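
    For anyone hitting the same symptom, the likely culprit in the snippet above is ordering by the string literal "Name": LINQ treats that as ordering by a constant, so every row gets the same sort key and the rows come back in their original order. Ordering by the property itself behaves as expected:

        var results = from r in myBusinesses
                      orderby r.Name ascending   // the column, not a string literal
                      select r;

        // or descending:
        var results2 = from r in myBusinesses
                       orderby r.Name descending
                       select r;

        // For truly user-defined sort columns, the orderby expression has to be built
        // at run time (e.g. via expression trees or the Dynamic LINQ sample library);
        // a plain string inside the query expression will never do it.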

    Read the article

  • Asynchronous SQL Operations

    - by Paul Hatcherian
    I've got a problem I'm not sure how best to solve. I have an application which updates a database in response to ad hoc requests. One request in particular is quite common. The request is an update that by itself is quite simple, but has some complex preconditions.

    For this request, the business layer first requests a set of data from the data layer. The business logic layer evaluates the data from the database and the parameters from the request; from this, the action to be performed is determined and the request's response message(s) are created. The business layer then executes the actual update command that is the purpose of the request.

    This last step is the problem: the command is dependent on the state of the database, which might have changed since the business logic ran. Locking down the data read in this operation across several round-trips to the database doesn't seem like a good idea either. Is there a 'best-practice' way to accomplish something like this? Thanks!
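
    One widely used answer to this is optimistic concurrency: read a version column (e.g. a SQL Server rowversion) along with the precondition data, then make the final UPDATE conditional on that version being unchanged, and retry or report if it no longer matches. A minimal sketch, assuming a hypothetical Orders table with a RowVersion column:

        using System.Data.SqlClient;

        public static class OrderUpdater
        {
            // Returns true if the update applied; false means the row changed since it was read.
            public static bool TryUpdate(string connStr, int orderId, decimal newAmount, byte[] originalVersion)
            {
                const string sql =
                    "UPDATE Orders SET Amount = @amount " +
                    "WHERE Id = @id AND RowVersion = @originalVersion"; // version read with the precondition data

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@amount", newAmount);
                    cmd.Parameters.AddWithValue("@id", orderId);
                    cmd.Parameters.AddWithValue("@originalVersion", originalVersion);
                    conn.Open();
                    return cmd.ExecuteNonQuery() == 1; // 0 rows: precondition no longer holds
                }
            }
        }

    On a false return, the business layer can re-read, re-evaluate the preconditions, and either retry or fail the request, which avoids holding any locks across the round-trips.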

    Read the article

  • Basic Application Organization + Publishing (.NET 4.0)

    - by keynesiancross
    Hi all, I'm trying to figure out the best way to keep my program organized. Currently I have many class files in one project file, but some of these classes do things that are very different, and some I would like to expose to other applications in the future. One thought I had for organizing my application was to create multiple project files, with one "Main" project which would interact with all the other projects and their relevant classes as needed. Does this make sense? I was wondering if anyone had any suggestions regarding using multiple project files in one solution (and how do you create something like this?), and whether it makes sense to have multiple namespaces in one solution... Cheers

    ---- Edit below ----

    Sorry, my fault. Currently my program is all in one console project. Within this project I have several classes, some of which basically launch a BackgroundWorker and run an endless loop pulling data. The BackgroundWorker then passes this data back to the main business logic as needed. I'm hoping to separate this data-pull material (including the BackgroundWorker material) into one project file, and the rest of the business logic into another project file. The projects will have to pass objects between each other, though (the data to the main business logic, and the business logic will pass startup parameters to the dataPull project)... Hopefully this adds a bit more detail.

    Read the article

  • Patterns and Libraries for working with raw UI values.

    - by ProfK
    By raw values, I mean the application-level values provided by UI controls, such as the Text property on a TextBox. Too often I find myself writing code to check and parse such values before they get used as a business-level value, e.g. PaymentTermsNumDays. I've mitigated a lot of the spade work with rough-and-ready extension methods like String.ToNullableInt, but we all know that just isn't right. We can't put the whole world on String's shoulders.

    Do I look at tasking my UI with providing business values, using a ruleset pushed out from the server app, or open my business objects up a bit to do the required sanitising etc. as required? Neither of these approaches sits quite right with me; the first seems closer to ideal, but quite a bit of work, while the latter doesn't show much respect for the business objects' single responsibility. The responsibilities of the UI are a closer match. Between these extremes, I could also just implement another DTO layer, an IoC container with sanitising and parsing services, derive enhanced UI controls, or stick to copy-and-paste inline drudgery.
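
    For reference, an extension like the String.ToNullableInt the author mentions needs only a few lines; a minimal version might look like this:

        public static class StringParsingExtensions
        {
            // "42" -> 42; "" or "abc" -> null
            public static int? ToNullableInt(this string s)
            {
                int value;
                return int.TryParse(s, out value) ? (int?)value : null;
            }
        }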

    Read the article

  • I’m screwing up this LEFT JOIN

    - by Jason Shultz
    I'm doing a left join on several tables. What I want to happen is that it lists all the businesses. Then it looks through the photos, videos, specials and categories. If there are photos, the table shows yes; if there are videos, it shows yes in the table. It does all that without any problems, except for one thing: for every photo, it shows the business that many times. For example, if there are 5 photos in the DB for a business, it shows the business five times. Obviously, this is not what I want to happen. Can you help?

    function frontPageList()
    {
        $this->db->select('b.id, b.busname, b.busowner, b.webaddress, p.thumb, v.title, c.catname');
        $this->db->from('business AS b');
        $this->db->where('b.featured', '1');
        $this->db->join('photos AS p', 'p.busid = b.id', 'left');
        $this->db->join('video AS v', 'v.busid = b.id', 'left');
        $this->db->join('specials AS s', 's.busid = b.id', 'left');
        $this->db->join('category AS c', 'b.category = c.id', 'left');
        return $this->db->get();
    }

    Read the article

  • OpenStack: Keystone service stops immediately after starting

    - by user241618
    When restarting the Keystone service, it starts with a PID, but within a fraction of a second it stops. Checking the status immediately afterwards shows a different PID, and when rechecking afterwards, it's dead.

    root@hyper5:~# service keystone restart
    stop: Unknown instance:
    keystone start/running, process 37746
    root@hyper5:~# service keystone status
    keystone start/running, process 37750
    root@hyper5:~# service keystone status
    keystone stop/waiting

    Read the article

  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***

    In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other applications need to reference. When the libraries grow in size over time, so does the number of assemblies. So all solutions that use the common library must reference all the necessary assemblies that they need, and if we for example do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise it won't compile.

    To improve on this, we use a tool from Microsoft Research called ILMerge (download from here). It can be used to merge several assemblies into one assembly that contains all types. If you haven't used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this:

    "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrary2.dll ClassLibrary3.dll

    This merges 3 assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc.) from ClassLibrary1.dll, using the /attr switch. For more info on the ILMerge command line tool, see the above link.

    This approach works, but requires a little bit too much knowledge for the developers creating builds, therefore I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to set up a new build definition and have the build automatically do the merging. The usage of the activity is then implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.

    Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly.

    /// <summary>
    /// Activity for merging a list of assemblies into one, using ILMerge
    /// </summary>
    public sealed class ILMergeActivity : BaseCodeActivity
    {
        /// <summary>
        /// A list of file paths to the assemblies that should be merged
        /// </summary>
        [RequiredArgument]
        public InArgument<IEnumerable<string>> InputAssemblies { get; set; }

        /// <summary>
        /// Full path to the generated assembly
        /// </summary>
        [RequiredArgument]
        public InArgument<string> OutputFile { get; set; }

        /// <summary>
        /// Which input assembly the attributes for the generated assembly should be copied from.
        /// Optional. If not specified, the first input assembly will be used
        /// </summary>
        public InArgument<string> AttributeFile { get; set; }

        /// <summary>
        /// Kind of assembly to generate, dll or exe
        /// </summary>
        public InArgument<TargetKindEnum> TargetKind { get; set; }

        // If your activity returns a value, derive from CodeActivity<TResult>
        // and return the value from the Execute method.
        protected override void Execute(CodeActivityContext context)
        {
            string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " "));
            TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context));

            ILMerge m = new ILMerge();
            m.SetInputAssemblies(InputAssemblies.Get(context).ToArray());
            m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe;
            m.OutputFile = OutputFile.Get(context);
            m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context))
                ? AttributeFile.Get(context)
                : InputAssemblies.Get(context).First();
            m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0, 2), RuntimeEnvironment.GetRuntimeDirectory());
            m.Merge();

            TrackMessage(context, "Generated " + m.OutputFile);
        }
    }

    [Browsable(true)]
    public enum TargetKindEnum
    {
        Dll,
        Exe
    }

    NB: The activity inherits from a BaseCodeActivity class, which is an internal helper class containing methods and properties useful for most custom activities. In this case it uses the TrackMessage method for writing to the build log. You either need to remove the TrackMessage calls, or implement this yourself (which is not very hard…).

    The custom activity has the following input arguments:

    - InputAssemblies: A list with the (full) paths to the assemblies to merge
    - OutputFile: The name of the resulting merged assembly
    - AttributeFile: Which assembly to use as the template for the attributes of the merged assembly. This argument is optional; if left blank, the first assembly in the input list is used
    - TargetKind: Decides what type of assembly to create; can be either a dll or an exe

    Of course, there are more switches to ILMerge.exe, and these can be exposed as input arguments as well if you need them.

    To show how the custom activity can be used, I have attached a build process template (see link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has the following custom process parameters:

    [Screenshot: custom process parameters]

    The Assemblies To Merge argument is passed into a FindMatchingFiles activity to locate all assemblies that are in the BinariesDirectory folder after the compilation has been performed by Team Build. Here is the complete sequence of activities that performs the merge operation. It is located at the end of the Try, Compile, Test and Associate… sequence:

    [Screenshot: sequence of activities]

    It splits the AssembliesToMerge parameter, appends the full path (using the BinariesDirectory variable), and then enumerates the matching files using the FindMatchingFiles activity.

    When running the build, you can see that it merges two assemblies into a new one:

    [Screenshot: build log output]

    And the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies:

    [Screenshot: drop folder contents]

    Read the article

  • How To Skip Commercials in Windows 7 Media Center

    - by DigitalGeekery
    If you use Windows 7 Media Center to record live TV, you're probably interested in skipping through commercials. After all, a big reason to record programs is to avoid commercials, right? Today we focus on a fairly simple and free way to get you skipping commercials in no time.

    In Windows 7, the .wtv file format has replaced the dvr-ms file format used in previous versions of Media Center for recorded TV. The .wtv file format, however, does not work very well with commercial skipping applications.

    The Process

    Our first step will be to convert the recorded .wtv files to the previously used dvr-ms file format. This conversion will be done automatically by WtvWatcher. It's important to note that this process deletes the original .wtv file after successfully converting to .dvr-ms.

    Next, we will use DVRMSToolBox with the DTB Addin to handle commercial skipping. This process does not "cut" or remove the commercials from the file. It merely skips the commercials during playback.

    WtvWatcher

    Download and install WtvWatcher (link below). To install WtvWatcher, you'll need Windows Installer 3.1 and .NET Framework 3.5 SP1. If you get the "Publisher cannot be verified" warning, you can go ahead and click Install. We've completely tested this app; it contains no malware and runs successfully.

    After installing, WtvWatcher will pop up in the lower right corner of your screen. You will need to set the path to your Recorded TV directory. Click on the button for "Click here to set your recorded TV path…"

    The WtvWatcher Preferences window will open, and you'll be prompted to browse for your Recorded TV folder. If you did not change the default location at setup, it will be found at C:\Users\Public\Recorded TV. Click "OK" when finished. Click the "X" to close the Preferences screen.

    You should now see WtvWatcher begin to convert any existing WTV files. The process should only take a few minutes per file. Note: if WtvWatcher detects an error during the conversion process, it will not delete the original WTV file.

    You will probably want to run WtvWatcher on startup, which allows it to constantly scan for new .wtv files to convert. There is no setting in the application to run on startup, so you'll need to copy the WTV icon from your desktop into your Windows Start menu "Startup" directory. To do so, click on Start > All Programs, right-click on Startup, and click Open all users. Drag and drop, or cut and paste, the WtvWatcher desktop shortcut into the Startup folder.

    DVRMSToolBox and DTBAddIn

    Next, we need to download and install DVRMSToolBox and the DTBAddIn. These two pieces of software will do the actual commercial skipping. After downloading the DVRMSToolBox zip file, extract it and double-click the setup.exe file. Click "Next" to begin the installation.

    Unless DVRMSToolBox will only be used by Administrator accounts, check the "Modify File Permissions" box and click "Next." When you get to the Optional Components window, uncheck Download/Install ShowAnalyzer; we will not be using that application. When the installation is complete, click "Close."

    Next we need to install the DTBAddin. Unzip the download folder and run the appropriate .msi file for your system; it is available in 32 and 64 bit versions. Just double-click on the file and take the default options. Click "Finish" when the install is completed. You will then be prompted to restart your computer.

    After your computer has restarted, open the DVRMSToolBox settings by going to Start > All Programs > DVRMSToolBox and clicking on DVRMStoMPEGSettings. On the MC Addin tab, make sure that Skip Commercials is checked. It should be by default. On the Commercial Skip tab, make sure the Auto Skip option is selected. Click "Save."

    If you try to watch recorded TV before the file conversion and commercial indexing process is complete, you'll get the following message pop up in Media Center. If you click Yes, it will start indexing the commercials, provided WtvWatcher has already converted the file to dvr-ms.

    Now you're ready to kick back and watch your recorded TV without having to wait through those long commercial breaks.

    Conclusion

    DVRMSToolBox is a powerful and complex application with a multitude of features and utilities. We've shown you a quick and easy way to get your Windows Media Center setup to skip commercials. This setup, like virtually all commercial skipping setups, is not perfect: you will occasionally find a commercial that doesn't get skipped. Need help getting your Windows 7 PC configured for TV? Check out our previous tutorial on setting up live TV in Windows Media Center.

    Links

    - Download WTV Watcher
    - Download DVRMSToolBox
    - Download DTBAddin

    Read the article
