Search Results

Search found 10396 results on 416 pages for 'business'.

Page 147/416 | < Previous Page | 143 144 145 146 147 148 149 150 151 152 153 154  | Next Page >

  • Oracle Executive Strategy Brief: Enterprise-Grade Cloud Applications

    - by B Shashikumar
    Cloud computing has clearly evolved into one of the dominant secular trends in the industry. Organizations are looking to the cloud to change how they buy and consume IT, and it's no longer about just lower up-front costs. The cloud promises to deliver greater agility and free up resources to focus on innovation versus running and maintaining systems. But are organizations actually realizing these benefits?

    The full promise of cloud is not being realized by customers who entrust their business to multiple niche cloud providers. While almost 9 out of 10 companies expect more IT agility with cloud, only 47% are actually getting it (Source: 2011 State of Cloud Survey by Symantec). These niche cloud customers have also seen the promises of lower costs, efficiency gains, improved security, and compliance go unfulfilled. Having one cloud provider for customer relationship management (CRM) and another for human capital management (HCM), and then trying to glue these proprietary systems together while integrating to a back-office financial system, can add to complexity and long-term costs. Completing a business process or generating an integrated report is cumbersome and leverages incomplete data.

    Why can't niche cloud providers deliver on the full promise of cloud? It's simple: you still need to complete business processes. You still need reporting that enables you to take action using data from multiple systems. You still have to comply with SOX and other industry regulations. These requirements don't go away just because you deploy in the cloud. Delivering lower up-front costs by enabling customers to buy software as a service (SaaS) is the easy part. To get real value that lasts longer than your quarterly report, it's important to realize the benefits of cloud without compromising on functionality and while having the right level of control and flexibility. This is the true promise of cloud.

    Oracle's cloud strategy centers around delivering the benefits of cloud without compromise. We uniquely empower our customers with complete solutions and choice, from the richest functionality to integrated reporting and a great user experience. It's all available in the cloud, and it works not just with other Oracle cloud applications but with your existing Oracle and third-party systems as well. This helps protect your current investments and extend their value as you journey to the cloud. We've made the necessary investments not only in our applications but also in the underlying technology that makes it all run, from the platform down to the hardware and operating system. We make it all, and we've engineered it to work together and be highly optimized for our customers, in the cloud. With Oracle enterprise-grade cloud applications, you get the benefits of cloud plus more power, more choice, and more confidence.

    Read more about how you can realize the true advantage of cloud with Oracle enterprise-grade cloud applications in the Oracle Executive Strategy Brief here. You can also attend an Oracle Cloud Conference event at a city near you. Register here.

    Read the article

  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx

    While cloud computing makes sense for most organizations and countless projects, I have seen customers significantly struggle with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather it is an account of personal experiences in the field, some of which may or may not apply to your organization.

    Cloud First, Burst? In the rush to cloud adoption some companies have made the decision to redesign their core system with a cloud-first approach. However, a cloud-first approach means that the system may not work anymore on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), additional investments for PaaS applications, or both.

    What's the Problem? Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I heard many companies claim "it's cheaper", or "it allows us to scale", without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as "we have a 500% increase in traffic twice a year, so we need to burst in the cloud to avoid doubling our network and server capacity". Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption.

    Performance or Scalability? I stopped counting the number of times I heard "the cloud doesn't scale; our database runs faster on a laptop". While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users but time out with 1,000 users, in which case the system wouldn't scale. Another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system would scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.

    Uptime? Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven't done so. If you are expecting 99.99% uptime annually you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, or services may experience failover conditions weekly (or more) based on current overall conditions of the cloud service provider, most of which are outside of your control. As a result, for PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and retry gracefully in order to provide service continuity to end users.

    About Herve Roggero: Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a Pluralsight author, and runs the Azure Florida Association.
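    As a minimal sketch of the "assume failure and retry gracefully" point above (not part of the original post), a retry helper with exponential backoff in C# might look like the following; the attempt count, delays, and transient-error check are illustrative assumptions only.

    using System;
    using System.Net;
    using System.Threading;

    // Sketch of graceful retry for transient cloud faults (restarts, brief failovers).
    public static class TransientRetry
    {
        public static T Execute<T>(Func<T> operation, int maxAttempts = 4, int baseDelayMs = 200)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    return operation();
                }
                catch (Exception ex)
                {
                    if (attempt >= maxAttempts || !IsTransient(ex))
                        throw;

                    // Exponential backoff: 200 ms, 400 ms, 800 ms, ...
                    Thread.Sleep(baseDelayMs * (1 << (attempt - 1)));
                }
            }
        }

        // Placeholder check: a real implementation would inspect provider-specific error codes.
        private static bool IsTransient(Exception ex)
        {
            return ex is TimeoutException || ex is WebException;
        }
    }

    A call such as TransientRetry.Execute(() => ReadFromCloudStorage(key)) -- where ReadFromCloudStorage is a hypothetical data-access method -- would then absorb brief restarts or failovers instead of surfacing them to end users.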

    Read the article

  • I.T. Chargeback : Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill. Consolidation and virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin-driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand.

    Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free.

    A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost, they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing chargeback successfully creates a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure, as I.T. resources that are not needed are not left running idle.

    Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager target: Oracle VM Host, Oracle Database, and Oracle WebLogic Server.

    Using Enterprise Manager Chargeback, administrators are able to create a set of charge plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (e.g. $10/month/database), configuration-based costs (e.g. $10/month if the OS is Windows) and utilization-based costs (e.g. $0.05/GB of memory/hour). The self-service user provisioning these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine if the resources are delivering adequate business value based on what is being charged. (Figure 1: Chargeback in Self-Service Portal)

    Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports. Summary reports allow the administrator to drill down through the cost center hierarchy to identify, for example, the top resource consumers across the organization (Figure 2: Charge Summary Report). Trending reports can be used for I.T. planning and budgeting as they show utilization and charge trends over a period of time (Figure 3: CPU Trend Report). We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as Line of Business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM).

    Further information on Enterprise Manager 12c's integrated metering and chargeback: White Paper, Screenwatch, Cloud Management on OTN.
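    To make the charge-plan idea concrete, here is a purely illustrative C# sketch (these types are hypothetical, not Enterprise Manager's API) of how fixed, configuration-based and utilization-based rates could combine into one monthly charge, using the example prices quoted above.

    using System;

    // Hypothetical charge plan combining the three kinds of rates described in the post.
    public class ChargePlan
    {
        public decimal FixedMonthlyCost;        // e.g. $10/month/database
        public decimal WindowsSurcharge;        // e.g. $10/month if the OS is Windows
        public decimal MemoryRatePerGbHour;     // e.g. $0.05/GB of memory/hour

        public decimal MonthlyCharge(bool isWindows, double gbHoursUsed)
        {
            return FixedMonthlyCost
                 + (isWindows ? WindowsSurcharge : 0m)
                 + MemoryRatePerGbHour * (decimal)gbHoursUsed;
        }
    }

    public static class ChargePlanDemo
    {
        public static void Main()
        {
            var plan = new ChargePlan
            {
                FixedMonthlyCost = 10m,
                WindowsSurcharge = 10m,
                MemoryRatePerGbHour = 0.05m
            };
            // A 4 GB VM running for a 720-hour month consumes 2,880 GB-hours.
            Console.WriteLine(plan.MonthlyCharge(true, 2880)); // 10 + 10 + 144 = 164
        }
    }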

    Read the article

  • Top 10 things I Learned this October

    - by rbewtra
    Last week I attended the second largest IT conference, Gartner Symposium IT Expo, held in Orlando, Florida. Earlier this month I also had the opportunity to be part of the largest IT conference, Oracle Open World. Both were gatherings for senior IT professionals: CIOs, senior IT and line-of-business executives, and developers. At both events I learned a great deal about how companies are innovating and leveraging technology. Here are my top 10 take-aways:

    #10. Everyone is talking about Social, Mobile and Cloud. Whether listening to Gartner discuss the Nexus of Forces or listening to Oracle's Executive Vice President Hasan Rizvi deliver the Oracle Fusion Middleware general session, everyone is talking about Social, Mobile, Cloud, and Information: Gartner, Oracle, our customers, partners, everyone.

    #9. SOA is NOT dead; it is more important than ever before. It is an imperative!

    #8. The big question around IT security is not "what will you do IF?" but "what will you do WHEN?"

    #7. General Colin Powell is an IT guy! Aside from having served as National Security Advisor, Chairman of the Joint Chiefs of Staff and U.S. Secretary of State, Gen. Colin Powell was an inspirational speaker at the Gartner Symposium, and it was clear he understands IT and the powerful impact it has on our society and our youth today.

    #6. Change will happen; we need to plan for it!

    #5. When everything is connected and just works, we have harnessed the power of technology. Middleware is at the heart of social, mobile and cloud.

    #4. Innovation is happening everywhere! Attending both IT events I was able to hear from companies of all sizes and across industries, including Tesco, Nike, Electronic Arts, Nintendo and International Speedway; they all discussed how they are transforming their companies and their industries.

    #3. A "one size fits all" strategy does not work; instead it alienates IT and the business. The PACE Layered Application Strategy is a framework that allows IT to have that Nexus of Forces conversation with the business.

    #2. To stay relevant, we need to hire innovation workers and develop for that innovation layer.

    #1. My smartphone is the most valuable tool I own! Every day with it I am able to communicate via phone, email and text with family, friends and colleagues. I am able to look up directions to my hotel, make reservations at restaurants, view my calendar, take pictures, record messages, check in for flights and so much more. I can never leave home without it.

    Look forward to catching up again soon!

    Additional Information: Product Information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • Oracle Advisor Webcasts and Support Training (July 2013)

    - by Steve He
    July 2013 live Advisor Webcast schedule (all sessions at 14:00): July 11 - My Oracle Support; July 16 - My Oracle Support (world clock, note 603505.1, access via Internet Explorer); July 17 - Exadata: Diagnostic Assistant (covering ADR, RDA, OCM and Explorer); July 18 - E-Business Suite; July 25 - WebLogic Server. Recorded training topics include: Oracle Support Best Practices; GetProactive: Resolve Fast; WebLogic GC & OutOfMemory Diagnostic; Creating Customer Value; Oracle Support Basics; An Introduction to My Oracle Support; Service Request Management; Customer User Administration; Managing Favorites; Quick Search; Hot Topic Email; Patch and Update; Site Alert; Search and Browse Features in My Oracle Support; Why Use Configuration Manager in My Oracle Support; Enterprise Manager 11g and My Oracle Support; Oracle Collaborative Support; How to Escalate a Service Request within Oracle Support. See the Support Training Community for further details.

    Read the article

  • Refresh UltraGrid's GroupBy Sort on child bands when ListChanged?

    - by Idriss
    I am using Infragistics 2009 vol 1. My UltraGrid is bound to a BindingList of business objects "A", each having a BindingList property of business objects "B". It results in having two bands: one named "BindingList`1", the other one "ListOfB", thanks to the currency manager. I would like to refresh the GroupBy sort of the grid whenever a change is performed on the child band through the child business object and INotifyPropertyChanged. If I group by a property in the child band which is a boolean (let's say "Active") and I subscribe to the ListChanged event on the BindingList data source with this event handler:

    void Grid_ListChanged(object sender, ListChangedEventArgs e)
    {
        if (e.ListChangedType == ListChangedType.ItemChanged)
        {
            string columnKey = e.PropertyDescriptor.Name;
            if (e.PropertyDescriptor.PropertyType.Name == "BindingList`1")
            {
                ultraGrid.DisplayLayout.Bands[columnKey].SortedColumns.RefreshSort(true);
            }
            else
            {
                UltraGridBand band = ultraGrid.DisplayLayout.Bands[0];
                UltraGridColumn gc = band.Columns[columnKey];
                if (gc.IsGroupByColumn || gc.SortIndicator != SortIndicator.None)
                {
                    band.SortedColumns.RefreshSort(true);
                }
                ColumnFilter cf = band.ColumnFilters[columnKey];
                if (cf.FilterConditions.Count > 0)
                {
                    ultraGrid.DisplayLayout.RefreshFilters();
                }
            }
        }
    }

    then band.SortedColumns.RefreshSort(true) is called, but it gives unpredictable results in the GroupBy area when the property Active is changed in the child band. If one object out of three actives becomes inactive, it goes from:

    Active : True (3 items)

    to:

    Active : False (3 items)

    instead of (which is the case when I drag the column back and forth to the GroupBy area):

    Active : False (1 item)
    Active : True (2 items)

    Am I doing something wrong? Is there a way to restore the expanded state of the rows when performing a RefreshSort(true)?

    Read the article

  • EF 4, POCO and AddOrUpdate

    - by Eric J.
    I'm trying to create a method on an EF 4 POCO repository called AddOrUpdate. The idea is that the business layer can pass in a POCO object and the persistence framework will add the object if it is new (not yet in the database), else will update the database (once SaveChanges() is called) with the new value. This is similar to some other questions I have asked about EF, but I'm only about 80% there in understanding this so please forgive partial duplication. The part I'm missing is how to update the object graph in my ObjectContext/associated ObjectSet for the passed-in business object once I have determined that the business object indeed already exists in the database (and now has been loaded thanks to TryGetObjectByKey). ApplyCurrentValues sounds sort of like what I want, but it only copies scalar values and doesn't seem intended to update the object graph in the ObjectContext/ObjectSet. Because of my particular use case I don't care about concurrency right now.

    public void AddOrUpdate(BO biz)
    {
        object obj;
        EntityKey ek = Ctx.CreateEntityKey(mySetName, biz);
        bool found = Ctx.TryGetObjectByKey(ek, out obj);
        if (found)
        {
            // How do I do what this method name implies? Biz is a parent with children.
            mySet.TellTheSetToUpdateThisObject(biz);
        }
        else
        {
            mySet.AddObject(biz);
        }
        Ctx.DetectChanges();
    }
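    For the scalar-only case the question mentions, a minimal sketch of combining TryGetObjectByKey with ApplyCurrentValues might look as follows. It reuses the question's own Ctx, mySet and mySetName placeholders, and it deliberately does not solve the graph-update problem the question is actually about: child collections would still need separate handling.

    // Hedged sketch only: ApplyCurrentValues copies scalar properties from the detached
    // instance onto the attached one; related/child entities are not touched.
    public void AddOrUpdateScalarsOnly(BO biz)
    {
        object existing;
        EntityKey ek = Ctx.CreateEntityKey(mySetName, biz);
        if (Ctx.TryGetObjectByKey(ek, out existing))
        {
            // Overwrites the scalar values of the already-attached entity with those of "biz".
            Ctx.ApplyCurrentValues(mySetName, biz);
        }
        else
        {
            mySet.AddObject(biz);
        }
    }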

    Read the article

  • A couple of questions on exceptions/flow control and the application of custom exceptions

    - by dotnetdev
    1) Custom exceptions can help make your intentions clear. How can this be? The intention is to handle or log the exception, regardless of whether the type is built-in or custom. The main reason I use custom exceptions is to not use one exception type to cover the same problem in different contexts (e.g. a parameter that is null in system code, which may be affected by an external factor, versus an empty shopping basket). However, the partition between system and business-domain code and using different exception types seems very obvious, and not making the most of custom exceptions. Related to this, if custom exceptions cover the business exceptions, I could also get all the places which are sources for exceptions at the business-domain level using "Find all references". Is it worth adding exceptions if you check the arguments in a method for being null, use them a few times, and then add the catch? Is it a realistic risk that an external factor or some other freak cause could cause the argument to be null after being checked anyway?

    2) What does it mean when exceptions should not be used to control the flow of programs, and why not? I assume this is like: if (exceptionVariable != null) { } Is it generally good practice to fill every variable in an exception object? As a developer, do you expect every possible variable to be filled by another coder?
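    To make point 1 concrete, a minimal sketch of a business-domain custom exception (the type name, message and service are invented for illustration, echoing the shopping-basket example above) could look like this in C#:

    using System;

    // Illustrative custom exception for a business-domain error, as opposed to reusing
    // a general-purpose type such as ArgumentNullException for every failure.
    public class EmptyBasketException : Exception
    {
        public EmptyBasketException()
            : base("The shopping basket contains no items.") { }

        public EmptyBasketException(string message) : base(message) { }

        public EmptyBasketException(string message, Exception inner) : base(message, inner) { }
    }

    public class CheckoutService
    {
        public void Checkout(int itemCount)
        {
            // "Find all references" on EmptyBasketException now locates every
            // place this particular business rule can fail.
            if (itemCount == 0)
                throw new EmptyBasketException();
            // ... proceed with checkout ...
        }
    }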

    Read the article

  • Thrift, .NET, Cassandra - Is this the right combination?

    - by Vadi
    I've been evaluating a technology stack for developing a social network based application. Below is the stack I think could be well suited for this type of application:

    GUI -- ASP.NET MVC, Flash (Flex).

    Business Services -- Thrift based services. One of the advantages of using Thrift is to solve the scaling problems that will come in the future when the user base increases rapidly. All the business logic can be exposed as services using REST, JSON etc. This also allows us to go with C++ or Erlang based services when the situation demands.

    Database -- MySQL, Cassandra. MySQL can be used for storing the data which needs to be persisted. Cassandra will be used for storing global identifiers to the persisted data. Since Cassandra is also very good at scaling by introducing more nodes, this will leverage the Thrift based services as well. There is also native support between Cassandra and Thrift.

    Cache Server -- Memcached. Any requests from the Business Services will only talk to Memcached if non-dirty data is required; otherwise there will be some background jobs that invalidate the cache from the database.

    The questions are: Is Thrift, which is open-sourced, production-ready? Is it the right stack for the services layer when the application (GUI) is primarily developed in ASP.NET and the DB is MySQL? Are there any other caveats that anyone here has experienced? One of the main objectives behind this stack is to easily scale up with more nodes; it also lets us use Linux boxes, which will reduce our cost significantly. Thoughts please.

    Read the article

  • WCF Service Layer in n-layered application: performance considerations

    - by Marconline
    Hi all. When I went to university, teachers used to say that in a well structured application you have a presentation layer, a business layer and a data layer. This is what I heard for more than 5 years. When I started working I discovered that this is true, but sometimes it is better to have more than just three layers. Two or three days ago I discovered this article by John Papa that explains how to use Entity Framework in a layered application. According to that article you should have: UI Layer and Presentation Layer (Model View pattern), Service Layer (WCF), Business Layer, Data Access Layer. The Service Layer is, to me, one of the best ideas I've heard since I started working, because your UI is then completely "disconnected" from the Business and Data Layers. Now, when I went deeper by looking into the provided source code, I began to have some questions. Can you help me in answering them?

    Question #0: is this a good enterprise application template in your opinion?

    Question #1: where should I host the service layer? Should it be a Windows Service or what else?

    Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?

    Question #3: if you agree with me on Question 2, which kind of binding would you use?

    Looking forward to hearing from you. Have a nice weekend! Marco
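    For Questions #2/#3, one common pattern (shown here only as a hedged sketch, not as a recommendation from the article) is to expose the same WCF contract on two endpoints: WSHttpBinding for interoperability and NetTcpBinding for faster calls inside the firewall. The service and contract names below are placeholders.

    using System;
    using System.ServiceModel;

    // Hypothetical contract and self-host setup illustrating dual endpoints on one service.
    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        string Ping();
    }

    public class OrderService : IOrderService
    {
        public string Ping() { return "pong"; }
    }

    public static class HostExample
    {
        public static void Main()
        {
            var host = new ServiceHost(typeof(OrderService),
                new Uri("http://localhost:8000/orders"),
                new Uri("net.tcp://localhost:8001/orders"));

            // Interoperable SOAP-over-HTTP endpoint.
            host.AddServiceEndpoint(typeof(IOrderService), new WSHttpBinding(), "");

            // Faster binary endpoint for intranet clients.
            host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(), "");

            host.Open();
            Console.WriteLine("Host running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }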

    Read the article

  • Accounting System for Winforms / SQL Server applications

    - by Craig L
    If you were going to write a vertical market C# / WinForms / SQL Server application and needed an accounting "engine" for it, what software package would you choose? By vertical market, I mean the application is intended to solve a particular set of business problems, not be a generic accounting application. Thus the value add of the program is the 70% of non-accounting related functionality present in the finished product. The 30% of accounting functionality is merely to enable the basic accounting needs of the business. I said all that to lead up to this: the accounting engine needs to have a royalty-free runtime license and not be super expensive. I've found a couple of C#/SQL Server accounting apps that can be had with source code and a royalty-free runtime for $150k+, and that would be fine for greenfield development funded by a large bankroll, but for smaller apps that sort of capital outlay isn't feasible. Something along the lines of $5k to $15k for a royalty-free runtime would be more reasonable. Open source would be even better. By accounting engine, I mean something that takes care of, at a minimum: General Ledger, Invoices, Statements, Accounts Receivable, Payments / Credits. Basically, an accounting engine should be something that lets the developer concentrate on the value-added (industry-specific business best practices / processes) part of the solution and not have to worry about how to implement the low-level details of a double-entry accounting system. Ideally, the accounting engine would be something that is licensed on a royalty-free runtime basis. Suggestions, please?
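    As an illustration of the "low-level details of a double-entry accounting system" such an engine would hide, here is a minimal, hypothetical C# sketch of a journal entry that enforces the debits-equal-credits invariant; it is not taken from any particular accounting package.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Minimal double-entry illustration: every journal entry must balance.
    public class JournalLine
    {
        public string Account;
        public decimal Debit;
        public decimal Credit;
    }

    public class JournalEntry
    {
        public DateTime Date;
        public string Description;
        public readonly List<JournalLine> Lines = new List<JournalLine>();

        public void Post(List<JournalEntry> ledger)
        {
            decimal debits = Lines.Sum(l => l.Debit);
            decimal credits = Lines.Sum(l => l.Credit);
            if (debits != credits)
                throw new InvalidOperationException("Journal entry does not balance.");
            ledger.Add(this);
        }
    }

    public static class LedgerDemo
    {
        public static void Main()
        {
            var ledger = new List<JournalEntry>();
            var invoice = new JournalEntry { Date = DateTime.Today, Description = "Invoice #1001" };
            invoice.Lines.Add(new JournalLine { Account = "Accounts Receivable", Debit = 500m });
            invoice.Lines.Add(new JournalLine { Account = "Sales Revenue", Credit = 500m });
            invoice.Post(ledger); // throws if debits != credits
            Console.WriteLine("Posted entries: " + ledger.Count);
        }
    }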

    Read the article

  • General N-Tier Architecture Question

    - by whatispunk
    In an N-tier app you're supposed to have a business logic layer and a data access layer. Is it bad to simply have two assemblies, BusinessLogicLayer.dll and DataAccessLayer.dll, to handle all this logic? How do you actually represent these layers? It seems silly, the way I've seen it, to have a BusinessLogic class library containing classes like CustomerBusinessLogic.cs, OrderBusinessLogic.cs, etc., each calling their appropriately named cousin in the DataAccessLayer class library, i.e. CustomerDataAccess.cs, OrderDataAccess.cs. I want to create a web app using MVP and it doesn't seem so cut and dried as this. There are lots of opinions about where the business logic is supposed to be put in MVP and I'm not sure I've found a really great answer yet. I want this project to be easily testable, and I am trying to adhere to TDD methodologies as best I can. I intend to use MSTest and Rhino Mocks for testing. I was thinking of something like the following for my architecture: I'd use LINQ to SQL to talk to the database, WCF services to define data contract interfaces for the business logic layer, and then MVP with ASP.NET Forms for the UI/BLL. Now, this isn't the start of this project; most of the LINQ stuff is already done, so we're stuck with it. The WCF service would replace the existing DataAccessLayer assembly and the UI/BLL would replace the BusinessLogicLayer assembly, etc. This sort of makes sense in my head, but it's getting really late. Anyone that's traveled down this path have any guidance? Good links? Warnings? Thanks!
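    As one hedged illustration of the testability goal mentioned above (the interfaces and names here are invented, not a prescription for this project), the business layer can depend on a data-access abstraction so its rules can be unit-tested with a mock such as Rhino Mocks instead of a real database.

    using System.Collections.Generic;

    // Hypothetical example: business logic depends on an interface, not on the DAL assembly.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public bool IsActive { get; set; }
    }

    public interface ICustomerRepository
    {
        IList<Customer> GetAll();
    }

    public class CustomerBusinessLogic
    {
        private readonly ICustomerRepository _repository;

        public CustomerBusinessLogic(ICustomerRepository repository)
        {
            _repository = repository;
        }

        // A small business rule that can be tested by mocking ICustomerRepository.
        public IList<Customer> GetActiveCustomers()
        {
            var result = new List<Customer>();
            foreach (var c in _repository.GetAll())
            {
                if (c.IsActive)
                    result.Add(c);
            }
            return result;
        }
    }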

    Read the article

  • How to force C# binary int division to return a double?

    - by Wayne
    How do I force double x = 3 / 2; to return 1.5 in x without the D suffix or casting? Is there any kind of operator overload that can be done? Or some compiler option? Amazingly, it's not so simple to add the casting or suffix, for the following reason: business users need to write and debug their own formulas. Presently C# is getting used like a DSL (domain-specific language), in that these users aren't computer science engineers. So all they know is how to edit and create a few types of classes to hold their "business rules", which are generally just math formulas. But they always assume that double x = 3 / 2; will return x = 1.5; however, in C# that returns 1. A. They always forget this, waste time debugging, call me for support and we fix it. B. They think it's very ugly and it hurts the readability of their business rules. As you know, DSLs need to be more like natural language. Yes, we are planning to move to Boo and build a DSL based on it, but that's down the road. Is there a simple solution to make double x = 3 / 2; return 1.5 by something external to the class so it's invisible to the users? Thanks! Wayne
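    For readers unfamiliar with the behavior being described, a short C# snippet (not a solution, just a demonstration of the integer-division rule) shows why the business users are surprised:

    using System;

    public static class DivisionDemo
    {
        public static void Main()
        {
            double x = 3 / 2;         // 3 and 2 are ints, so the division truncates first.
            double y = 3 / 2D;        // the D suffix makes 2 a double, so the result is 1.5.
            double z = (double)3 / 2; // casting either operand also forces double division.

            Console.WriteLine(x); // 1
            Console.WriteLine(y); // 1.5
            Console.WriteLine(z); // 1.5
        }
    }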

    Read the article

  • Should the Entity Framework + self-tracking entities be saving me time?

    - by sipwiz
    I've been using the Entity Framework in combination with the self-tracking entity code generation templates for my latest Silverlight to WCF application. It's the first time I've used the Entity Framework in a real project, and my hope was that I would save myself a lot of time and effort by being able to automatically update the whole data access layer of my project when my database schema changed. Happily, I've found that to be the case: updating my database schema by adding a new table, changing column names, adding new columns, etc. can be propagated to my business object classes by using the "update from database" option on the Entity Framework model. Where I'm hurting is the CRUD operations within my WCF service in response to actions on my Silverlight client. I use the same self-tracking Entity Framework business objects in my Silverlight app, but I find I'm continually having to fight against problems such as foreign key associations not being handled correctly when updating an object, or the change tracker getting confused about the state of an object at the Silverlight end and the data access operation within the WCF layer throwing a wobbly. It's got to the point where I have now spent more time dealing with these quirks than I have on my previous project, where I used LINQ to SQL as the starting point for rolling my own business objects. Is it just me being hopeless, or is the self-tracking entities approach something that should be avoided until it's more mature?

    Read the article

  • Authenticated WCF: Getting the Current Security Context

    - by bradhe
    I have the following scenario: I have various users' data stored in my database. This data was entered via a web app. We'd like to expose this data back to the users over a web service so that they can integrate their data with their applications. We would also like to expose some business logic over these services. As such we do not want to use OData. This is a multi-tenant application, so I only want to expose their data back to them and not to other users. Likewise, the business logic we expose should be relative to the authenticated user. I would like to let the user use an OASIS scheme to authenticate with the web service -- WCF already allows for this out of the box as far as I understand -- or perhaps we can issue them certificates to authenticate with. That bit hasn't really been worked out yet. Here is a bit of pseudo-code of how I envision this would work within the service:

    function GetUsersData(id)
        var user := Lookup User based on Username from Auth Context
        var data := Get Data From Repository based on "user"
        return data
    end function

    For the business logic scenario I think it would look something like this:

    function PerformBusinessLogic(someData)
        var user := Lookup User based on Username from Auth Context
        var returnValue := Perform some logic based on supplied data
        return returnValue
    end function

    The hard bit here is getting the current username (or cert info in the cert scenario) that the user authenticated with! Does WCF even enable this scenario? If not, would WSE3 enable this? Thanks,
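    As a hedged sketch of the "Lookup User based on Username from Auth Context" step (the repository and data types here are placeholders, not part of the question), a WCF operation can typically read the caller's identity from ServiceSecurityContext:

    using System.ServiceModel;

    // Illustrative only: reading the authenticated caller's identity inside a WCF operation.
    public interface IUserRepository
    {
        UserData GetForUser(string userName, int id);
    }

    public class UserData
    {
        public int Id { get; set; }
        public string Payload { get; set; }
    }

    public class UsersService
    {
        private readonly IUserRepository _repository;

        public UsersService(IUserRepository repository)
        {
            _repository = repository;
        }

        public UserData GetUsersData(int id)
        {
            // For username/password or Windows credentials this is the caller's name;
            // for certificate authentication it is derived from the client certificate.
            string userName = ServiceSecurityContext.Current.PrimaryIdentity.Name;

            // Scope the query to the authenticated tenant so users only see their own data.
            return _repository.GetForUser(userName, id);
        }
    }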

    Read the article

  • ECM (Niche Vs Mass Market)

    - by Luj Reyes
    Hi Everyone, I recently started a little company with a couple of guys. Ours is the typical startup, a lot of ideas, dreams, talent and work hours :P. Our initial business plan was to develop a DM (Document Manager) with several features found on DropBox and other tools but with a big differentiator. Then we got in the team this Business Guy (I must say that several of us could be called 'Business Guys' but we are mainly hackers, he is just Another 'Networking Guy'), and along with him came this market analysis for a DM aimed at a very specific and narrow niche. We have many elements to believe in his market study and the idea is the classic "The market is X million, so if we grab a 10%...", and the market is really there to grab because all big providers deemed it too little and fled, let's say that the market is 5 million USD and demand very specific features. If we decide to go for this niche product we face a sales cycle of about 7 months, and the main goal of these revenue is to develop more ambitious projects. (Institutional VC is out of the question if you want to keep a marginal ownership of your company in my country). The only overlap between the niche and the mass market product features is the ability to store documents; everything else requires that we focus all of our efforts towards one or the other. I've studied a lot about the differences between Mass and Niche Markets, but I want to hear from people with actual experience. So everything comes down to this: If you have a really “saleable” idea what is the right thing to do: to go for the niche or go for the big prize and target primarily the mass market? Thanks for your input

    Read the article

  • PL/SQL - How to pull data from 3 tables based on latest created date

    - by Nancy
    Hello, I'm hoping someone can help me as I've been stuck on this problem for a few days now. Basically I'm trying to pull data from 3 tables in Oracle: 1) Orders table, 2) Vendor table and 3) Master Data table. Here's what the 3 tables look like:

    Table 1: BIZ_DOC2 (Orders table)
    OBJECTID (unique key)
    UNIQUE_DOC_NAME (document name, i.e. ORD-005)
    CREATED_AT (date the order was created)

    Table 2: UDEF_VENDOR (Vendors table)
    PARENT_OBJECT_ID (this matches up to the OBJECTID in the Orders table)
    VENDOR_OBJECT_NAME (this is the name of the vendor, i.e. Acme)

    Table 3: BIZ_UNIT (Master Data table)
    PARENT_OBJECT_ID (this matches up to the OBJECTID in the Orders table)
    BIZ_UNIT_OBJECT_NAME (this is the name of the business unit, i.e. widget A, widget B)

    Note: the Vendors table and Master Data table do not have a link between them except through the Orders table. I can join all of the data from the tables and it looks something like this, before selecting the latest created date:

    ORD-005 | Widget A | Acme | 3/14/10
    ORD-005 | Widget B | Acme | 3/14/10
    ORD-004 | Widget C | Acme | 3/10/10

    Ideally I'd like to return the latest order for each vendor. However, each order may contain multiple business units (e.g. types of widgets), so if a vendor's latest record is ORD-005 and the order contains 2 business units, here's what the result set should look like with the following columns: UNIQUE_DOC_NAME, BIZ_UNIT_OBJECT_NAME, VENDOR_OBJECT_NAME, CREATED_AT. After selecting by latest order date:

    ORD-005 | Widget A | Acme | 3/14/10
    ORD-005 | Widget B | Acme | 3/14/10

    I tried using SELECT MAX and several variations of sub-queries, but I just can't seem to get it working. Any help would be hugely appreciated!

    Read the article

  • Bi-directional view model syncing with "live" collections and properties (MVVM)

    - by Schneider
    I am getting my knickers in a twist recently about view models (VMs). Just like this guy, I have come to the conclusion that the collections I need to expose on my VM typically contain a different type from the collections exposed on my business objects. Hence there must be a bi-directional mapping or transformation between these two types. (Just to complicate things, on my project this data is "live", such that as soon as you change a property it gets transmitted to other computers.) I can just about cope with that concept, using a framework like Truss, although I suspect there will be a nasty surprise somewhere within. Not only must objects be transformed, but a synchronization between these two collections is required. (Just to complicate things further, I can think of cases where the VM collection might be a subset or union of business object collections, not simply a 1:1 synchronization.) I can see how to do a one-way "live" sync, using a replicating ObservableCollection or something like CLINQ. The problem then becomes: what is the best way to create/delete items? Bi-directional sync does not seem to be on the cards; I have found no such examples, and the only class that supports anything remotely like that is the ListCollectionView. Would bi-directional sync even be a sensible way to add back into the business object collection? All the samples I have seen never seem to tackle anything this "complex". So my question is: how do you solve this? Is there some technique to update the model collections from the VM? What is the best general approach to this?
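    As a hedged sketch of the one-way "live" sync mentioned above (the model and VM item types are invented for illustration, and only adds/removes are handled), a view-model collection can mirror a model ObservableCollection by listening to CollectionChanged and mapping each item:

    using System.Collections.ObjectModel;
    using System.Collections.Specialized;

    // Illustrative one-way sync: OrderViewModel items mirror Order model items.
    public class Order
    {
        public string Name { get; set; }
    }

    public class OrderViewModel
    {
        public OrderViewModel(Order model) { Model = model; }
        public Order Model { get; private set; }
        public string DisplayName { get { return Model.Name; } }
    }

    public class OrderListViewModel
    {
        private readonly ObservableCollection<Order> _models;
        public ObservableCollection<OrderViewModel> Items { get; private set; }

        public OrderListViewModel(ObservableCollection<Order> models)
        {
            _models = models;
            Items = new ObservableCollection<OrderViewModel>();
            foreach (var m in _models)
                Items.Add(new OrderViewModel(m));
            _models.CollectionChanged += OnModelsChanged;
        }

        private void OnModelsChanged(object sender, NotifyCollectionChangedEventArgs e)
        {
            // Replace/move/reset actions are omitted to keep the sketch short.
            if (e.NewItems != null)
                foreach (Order m in e.NewItems)
                    Items.Add(new OrderViewModel(m));

            if (e.OldItems != null)
                foreach (Order m in e.OldItems)
                    for (int i = Items.Count - 1; i >= 0; i--)
                        if (Items[i].Model == m)
                            Items.RemoveAt(i);
        }
    }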

    Read the article

  • What is a good programming language/environment for Linux database applications?

    - by Dkellygb
    I could use some advice on my move from the Windows world to Linux. For my business, I have used VB6 and Microsoft Access with both Access databases and SQL server in the past. The easy to use forms, report writers and programming language were perfect for CRUD apps and analysis for our small hotel/restaurant business. After using Linux at home for some time I would like to convert our small business. Our server is already a Linux box using Samba. I am happy with the OpenOffice.org applications instead of Microsoft Office. The only thing which is holding me back is a desktop database application where I can develop the forms and reports we require. Base does not seem to be up to the job yet from my experience. I would like something like VB.Net with visual studio (express) but I would like to avoid Mono – I just don’t see the point of it. (You can correct me if I’m wrong.) But a good collection of forms, controls and a good report writer would be ideal. I have looked at web based stuff like Ruby on Rails, but I think a webserver for our 5 pc network is overkill. I don’t mind running a proper database on our Ubuntu 9.10 server. I may have exposed a few prejudices above but my mind is open. Any thoughts?

    Read the article

  • Return type from DAL class (Sql ce, Linq to Sql)

    - by bretddog
    Hi, I'm using VS2008 and SQL CE 3.5, and preferably LINQ to SQL. I'm learning database development, and I'm unsure about the return types of DAL methods and how/where to map the data over to my business objects: I don't want direct UI binding. A business object class UserData, and a class UserDataList (Inherits List(Of UserData)), is represented in the database by the table "Users". I use SQL Compact and run SqlMetal, which creates a dbml/designer.vb file. This gives me a class with a TableAttribute:

    <Table()> _
    Partial Public Class Users

    I'm unsure how to use this class. Should my business object know about this class, such that the DAL can return the type Users, or List(Of Users)? So for example the "UserDataService" class is part of the DAL, and would have for example the functions GetAll and GetById. Will this be correct?

    Public Class UserDataService
        Public Function GetAll() As List(Of Users)
            Dim ctx As New MyDB(connection)
            Dim q As List(Of Users) = (From n In ctx.Users Select n).ToList()
            Return q
        End Function

        Public Function GetById(ByVal id As Integer) As Users
            Dim ctx As New MyDB(connection)
            Dim q As Users = (From n In ctx.Users Where n.UserID = id Select n).Single()
            Return q
        End Function
    End Class

    And then, would I perhaps have a method, say in the UserDataList class, like:

    Public Class UserDataList
        Inherits List(Of UserData)

        Public Sub LoadFromDatabase()
            Me.Clear()
            Dim database As New UserDataService
            Dim users As List(Of Users)
            users = database.GetAll()
            For Each u In users
                Dim newUser As New UserData
                newUser.Id = u.Id
                newUser.Name = u.Name
                Me.Add(newUser)
            Next
        End Sub
    End Class

    Is this a sensible approach? Would appreciate any suggestions/alternatives, as this is my first attempt at a database DAL. Cheers!

    Read the article

  • rookie MySql question about paging; Is one query enough?

    - by Camran
    I have managed to get paging to work, almost. I want to display to the user the total number of records found, and the currently displayed records. Ex: 4000 found, displaying 0-100. I am testing this with the number 2 (because I don't have that many records, I have like 20). So I am using LIMIT $start, $nr_results;. Do I have to make two queries in order to display the results the way I want: one query fetching all records and then a mysql_num_rows to get the total, and then the one with the LIMIT? I have this:

    mysql_num_rows($qry_result);
    $total_pages = ceil($num_total / $res_per_page); // $res_per_page == 2 and $num_total = 2
    if ($p - 10 < 1) {
        $pagemin = 1;
    } else {
        $pagemin = $p - 10;
    }
    if ($p + 10 > $total_pages) {
        $pagemax = $total_pages;
    } else {
        $pagemax = $p + 10;
    }

    Here is the query:

    SELECT mt.*, fordon.*, boende.*, elektronik.*, business.*, hem_inredning.*, hobby.*
    FROM classified mt
    LEFT JOIN fordon ON fordon.classified_id = mt.classified_id
    LEFT JOIN boende ON boende.classified_id = mt.classified_id
    LEFT JOIN elektronik ON elektronik.classified_id = mt.classified_id
    LEFT JOIN business ON business.classified_id = mt.classified_id
    LEFT JOIN hem_inredning ON hem_inredning.classified_id = mt.classified_id
    LEFT JOIN hobby ON hobby.classified_id = mt.classified_id
    ORDER BY modify_date DESC
    LIMIT 0, 2

    Thanks, if you need more input let me know. Basically the question is: do I have to make two queries?

    Read the article

  • Start with remoting or with WCF

    - by Sheldon
    Hi. I'm just starting with distributed application development. I need to create (all by myself) an enterprise application for document management. That application will run on an intranet (within the firewall; no internet access is required now, but it probably will be later). The application needs to manage images that will be stored within MySQL Server (as blobs), and those images will then be retrieved by the app and eventually one or more of them will be converted to PDF. Performance is the most important non-functional requirement. I have a couple of doubts. What do you suggest to use, .NET Remoting or WCF over TCP/IP? (I think the second one is the better choice, because the moment I need to expose the business logic over the internet, I can change the protocol.) Where do you suggest to do the transformation of the images to PDF files? I'm using iText. (I have thought about having the business logic hosted within IIS and exposed via WCF, and that business logic being responsible for getting the images and transforming them to PDF, because IIS and the MySQL Server are on the same physical machine.) I ask about where to do the transformation because the app must be accessible from multiple devices, and, for example, for mobile devices the PDF may not be necessary. Thank you very much in advance.
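    As a hedged sketch of the image-to-PDF step (assuming the .NET port of iText, iTextSharp; the paths, page size and scaling policy are illustrative assumptions, not part of the question), the conversion inside the service might look roughly like this:

    using System.IO;
    using iTextSharp.text;
    using iTextSharp.text.pdf;

    // Rough sketch: turn a set of image blobs (e.g. read from MySQL) into a single PDF.
    public static class PdfConverter
    {
        public static void ImagesToPdf(byte[][] imageBlobs, string outputPath)
        {
            var document = new Document(PageSize.A4);
            using (var output = new FileStream(outputPath, FileMode.Create))
            {
                PdfWriter.GetInstance(document, output);
                document.Open();
                foreach (byte[] blob in imageBlobs)
                {
                    Image image = Image.GetInstance(blob);
                    // Scale each image to fit within the page, leaving a one-inch margin.
                    image.ScaleToFit(document.PageSize.Width - 72, document.PageSize.Height - 72);
                    document.Add(image);
                }
                document.Close();
            }
        }
    }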

    Read the article

  • Spring: Bean fails to read off values from external Properties file when using @Value annotation

    - by daydreamer
    XML configuration:

    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:util="http://www.springframework.org/schema/util"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
                               http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                               http://www.springframework.org/schema/util
                               http://www.springframework.org/schema/util/spring-util.xsd">
        <util:properties id="mongoProperties" location="file:///storage/local.properties" />
        <bean id="mongoService" class="com.business.persist.MongoService"></bean>
    </beans>

    and MongoService looks like:

    @Service
    public class MongoService {
        @Value("#{mongoProperties[host]}")
        private String host;

        @Value("#{mongoProperties[port]}")
        private int port;

        @Value("#{mongoProperties[database]}")
        private String database;

        private Mongo mongo;
        private static final Logger LOGGER = LoggerFactory.getLogger(MongoService.class);

        public MongoService() throws UnknownHostException {
            LOGGER.info("host=" + host + ", port=" + port + ", database=" + database);
            mongo = new Mongo(host, port);
        }

        public void putDocument(@Nonnull final DBObject document) {
            LOGGER.info("inserting document - " + document.toString());
            mongo.getDB(database).getCollection(getCollectionName(document)).insert(document, WriteConcern.SAFE);
        }

    I write my MongoServiceTest as:

    public class MongoServiceTest {
        @Autowired
        private MongoService mongoService;

        public MongoServiceTest() throws UnknownHostException {
            mongoRule = new MongoRule();
        }

        @Test
        public void testMongoService() {
            final DBObject document = DBContract.getUniqueQuery("001");
            document.put(DBContract.RVARIABLES, "values");
            document.put(DBContract.PVARIABLES, "values");
            mongoService.putDocument(document);
        }

    and I see failures in the tests as:

    12:37:25.224 [main] INFO c.s.business.persist.MongoService - host=null, port=0, database=null
    java.lang.NullPointerException
        at com.business.persist.MongoServiceTest.testMongoService(MongoServiceTest.java:40)

    which means the bean was not able to read the values from local.properties.

    local.properties:

    ### === MongoDB interaction === ###
    host="127.0.0.1"
    port=27017
    database=contract

    How do I fix this?

    Update: it doesn't seem to read the values even after creating setters/getters for the fields. I am really clueless now. How can I even debug this issue? Thanks much!

    Read the article

  • Are bad data issues that common?

    - by Water Cooler v2
    I've worked for clients that had a large number of distinct, small to mid-sized projects, each interacting with each other via properly defined interfaces to share data, but not reading and writing to the same database. Each had their own separate database, their own cache, their own file servers/system that they had dedicated access to, and so they never caused any problems. One of these clients is a mobile content vendor, so they're lucky in a way that they do not have to face the same problems that everyday business applications do. They can create all those separate compartments where their components happily live in isolation of the others. However, for many business applications, this is not possible. I've worked with a few clients, one of whose applications I am doing the production support for, where there are "bad data issues" on an hourly basis. Yeah, it's that crazy. Some data records from one of the instances (lower than production, of course) would have been run a couple of weeks ago, and caused some other user's data to get corrupted. And then, a data script will have to be written to fix this issue. And I've seen this happening so much with this client that I have to ask. I've seen this happening at a moderate rate with other clients, but this one just seems to be out of order. If you're working with business applications that share a large amount of data by reading and writing to/from the same database, are "bad data issues" that common in your environment?

    Read the article
