Search Results

Search found 5323 results on 213 pages for 'sharepoint designer'.


  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question of how to skip or bypass a trailer record or a badly formatted/empty row in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out why, and after an extensive search inside the forum and across the Web (using several search engines) I found that even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. By efficient I mean the shortest time to solution for fellow developers. OK, enough talk. Let's face the problem: typically a flat file (e.g. a comma-delimited/CSV file) needs to be processed (loaded into a database, in most cases). Oftentimes such an input file is produced by some sort of out-of-control, third-party solution and comes in with garbage characters and/or malformed rows. One such example is an imaginary file (not reproduced in this excerpt) in which several rows have no data and an occasional garbage character appears (a stray "1" on row #7 in this example). Our task is to produce a clean file that captures only the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source. Let's outline our course of action: we will use SSIS 2005 to create a DFT; the DFT will use a Flat File Source pointed at our [bad] input file; we will use a Conditional Split to process the bad input; and finally we will dump the resulting data to a new [clean] file. Only four steps - let's see if that is too much work. Step 1: start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT). Steps 2 and 3: I added a data viewer just to see what I was getting; surprisingly, the data issues were not visible in it. The real key to the approach is configuring the Conditional Split transformation properly, specifically its SSIS expression: LEN([After CS Column 0]) > 1. The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I renamed the output "No Empty Rows", but by default it will be named Case 1 (remember to drag your first column into the expression area!). You can close your Conditional Split now. The next part is crucial - consuming the output of our Conditional Split. Last step, #4: add a Flat File Destination, or any other destination you need. Click on the Conditional Split and drag the green arrow onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings; the package is now complete. As the last step we run the package (F5) to examine the produced output file, and... it looks great!
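    The same row filter is easy to prototype outside SSIS. Here is a minimal C# sketch of the equivalent cleansing logic (the file names and the length threshold are illustrative, mirroring the LEN([After CS Column 0]) > 1 condition above):

        using System;
        using System.IO;
        using System.Linq;

        class CleanFlatFile
        {
            static void Main()
            {
                // Keep only rows whose first column holds meaningful data,
                // mirroring the Conditional Split condition LEN(column) > 1.
                var cleanRows = File.ReadLines("DirtyInput.csv")
                                    .Where(line => line.Split(',')[0].Trim().Length > 1);

                File.WriteAllLines("CleanOutput.csv", cleanRows);
            }
        }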

    Read the article

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I'd share it to save others who might encounter it the same annoyance I had. I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project), and whenever the code to save it ran, it would give me: "The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications." This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:

        [DataContract]
        public class MyAwesomeClass
        {
            [DataMember]
            public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

            [DataMember]
            public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

            public MyAwesomeClass()
            {
                GreatItems = new ObservableCollection<MyMagicItemsClass>();
                SuperbItems = new ObservableCollection<MyMagicItemsClass>();
            }
        }

    That's all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it, and it had DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should have been cool, but it wasn't, because I kept getting that "not public" exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn't ObservableCollection<T>, trying to serialize the collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the "Debug->Exceptions…" VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the "Thrown" checkbox for "Common Language Runtime Exceptions", started the project under the debugger, and voilà: the true problem revealed itself. Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute and applied the DataMember attribute to the underlying fields instead. All of which, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a "partial trust" environment, and outside of "full trust"-land, DataContractSerializer refuses to serialize or deserialize non-public members. That exception was swallowed up internally by .NET, so all I ever saw was the bizarre message about things that I knew for certain were public being "not public". I changed all the private fields I was serializing to public and everything worked just fine. In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don't want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting and calling the appropriate methods, and cause some havoc by reflecting and setting the appropriate fields, in certain circumstances).
The fact that you cannot reflect over your own assembly seems a bit heavy-handed, but then again I'm not a compiler writer or a framework designer, and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems it could present (if any). So, lesson learned. If you get an incomprehensible exception message, turn on break-on-all-thrown-exceptions and run it again (it might take a couple of tries, depending) and see what pops out. Chances are you'll find the buried exception that actually explains what was going on. And if you're getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you're trying to serialize a private or protected field/property.
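    For illustration, here is a sketch of the pattern that triggers the misleading exception (the field and property names are invented for this example):

        [DataContract]
        public class MyMagicItemsClass
        {
            // DataMember on a private backing field: fine in full trust,
            // but on WP7's partial trust it fails with the "not public" error.
            [DataMember]
            private string _label;

            // The property with the elaborate setter is excluded instead.
            [IgnoreDataMember]
            public string Label
            {
                get { return _label; }
                set { _label = value; /* elaborate setter logic omitted */ }
            }
        }

    Making the field public, or moving the DataMember attribute back onto a public property with a plain setter, avoids the partial-trust restriction.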

    Read the article

  • EPM 11.1.2.2 Architecture: Financial Performance Management Applications

    - by Marc Schumacher
    Financial Management can be accessed either by a browser-based client or by SmartView. Starting with release 11.1.2.2, the Financial Management Windows client no longer accesses the Financial Management Consolidation server. All tasks that require an online connection (e.g. load and extract tasks) can only be done using the web interface. Any client connection initiated by a browser or SmartView is sent to the Oracle HTTP Server (OHS) first. Based on the path given in the URL (e.g. hfmadf, hfmofficeprovider), OHS decides whether to forward the request to the new Financial Management web application based on the Oracle Application Development Framework (ADF) or to the .NET-based application serving SmartView retrievals running on Internet Information Services (IIS). Any requests sent to the ADF web interface that need to be processed by the Financial Management application server are sent to IIS using the HTTP protocol and forwarded further using DCOM to the Financial Management application server. SmartView requests, which are processed by IIS in the first instance, are forwarded to the Financial Management application server using DCOM as well. The Financial Management application server uses OLE DB database connections via native database clients to talk to the Financial Management database schema. Communication between the Financial Management DME Listener, which handles requests from EPMA, and the Financial Management application server is based on DCOM.

    Unlike most other components, Essbase Analytics Link (EAL) does not have an end-user interface. The only user interface is a plug-in for the Essbase Administration Services console, which is used for administration purposes only. End users interact with a Transparent or Replicated Partition that is created in Essbase and populated with data by EAL. The Analytics Link Server deployed on WebLogic communicates via the HTTP protocol with the Analytics Link Financial Management Connector that is deployed in IIS on the Financial Management web server. The Analytics Link Server interacts with the Data Synchronization Server using the EAL API. The Data Synchronization Server acts as a target of a Transparent or Replicated Partition in Essbase and uses a native database client to connect to the Financial Management database. The Analytics Link Server uses JDBC to connect to relational repository databases and the Essbase JAPI to connect to Essbase.

    As with most Oracle EPM System products, browser-based clients and SmartView can be used to access Planning. The Java-based Planning web application is deployed on WebLogic, which is configured behind an Oracle HTTP Server (OHS). Communication between Planning and the Planning RMI Registry Service uses Java Remote Method Invocation (RMI). Planning uses JDBC to access relational repository databases and talks to Essbase using the CAPI. Be aware that besides the Planning System database, a dedicated database schema is needed for each application that is set up within Planning.

    Like Planning, Profitability and Cost Management (HPCM) has a pretty simple architecture. Besides the browser-based clients and SmartView, a web service consumer can be used as a client too. All clients access the Java-based web application deployed on WebLogic through Oracle HTTP Server (OHS). Communication between Profitability and Cost Management and the EPMA Web Server uses the HTTP protocol. JDBC is used to access the relational repository databases as well as data sources. The Essbase JAPI is utilized to talk to Essbase.

    For Strategic Finance, two clients exist: SmartView and a Windows client. While SmartView communicates through the web layer to the Strategic Finance Server, the Strategic Finance Windows client makes a direct connection to the Strategic Finance Server using RPC calls. Connections from Strategic Finance Web, as well as from Strategic Finance Web Services, to the Strategic Finance Server are made using RPC calls too. The Strategic Finance Server uses its own file-based data store. JDBC is used to connect to the EPM System Registry from the web and application layers.

    Disclosure Management has three kinds of clients. While the browser-based client and SmartView interact with the Disclosure Management web application directly through Oracle HTTP Server (OHS), the Taxonomy Designer does not connect to the Disclosure Management server. Communication with relational repository databases is done via JDBC; to connect to Essbase, the Essbase JAPI is utilized.

    Read the article

  • Flow-Design Cheat Sheet – Part II, Translation

    - by Ralf Westphal
    In my previous post I summarized the notation for Flow-Design (FD) diagrams. Now it is time to show you how to translate those diagrams into code. Hopefully you feel how different this is from UML. UML leaves you alone with your sequence diagram or component diagram or activity diagram; it leaves it to you how to translate your elaborate design into code. Or maybe UML thinks it's so easy that no further explanation is needed? I don't know. I just know that, as soon as people stop designing with UML and start coding, things end up very different from the design. And that's bad. It degrades graphical designs to mere time wasted on paper (or in some design tool). I even believe that's the reason why most programmers view textual source code as the one and only source of truth. Design and code usually do not match. FD is trying to change that. It wants to make true design a first-class method in every developer's toolchest. For that, the first prerequisite is to be able to easily translate any design into code. Mechanically, without thinking. Even a compiler could do it :-) (More on that in some other article.)

    Translating to Methods: The first translation I want to show you is for small designs. When you start using FD, you should translate your diagrams like this: functional units become methods. That's it. An input-pin becomes a method parameter, an output-pin becomes a return value. That covers a part. A board can be translated likewise; it calls the nested FUs in order. In any case, be sure to keep the board method clear of any and all business logic. It should not contain any control structures like if, switch, or a loop. Boards do just one thing: they call nested functional units in proper sequence. What about multiple input-pins? Try to avoid them; replace them with a join returning a tuple. What about multiple output-pins? Try to avoid them. Or return a tuple. Or use out-parameters. But as I said, this simple translation is for simple designs only. Splits and joins are easily done with method translation. All pretty straightforward, isn't it? But what about wires, named pins, entry points, explicit dependencies? I suggest you don't use this kind of translation when your designs need these features. Translating to methods is for small-scale designs like you might do once you're working on the implementation of a part of a larger design. Or maybe for a code kata you're doing in your local coding dojo. Instead of doing TDD, try doing FD and translate your design into methods. You'll see that this way it's much easier to work collaboratively on designs, remember them more easily, keep them clean, and lessen the need for refactoring.

    Translating to Events: [coming soon]
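    A hedged C# sketch of this method translation (all names are invented; .NET 4's Tuple stands in for a join's output):

        using System;

        static class FlowDesignTranslation
        {
            // Part: functional unit -> method;
            // input-pin -> parameter, output-pin -> return value.
            static string Normalize(string text)
            {
                return text.Trim().ToLowerInvariant();
            }

            // Join: two input-pins replaced by a join returning a tuple.
            static Tuple<string, int> JoinTextWithLength(string text)
            {
                return Tuple.Create(text, text.Length);
            }

            // Trivial stand-in for another part consuming the joined values.
            static int Score(Tuple<string, int> input)
            {
                return input.Item2;
            }

            // Board: no business logic, no if/switch/loops; it only calls
            // the nested functional units in proper sequence.
            static int Process(string text)
            {
                var normalized = Normalize(text);
                var joined = JoinTextWithLength(normalized);
                return Score(joined);
            }
        }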

    Read the article

  • Virtual Lab part 2 – Templates, Patterns, Baselines

    - by Geoff N. Hiten
    Once you have chosen a good virtualization platform, whether it is a desktop, server, or laptop environment, the temptation is to build “X”.  “X” may be a SharePoint lab, a virtual cluster, an AD test environment, or some other cool project that you really need RIGHT NOW.  That would be doing it wrong. My grandfather taught woodworking and cabinetmaking for twenty-seven years at a trade school in Alabama.  He was the first instructor hired at that school and the only teacher for the first two years.  His students built tables, chairs, and workbenches so the school could start its HVAC courses.  Visiting as a child, I also noticed many extra “helper” stands, benches, holders, and gadgets, all built from wood.  What does that have to do with a virtual lab, you ask?  Well, that is the same approach you should take.  Build stuff that you will use.  Not for solving a particular problem, but to let the virtual lab be part of your normal troubleshooting toolkit.

    Start with basic copies of various operating systems.  Load and patch server and desktop OS environments.  This also helps build your collection of ISO files, another essential element of a virtual lab.  Once you have these “baseline” images, you can use your virtualization software’s snapshot capability to freeze the image.  Clone the snapshot and you have a brand new, fully patched machine in mere moments.  You may have to sysprep some of the Microsoft OS environments if you are going to create a domain environment or experiment with clustering.  That is still much faster than loading and patching from scratch.

    So once you have a stock of raw materials (baseline images in this case), where should you start?  Again, my grandfather’s workshop gives us the answer.  In the shop it was workbenches and tables to hold large workpieces that made the equipment more useful.  In a Windows environment, the same role falls to the fundamental network services:  DHCP, DNS, Active Directory, routing, file services, and storage services.  Plan your internal network setup.  Build out an AD controller with all the features listed.  Make the actual domain an isolated domain so it will not care about where you take it.  Add the Microsoft iSCSI target.  Once you have this single system, you can leverage it for almost any network environment beyond a simple stand-alone system.

    Having these templates and fundamental infrastructure elements ready to run means I can build a quick lab in minutes instead of hours.  My solutions are well-tested, my processes fully documented with screenshots, and my plans validated well before I have to make any changes to client systems.  The work I put in is easily returned in increased value and client satisfaction.
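    As a concrete example of the sysprep step (these are the standard flags on Windows Server 2008 R2-era systems; the path may vary by OS version):

        %WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

    Running this before taking the baseline snapshot generalizes the image, so each clone boots with a fresh machine identity, which is exactly what domain and clustering experiments require.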

    Read the article

  • Oracle WebCenter: Composite Applications & Mash-Ups

    - by kellsey.ruppel(at)oracle.com
    We’ve talked in previous weeks about how the key goals of the new release of WebCenter are providing a modern user experience, unparalleled application integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. We’ve provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks, and this week we’ll focus on how, with the new release of Oracle WebCenter, you can create composite applications and mashups. We recently talked with Sachin Agarwal, Director of Product Management of Enterprise 2.0 at Oracle, on the topic of composite applications and mashups. Oracle WebCenter provides a rich set of tools and capabilities for pulling in content, applications, and collaboration functionality from various different sources and weaving them together into what we call mashups. Mashups that also consist of transactional applications from multiple sources are specifically called composite applications. With the latest release of Oracle WebCenter, one can develop highly productive task-based interfaces that aggregate a related set of applications that are part of a business process and provide in-context collaboration tools, so that users don’t have to navigate away to different tabs to achieve these tasks. For instance, a call center representative (CSR) not only needs to be able to pull customer information from a CRM application like Siebel, but also related information from Oracle E-Business Suite about whether a specific order has shipped. The CSR will be far more efficient if he or she does not have to open different tabs to log into multiple applications while the customer is waiting, but can access all this information in one mashup.

    Oracle WebCenter Suite provides a comprehensive set of tooling that enables a business user to quickly aggregate a mashup and wire in different backend applications to create a custom dashboard. Not only does Oracle WebCenter support a wide set of standards (WSRP 1.0, 2.0, JSR 168, JSR 286) that allow portlets from other applications to be surfaced within WebCenter, but it also provides tools to bring in other web applications such as .NET applications as well as SharePoint web parts. The new Business Mash-up Editor allows business users to take any Oracle application or 3rd-party application and wire the backend data sources or APIs to a rich set of visualizations and reuse them in mashups. Moreover, business users can customize or personalize any page using Oracle WebCenter Composer’s on-the-fly visual page editing features. Users access and select different resource components available in Oracle WebCenter’s Business Dictionary in order to add new content to the page. The Business Dictionary provides a role-based view of available components or resources, and these components can include information from a variety of enterprise resources such as enterprise applications, managed content, rich media, business processes, or business intelligence systems.
    Together, Oracle WebCenter’s Composer and Business Dictionary give users access to a powerful, yet easy to use, set of tools to personalize and extend their Oracle WebCenter portals and applications without involving IT. Keep checking back this week as we share more information on how you can easily create composite applications and mashups with Oracle WebCenter.

    Technorati Tags: UXP, collaboration, enterprise 2.0, modern user experience, oracle, portals, webcenter, applications, mashups, composite applications

    Read the article

  • TFS Hosting: discountasp.net TFS

    - by Enrique Lima
    In the last month or so I have been able to test and experience first hand the offering from discountasp.net for hosted TFS 2010. This first part is a description of the setup process for the account itself, plus some additional information on what you will find through the portal on their site. Not long ago, I posted a little tidbit on hosting TFS, and through it I also did a shameless plug for my employer, our services, and the type of hosting we recommend. So, wouldn’t my running on discountasp.net be an issue? Actually? NO. OK, enough rambling. Let’s get some details here. It is a Software as a Service model. Through it we get source/version control, work item tracking, and such. What about build? If your needs include build management, you may need to look at some other options. But this is still a great offering for those that are moving from SourceSafe, or for organizations with 3 to 5 developers on staff who do not foresee getting larger anytime soon. Can it support more than 5 developers? Yes, but then we need to get into how you are using TFS. Do you need more than just Basic? For example, SharePoint and Reporting Services integration. The signup process was seamless! Very easy to follow, complete, and then transition to Visual Studio to start working. An email followed the signup process; it contained details on how to get to the Team Foundation Server control panel login. Once there, I named my Team Project Collection, retrieved my server information, and created the first user (the control panel walks you through each step; screenshots omitted in this excerpt). Then it was a matter of connecting Visual Studio to my hosted TFS. With the server information in hand and the user account created, I configured those options in Visual Studio: using Team Explorer, I added a new server configuration. Once this is provided, click OK; you will be challenged for a username and password. Provide them, and you will land on the confirmation screen. Then click Close. You will now be connected to your server and Team Project Collection. Since this will likely be the first time connecting, you will have no projects (I already have 2 going). Click Connect, and you will be back in Team Explorer. My next post on the topic will be on creating your first Team Project and uploading a project template to the server.
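    If you prefer to script the connection rather than click through Team Explorer, here is a minimal sketch using the TFS 2010 client API (the collection URL and credentials are placeholders for the values shown in the control panel):

        using System;
        using System.Net;
        using Microsoft.TeamFoundation.Client;

        class ConnectToHostedTfs
        {
            static void Main()
            {
                // Values come from the discountasp.net TFS control panel.
                var collectionUri = new Uri("https://tfsXX.discountasp.net/tfs/MyCollection");
                var credentials = new NetworkCredential("username", "password");

                using (var tpc = new TfsTeamProjectCollection(collectionUri, credentials))
                {
                    tpc.EnsureAuthenticated();
                    Console.WriteLine("Connected to " + tpc.Name);
                }
            }
        }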

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, "Tabular model: Not ready for prime time?" (read the comments too, because there are discussions about a few points raised by James), and the follow-up post from Christian Wade, "Multidimensional or Tabular". In the last 2 years I have worked with many companies adopting Tabular in different scenarios, and I agree with some of the points expressed by James in his post (especially about missing features in Tabular compared to Multidimensional), but I strongly disagree with others. In general, Tabular is a good choice for a new project when:
    - the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn; not as easy as Microsoft claims, but definitely easier than MDX)
    - you don't need calculations based on hierarchies (common in certain financial applications, but not as common as it might seem)
    - there are important calculations based on distinct count measures
    - there are complex calculations based on many-to-many relationships

    Until now, I have never suggested migrating an existing Multidimensional model to a Tabular one. There should be very important reasons for that, such as performance issues in distinct count and many-to-many relationships that cannot be easily solved by optimizing the Multidimensional model, and I have still never encountered this scenario. I would say that in 80% of new projects you might use either Multidimensional or Tabular, and the real difference is the time-to-market depending on the skills of the development team. So it's not strange that those used to Multidimensional are not moving to Tabular, since they get no particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I'd like to see Excel Power View enabled for this scenario as well (this should be just a question of time). Another scenario in which I'm seeing growing adoption of Tabular is in companies that create models for their product/service and do so by using XMLA or Tabular AMO 2012. I usually call them ISVs, even if those providing services cannot really be defined that way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers. I'd like to see other feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012, thanks!
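    To make the distinct count point concrete: in a Tabular model a distinct count is a single DAX measure definition, along the lines of (table and column names invented):

        Distinct Customers := DISTINCTCOUNT(Sales[CustomerKey])

    whereas in Multidimensional, each distinct count measure traditionally requires its own measure group, with the extra storage and processing cost that entails.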

    Read the article

  • JFall 2012

    - by Geertjan
    JFall 2012 was over far too soon! Seven tracks going on simultaneously in a great location, with many artifacts reminding me of JavaOne, and nice snacks and drinks afterwards. The day started, as such things always do, with a keynote. Thanks to @royvanrijn for the photo of Steve Chin riding into the keynote hall on his NightHacking bike (not reproduced in this excerpt); I didn't take any myself, and without a picture this report might have been too dry. The keynote was interesting; I can't be too complimentary about it, since I was part of it myself. Bert Ertman introduced the day and then Steve Chin took over, together with Sharat Chander, Tom Eugelink, Timon Veenstra, and myself. We had a strict choreography for the keynote, one that would ensure a lot of variation and some unexpected surprises, such as Steve being thrown off the stage a few times by Bert for mentioning JavaOne too many times, rather than the clearly much cooler JFall. Steve talked about JavaOne and the direction Java is headed in, Sharat talked about Java ME and embedded devices, Steve and Tom did a demo involving JavaFX, I did a Project Easel demo, and Timon from Ordina talked about his Duke's Choice Award-winning AgroSense project. I think the Project Easel demo (which I repeated later in a screencast for Parleys, arranged by Eugene Boogaart) came across well, and several people I spoke to especially liked the roundtrip/bi-directional work that can be done, from browser to IDE and back again, very simply and intuitively. (In a long conversation on the drive back home afterwards, we discussed the scenario of a designer laying out the UI in HTML and then handing the HTML to a developer for back-end work, a developer who would then find it convenient to open the HTML in a browser and quickly navigate from the browser to the resources within the IDE; this was considered extremely interesting and worth adopting NetBeans for, for no other reason than that.)

    Later I attended a session by David Delabassee on Java EE 7, Hans Dockter on Gradle, and Sander Mak on cross-build injection attacks. I was sorry to have missed Martijn Verburg's session, which sounded like it was really fantastic, as well as sessions by others, such as Gerrit Grunwald. I did a session too, entitled "Unlocking the Java EE 6 Platform", which was very well attended, pretty much a full room, and the demo went very smoothly. I talked to many people, e.g., a long time with Hans Dockter about how cool Gradle is and how great the Gradle/NetBeans plugin is turning out to be. I also had a long conversation (and did a demo) with Chris Chedgey from Structure101 after his session, which was incredibly well attended; it is very interesting how popular modularity is.

    I met several people for the first time, as well as some colleagues from past places I've worked at. All in all, it was a great conference, unfortunately too short, which was very well attended (clearly over 1,000 people), with several international speakers as well as international attendees, such as Mattias Karlsson, Sweden JUG leader. And, unsurprisingly, I came across NetBeans Platform applications again, none of which I had ever heard of before. In each case, "our fat client application" was mentioned in passing, never as a main application, and never in a context where there are plans for the application to be migrated to the web or mobile, simply because doing so makes no business sense at all. Great times at JFall; I am looking forward to meeting some of the people I met again soon.

    Read the article

  • The Case of the Missing Date/Time Stamp: Reporting Services 2008 R2 Snapshots

    - by smisner
    This week I stumbled upon an undocumented “feature” in SQL Server 2008 R2 Reporting Services as I was preparing a demonstration on how to set up and use report snapshots. If you’re familiar with the main changes in this latest release of Reporting Services, you probably already know that Report Manager got a facelift this time around. Although this facelift was generally a good thing, one of the casualties – in my opinion – is the loss of the snapshot label that served two purposes. First, it flagged the report as a snapshot. Second, it let you know when that snapshot was created. As part of my standard operating procedure when demonstrating report snapshots, I point out this label, so I was rather taken aback when I didn’t see it in the demonstration I was preparing. It sort of upset my routine, and I’m rather partial to my routines. I thought perhaps I wasn’t looking in the right place and changed Report Manager from Tile View to Detail View, but no – that label was still missing. In the grand scheme of life, it’s not an earth-shattering change, but you’ll have to look at the Modified Date in Details View to know when the snapshot was run. Or hope that the report developer included a textbox to show the execution time in the report. (Hint: this is a good time to add this to your list of report development best practices, whether a report gets set up as a report snapshot or not!)

    A snapshot from the past: in case you don’t remember how a snapshot appeared in Report Manager back in the old days (of SQL Server 2008 and earlier), it carried an explicit snapshot label with the creation date-time (the image I snagged from my Reporting Services 2008 Step by Step manuscript is omitted in this excerpt).

    A snapshot in the present: a report server running in SharePoint integrated mode had no such label; there you had to rely on the Report Modified date-time stamp to know the snapshot execution time. So I guess all platforms are now consistent. In the 2008 R2 version of Report Manager, a snapshot looks no different from the reports that execute on demand.

    Consider descriptions as an alternative: so my report snapshot demonstration has one less step, and I’ll need to edit the Denali version of the Step by Step book. Things are simpler this way, but I sure wish we had an easier way to identify the execution methods of the reports. Consider using the description field to alert users that the report is a snapshot. It might save you a few questions about why the data isn’t up-to-date if the users know that something changed in the source of the report. Notice that the full description doesn’t display in Tile View, so keep it short and sweet, or instruct users to open Details View to see the entire description.
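    One way to implement that textbox best practice: set the textbox expression to something like ="Executed: " & Globals!ExecutionTime. Globals!ExecutionTime is the built-in Reporting Services global that reflects when the report (or snapshot) was executed, so the date/time stamp travels with the report no matter how Report Manager displays the list.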

    Read the article

  • RPi and Java Embedded GPIO: Big Data and Java Technology

    - by hinkmond
    Java Embedded and Big Data go hand-in-hand, especially as demonstrated by prototyping on a Raspberry Pi to show how well the Java Embedded platform can perform on a small embedded device, which then becomes the proof-of-concept for industrial controllers, medical equipment, networking gear, or any type of sensor-connected device generating large amounts of data. The key is a fast and reliable way to access that data using Java technology. In the previous blog posts you've seen the integration of a static electricity sensor and the Raspberry Pi through the GPIO port, and then access to that data through Java Embedded code. It's important to point out how this works and why it works well with Java code.

    First, the version of Linux (Debian Wheezy/Raspbian) found on the RPi has a very convenient way to access the GPIO ports: Linux OS-managed file handles. This is key in avoiding terrible and complex coding using register manipulation in C code, or having to program in a less elegant and clumsy procedural scripting language such as Python. Java Embedded instead gives fast access to those GPIO ports through those same Linux file handles. Java already has an easy, high-performance way to work with file handles, matching direct access of those file handles in the Linux OS. Using the java.io.FileWriter API lets us open the same file handles that the Linux OS exposes for the GPIO ports. Then, by first resetting the ports using the unexport and export file handles, we can initialize them for easy use in a Java app.

        // Open file handles to GPIO port unexport and export controls
        FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
        FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
        ...
        // Reset the port
        unexportFile.write(gpioChannel);
        unexportFile.flush();

        // Set the port for use
        exportFile.write(gpioChannel);
        exportFile.flush();

    Then another set of file handles can be used by the Java app to control the direction of the GPIO port, by writing either "in" or "out" to the direction file handle.

        // Open file handle to input/output direction control of port
        FileWriter directionFile =
            new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");

        // Set port for input
        directionFile.write("in");   // Or, use "out" for output
        directionFile.flush();

    And, finally, a RandomAccessFile handle can be used - with performance on par with native C code (only milliseconds to read in data and write out data) and low overhead (unlike Python) - to manipulate the data going in and out on the GPIO port, while the object-oriented nature of Java programming allows for an easy way to construct complex analytic software around that data access functionality to the external world.

        RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length];
        ...
        // Reset file seek pointer to read latest value of GPIO port
        raf[channum].seek(0);
        raf[channum].read(inBytes);
        inLine = new String(inBytes);

    It's Big Data from sensors and industrial/medical/networking equipment meeting complex analytical software on a small constrained device (like a Linux/ARM RPi), where Java Embedded allows you to shine as an embedded device software designer. Hinkmond

    Read the article

  • Visual Studio 2010 Launch Events

    - by Jim Duffy
    Don’t miss out on the opportunity to learn about the new features in Visual Studio 2010. Check out the MSDN Events page and find out when the talented folks of the Developer & Evangelism group will be visiting your city to prove to you that /*Life Runs On Code*/. I’ll be attending the Raleigh event June 2, 2010 from 1:00 - 5:00 PM.

    North Carolina State University, Jane S. McKimmon Conference Center, 1101 Gorman St, Raleigh, North Carolina 27606, United States

    From the Raleigh Event page:

    Event Overview: Learn about the rich application platforms that Microsoft® Visual Studio® 2010 supports, including Windows® 7, the Web, SharePoint®, Windows Azure™, SQL®, and Windows® Phone 7 Series. From tighter tester and dev collaboration to new ALM tools, there’s a lot that’s new. Here’s what you can expect:

    Windows Development with Visual Studio 2010: Visual Studio has always been the best way to build compelling visual solutions for Windows. Visual Studio 2010 continues this trend with great new tooling support for Silverlight 4, WPF, and native development. In this demo-heavy session, you’ll see how you can build rich Windows applications with Silverlight 4 using new trusted application features including out-of-browser execution, saving to the file system, and even COM automation. You’ll also see how you can use the new Task Parallel Library from within a WPF application to take advantage of all those cores in today’s modern computers.

    Web and Cloud Development with Visual Studio 2010: If you build solutions for the web, then this session is for you. Come see how your existing skills move forward with Visual Studio 2010, both for in-house ASP.NET development and the new frontier of the cloud. In this session, you’ll see improved designers, new HTML and JavaScript snippets, Web Forms enhancements, and how you can quickly build great web sites using Dynamic Data. You’ll see the changes made to testable web sites with MVC 2.0 and how we’ve integrated jQuery support into the platform. You’ll then see how easy it is to leverage your existing code and move to the cloud with Windows Azure.

    Windows Phone 7 Developer Tools and Platform Overview: This session provides an overview of Visual Studio® 2010 for Windows Phone. Learn about the powerful capabilities of this new application platform and the developer tools experience, including basic IDE usage, debugging, packaging, and deployment. This session also shows how you can use Microsoft Expression® Blend™ for Windows Phone to build great Silverlight applications.

    Have a day. :-|
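    As a small taste of the Task Parallel Library mentioned in the Windows session (a minimal sketch, not taken from the event materials):

        using System;
        using System.Threading.Tasks;

        class TplDemo
        {
            static void Main()
            {
                // Spread a CPU-bound loop across the available cores.
                Parallel.For(0, 10, i =>
                {
                    Console.WriteLine("Processing item {0}", i);
                });
            }
        }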

    Read the article

  • How Did Microsoft Market .NET?

    - by Fendy
    I just read Joel's article about Microsoft's breaking change (non-backwards compatibility) with .NET's introduction. It is interesting and explicitly reflects the conditions of that time. But now almost 10 years have passed.

    The breaking change: it is mainly about how bad it was for Microsoft to introduce non-backwards-compatible development tools, such as .NET, instead of improving the already widely used ASP Classic or VB6. As is widely known, .NET is not natively embedded in Windows XP (it is in Vista and 7), so in order to use .NET apps you needed to install the .NET Framework, over 300 MB (which was big in those days). However, nowadays many businesses use .NET as their main development tool, with ASP.NET or MVC for their web-based applications. C# is nowadays one of the top programming languages (with the most questions on Stack Overflow). The more interesting part is that the Win32 API is still alive even though there is newer technology out there (and it is still widely used). Imagine if Microsoft had not introduced the breaking change: many corporations would still be using ASP Classic or VB-based applications (some still are, but not that many). Many corporations use additional services such as Azure or SharePoint (despite how expensive they are). Please note that I also know many flagship applications (maybe Adobe's and Blizzard's) are still C-based or use older languages and have not been ported to newer high-level languages.

    The question: how can Microsoft persuade users to migrate their old applications to .NET? As we know, rewriting applications is very hard, gives no immediate value (the Netscape story), and is very risky. I am more interested in Microsoft's way, and not opinions such as "because .NET is OOP" or ".NET is dll-embeddable", etc.

    This question may be constructive, as technology has been changing vastly lately. As we can see, Microsoft changed ASP.NET Web Forms to MVC, WinForms is legacy now, distribution is starting to move to the Windows Store rather than traditional installs, touchscreens are common, and later on we will see see-through applications such as Google Glass. And those will be breaking changes. We need to account for portability as an issue nowadays. We will need more than a mere technology choice; we will also need migration plans. We might even need something as critical as a multiplatform language compiler, as Joel approached with Wasabi. (Hey, I read his articles too much!)

    Read the article

  • Becoming the well-integrated content company (and combating AIUTLVFS)

    - by Lance Shaw
    Every single day, each of us creates more and more content. Sometimes it is brand new material, and many times it is iterations of existing content, but no one would dispute that information and content are growing at an almost exponential rate. With all this content being created and stored, a number of problems naturally arise. One of the most common issues that users run into is "Am I Using The Latest Version of this File Syndrome", or AIUTLVFS. This insidious syndrome is all too common and results in ineffective, poor, or downright wrong business decisions being made. When content or files are unavailable or incorrect within the scope of key business processes, the chance of erroneous and costly business decisions is magnified even further.

    For many companies, the ideal scenario is to be able to connect multiple business systems, both old and new, to one common content repository. Not only does this reduce content duplication, it also helps guarantee that everyone in various departments is working off the proverbial "same page". Sounds simple - but for many organizations, the proliferation of file shares, SharePoint sites, and other storage silos of content keeps the dream of a more efficient business a distant one.

    We've created some online assets to help you in your evaluation and eventual improvement of your current content management and delivery systems. Take a few minutes to check out our Online Assessment Tool. It's quick, easy, and just might provide you with insights into how you can improve your current content ecosystem. While you are there, check out our new Infographic that outlines common issues faced by companies today. Feel free to save our informative Infographic PDF and share it with business colleagues and your management to help them understand the business costs and impact of inaction. Together we can stop AIUTLVFS in its tracks and run our businesses more effectively than ever. Additionally, we hope you will take a few minutes to visit our new and informative webpages dedicated to the value of a well-connected, fully integrated content management system. It's a great place to learn more about how integrating WebCenter Content into your infrastructure can lower your operational costs while boosting process and worker efficiency.

    Read the article

  • Turnkey with LightSwitch

    - by Laila
    Microsoft has long wanted to find a replacement for Microsoft Access. The best attempt yet, which is due out in, or before, September, is Visual Studio LightSwitch, with which, it is said, it is as 'easy as flipping a switch' to use Silverlight to create simple form-driven business applications. It is easy to get confused by the various initiatives from Microsoft. No, this isn't WebMatrix. There is no 'Razor', for this isn't meant for cute little ecommerce sites, but is designed to build simple database applications of the card-box type. It is more clearly a .NET-based solution to the problem that every business seems to suffer from: the plethora of Access-based and Excel-based 'private' and departmental database applications. These are a nightmare for any IT department, since they are often 'stealth' applications built by the business in the teeth of opposition from the IT department zealots. As they are undocumented, it is scarily easy to bring a whole department into disarray by decommissioning a PC tucked under a desk somewhere. With LightSwitch, it is easy to rewrite such applications in a standard, maintainable way, using a SQL Server database deployed somewhere reasonably safe such as Azure. Even SharePoint or Windows Communication Foundation can be used as data sources. Oracle's ApEx has taken off remarkably well and has shaken the perception that, for the business user, Oracle must remain a mystic force accessible only to the priests and acolytes. Microsoft, by comparison, had only Access, which was first released in 1992, the year of Madonna's conical bustier. It looks just as dated. Microsoft badly needed an entirely new solution to the same business requirement that led to Access's and FoxPro's long-time popularity, one with the same allure as ApEx. LightSwitch is sound in its ideas and comfortingly conventional in its architecture. By giving easy access to SQL Server databases, and providing a 'thumb and blanket' migration path to Access-heads, LightSwitch seems likely to offer a simple way of pulling more Microsoft users into the .NET community. If Microsoft puts its weight behind it, then it will give some glimmer of hope to the many Silverlight developers that Microsoft is capable of seeing through its .NET revolution.

    Read the article

  • Ranking - Part II

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    Ranking Part II

    In my introduction to ranking I also introduced the Ranking Game. That is actually a much more sophisticated program than the one we need to simply rate an item, but it introduced you to the sophisticated results that you may achieve with a bit of code and accompanying CSS. In this installment, I am going to handle simple rating with 5 stars. The extra sophistication will come in the form of creating new elements at run time. Why do I need this? I like to be able to extend the SharePoint New and Update forms and put the stars in them simply by using the code shown here. We do not even need to go into SPD. We may achieve this simply by adding a Content Editor web part; more about this in the next installment.

    I have created a new page - Rank the Author - in which you may praise me in 5 different ways, but not immediately. The ranking mechanism - the 5 stars - has to be created first. To achieve that, click the "Add Element" button on the screen and then proceed in giving me the appropriate number of stars. Now view the source and see how this extra 5-star element was added, and also how the ranking is achieved. This, obviously, is no different in principle than what we did in the Ranking Game. We create some sophisticated HTML, add some style, and create the element by:

        var divString =
            "<div id='rateMe' title='Rate Me...'>" +
            "<a onclick='rateIt(this)' id='_1' title='ehh...' onmouseover='rating(this)' onmouseout='off(this)'></a>" +
            "<a onclick='rateIt(this)' id='_2' title='So So' onmouseover='rating(this)' onmouseout='off(this)'></a>" +
            "<a onclick='rateIt(this)' id='_3' title='Passable' onmouseover='rating(this)' onmouseout='off(this)'></a>" +
            "<a onclick='rateIt(this)' id='_4' title='Not too Bad' onmouseover='rating(this)' onmouseout='off(this)'></a>" +
            "<a onclick='rateIt(this)' id='_5' title='Not Bad' onmouseover='rating(this)' onmouseout='off(this)'></a>" +
            "</div>";

        var m = document.createElement("p");
        m.innerHTML = divString;
        m.className = "blah";

        function AddElement() {
            var y = document.getElementById("Rest");
            // Insert the star rating element just above the "Rest" marker div.
            y.parentNode.insertBefore(m, y);
        }

    When you look into the full code, you'll notice that I have added an empty <div id="Rest"> into the form. A div element, like p, creates a line break, but the main purpose here was to mark the place above which I wanted to add the stars. Now you may hover over the stars, see how they behave, and click on one of them to see that the program can react to your selection. That's all folks!

    Read the article

  • Application Integration Future – BizTalk Server & Windows Azure (TechEd 2012)

    - by SURESH GIRIRAJAN
    I am really excited to see a lot of news around BizTalk from TechEd 2012. I was recently watching the session presented by Sriram and Rajesh on "Application Integration Futures: The Road Map and What's Next on Windows Azure". It was a great session with a lot of interesting material about the feature updates for BizTalk and Azure integration. I have highlighted them below. Customers who haven't started using the Microsoft BizTalk ESB Toolkit should definitely start using it, as it is going to be part of the core BizTalk product in a future release, which is cool…

    BizTalk Server feature enhancements

    Manageability:
    - The ESB Toolkit is going to be part of the core BizTalk product and setup.
    - Visualize BizTalk artifact dependencies in the BizTalk administration console.
    - HIS administration using configuration files.

    Performance:
    - Improvements in ordered send port scenarios.
    - Improved performance in dynamic send ports and ESB, plus the ability to configure the BizTalk host handler for dynamic send ports (right now they run under the default host, which does not scale).
    - MLLP adapter enhancements and DB2 client transaction load balancing / client bulk insert.

    Platform support:
    - Visual Studio 2012, Windows 8 Server, SQL Server 2012, Office 15, and System Center 2012.
    - B2B enhancements to support the latest industry standards natively.

    Connectivity improvements:
    - Consume REST services directly in BizTalk.
    - SharePoint integration made easier.
    - Improvements to the SMTP adapter, adding macros for sending the same email with different content to different parties.
    - Connectivity to Azure Service Bus relay, queues, and topics.
    - DB2 client connectivity to SQL Server, and SQL Server connectivity to Informix.
    - CICS HTTP client connectivity to Windows.

    Azure:
    - Use Azure as IaaS/PaaS for the BizTalk environment.
    - Use Azure to provision a BizTalk environment for testing or development, then later move on-premises or build a hybrid cloud approach.
    - Eliminate hardware procurement for BizTalk environments used for testing, demos, development, etc.
    - Create a BizTalk farm easily and remove/add servers as needed.

    EAI Service:
    - EAI Bridge
    - Protocol transformation
    - Message transformation
    - Running custom code
    - Message enrichment

    Hybrid connectivity:
    - LOB applications on-premises
    - Connectivity to applications in the cloud
    - Queues / topics
    - FTP
    - Devices
    - Web services

    A bridge can be customized based on the service needs to provide the different capabilities required, with tracking enabled through the portal; see the EDI bridge sample for an EDI service: http://msdn.microsoft.com/en-us/library/windowsazure/hh674490

    Adapters:
    - Service Bus Messaging adapter - new adapter added.
    - WebHttp adapter - for REST services.
    - NetTcpRelay adapter - new adapter added.

    I will start posting more once I start playing with this…

    Read the article

  • Separation of project responsibilities in new project

    - by dreza
We have very recently started a new project (MVC 3.0) and some of our early discussion has been around how the work and development will be split amongst the team members, to ensure we get the least amount of overlap of work and so make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months - 1 year (although not all developers are likely to stay on; some might filter off towards the end). Our team is going to be small, so this will help out a bit I believe. The team will essentially consist of:

- 3 x developers (1 slightly more experienced, who will be the lead)
- 1 x project manager / product owner / tester
- An external company responsible for doing our design work

General project/development decisions so far have included:

- Develop in an Agile way using SCRUM techniques (we are still very much learning this approach as a company)
- Use MVVM architecture
- Use Ninject and DI where possible
- Attempt to use TDD as much as possible to drive development
- Keep our controllers as skinny as possible
- Keep our views as simple as possible

During our discussions two approaches have been broached as to how to separate the workload, given the objectives outlined above.

OPTION 1: A framework separation where each person is responsible for conceptual areas, with overlap and discussion primarily in the integration areas. The integration areas would be the responsibility of both developers as required.

View prototypes (**Graphic designer**)
    |
    - Mockups
    |
Views (Razor and view helpers etc) & Javascript (**Developer 1**)
    |
    - View models (Integration point)
    |
Controllers and Application logic (**Developer 2**)
    |
    - Models (Integration point)
    |
Domain model and persistence (**Developer 3**)

PROS:
- Integration points are quite clear, so developers can work without dependencies on others fairly easily.
- Code practices such as naming conventions and style are more easily kept consistent, as primarily only one developer will be handling an area.

CONS:
- Completion of an entire feature becomes a bit grey, as no single person is responsible for an entire feature (story?).
- A person might not have a full appreciation for all areas of the project, so knowledge overlap might be lacking if that person suddenly left.

OPTION 2: A more task-orientated approach, where each person is responsible for the completion of the entire task from view - controller - model.

PROS:
- A person is responsible for one entire feature, so its "complete" state can be clearly defined.
- Code overlap into different areas will occur, so each individual gets good coverage over the entire application.

CONS:
- Overlap of development will occur in all the modules, and developers can develop/extend without a true understanding of what the original code owner was intending. This could potentially lead more easily to code bloat?
- Following a convention might be harder, as developers are adding to all areas of the project.
- If a developer sets up a way of doing things, would it be harder to get the other developers to follow that convention, or even build on it (or even discuss it?). Dunno..
- Bugs could more easily be introduced into areas not thought about by the developer.
- It's possibly easier to carry a team member, in so far as one member just hacks code together to complete a task whilst another takes time to build a foundation that could be used by others and so make future tasks easier, i.e. starts building a framework?

QUESTION: As might be apparent, I'm more in favor of option 1; however, I'm interested to see how others have approached this, or what the standard, best or preferred way of undertaking a project is. Or indeed any different approach to handling this?
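To make the option 1 integration point concrete, here is a minimal sketch of the kind of seam the question describes: Developer 1 builds the Razor view against a view model, Developer 2 populates it from a controller. Every name in it (ProductSummaryViewModel, IProductService and so on) is illustrative, not from the original question.

using System.Web.Mvc;

// Hypothetical domain type and service, owned by Developer 3 / Developer 2.
public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public interface IProductService
{
    Product GetById(int id);
}

// The view model is the integration point: Developer 1 codes the Razor
// view against it, Developer 2 populates it.
public class ProductSummaryViewModel
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsController : Controller
{
    private readonly IProductService _service;

    // Ninject (mentioned in the question) would supply this dependency.
    public ProductsController(IProductService service)
    {
        _service = service;
    }

    // A "skinny" action: fetch, map onto the view model, return the view.
    public ActionResult Details(int id)
    {
        Product product = _service.GetById(id);
        var model = new ProductSummaryViewModel
        {
            Name = product.Name,
            Price = product.Price
        };
        return View(model);
    }
}

As long as both developers agree on the view model's shape up front, either side can change its internals without breaking the other.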

    Read the article

  • WebCenter Customer Spotlight: SICE

    - by me
Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

Solution Summary
Sociedad Ibérica de Construcciones Eléctricas, S.A. (SICE) is a Spanish company that specializes in engineering and technology integration for intelligent transport systems and environmental control systems. The company had a large quantity of engineering and environmental planning documents which it wanted to manage, classify and integrate with its existing enterprise resource planning (ERP) system. SICE adapted Oracle WebCenter Content to classify and manage more than 30 different document types, defined a security plan to ensure the integrity and recovery of the various document types, and integrated the document management solution with SICE’s third-party ERP system. SICE accelerated time to market for all projects, minimized the time required to identify and recover documents, and achieved greater efficiency in all operations.

Company Overview
Created in 1921, Sociedad Ibérica de Construcciones Eléctricas, S.A. (SICE) currently specializes in engineering and technology integration for intelligent transport systems and environmental control systems. It has more than 2,500 employees, with operations in Spain and various locations in Latin America, the United States, Africa, and Australia.

Business Challenges
SICE had a large quantity of engineering and environmental planning documents generated in research and projects which it wanted to manage, classify and integrate with its existing enterprise resource planning (ERP) system.

Solution Deployed
SICE worked with the Oracle partner ABAST Solutions to evaluate and choose the best document management system, ultimately selecting Oracle WebCenter Content over other options including Documentum, SharePoint, OpenText, and Alfresco. They adapted Oracle WebCenter Content to classify and manage more than 30 different document types, defined a security plan to ensure the integrity and recovery of the various document types, and integrated the document management solution with SICE’s third-party ERP system to accelerate incorporation with the documentation system and ensure the integrity of ERP system data.

Business Results
SICE accelerated time to market for all projects by releasing reports and information that support and validate engineering projects, and stored all documents in a single repository with organization-wide accessibility, minimizing the time required to identify and recover documents needed for reports to initiate and execute engineering and building projects. Overall, SICE achieved greater efficiency in all operations, including technical and impact report development and construction documentation management.

“The correct and efficient management of information is vital to our environmental management activity. Oracle WebCenter Content serves as a basis for knowledge management practices, with the objective of adding greater value to everything that we do.” Manuel Delgado, IT Project Engineering, Sociedad Ibérica de Construcciones Eléctricas, S.A.

Additional Information
SICE Customer Snapshot
Oracle WebCenter Content

    Read the article

  • [EF + ORACLE] Updating and Deleting Entities

    - by JTorrecilla
Prologue
In previous chapters we have seen how to insert data through EF, with and without sequences. In this one, we are going to see how to update and delete data from the DB.

Updating data
Updating an Entity's data (properties) is a very common and easy action. Before changing any of the properties of the Entity, we can check the EntityState property and see that it is EntityState.Unchanged.

To make an update we need to get the Entity that will be modified. In the following example, I use GetEmployeeByNumber to get a valid Entity:

EMPLEADOS emp = GetEmployeeByNumber(2);
emp.Name = "a";
emp.Phone = "2";
emp.Mail = "aa";

After modifying the desired properties of the Entity, we can check the EntityState property again; it now has the EntityState.Modified value. To persist the changes to the DB it is necessary to invoke the SaveChanges function of our context:

context.SaveChanges();

If we check the EntityState property once more, we will see that the value is back to EntityState.Unchanged.

Deleting Data
Another easy action is to delete an Entity. The first step to delete an Entity from the DB is to select it:

CLIENTES selectedClient = GetClientByNumber(15);
context.CLIENTES.DeleteObject(selectedClient);

Before invoking the DeleteObject function, we can check EntityState, whose value must be EntityState.Unchanged. After deleting the object, the state changes to EntityState.Deleted. To commit the action we have to invoke the SaveChanges function. After that, the EntityState property will be EntityState.Detached.

Cascade
Entity Framework supports cascading updates and deletes, although I have never seen cascading updates. What is a cascade delete? A cascade delete is an action that deletes all the objects related to the object we want to delete. This option can be established in the DB manager, or in the EF model designer.

For example, consider a given (1-N) relation between clients and requests. The common situation would be to only allow deleting those clients who have no requests. If we select the relation between both entities and press the second mouse button, we can see the relation's properties panel. This grid shows the relation, indicating the master table (Clients) and the end point (Cabecera, or Requests). The “End 1 OnDelete” property indicates the action to take when an Entity from the master is deleted. There are two options:

- None: no action will be taken; that is, if an Entity has detail entities it cannot be deleted.
- Cascade: all the entities related to the master Entity will be deleted.

If we enable cascade delete on a relation and invoke the DeleteObject function of the set, we can observe that all the related objects show an EntityState.Deleted state. As with an update, insert or ordinary delete, the data will not be committed until we commit the changes with the SaveChanges function.

Finally
In this chapter we have seen how to update an Entity, how to delete an Entity, and how to implement cascade deleting through EF. In the next chapters we will see how to query the DB data.
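Putting the fragments above together, here is a rough end-to-end sketch of the update-then-delete flow. It is a guess at the surrounding plumbing, not the author's actual project: the ModelContainer context name and the key property Number are assumptions, and EMPLEADOS/CLIENTES stand in for the designer-generated entity types.

using System;
using System.Linq;

class UpdateDeleteSketch
{
    static void Main()
    {
        // ModelContainer is an assumed name for the designer-generated ObjectContext.
        using (var context = new ModelContainer())
        {
            // Update: load, modify, persist.
            EMPLEADOS emp = context.EMPLEADOS.First(e => e.Number == 2); // EntityState.Unchanged
            emp.Name = "a";
            emp.Phone = "2";
            emp.Mail = "aa";                                             // EntityState.Modified
            context.SaveChanges();                                       // back to EntityState.Unchanged

            // Delete: load, mark for deletion, persist.
            CLIENTES selectedClient = context.CLIENTES.First(c => c.Number == 15);
            context.CLIENTES.DeleteObject(selectedClient);               // EntityState.Deleted
            context.SaveChanges();                                       // EntityState.Detached
        }
    }
}

Note that each SaveChanges call is what actually moves the entity out of the Modified or Deleted state; until it runs, nothing has been written to the database.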

    Read the article

  • WebCenter Customer Spotlight: Alberta Agriculture and Rural Development

    - by me
Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

Solution Summary
Alberta Agriculture and Rural Development is a government ministry that works with producers and consumers to create a strong, competitive, and sustainable agriculture and food industry in the province of Alberta, Canada. The primary business challenge faced by the ministry was managing the rapid growth of its information. It needed a system that would work across the ministry's 22 different divisions and deliver an improved and more efficient experience for desktop, Web and mobile users, while addressing its regulatory compliance needs as part of the Canadian government. The ministry implemented a centralized Enterprise Content Management solution based on Oracle WebCenter Content and developed a strong, repeatable information life cycle management methodology across all 22 divisions and agencies. With the implemented solution, Alberta Agriculture and Rural Development centrally manages over 20 million documents for 22 divisions and agencies, and has improved the time required to find records, the reliability of information, the speed and accuracy of reporting, and data security.

Company Overview
Alberta Agriculture and Rural Development is a government ministry that works with producers and consumers to create a strong, competitive, and sustainable agriculture and food industry in the province of Alberta, Canada.

Business Challenges
The business users were overwhelmed by growth in documents (over 20 million files across 22 divisions and agencies) and it was difficult to find and manage documents and versions. There was a strong need for a personalized, easy-to-use, secure and dependable method of managing and consuming content via desktop, Web, and mobile, while improving efficiency and maintaining regulatory compliance by removing the risk of non-uniform approaches to retention and disposition.

Solution Deployed
As a first step, Alberta Agriculture and Rural Development developed a business case with clearly defined business drivers:

- Reduce time required to find records
- Locate “lost” records
- Capture knowledge lost through attrition
- Increase the ease of retrieval
- Reduce personal copies
- Increase reliability of information
- Improve speed and accuracy of reporting
- Improve data security

The ministry implemented a centralized Enterprise Content Management solution based on Oracle WebCenter Content, using an incremental implementation approach aligned with its divisional and agency structure, which allowed continuous process improvement. This led to a very strong and repeatable information life cycle management methodology across all 22 divisions and agencies.

Business Results
Alberta Agriculture and Rural Development achieved impressive business results:

- Centrally managing over 20 million files for 22 divisions and agencies
- A federated model to manage documents in SharePoint and other applications
- Records management for both paper and electronic records
- Reduced time required to find records
- Increased ease of retrieval
- Increased reliability of information
- Improved speed and accuracy of reporting
- Improved data security

Additional Information
Oracle Open World 2012 Presentation
Oracle WebCenter Content

    Read the article

  • I didn't mean to become a database developer, but now I am. Should I stop or try to get better?

    - by pretlow majette
20 years ago I couldn't afford even a cheap POS program when I opened my first surf shop in the Virgin Islands. I bought a copy of Paradox (remember that?) in 1990 and spent months in a back room scratching out a POS application. Over many iterations, including a switch to Access (2000)/SQL Server (2003), I built a POS and back-office solution that runs four stores with multiple cash registers, a warehouse and an office. Until recently, all my stores were connected to the same LAN (in a small shopping center) and performance wasn't an issue. Now that we've opened a location in the States, that's changed. Connecting to my local server via the internet has slowed that location's application to a crawl. This is partly due to the slow and crappy DSL service we have in the Virgin Islands, and partly due to my less-than-professional code and SQL. With other far-away stores in the works, I need a better solution. I like my application. My staff knows it well, and I'm not inclined to take on the expense of a proper commercial solution. So where does that leave me? I should probably host my SQL database online to sidestep the slow DSL here. I think I can handle cleaning up my SQL queries to speed that up a bit. What about Access? My version seems so old, but I don't like the newer versions with the 'ribbon'. There are so many options... Should I be learning Visual Studio with an eye on moving completely to the web? Will my VBA skills help me at all there? I don't have the luxury of a year at the keyboard to figure it out anymore. What about DotNetNuke, SharePoint, or LightSwitch? They all seem like possibilities, but even understanding their capabilities is daunting. I'm pretty deep into it, but maybe I should bail and hire a consultant or programmer. That sounds expensive tho, and there's no guarantee there either... Any advice would be greatly appreciated. Or, if anybody is interested in buying a small chain of surf shops...

    Read the article

  • Link instead of Attaching

    - by Daniel Moth
With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account; I don’t even think about storage), bad behaviors are encouraged, such as liberally attaching Office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share, etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order:

- Better review. It allows multiple recipients to review the file, and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, and then send it back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients.
- Always up to date. The attachment becomes a fork instead of an always-up-to-date document. For example, you send the email on Thursday and I only open it on Tuesday: between those days you could have made updates that I am now missing because you decided to share an attachment instead of a link.
- Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link, which I have bookmarked in my browser or in my collection of links in OneNote, or from the recent/pinned links of the Office app on my taskbar, etc.
- Can control access. If someone accidentally or naively forwards your link to someone outside your group/org whom you'd prefer not to have access to it, the location of the document can be protected with specific access control.
- Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached; instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further action.
- Enable discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document.
- Save on storage. This doesn't apply to me given my opening statement, but if your company does have such limitations, attaching files eats up storage in all recipients' accounts, gets "lost" when those people archive email, and is lost completely at some point if they follow the company retention policy.

Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article
