Search Results

Search found 5568 results on 223 pages for 'dependency analysis'.


  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise's bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and, most recently, Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As data in your applications grows, it's the database that usually becomes the bottleneck. It's hard to scale a relational database, so the preferred approach for large-scale applications is to create separate databases for writes and reads, referred to as the transactional database and the reporting database. Though there are tools and techniques that let you create a snapshot of your transactional database for reporting purposes, they sometimes don't quite fit the reporting requirements of an enterprise. Those requirements typically include data analytics, an effective schema (so an information worker can self-service), historical data, and better performance (flat data, no joins). This is where the need for a data warehouse, or an OLAP system, arises. A key point to remember is that a data warehouse is mostly a relational database: it's built on the same concepts of tables, rows, columns, primary keys, foreign keys, and so on.

    Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are three major areas:

    a) The OLTP system should be capable of tracking its changes, as all of these changes need to go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting; a report in this context could be how many customers, broken down by geography, moved from silver to gold. In data warehouse terminology this process is called Change Data Capture. Quite a few systems leverage database triggers to move these changes to corresponding tracking tables, and some databases provide out-of-the-box features for this - SQL Server 2008, for example, offers Change Data Capture and Change Tracking for addressing such requirements (a sketch follows after this list).

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system and loads them into the data warehouse. There are many tools that can fill this gap - SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move those changes to the warehouse. The question that remains is how the warehouse will record these changes. In the data warehouse arena this is usually covered by something called a Slowly Changing Dimension (SCD).
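    To make point (a) concrete, here is a minimal sketch (not from the original post) of enabling SQL Server 2008 Change Data Capture; the database and table names are assumptions for illustration:

        USE MyOLTPDatabase;  -- hypothetical database name
        GO
        -- Enable CDC at the database level (requires sysadmin).
        EXEC sys.sp_cdc_enable_db;
        GO
        -- Track changes to a hypothetical dbo.Customer table; SQL Server
        -- then records inserts, updates and deletes in a change table.
        EXEC sys.sp_cdc_enable_table
            @source_schema = N'dbo',
            @source_name   = N'Customer',
            @role_name     = NULL;  -- NULL: no gating role required to read changes

    A downstream batch process, such as an SSIS package as in point (b), can then read the captured rows and load them into the warehouse.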
    While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. Because this creates multiple records for a given entity, data warehouses prefer to have their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses: Star Schema or Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one) that is easy to understand and use, easy to query, and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. For example, if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table containing the dates between which that sale was made. These agent and time tables are called dimensions, and they provide context to the numbers stored in fact tables. The structure of a fact at the center surrounded by dimensions is called a star schema (see the sketch below); a similar structure in which the dimension tables are normalized is called a snowflake schema.

    This relational structure of facts and dimensions serves as input for another analysis structure called a Cube. Though physically a cube is a special structure supported by commercial products like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define the content; facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy: a Product may be broken into categories, and categories in turn into individual items. Category and Items are referred to as Levels, their constituents as Members, and the overall structure as a Hierarchy. Measures are rolled up as per the dimensional hierarchy, and these rolled-up measures are called Aggregates. This may seem like an overwhelming vocabulary to deal with, but don't worry - it will sink in as you start working with cubes.
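    As an illustration of the star schema and surrogate keys just described, here is a minimal T-SQL sketch (not from the original post; all names are assumptions, and the ValidFrom/ValidTo pair hints at how an SCD Type 2 dimension keeps history):

        -- Dimension with a surrogate key and SCD Type 2 history columns.
        CREATE TABLE DimAgent (
            AgentKey    INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
            AgentId     INT NOT NULL,                   -- business key from the OLTP system
            AgentName   NVARCHAR(100) NOT NULL,
            ValidFrom   DATE NOT NULL,                  -- SCD Type 2: when this version became current
            ValidTo     DATE NULL                       -- NULL = current version
        );

        CREATE TABLE DimDate (
            DateKey     INT PRIMARY KEY,                -- e.g. 20100519
            FullDate    DATE NOT NULL
        );

        -- Fact table: the measure plus foreign keys to the dimensions.
        CREATE TABLE FactSales (
            AgentKey    INT NOT NULL REFERENCES DimAgent(AgentKey),
            DateKey     INT NOT NULL REFERENCES DimDate(DateKey),
            SalesAmount DECIMAL(18,2) NOT NULL          -- the number to slice and dice
        );

    A report then joins the fact to its dimensions and aggregates the measure, which is exactly the "slice and dice" described above.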
    Let's look at a few other terms we run into while talking about data warehouses. ODS, or Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can't afford to miss a single transaction in their report. Then there is another set of users - mostly senior executives interested in trending, mining, forecasting, strategizing, etc. - who typically don't care how current the data is and don't care about one specific transaction. This is where an ODS comes in handy. An ODS can use the same star schema and OLAP cubes we saw earlier; the difference is that the data inside an ODS is short-lived (a few months) and the ODS syncs with the OLTP system every few minutes, while the data warehouse syncs with the ODS daily or weekly, depending on business drivers.

    Data marts are another frequently discussed topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage and track, so they are often scaled down into data marts, each supporting a specific segment of the business such as sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts: some experts prefer having data marts along with a central data warehouse, where the warehouse acts as an information staging and distribution hub with data marts as spokes connected via data feeds serving summarized data; others eliminate the central data warehouse, citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • LightDM will not start after stopping it

    - by Sweeters
    I am running Ubuntu 11.10 "Oneiric Ocelot", and in trying to install the NVIDIA CUDA developer drivers I switched to a virtual terminal (Ctrl-Alt-F5) and stopped lightdm (the installation required that no X server instance be running) with sudo service lightdm stop. Re-starting lightdm with sudo service lightdm start did not work: a couple of * Starting [...] lines were displayed, but the process hung. (I do not remember at which point, but I think it was * Starting System V runlevel compatibility.) I manually rebooted my laptop, and ever since, booting seems to hang, usually around the * Starting anac(h)ronistic cron [OK] log line (not consistently at that point, though). From that point on, I seem to be able to interact with my system only through a tty session (Ctrl-Alt-F1).

    I've tried purging and reinstalling both lightdm and gdm, as well as selecting each as the default display manager (through sudo dpkg-reconfigure [lightdm / gdm] or by manually editing /etc/X11/default-display-manager), through both apt-get and aptitude (which shouldn't make a difference anyway), after updating the packages, but the problem persists. Some of the responses I'm getting are the following:

    After running sudo dpkg-reconfigure lightdm (but not ... gdm) I get the following message:

        dpkg-maintscript-helper: warning: environment variable DPKG_MAINTSCRIPT_NAME missing
        dpkg-maintscript-helper: warning: environment variable DPKG_MAINTSCRIPT_PACKAGE missing

    After trying sudo service lightdm start or sudo start lightdm I get to see the boot loading screen again, but nothing changes. If I go back to the tty shell I see lightdm start/running, process <num>, but ps -e | grep lightdm gives no output.

    After trying sudo service gdm start or sudo start gdm I get the gdm start/running, process <num> message, and gdm-binary is supposedly an active process, but all that happens is that the screen blinks a couple of times and nothing else.

    Other candidate solutions I found on the web included running startx, but when I try that I get an error output [...] Fatal server error: no screens found [...]. Moreover, I made sure that lightdm-gtk-greeter is installed, but that did not help either. Please excuse my not including complete outputs/logs; I am writing this post from another computer and it's hard to copy the complete logs manually. Also, I've seen several posts about similar problems, but either there was no fix, or the suggested one did not work for me. In closing: please help! I very much hope to avoid re-installing Ubuntu from scratch! :) Alex

    @mosi I did not manage to fix the NVIDIA kernel driver as per your instructions. I should perhaps mention that I'm on a Dell XPS15 laptop with an NVIDIA Optimus graphics card, and that I have bumblebee installed (which installs NVIDIA drivers during its installation, I believe). Issuing the mentioned commands I get the following:

        ~$ uname -r
        3.0.0-12-generic
        ~$ lsmod | grep -i nvidia
        nvidia 11713772 0
        ~$ dmesg | grep -i nvidia
        [ 8.980041] nvidia: module license 'NVIDIA' taints kernel.
        [ 9.354860] nvidia 0000:01:00.0: power state changed by ACPI to D0
        [ 9.354864] nvidia 0000:01:00.0: power state changed by ACPI to D0
        [ 9.354868] nvidia 0000:01:00.0: enabling device (0006 -> 0007)
        [ 9.354873] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [ 9.354879] nvidia 0000:01:00.0: setting latency timer to 64
        [ 9.355052] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 280.13 Wed Jul 27 16:53:56 PDT 2011

    Also, running aptitude search nvidia gives me the following:

        p nvidia-173                 - NVIDIA binary Xorg driver, kernel module a
        p nvidia-173-dev             - NVIDIA binary Xorg driver development file
        p nvidia-173-updates         - NVIDIA binary Xorg driver, kernel module a
        p nvidia-173-updates-dev     - NVIDIA binary Xorg driver development file
        p nvidia-96                  - NVIDIA binary Xorg driver, kernel module a
        p nvidia-96-dev              - NVIDIA binary Xorg driver development file
        p nvidia-96-updates          - NVIDIA binary Xorg driver, kernel module a
        p nvidia-96-updates-dev      - NVIDIA binary Xorg driver development file
        p nvidia-cg-toolkit          - Cg Toolkit - GPU Shader Authoring Language
        p nvidia-common              - Find obsolete NVIDIA drivers
        i nvidia-current             - NVIDIA binary Xorg driver, kernel module a
        p nvidia-current-dev         - NVIDIA binary Xorg driver development file
        c nvidia-current-updates     - NVIDIA binary Xorg driver, kernel module a
        p nvidia-current-updates-dev - NVIDIA binary Xorg driver development file
        i nvidia-settings            - Tool of configuring the NVIDIA graphics dr
        p nvidia-settings-updates    - Tool of configuring the NVIDIA graphics dr
        v nvidia-va-driver           -
        v nvidia-va-driver           -

    I've tried manually installing (sudo aptitude install <package>) the packages nvidia-common and nvidia-settings-updates, but to no avail. For example, sudo aptitude install nvidia-settings-updates returns the following log:

        Reading package lists... Building dependency tree... Reading state information...
        Reading extended state information... Initializing package states...
        Writing extended state information...
        No packages will be installed, upgraded, or removed.
        0 packages upgraded, 0 newly installed, 0 to remove and 83 not upgraded.
        Need to get 0 B of archives. After unpacking 0 B will be used.
        Writing extended state information...
        Reading package lists... Building dependency tree... Reading state information...
        Reading extended state information... Initializing package states...
        Writing extended state information...

    The same happens with the Linux headers (i.e. I cannot seem to be able to install linux-headers-3.0.0-12-generic).
    The output of aptitude search linux-headers is as follows:

        v linux-headers                   -
        v linux-headers                   -
        v linux-headers-2.6               -
        i linux-headers-2.6.38-11         - Header files related to Linux kernel versi
        i linux-headers-2.6.38-11-generic - Linux kernel headers for version 2.6.38 on
        i A linux-headers-2.6.38-8        - Header files related to Linux kernel versi
        i A linux-headers-2.6.38-8-generic - Linux kernel headers for version 2.6.38 on
        v linux-headers-3                 -
        v linux-headers-3.0               -
        v linux-headers-3.0               -
        i A linux-headers-3.0.0-12        - Header files related to Linux kernel versi
        p linux-headers-3.0.0-12-generic  - Linux kernel headers for version 3.0.0 on
        p linux-headers-3.0.0-12-generic- - Linux kernel headers for version 3.0.0 on
        p linux-headers-3.0.0-12-server   - Linux kernel headers for version 3.0.0 on
        p linux-headers-3.0.0-12-virtual  - Linux kernel headers for version 3.0.0 on
        p linux-headers-generic           - Generic Linux kernel headers
        p linux-headers-generic-pae       - Generic Linux kernel headers
        v linux-headers-lbm               -
        v linux-headers-lbm               -
        v linux-headers-lbm-2.6           -
        v linux-headers-lbm-2.6           -
        p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo
        p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo
        p linux-headers-lbm-3.0.0-12-serv - Header files related to linux-backports-mo
        p linux-headers-server            - Linux kernel headers on Server Equipment.
        p linux-headers-virtual           - Linux kernel headers for virtual machines

    @heartsmagic I did try purging and reinstalling the NVIDIA driver packages, but it did not seem to make a difference. My xorg.conf file contains the following:

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Unknown"
            HorizSync 28.0 - 33.0
            VertRefresh 43.0 - 72.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    Read the article

  • ubuntu 12.04 python problem or?

    - by Trki
    Hi, I have been trying to fix this for a long time without success. When I open my zsh terminal I get this error (the terminal still works, but the error appears):

        Welcome to the world of tomorrow!
        virtualenvwrapper_run_hook:12: permission denied:
        virtualenvwrapper.sh: There was a problem running the initialization hooks.
        If Python could not import the module virtualenvwrapper.hook_loader,
        check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON=
        and that PATH is set properly.

    I tried a few things but don't know how to solve it. While searching, I found that I should post the output of sudo dpkg --configure -a:

        Setting up python-pip (1.0-1build1) ...
        /var/lib/dpkg/info/python-pip.postinst: 6: /var/lib/dpkg/info/python-pip.postinst: pycompile: not found
        dpkg: error processing python-pip (--configure): subprocess installed post-installation script returned error exit status 127
        Setting up libc-dev-bin (2.15-0ubuntu10.5) ...
        Setting up gnome-control-center-data (1:3.4.2-0ubuntu0.13) ...
        Setting up linux-libc-dev (3.2.0-56.86) ...
        Setting up python-virtualenv (1.7.1.2-1) ...
        /var/lib/dpkg/info/python-virtualenv.postinst: 6: /var/lib/dpkg/info/python-virtualenv.postinst: pycompile: not found
        dpkg: error processing python-virtualenv (--configure): subprocess installed post-installation script returned error exit status 127
        Setting up libglib2.0-0 (2.32.4-0ubuntu1) ...
        Setting up libglib2.0-0:i386 (2.32.4-0ubuntu1) ...
        Setting up gimp (2.6.12-1ubuntu1.2) ...
        /var/lib/dpkg/info/gimp.postinst: 11: /var/lib/dpkg/info/gimp.postinst: pycompile: not found
        dpkg: error processing gimp (--configure): subprocess installed post-installation script returned error exit status 127
        Setting up libpolkit-gobject-1-0 (0.104-1ubuntu1.1) ...
        Setting up libgnome-control-center1 (1:3.4.2-0ubuntu0.13) ...
        Setting up libnm-util2 (0.9.4.0-0ubuntu4.3) ...
        Setting up libc6-dev (2.15-0ubuntu10.5) ...
        Setting up libpulse-mainloop-glib0 (1:1.1-0ubuntu15.4) ...
        dpkg: dependency problems prevent configuration of virtualenvwrapper:
         virtualenvwrapper depends on python-virtualenv; however:
          Package python-virtualenv is not configured yet.
        dpkg: error processing virtualenvwrapper (--configure): dependency problems - leaving unconfigured
        Setting up libpolkit-agent-1-0 (0.104-1ubuntu1.1) ...
        Setting up libupower-glib1 (0.9.15-3git1ubuntu0.1) ...
        Setting up libaccountsservice0 (0.6.15-2ubuntu9.6.1) ...
        Setting up libpolkit-backend-1-0 (0.104-1ubuntu1.1) ...
        Setting up libglib2.0-bin (2.32.4-0ubuntu1) ...
        Setting up libnm-glib4 (0.9.4.0-0ubuntu4.3) ...
        Setting up policykit-1 (0.104-1ubuntu1.1) ...
        Setting up gnome-settings-daemon (3.4.2-0ubuntu0.6.4) ...
        Setting up accountsservice (0.6.15-2ubuntu9.6.1) ...
        dpkg: error processing ubuntu-system-service (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing: python-pip python-virtualenv gimp virtualenvwrapper ubuntu-system-service

    Also:
        python --version
        zsh: command not found: python

    Part of my ~/.zshrc:

        # python virtual env wrapper
        if [ -f ~/.local/bin/virtualenvwrapper.sh ]; then
            export WORKON_HOME=~/.virtualenvs
            source ~/.local/bin/virtualenvwrapper.sh
            plugins=("${plugins[@]}" virtualenvwrapper)
        fi
        # pythonbrew
        [[ -s ~/.pythonbrew/etc/bashrc ]] && source ~/.pythonbrew/etc/bashrc

    Part of the zsh -xv output:

        #
        # Invoke the initialization functions
        #
        virtualenvwrapper_initialize
        +/home/trki/.local/bin/virtualenvwrapper.sh:1179> virtualenvwrapper_initialize
        +virtualenvwrapper_initialize:1> virtualenvwrapper_derive_workon_home
        +virtualenvwrapper_derive_workon_home:1> typeset 'workon_home_dir=/home/trki/.virtualenvs'
        +virtualenvwrapper_derive_workon_home:5> [ /home/trki/.virtualenvs '=' '' ']'
        +virtualenvwrapper_derive_workon_home:12> echo /home/trki/.virtualenvs
        +virtualenvwrapper_derive_workon_home:12> unset GREP_OPTIONS
        +virtualenvwrapper_derive_workon_home:12> grep '^[^/~]'
        +virtualenvwrapper_derive_workon_home:21> echo /home/trki/.virtualenvs
        +virtualenvwrapper_derive_workon_home:21> unset GREP_OPTIONS
        +virtualenvwrapper_derive_workon_home:21> egrep '([\$~]|//)'
        +virtualenvwrapper_derive_workon_home:30> echo /home/trki/.virtualenvs
        +virtualenvwrapper_derive_workon_home:31> return 0
        +virtualenvwrapper_initialize:1> export 'WORKON_HOME=/home/trki/.virtualenvs'
        +virtualenvwrapper_initialize:3> virtualenvwrapper_verify_workon_home -q
        +virtualenvwrapper_verify_workon_home:1> RC=0
        +virtualenvwrapper_verify_workon_home:2> [ ! -d /home/trki/.virtualenvs/ ']'
        +virtualenvwrapper_verify_workon_home:11> return 0
        +virtualenvwrapper_initialize:6> [ /home/trki/.virtualenvs '=' '' ']'
        +virtualenvwrapper_initialize:11> virtualenvwrapper_run_hook initialize
        +virtualenvwrapper_run_hook:1> typeset hook_script
        +virtualenvwrapper_run_hook:2> typeset result
        +virtualenvwrapper_run_hook:4> hook_script=+virtualenvwrapper_run_hook:4> virtualenvwrapper_tempfile initialize-hook
        +virtualenvwrapper_tempfile:2> typeset 'suffix=initialize-hook'
        +virtualenvwrapper_tempfile:3> typeset file
        +virtualenvwrapper_tempfile:5> file=+virtualenvwrapper_tempfile:5> virtualenvwrapper_mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX
        +virtualenvwrapper_mktemp:1> mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX
        +virtualenvwrapper_tempfile:5> file=/tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7
        +virtualenvwrapper_tempfile:6> [ 0 -ne 0 ']'
        +virtualenvwrapper_tempfile:6> [ -z /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 ']'
        +virtualenvwrapper_tempfile:6> [ ! -f /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 ']'
        +virtualenvwrapper_tempfile:11> echo /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7
        +virtualenvwrapper_tempfile:12> return 0
        +virtualenvwrapper_run_hook:4> hook_script=/tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7
        +virtualenvwrapper_run_hook:11> cd /home/trki/.virtualenvs
        +cd:1> [[ x/home/trki/.virtualenvs == x... ]]
        +cd:3> [[ x/home/trki/.virtualenvs == x.... ]]
        +cd:5> [[ x/home/trki/.virtualenvs == x..... ]]
        +cd:7> [[ x/home/trki/.virtualenvs == x......
        ]]
        +cd:9> [ -d /home/trki/.autoenv ']'
        +cd:13> cd /home/trki/.virtualenvs
        +virtualenvwrapper_run_hook:12> '' -m virtualenvwrapper.hook_loader --script /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 initialize
        virtualenvwrapper_run_hook:12: permission denied:
        +virtualenvwrapper_run_hook:15> result=126
        +virtualenvwrapper_run_hook:17> [ 126 -eq 0 ']'
        +virtualenvwrapper_run_hook:27> [ initialize '=' initialize ']'
        +virtualenvwrapper_run_hook:29> cat -
        virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON= and that PATH is set properly.
        +virtualenvwrapper_run_hook:38> rm -f /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7
        +virtualenvwrapper_run_hook:39> return 126
        +virtualenvwrapper_initialize:13> virtualenvwrapper_setup_tab_completion
        +virtualenvwrapper_setup_tab_completion:1> [ -n '' ']'
        +virtualenvwrapper_setup_tab_completion:20> [ -n 4.3.17 ']'
        +virtualenvwrapper_setup_tab_completion:30> compctl -K _virtualenvs workon rmvirtualenv cpvirtualenv showvirtualenv
        +virtualenvwrapper_setup_tab_completion:31> compctl -K _cdvirtualenv_complete cdvirtualenv
        +virtualenvwrapper_setup_tab_completion:32> compctl -K _cdsitepackages_complete cdsitepackages
        +virtualenvwrapper_initialize:15> return 0
        +/home/trki/.zshrc:17> plugins=( git python django symfony2 zsh-syntax-highlighting composer history-substring-search virtualenvwrapper )
        # pythonbrew
        [[ -s ~/.pythonbrew/etc/bashrc ]] && source ~/.pythonbrew/etc/bashrc
        +/home/trki/.zshrc:21> [[ -s /home/trki/.pythonbrew/etc/bashrc ]]

    Also, when I try to open Ubuntu Software Center, absolutely nothing happens. No idea what to do now.

    Read the article

  • New Feature in ODI 11.1.1.6: ODI for Big Data

    - by Julien Testut
    By Ananth Tirupattur

    Starting with Oracle Data Integrator 11.1.1.6.0, ODI offers a solution for processing Big Data. This post provides an overview of the feature. With all the buzz around Big Data, and before getting into the details of ODI for Big Data, I will provide a brief introduction to Big Data and the Oracle solution for Big Data.

    So, what is Big Data? Big data includes structured data (data from relational data stores, XML data stores), semi-structured data (data from weblogs) and unstructured data (data from text blobs, images). Traditionally, business decisions are based on information gathered from transactional data. For example, transactional data from CRM applications is fed to a decision system for analysis and decision making, and products such as ODI play a key role in enabling decision systems. However, with the emergence of massive amounts of semi-structured and unstructured data, it is important for decision systems to include them in the analysis to achieve better decision-making capability. While there is an abundance of opportunities for businesses to gain competitive advantage, processing Big Data has its challenges: the volume of data, the velocity of data (the high rate at which data is generated), and the variety of data. In order to address these challenges and convert them into opportunities, we need an appropriate framework, platform and the right set of tools.

    Hadoop is an open source framework - highly scalable and fault tolerant - for storing and processing large amounts of data. Hadoop provides two key services: distributed and reliable storage, called the Hadoop Distributed File System (HDFS), and a framework for parallel data processing called Map-Reduce. Innovations in Hadoop and its related technologies continue to evolve rapidly, so it is highly recommended to follow information on the web to keep up with the latest developments.

    Oracle's vision is to provide a comprehensive solution to address the challenges posed by Big Data, providing the necessary hardware, software and tools for processing it. The Oracle solution includes: Big Data Appliance; Oracle NoSQL Database; the Cloudera distribution for Hadoop; Oracle R Enterprise (R is a statistical package which is very popular among data scientists); the ODI solution for Big Data; and Oracle Loader for Hadoop, for loading data from Hadoop to Oracle. Further details can be found here: http://www.oracle.com/us/products/database/big-data-appliance/overview/index.html

    ODI Solution for Big Data: ODI's goal is to minimize the need to understand the complexity of the Hadoop framework and to simplify the adoption of Big Data processing in an enterprise. ODI provides the capabilities for an integrated architecture for processing Big Data: loading data into Hadoop, processing data in Hadoop, and loading data from Hadoop into Oracle. ODI is expanding its support for Big Data by providing the following out-of-the-box Knowledge Modules (KMs):
    IKM File to Hive (LOAD DATA) - loads unstructured data from files (local file system or HDFS) into Hive
    IKM Hive Control Append - transforms and validates structured data on Hive
    IKM Hive Transform - transforms unstructured data on Hive
    IKM File/Hive to Oracle (OLH) - loads processed data in Hive to Oracle
    RKM Hive - reverse-engineers Hive tables to generate models

    Using the loading KM you can map files (local and HDFS files) to the corresponding Hive tables. For example, you can map weblog files categorized by date into a correspondingly partitioned Hive table schema.

    Using the Hive Control Append KM you can validate and transform data in Hive. In the example below, two source Hive tables are joined and mapped to a target Hive table.

    The Hive Transform KM facilitates processing of semi-structured data in Hive. In the example below, the data from a weblog is processed using a Perl script and mapped to a target Hive table.

    Using the Oracle Loader for Hadoop (OLH) KM you can load data from a Hive table or HDFS to a corresponding table in Oracle. OLH is available as a standalone product; ODI greatly enhances OLH by generating the configuration and mapping files for OLH based on the configuration provided in the interface and the KM options, and it seamlessly invokes OLH when executing the scenario. In the example below, an HDFS file is mapped to a table in Oracle.

    Development and Deployment: The following diagram illustrates the development and deployment of the ODI solution for Big Data. Using ODI Studio on your development machine, create and develop the ODI solution for processing Big Data by connecting to a MySQL DB or Oracle database on a BDA machine or Hadoop cluster. Schedule the ODI scenarios to be executed on the ODI agent deployed on the BDA machine or Hadoop cluster. The ODI solution for Big Data provides several exciting new capabilities to facilitate the adoption of Big Data in an enterprise. You can find more information about the Oracle Big Data connectors on OTN.
You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #052

    - by Pinal Dave
    Let us continue with the final episode of the Memory Lane series. Here is a list of selected articles from SQLAuthority.com across all these years. Instead of listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2007

    Set Server Level FILLFACTOR Using T-SQL Script
    Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page during index creation or alteration. fillfactor must be an integer value from 1 to 100; the server-wide default is 0, which is equivalent to 100.

    Limitation of Online Index Rebuild Operation
    Online operation means that while the operation is happening the database remains in its normal operational condition; the processes participating in the online operation do not require exclusive access to the database.

    Get Permissions of My Username / Userlogin on Server / Database
    A few days ago, I was invited to one of the largest database companies. I was asked to review a database schema and propose changes to it. A special username, or user login, was created for me so I could review their database. I was very interested to know what permissions I had been assigned at the server level and database level, and I did not feel like asking the Sr. DBA about them.

    Simple Example of WHILE Loop With CONTINUE and BREAK Keywords
    This question is one of those questions which is very simple and most users get correct; however, a few users find it confusing the first time. I have tried to explain the usage of a simple WHILE loop in the first example. The BREAK keyword exits the WHILE loop and moves control to the next statement after the loop; the CONTINUE keyword skips all the statements after it and sends control back to the first statement of the loop.

    Forced Parameterization and Simple Parameterization – T-SQL and SSMS
    When the PARAMETERIZATION option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE or DELETE statement is converted to a parameter during query compilation. When the PARAMETERIZATION database option is set to SIMPLE, the SQL Server query optimizer may choose to parameterize the queries.

    2008

    Transaction and Local Variables – Swap Variables – Update All At Once Concept
    Summary: transactions have no effect on memory variables. When an UPDATE statement is applied to any table (physical or memory), all the updates are applied together when the statement is committed. First of all, I suggest you read the article listed above about the effect of transactions on local variables. As seen there, local variables are independent of any transaction effect.

    Simulate INNER JOIN using LEFT JOIN statement – Performance Analysis
    Just a day ago, while working with JOINs, I made one interesting observation, which prompted me to create the following example. Before we continue, let me make very clear that INNER JOIN should be used wherever it can be used, and simulating INNER JOIN using any other JOIN will degrade performance. If there is scope to convert any OUTER JOIN to an INNER JOIN, it should be done with priority.
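    As a minimal sketch of that simulation (the Orders and Customers tables are illustrative assumptions, not taken from the original article):

        -- Plain INNER JOIN: returns only orders with a matching customer.
        SELECT o.OrderID, c.CustomerName
        FROM Orders AS o
        INNER JOIN Customers AS c ON c.CustomerID = o.CustomerID;

        -- The same result simulated with LEFT JOIN by filtering out the
        -- unmatched rows; this typically produces a worse plan than INNER JOIN.
        SELECT o.OrderID, c.CustomerName
        FROM Orders AS o
        LEFT JOIN Customers AS c ON c.CustomerID = o.CustomerID
        WHERE c.CustomerID IS NOT NULL;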
    2009

    Introduction to Business Intelligence – Important Terms & Definitions
    Business intelligence (BI) is a broad category of application programs and technologies for gathering, storing, analyzing, and providing access to data from various data sources, thus providing enterprise users with reliable and timely information and analysis for improved decision making.

    Difference Between Candidate Keys and Primary Key
    Candidate Key – a Candidate Key can be any column or combination of columns that can qualify as a unique key in the database. There can be multiple Candidate Keys in one table, and each Candidate Key can qualify as the Primary Key. Primary Key – a Primary Key is a column or combination of columns that uniquely identifies a record. Only one Candidate Key can be the Primary Key.

    2010

    Taking Multiple Backup of Database in Single Command – Mirrored Database Backup
    I recently had a very interesting experience. In one of my recent consultancy engagements, our client told me they were going to take a backup of the database and make a copy of it at the same time. I said that was surely possible if they used a mirror command. In addition, they told me that whenever they take two copies of the database, the size of the database is always reduced. This was not clear to me; I said it was not possible and asked them to show me the script.

    Corrupted Backup File and Unsuccessful Restore
    The CTO, who was also present at the location, got very upset with this situation. He then asked when the last successful restore test was done. As expected, the answer was NEVER - no successful restore test had ever been done. I was present during that time, and I could clearly see the stress, confusion, carelessness and anger around me. I did not appreciate the feeling, and I was pretty sure that no one there wanted that atmosphere either.

    2011

    TRACEWRITE – Wait Type – Wait Related to Buffer and Resolution
    SQL Trace is a SQL Server database engine technology which monitors specific events generated when various actions occur in the database engine. When any event is fired, it goes through various stages as well as various routes. One of the routes is the Trace I/O Provider, which sends data to its final destination either as a file or a rowset.

    DATEDIFF – Accuracy of Various Dateparts
    If you want accuracy in seconds, you need a different approach: the accurate method is to find the number of seconds first and then divide it by 60 to convert it to minutes.
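    A minimal sketch of the idea (the values are chosen to show the boundary-counting behavior):

        DECLARE @start DATETIME = '2011-01-01 10:00:59';
        DECLARE @end   DATETIME = '2011-01-01 10:01:00';

        -- DATEDIFF counts datepart boundaries crossed, not elapsed time,
        -- so this reports 1 minute even though only 1 second has passed.
        SELECT DATEDIFF(MINUTE, @start, @end) AS MinuteBoundaries;

        -- Compute seconds first, then convert: 0.016666 minutes.
        SELECT DATEDIFF(SECOND, @start, @end) / 60.0 AS ElapsedMinutes;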
    Dedicated Access Control for SQL Server Express Edition
    http://www.youtube.com/watch?v=1k00z82u4OI

    Book Signing at SQLPASS

    2012

    Who I Am And How I Got Here – True Story as Blog Post
    If there were a shortcut to success, I would want to know it. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn, and there is not enough time to learn everything we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server - the more the merrier.

    Vacation, Travel and Study – A New Concept
    Even those who have advanced degrees and went to college for years, or even decades, find studying hard. There is a difference between studying for a career and studying for a certification. At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting.

    Order By Numeric Values Formatted as String
    We have a table with a column containing alphanumeric data. The data always begins with an integer and ends with a string, and the business need is to order the data by the integer part. The problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example.
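    A minimal sketch of the problem and one common fix (the temp table and data are illustrative assumptions, not taken from the original article):

        CREATE TABLE #Items (Code VARCHAR(50));
        INSERT INTO #Items VALUES ('1-First'), ('2-Second'), ('12-Twelfth'), ('3-Third');

        -- String ordering: 1-First, 12-Twelfth, 2-Second, 3-Third.
        SELECT Code FROM #Items ORDER BY Code;

        -- Extract and cast the leading integer: 1, 2, 3, 12.
        -- PATINDEX finds the first non-digit character; appending 'X'
        -- guarantees a match even for an all-digit value.
        SELECT Code
        FROM #Items
        ORDER BY CAST(LEFT(Code, PATINDEX('%[^0-9]%', Code + 'X') - 1) AS INT);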
    Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video
    One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most developers have worked with SQL Server and know pretty much every error they face during development. However, they hardly ever install a fresh SQL Server. As the installation of SQL Server is a rare occasion - unless you are a DBA responsible for such instances - the errors faced during installation are pretty unfamiliar as well.
    http://www.youtube.com/watch?v=1k00z82u4OI
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • CodePlex Daily Summary for Wednesday, May 19, 2010

    CodePlex Daily Summary for Wednesday, May 19, 2010

    New Projects
    3FD - Framework For Fast Development: This is a C++ framework that provides a solid error handling structure, garbage collection, multi-threading and portability between compilers. The ...
    ali test project: test project
    Attribute Builder: The Attribute Builder builds an attribute from a lambda expression because it can.
    BDK0008: it is a food lovers website
    cgdigest: cg digest template for non-profit org
    Cokmez: Bilmuh announcement system
    Dot Game: It is a dot game that our Bangladeshi people used to play in their childhood and in their spare time.
    ESRI Javascript .NET Integration: Visual Studio project that shows how to integrate the Esri Javascript API with .NET
    Exchange 2010 RBAC Editor (RBAC GUI): Exchange 2010 RBAC Editor (RBAC GUI), developed in C# and using Powershell behind the scenes; an RBAC tool to simplify RBAC administration
    File Validator (Validador de Archivos): A component for validating files (txt, images, PDF, etc.); currently only the txt part is implemented, allow...
    Grip 09 Lab4: Grip
    jPageFlipper: This is a wonderful implementation of a page flipper entirely based on the HTML 5 <canvas> tag. It means that it can work in any browser that supports HT...
    Main project: Index bird families and associated species.
    Malware Analysis and Can Handler: MACH is a tool to organize and catalog your malware analysis canned responses, and to track the topic response lifecycle for forum experts.
    Perf Web: Performance team web site
    PiPiBugNet: PiPiBugNet is a brand-new open-source bug management system. Its code is built on the ASP.NET 2.0 platform in C#, and its UI is based on Ext JS, providing an excellent user experience.
    RemoteDesktop: integrated remote console, desktop and chat utility
    RuneScape emulation done right.: RuneScape emulator.
    Sandkasten: Sandkasten
    Silverlight Metro Theme: Metro Theme for Silverlight.
    Silverlight Stereoscopy: Stereoscopy with Silverlight.
    Twitivia: Twitivia is an online trivia service that runs through Twitter and is being used as an example set of projects. C#, MVC, Windows Services, Linq ...
    XPool: A simple school project.

    New Releases
    Dot Game: 'Dot Game' first release: This is the 'Dot Game' first release.
    DotNetNuke® Store: 02.01.35: What's new in this release? Bugs corrected: fixed a resource for the header in the Category list of the Store Admin module;
    added several test...
    ESRI Javascript .NET Integration: Map search results in a DataView: Visual Studio 2010 example showing how to pass Map results back to ASP.NET for use in a DataView.
    Exchange 2010 RBAC Editor (RBAC GUI): RBAC Editor: This binary is still beta (0.0.9.1) but in most cases it's very stable
    Extending C# editor - Outlining, classification: first revision: a couple of bugs have been eliminated, performance improvements
    Floe IRC Client: Floe IRC Client 2010-05 R6: Corrected a bug where text would be unexpectedly copied to the clipboard.
    Floe IRC Client: Floe IRC Client 2010-05 R7: Fixed a bug where text would show up in a query window with someone if they said something on a channel that you are both present on.
    Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.0.9 GA released: Today we have released the final version of Visifire v3.0.9, which contains the following enhancements: two new properties ActualAxisMin...
    Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.5.2 GA released: Today we have released the final version of Visifire v3.5.2, which contains the following enhancements: two new properties ActualAxisMinimum a...
    HB Batch Encoder Mk 2: HB Batch Encoder Mk2 v1.02: Added .mov support.
    jPageFlipper: jPageFlipper 0.9: This is an initial community preview of jPageFlipper. It's not ready for production usage but has almost all functionality implemented.
    linq.js - LINQ for JavaScript: ver 2.1.0.0: Add Class: Dictionary, Lookup, Grouping, OrderedEnumerable; Add Method: ToDictionary, MemoizeAll, Share, Let; Add Overload: ...
    Microsoft Research Biology Extension for Excel: MSR Biology Extension for Excel - M9: The M9 release includes the following updates to the previous release: import/export support from Excel for multiple file formats; bug fixes and ...
    Nifty CSharp Tools: Event Watcher: Event Watcher!
    Paint.NET Bulk Image Processor: Paint.NET Bulk Image Processor v1.0: This is the initial release of the Paint.NET Bulk Image Processor plugin. All feedback is welcome.
    PiPiBugNet: PiPiBugNet architecture design: PiPiBugNet architecture design; does not yet include the functional implementation
    RuneScape emulation done right.: rc0: Release candidate 0.
    Rx Contrib: V1.6: Adding a CCR queue as an adapter for the ReactiveQueue; credits go to Yuval Mazor, http://blogs.microsoft.co.il/blogs/yuvmaz/
    Silverlight Metro Theme: Silverlight Metro Theme Alpha 1: Silverlight Metro Theme Alpha 1
    Silverlight Stereoscopy: Silverlight Stereoscopy Alpha 1: Silverlight Stereoscopy Alpha 20100518
    Stratosphere: Stratosphere 1.0.6.0: Introduced support for batch put; introduced support for conditional updates and consistent read; added support for select conditions; brought t...
    VCC: Latest build, v2.1.30518.0: Automatic drop of latest build
    Video Downloader: Example Program - 1.1: Example program showing the features of the DLL and what can be achieved using it. For DLL version 1.1.
    Video Downloader: Version 1.1: Version 1.1. See the home page for usage and more information regarding new features.
    Please remember changes at YouTube can prevent this software from...
    WatchersNET.TagCloud: WatchersNET.TagCloud 01.06.00: What's new: new Tag mode: show tags from the Ventrian.com NewsArticles module; new Tag mode: show tags from the Ventrian.com SimpleGallery module; hyperlin...
    Windows Double Explorer: WDE v0.4: optimization; switch to new vst2010; viewer now closes by pressing escape; reorder tabs; send selected full names or short names via email (eye button...)

    Most Popular Projects
    Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), patterns & practices – Enterprise Library, Microsoft SQL Server Community & Samples, PHPExcel, ASP.NET

    Most Active Projects
    patterns & practices – Enterprise Library, Rawr, PHPExcel, GMap.NET - Great Maps for Windows Forms & Presentation, Customer Portal Accelerator for Microsoft Dynamics CRM, BlogEngine.NET, Windows Azure Command-line Tools for PHP Developers, CassiniDev - Cassini 3.5/4.0 Developers Edition, SQL Server PowerShell Extensions, Fluent Ribbon Control Suite

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I'm actually very excited about it. The term is Data Scientist, and since it's new, it's fairly undefined. I'll explain what I think it means, and why I'm excited about it.

    In general, I've found the term deals at its most basic with analyzing data. Of course, we all do that, and in that definition the term itself is redundant; there is no science that I know of that does not involve analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. A Data Scientist (as far as I can make out this early in the term's use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods, as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience many lack the strong mathematical skills needed to make these choices properly.

    I've divided the knowledge base for someone who would wear this title into three large segments. It remains to be seen whether a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree-level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas.

    Persistence
    The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage; the data is just identified and the cleansing is done in a later step. This area is where the professional would be able to tell whether a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods - or whether you should examine the stream of data without storing it in another system at all. This is an important decision: it's a foundational choice that deals not only with the expense of purchasing systems or using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also with the skill sets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company, but each of these sometimes has a certain bias towards making the decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to.

    Processing
    Once the decision is made to store the data, the next set of decisions is based around how to process it. An RDBMS scales well to a certain level, provides a high degree of ACID compliance, and offers a well-known set-based language to work with the data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage.
    In fact, in many cases - most of the ones I'm dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea: many data professionals simply pick a methodology (RDBMS with star schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods but with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one.

    Presentation
    This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at letting you create amazing graphics, but it's the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports; even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you're on the right track - that are only solvable when you understand why the statistical formula works the way it does. And there are many ways of presenting data. Sometimes all you need is a "yes" or "no" answer that can only come after heavy analysis work; in that case, a simple e-mail might be all the reporting you need. In other cases, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist.

    Why I'm excited
    I love this area of study. I like math, stats, and computing technologies, but it goes beyond that: I love what data can do and how it can help an organization. I've been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn't germane here; I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So watch this space. No, I'm not leaving Azure or distributed computing or Microsoft. In fact, I think I'm perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, that allow me to explore. And I'm happy to share what I learn along the way.

    Read the article

  • Using Sitecore RenderingContext Parameters as MVC controller action arguments

    - by Kyle Burns
    I have been working with the Technical Preview of Sitecore 6.6 on a project and have been, for the most part, happy with the way that Sitecore (which truly is an MVC implementation unto itself) has been expanded to support ASP.NET MVC. That said, getting up to speed with the combined platform has not been entirely without stumbles, and today I want to share one area where Sitecore could have really made things shine from the "it just works" perspective.

    A couple of days ago I was asked by a colleague about the usage of the "Parameters" field that is defined on Sitecore's Controller Rendering data template. Based on the standard way that Sitecore handles a field named Parameters, I was able to deduce that the field expected key/value pairs separated by the "&" character, but beyond that I wasn't sure, and I didn't see anything from a documentation perspective to guide me, so it was time to dig in and find out where the data in the field was made available.

    My first thought was that it would be really nice if Sitecore handled the parameters in this field consistently with the way that ASP.NET MVC handles the various parameter collections on the HttpRequest object and automatically mapped them to parameters of the executing action method. Being the hopeful sort, I configured a name/value pair on one of my renderings, added a parameter with a matching name to the controller action, and fired up the debugger to see... that the parameter was not populated.

    Having established that the field's value was not going to be presented to me the way I had hoped, my next working assumption was that Sitecore would handle this field consistently with other similar data and plug it into some ambient object that I could reference from within the controller method. After a considerable amount of guessing, testing, and cracking code open with Redgate's Reflector (a must-have companion to Sitecore documentation), I found that the most direct way to access the parameter was through the ambient RenderingContext object, using code similar to:

        string myArgument = string.Empty;
        var rc = Sitecore.Mvc.Presentation.RenderingContext.CurrentOrNull;
        if (rc != null)
        {
            var parms = rc.Rendering.Parameters;
            myArgument = parms["myArgument"];
        }

    At this point, we know how this field is used out of the box by Sitecore and can provide information from Sitecore's Content Editor that will be available when the controller action is executing, but it feels a little dirty. In order to properly test the action method I would have to do a lot of setup work and possibly use an isolation framework such as Pex and Moles to get at a value that my action method depends on. Notice that I said my method is dependent upon the value, but in order to meet that dependency I've accepted another dependency, on Sitecore's RenderingContext. I'm a big believer in, when possible, ensuring that any piece of code explicitly advertises its dependencies using the method signature, so I found myself still wanting this to work the same as if the parameters were in the request route, query string, or form: by being able to add a myArgument parameter to the action method and have this parameter populated by the framework. Lucky for us, the ASP.NET MVC framework is extremely flexible and provides some easy-to-grok and easy-to-use extensibility points.
    ASP.NET MVC is able to provide information from the request as input parameters to controller actions because it uses objects which implement an interface called IValueProvider and have been registered to service the application. The most basic statement of responsibility for an IValueProvider implementation is: "I know about some data which is indexed by key. If you hand me the key for a piece of data that I know about, I give you that data." When preparing to invoke a controller action, the framework queries the registered IValueProvider implementations with the name of each method argument to see if a value provider can supply a value for the parameter. (The rest of this post will assume you're working along, and will make a lot more sense if you do.)

    Let's pull Sitecore out of the equation for a second to simplify things and create an extremely simple IValueProvider implementation. For this example, first create a new ASP.NET MVC3 project in Visual Studio, selecting "Internet Application" and otherwise taking the defaults (I'm assuming that anyone reading this far either already knows how to do this or will take a quick run through one of the many available basic MVC tutorials, such as the MVC Music Store). Once the new project is created, go to the Index action of HomeController. This action sets a Message property on the ViewBag to "Welcome to ASP.NET MVC!" and invokes the View, which has been coded to display the Message. For our example, we will remove the hard-coded message from this controller (although we'll leave it just as hard-coded somewhere else - this is sample code). For the first step in our exercise, add a string parameter called welcomeMessage to the Index action method and use the value of this argument to set the ViewBag.Message property. The updated Index action should look like:

        public ActionResult Index(string welcomeMessage)
        {
            ViewBag.Message = welcomeMessage;
            return View();
        }

    This represents the entirety of the change you will make to either the controller or the view. If you run the application now, the home page will display and no message will be presented to the user, because no value was supplied to the action method.

    Let's now write a value provider to ensure this parameter gets populated. We'll start by creating a new class called StaticValueProvider. When the class is created, update the using statements to ensure that they include the following:

        using System.Collections.Specialized;
        using System.Globalization;
        using System.Web.Mvc;

    With the appropriate using statements in place, update the StaticValueProvider class to implement the IValueProvider interface. The System.Web.Mvc library already contains a pretty flexible dictionary-like implementation called NameValueCollectionValueProvider, so we'll just wrap that and let it do most of the real work for us.
The completed class looks like:

public class StaticValueProvider : IValueProvider
{
    private NameValueCollectionValueProvider _wrappedProvider;

    public StaticValueProvider(ControllerContext controllerContext)
    {
        var parameters = new NameValueCollection();
        parameters.Add("welcomeMessage", "Hello from the value provider!");
        _wrappedProvider = new NameValueCollectionValueProvider(parameters, CultureInfo.InvariantCulture);
    }

    public bool ContainsPrefix(string prefix)
    {
        return _wrappedProvider.ContainsPrefix(prefix);
    }

    public ValueProviderResult GetValue(string key)
    {
        return _wrappedProvider.GetValue(key);
    }
}

Notice that the only entry in the collection matches the name of the argument to our HomeController's Index action. This is the important "secret sauce" that will make things work.

We've got our new value provider now, but that's not quite enough to be finished. MVC obtains IValueProvider instances using factories that are registered when the application starts up. These factories extend the abstract ValueProviderFactory class by initializing and returning the appropriate implementation of IValueProvider from the GetValueProvider method. While I wouldn't do so in production code, for the sake of this example I'm going to add the following class definition within the StaticValueProvider.cs source file:

public class StaticValueProviderFactory : ValueProviderFactory
{
    public override IValueProvider GetValueProvider(ControllerContext controllerContext)
    {
        return new StaticValueProvider(controllerContext);
    }
}

Now that we have a factory, we can register it by adding the following line to the end of the Application_Start method in Global.asax.cs:

ValueProviderFactories.Factories.Add(new StaticValueProviderFactory());

If you've done everything right to this point, you should be able to run the application and be presented with the home page reading "Hello from the value provider!".

Now that you have the basics of the IValueProvider down, you have everything you need to enhance your Sitecore MVC implementation by adding an IValueProvider that exposes values from the ambient RenderingContext's Parameters property. I'll provide the code for the IValueProvider implementation (which should look VERY familiar) and you can use the work we've already done as a reference to create and register the factory:

public class RenderingContextValueProvider : IValueProvider
{
    private NameValueCollectionValueProvider _wrappedProvider = null;

    public RenderingContextValueProvider(ControllerContext controllerContext)
    {
        var collection = new NameValueCollection();
        var rc = RenderingContext.CurrentOrNull;
        if (rc != null && rc.Rendering != null)
        {
            foreach (var parameter in rc.Rendering.Parameters)
            {
                collection.Add(parameter.Key, parameter.Value);
            }
        }
        _wrappedProvider = new NameValueCollectionValueProvider(collection, CultureInfo.InvariantCulture);
    }

    public bool ContainsPrefix(string prefix)
    {
        return _wrappedProvider.ContainsPrefix(prefix);
    }

    public ValueProviderResult GetValue(string key)
    {
        return _wrappedProvider.GetValue(key);
    }
}

In this post I've discussed the MVC IValueProvider used to map data to controller action method arguments and how this can be integrated into your Sitecore 6.6 MVC solution.
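To round out the exercise, a minimal factory for RenderingContextValueProvider might look like the following. This is my own sketch, modeled directly on the StaticValueProviderFactory shown earlier (the class name is mine), and it would be registered in Application_Start in exactly the same way:

public class RenderingContextValueProviderFactory : ValueProviderFactory
{
    public override IValueProvider GetValueProvider(ControllerContext controllerContext)
    {
        // Expose the ambient rendering parameters for the current request.
        return new RenderingContextValueProvider(controllerContext);
    }
}

// In Global.asax.cs, at the end of Application_Start:
// ValueProviderFactories.Factories.Add(new RenderingContextValueProviderFactory());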

    Read the article

  • Changing CSS with jQuery syntax in Silverlight using jLight

    - by Timmy Kokke
    Lately I’ve run into situations where I had to change elements or had to request a value in the DOM from Silverlight. jLight, which was introduced in an earlier article, can help with that. jQuery offers great ways to change CSS at runtime. Silverlight can access the DOM, but it isn’t as easy as jQuery. All examples shown in this article can be looked at in this online demo. The code can be downloaded here.

Part 1: The easy stuff

Selecting and changing properties is pretty straightforward. Setting the text color in all <B></B> elements can be done using the following code:

jQuery.Select("b").Css("color", "red");

The Css() method is an extension method on jQueryObject, which is returned by the jQuery.Select() method. The Css() method takes two parameters. The first is the CSS style property; all properties used in CSS can be entered in this string. The second parameter is the value you want to give the property. In this case the property is “color” and it is changed to “red”. To specify which element you want to select you can add a :selector parameter to the Select() method, as shown in the next example.

jQuery.Select("b:first").Css("font-family", "sans-serif");

The “:first” pseudo-class selector selects only the first element. This example changes the “font-family” property of the first <B></B> element to “sans-serif”. To make use of IntelliSense in Visual Studio I’ve added extension methods to help with the pseudo-classes. In the example below the “font-weight” of every even <LI></LI> is set to “bold”.

jQuery.Select("li".Even()).Css("font-weight", "bold");

Because the Css() extension method returns a jQueryObject, it is possible to chain calls to Css(). The following example shows setting the “color”, “background-color” and the “font-size” of all headers in one go.

jQuery.Select(":header").Css("color", "#12FF70")
      .Css("background-color", "yellow")
      .Css("font-size", "25px");

Part 2: More complex stuff

In only a few cases do you need to change just one style property. More often you want to change an entire set of style properties in one go. You could chain a lot of Css() methods together, but a better way is to add a class to a stylesheet and define all the properties there. With the AddClass() method you can apply a style class to a set of elements. This example shows how to add the “demostyle” class to all <B></B> elements in the document:

jQuery.Select("b").AddClass("demostyle");

Removing the class works in the same way:

jQuery.Select("b").RemoveClass("demostyle");

jLight is built for interacting with the DOM from Silverlight using jQuery. A jQueryObjectCss object can be used to define different sets of style properties in Silverlight. Over 60 of the most common CSS style properties are defined in the jQueryObjectCss class. A string indexer can be used to access all style properties (CssObject1["background-color"] equals CssObject1.BackgroundColor). In the code below, two jQueryObjectCss objects are defined and instantiated.
private jQueryObjectCss CssObject1;
private jQueryObjectCss CssObject2;

public Demo2()
{
    CssObject1 = new jQueryObjectCss
    {
        BackgroundColor = "Lime",
        Color = "Black",
        FontSize = "12pt",
        FontFamily = "sans-serif",
        FontWeight = "bold",
        MarginLeft = 150,
        LineHeight = "28px",
        Border = "Solid 1px #880000"
    };
    CssObject2 = new jQueryObjectCss
    {
        FontStyle = "Italic",
        FontSize = "48",
        Color = "#225522"
    };
    InitializeComponent();
}

Now, instead of chaining to set all the different properties, you can just pass one of the jQueryObjectCss objects to the Css() method. In this case all <LI></LI> elements are set to match this object:

jQuery.Select("li").Css(CssObject1);

When using the jQueryObjectCss objects, chaining is still possible. In the following example all headers are given a blue background color and the last is set to match CssObject2.

jQuery.Select(":header").Css(new jQueryObjectCss { BackgroundColor = "Blue" })
      .Eq(-1).Css(CssObject2);

Part 3: The fun stuff

Having Silverlight call JavaScript and then having JavaScript call Silverlight requires a lot of plumbing code. Everything has to be registered and strings are passed back and forth to execute the JavaScript. jLight makes this kind of stuff so easy, it becomes fun to use. In a lot of situations jQuery can call a function to decide what to do, setting a style class based on complex expressions for example. jLight can do the same, but the callback methods are defined in Silverlight.

This example calls the function() method for each <LI></LI> element. The callback method has to take a jQueryObject, an integer and a string as parameters. In this case jLight differs a bit from the actual jQuery implementation: jQuery uses only the index and the className parameters; a jQueryObject is added to make it simpler to access the attributes and properties of the element. If the text of the list item starts with a ‘D’ or an ‘M’ the class is set. Otherwise null is returned and nothing happens.

private void button1_Click(object sender, RoutedEventArgs e)
{
    jQuery.Select("li").AddClass(function);
}

private string function(jQueryObject obj, int index, string className)
{
    if (obj.Text[0] == 'D' || obj.Text[0] == 'M')
        return "demostyle";
    return null;
}

The last thing I would like to demonstrate uses even more Silverlight and less jLight, but demonstrates the power of the combination: animating a style property using a Storyboard with easing functions. First a dependency property is defined, in this case a double named Intensity. By handling the changed event, the color is set using jQuery.

public double Intensity
{
    get { return (double)GetValue(IntensityProperty); }
    set { SetValue(IntensityProperty, value); }
}

public static readonly DependencyProperty IntensityProperty =
    DependencyProperty.Register("Intensity", typeof(double), typeof(Demo3),
        new PropertyMetadata(0.0, IntensityChanged));

private static void IntensityChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    var i = (byte)(double)e.NewValue;
    jQuery.Select("span").Css("color", string.Format("#{0:X2}{0:X2}{0:X2}", i));
}

An animation has to be created. This code defines a Storyboard with one keyframe that uses a bounce ease as an easing function. The animation is set to target the Intensity dependency property defined earlier.
private Storyboard CreateAnimation(double value)
{
    Storyboard storyboard = new Storyboard();
    var da = new DoubleAnimationUsingKeyFrames();
    var d = new EasingDoubleKeyFrame
    {
        EasingFunction = new BounceEase(),
        KeyTime = KeyTime.FromTimeSpan(TimeSpan.FromSeconds(1.0)),
        Value = value
    };
    da.KeyFrames.Add(d);
    Storyboard.SetTarget(da, this);
    Storyboard.SetTargetProperty(da, new PropertyPath(Demo3.IntensityProperty));
    storyboard.Children.Add(da);
    return storyboard;
}

Initially the Intensity is set to 128, which results in a gray color. When one of the buttons is pressed, a new animation is created and played: one to animate to black, and one to animate to white.

public Demo3()
{
    InitializeComponent();
    Intensity = 128;
}

private void button2_Click(object sender, RoutedEventArgs e)
{
    CreateAnimation(255).Begin();
}

private void button3_Click(object sender, RoutedEventArgs e)
{
    CreateAnimation(0).Begin();
}

Conclusion

As you can see, jLight can make the life of a Silverlight developer a lot easier when accessing the DOM. Almost all jQuery functions that are defined in jLight use the same constructions as described above. I’ve tried to stay as close as possible to the real jQuery. Having JavaScript perform callbacks to Silverlight using jLight will be described in more detail in a future tutorial about AJAX or eventing.

    Read the article

  • Rebuilding CoasterBuzz, Part III: The architecture using the "Web stack of love"

    - by Jeff
    This is the third post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch in the next month or two. More:
Part I: Evolution, and death to WCF
Part II: Hot data objects

I finally hit a point in the re-do of CoasterBuzz where I feel like the major pieces are in place... rewritten, ported and what not, so that I can focus now on front-end design and more interesting creative problems. I've been asked on more than one occasion (OK, just twice) what's going on under the covers, so I figure this might be a good time to explain the overall architecture. As it turns out, I'm using a whole lot of the "Web stack of love," as Scott Hanselman likes to refer to it. Oh, that Hanselman.

First off, at the center of it all, is BizTalk. Just kidding. That's "enterprise architecture" humor, where every discussion starts with how they'll use BizTalk. Here are the bigger moving parts:

It's fairly straightforward. A common library lives in a number of Web apps, all of which are (or will be) powered by ASP.NET MVC 4. They all talk to the same database. There is the main Web site, which also has the endpoint for the Silverlight-based Feed app. The cstr.bz site handles redirects, which are generated when news items are published and sent to Twitter. Facebook publishing is handled via the RSS Graffiti Facebook app. The API site handles requests from the Windows Phone app.

The main site depends very heavily on POP Forums, the open source, MVC-based forum I maintain. It serves a number of functions, primarily handling users. These user objects serve in non-forum roles to handle things like news and database contributions, maintaining track records (coaster nerd for "list of rides I've been on") and, perhaps most importantly, paid club memberships.

Before I get into more specifics, note that the "glue" for everything is Ninject, the dependency injection framework. I actually prefer StructureMap these days, but I started with Ninject in POP Forums a long time ago. POP Forums has a static class, PopForumsActivation, that new's up an instance of the container, and you can call it from wherever. The downside is that the forums require Ninject in your MVC app as the default dependency resolver. At some point I'll decouple it, but for now it's not in the way.

In the general sense, the entire set of apps follows a repository-service-controller-view pattern. Repos just do data access, service classes do business logic, controllers compose and route, views view.

The forum also provides Scoring Game functionality. The Scoring Game is a reasonably abstract framework to award users points based on certain actions, and then award achievements when a certain number of point events happen. For example, the forum already awards a point when someone plus-one's a post you made. You can set up an achievement that says, "Give the user an award when they've had 100 posts plus'd." It also does zero-point entries into the ledger, so if you make a post, you could award an achievement based on 100 posts made. Wiring the scoring game into CoasterBuzz functionality is just a matter of going to the Ninject container, getting an instance of the event publisher, and passing it events.

Forum adapters were introduced into POP Forums a few versions ago; they can intercept the model generated for forum topic lists and threads and designate an alternate view. These are used to make the "Day in Pictures" forum, where users can upload photos as frame-by-frame photo threads.
Another adapter adds an association UI, so users can associate specific amusement parks with their trip report posts.

The Silverlight-based Feed app talks to a simple JSON endpoint in the main app. This uses an underlying library I wrote ages ago, simply called Feeds, that aggregates event information. You inherit from a base class that creates instances of a publisher interface, and then use that class to send it an event type and any number of data fields. Feeds has two publishers: one is to the database, and that's used for the endpoint that talks to the Silverlight app. The second publisher publishes to Twitter, if the event is of the type "news." The wiring is a little strange, because for the new posts and topics events, I'm actually pulling out the forum repository classes from the Ninject container and replacing them with overridden methods to publish. I should probably be doing this at the service class level, but whatever. It's my mess.

cstr.bz doesn't do anything interesting. It looks up the path, and if it has a match, does a 301 redirect to the long URL. The API site just serves up JSON for the Windows Phone app. The Windows Phone app is Silverlight, of course, and there isn't much to it. It does use the control toolkit, but beyond that, it relies on a simple class that creates a WebClient and calls the server for JSON to deserialize. The same class is now used by the Feed app, which used to use WCF. Simple is better.

Data access in POP Forums is all straight SQL, because a lot of it was ported from the ASP.NET version. Most CoasterBuzz data access is handled by the Entity Framework, using the code-first model. The context class in this case does a lot of work to make sure that the table and key mapping works, since much of it breaks from the normal conventions of EF. One of the more powerful things you can do with EF, once you understand the little gotchas, is split tables by row into different entities. For example, a roller coaster photo has everything in the same row, including the metadata, the thumbnail bytes and the image itself. Obviously, if you want to get a list of photos to iterate over in a view, you don't want to get the image data. The use of navigation properties makes it easier to get just what you want. (A rough sketch of this mapping follows below.)

The front end includes Razor views in MVC, and jQuery is used for client-side goodness. I'm also using jQuery UI in a few places, for tabs, a dialog box and autocomplete. I'm also, tentatively, using jQuery Mobile. I've already ported most forum views to Mobile, but they need some work as v1.1 isn't finished yet. I'm not sure if I'll ship CoasterBuzz with mobile views or not yet. It's on the radar, but not something in my delivery criteria.

That covers all of the big frameworks in play. Next time I hope to talk more about the front-end experience, which to me is where most of the fun is these days. Hoping to launch in the next month or two. Getting tired of looking at the old site!
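As promised above, here is a rough sketch of what that EF code-first table splitting can look like. This is not CoasterBuzz's actual model; the entity and table names are hypothetical, and only the mapping technique itself is the point: both entities share a key and map to the same table, so listing queries never touch the image bytes.

using System.Data.Entity;

public class RollerCoasterPhoto
{
    public int Id { get; set; }
    public string Caption { get; set; }
    // Navigation to the heavyweight half of the same row.
    public virtual RollerCoasterPhotoData ImageData { get; set; }
}

public class RollerCoasterPhotoData
{
    public int Id { get; set; }
    public byte[] Thumbnail { get; set; }
    public byte[] Image { get; set; }
}

public class SampleContext : DbContext
{
    public DbSet<RollerCoasterPhoto> Photos { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Tie the two entities together with a required one-to-one on the
        // shared key, then map both to the same physical table.
        modelBuilder.Entity<RollerCoasterPhoto>()
            .HasRequired(p => p.ImageData)
            .WithRequiredPrincipal();
        modelBuilder.Entity<RollerCoasterPhoto>().ToTable("Photos");
        modelBuilder.Entity<RollerCoasterPhotoData>().ToTable("Photos");
    }
}

// A view listing photos can now project only the light entity:
// var captions = context.Photos.Select(p => p.Caption).ToList();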

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble

This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations

Why Columnstore?

As stated previously, if we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document that columnstore is a compelling reason to upgrade to SQL Server 2014.

The Customer

DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation which serendipitously coincided with the height of ski season.)

The App: DevCon Security Reporting: Optimized & Ad Hoc Queries

DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

SSRS, SSAS, & MDX

Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

Ad Hoc Queries

Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

The Details

Classic vs. columnstore before-&-after metrics are impressive:

Scenario        Conventional Structures             Columnstore      Δ
SSRS via SSAS   10 - 12 seconds                     1 second         >10x
Ad Hoc          5 - 7 minutes (300 - 420 seconds)   1 - 2 seconds    >100x

Here are two charts characterizing this data graphically.  The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes.  As is so often the case when we chart such significant deltas, the linear scale doesn’t expose some of the dramatically improved values corresponding to the columnstore metrics.  Just to make it fair, here’s the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren’t visible.
The Wins

Performance: Even prior to columnstore implementation, at 10 - 12 seconds, canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators.  As we’ve commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.”

Summary

For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing time to results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.

    Read the article

  • From J2EE to Java EE: what has changed?

    - by Bruno.Borges
    See original @Java_EE tweet on 29 May 2014

Yeap, it has been 8 years since the term J2EE was replaced, and still some people refer to it (mostly recruiters, luckily!). But then comes the question: what has changed besides the name? Our community friend Abhishek Gupta worked on this question and provided an excellent response titled "What's in a name? Java EE? J2EE?". But let me give you a few highlights here so you don't lose yourself with YATO (yet another tab opened):

- J2EE used to be an infrastructure and resources provider only, requiring developers to depend on external 3rd-party frameworks to then implement application requirements or improve productivity
- J2EE used to require hundreds of lines of XML code to define just a dozen resources like EJBs, MDBs, Servlets, and so on
- J2EE used to support only EARs (Enterprise Archives) with a bunch of other archives like JARs and WARs just to run a simple Web application

And so on, and so on! It was a great technology but still required a lot of work to get something up and running. Remember xDoclet? Remember Struts? The old days of pure Hibernate code? Or when Ajax became a trending topic and we were all implementing it with the DWR Servlet? Still, we J2EE developers survived, and learned, and helped evolve the platform to a whole new level of DX (Developer Experience).

A new DX for J2EE suggested a new name. One that referred to the platform as the Enterprise Edition of Java, because "Java is why we're here," quoting Bill Shannon. The release of Java EE 5 included so many features that clearly showed developers the platform was going after all those DX gaps:

- Radical simplification of the persistence model with the introduction of JPA
- Support for annotations following the launch of Java SE 5.0
- Updated XML APIs with the introduction of StAX
- Drastic simplification of the EJB component model (with annotations!)
- Convention over Configuration and Dependency Injection

A few bullets, you may say, but they represented a whole new DX and a vision for upcoming versions. Clearly, the release of Java EE 5 helped drive the future of the platform by reducing the number of XML files and Java interfaces, simplifying configuration, providing convention-over-configuration, etc.!

We then saw the release of Java EE 6 with even more great features like Managed Beans, CDI, Bean Validation, improved JSP and Servlets APIs, JASPIC, the possibility to deploy plain WARs, and so many other improvements it is difficult to list them in one sentence. And we've gotta give Spring Framework some credit here: thanks to Rod Johnson and team, concepts like Dependency Injection fit perfectly into the Java EE Platform. Clearly, Spring used to be one of the most inspiring frameworks for the Java EE platform, and it is great to see things like Pivotal and Spring supporting the JSR 352 Batch API standard! Cooperation to keep improving DX to the maximum in the server-side Java landscape.

The masterpiece resulting from these previous releases is what we see and call today Java EE 7, which, by providing a new and improved JavaServer Faces release, with new features for Web development like the WebSockets API, improved JAX-RS, and JSON-P, but also including the Batch API and so many other great improvements, has increased developer productivity and brought innovation to server-side Java developers. Java EE is not just a new name (which was introduced back in May 2006!) but a new Developer Experience for server-side Java developers.
To show you why we are here and where we are going (see the Java EE 8 update), we wanted to share with you a draft of the new Java EE logos that the evangelist team created to help you spread the word about Java EE. You can get access to these images at the Java EE Platform Facebook Album, or the Google+ Java EE Platform Album, whichever is better for you, but don't forget to like and/or +1 those social network profiles :-)

A message to all job recruiters: stop using J2EE and start using Java EE if you want to find great Java EE 5, Java EE 6, or Java EE 7 developers. Not only to save you, recruiter, valuable characters when tweeting that job opportunity, but also to match the correct term, we invite you to replace long terms like "Java/J2EE" or even worse "#Java #J2EE #JEE" or all these awkward combinations with the only acceptable hashtag: #JavaEE.

And to prove that Java EE is catching on among developers and even recruiters, and that J2EE is past, let me highlight the job trends! The image below is from Indeed.com's trends page, for the following keywords: J2EE, Java/J2EE, Java/JEE, JEE. As you can see, J2EE is indeed going away, while JEE saw some increase. Perhaps because some people are just too lazy to type "Java", but at the same time they are aware that J2EE (the '2') is past. We shall forgive that for a while :-)

Another proof that J2EE is going away comes from looking at its trending statistics at Google. People have been showing less and less interest in the term J2EE. See the chart below:

Recruiter, if you still need proof that J2EE is past, that Java EE is trending, and that other job recruiters are seeking Java EE developers, and that the developer community is aware of the new term, perhaps these other charts can show you what term you should be using. See for example the Job Trends for Java EE at Indeed.com and notice where it started... 2006! 8 years ago :-)

Last but not least, the Google Trends for the Java EE term (including the still wrong but forgivable JavaEE term) shows us that the new term is catching up very well. J2EE is past. Oh, and don't worry about the curves going down. We developers like to be hipsters sometimes, and today only AngularJS, NodeJS, and BigData are going up. Java EE and other traditional server-side technologies such as Spring, or even ones from other platforms such as Ruby on Rails, PHP, and Grails, are pretty much consolidated and the curves... well, they are consolidated too.

So if you are a Java EE developer, drop that J2EE from your résumé, and let recruiters know that this term is past. Embrace Java EE, and enjoy a new developer experience for server-side Java developers.

Java EE on Twitter
Java EE on Google+
Java EE on Facebook

    Read the article

  • Profiling Startup Of VS2012 – SpeedTrace Profiler

    - by Alois Kraus
    SpeedTrace is a relatively unknown profiler made by a company called Ipcas. A single professional license costs €449 + VAT. For this test I used SpeedTrace 4.5, which is currently in beta. Although it is cheaper than dotTrace, it has by far the most options to influence how profiling works.

First you need to create a tracing project, which configures tracing for one process type. You can start the application directly from the profiler or (much more interesting) have it attach to a specific process when that process is started. For this you need to check the “Trace the specified …” radio button and enter the process name in the “Process Name of the Trace” edit box. You can even selectively enable tracing for processes with a specific command line. Then you need to activate the trace project by pressing the Activate Project button, and you are ready to start VS as usual. If you want to profile the next 10 VS instances that you start, you can set the Number of Processes counter to e.g. 10. This is immensely helpful if you are trying to profile only the next few started processes.

As you can see, there are many more tabs which allow you to influence tracing in a much more sophisticated way. SpeedTrace is the only profiler which does not rely entirely on the profiling API of .NET. Instead it modifies the IL code (instrumentation on the fly) to write tracing information to disc which can later be analyzed. This approach is not only very fast, it also gives you unprecedented analysis capabilities.

Once the traces are collected, they show up in your workspace, where you can open the trace viewer. I will skip the other windows because this view is by far the most useful one. You can sort the methods not only by wall clock time but also by CPU consumption and wait time, which none of the other products support in their views at the same time. If you want to optimize for CPU consumption, sort by CPU time. If you want to find out where most time is spent, you need Clock Total time and Clock Waiting. There you can directly see whether a method took long because it was waiting on something, or because it was really executing code that took that long.

Once you have found a method you want to dig into, you can double-click on it to get to the Caller/Callee view, which is similar to the JetBrains Method Grid view. But this time you see much more. In the middle is the clicked method. Above are the methods that call it, and below are the methods that it directly calls. Normally you would then start digging deeper to find the end of the chain where the slow method worth optimizing is located. But there is a shortcut: you can press the magic button to calculate the aggregation of all called methods. This is displayed in the lower left window, where you can see each method call and how long it took. There you can also sort to see if this call stack contains only methods not worth optimizing (e.g. WCF connect calls which you cannot make faster). YourKit has a similar feature, where it is called Callees List.

In the Functions tab you have many other useful analysis options in the context menu. One really outstanding feature is the View Call History Drilldown. When you select this one, you get not a sum of all method invocations but a list with the duration of each method call. This is not surprising, since SpeedTrace uses tracing to get its timings. There you get many useful graphs showing how this method behaved over time. Did it become slower at some point in time, or was only the first call slow?
The diagrams and the list will tell you that. That is all fine, but what should I do when one method call was slow? I want to see where it was coming from. No problem: select the method in the list, hit F10, and you get the call stack. This is a lifesaver if you are, for example, searching for serialization problems. Today serializers are used everywhere. You want to find out where the 5s XmlSerializer.Deserialize call came from? Hit F10 and you get the call stack which invoked the 5s Deserialize call.

The CPU timeline tab is also useful to find out where long pauses or excessive CPU consumption happened. Click in the graph to get the Thread Stacks window, where you can get a quick overview of what all threads were doing at this time. This looks like the Stack Traces feature in YourKit, only this time you get the last called method first, which helps you quickly see what all threads were executing at this moment. YourKit generates a rather long list which can be hard to go through when you have many threads. The thread list in the middle does not give you call stacks or anything like that, but you see which methods the profiler most often found executing code, which is a good indication of the methods consuming the most CPU time.

Does this sound too good to be true? I have not told you the best part yet. The best thing about this profiler is the staff behind it. When I see a crash or some other odd behavior, I send a mail to Ipcas, and I usually get a mail the next day saying that the problem has been fixed, along with a download link to the new version. The guys at Ipcas are even so helpful as to log in to your machine via a Citrix client to help you get started profiling the actual application you want to profile. After a 2h telco I was converted from a hater to a believer of this tool. The fast response time might also have something to do with the fact that they are actively working on 4.5 to get it out of the door. But still, the support is by far the best I have encountered so far.

The only downside is that you should instrument your assemblies, including the .NET Framework, to get the most accurate numbers. You can profile without doing so, but then you will see very high JIT times in your process which can severely affect the correctness of the measured timings. If you do not care about exact numbers, you can also enable logging of method arguments of primitive types in the Data Trace tab of the main UI. If you need to know which files were opened by your application and at what times, you can find out without a debugger. Since SpeedTrace reads huge trace files in its reader, you should perhaps use a 64-bit machine to be able to analyze bigger traces as well. The memory consumption of the trace reader is too high for my taste. But they did promise something much improved for the next version.

    Read the article

  • How to Identify Which Hardware Component is Failing in Your Computer

    - by Chris Hoffman
    Concluding that your computer has a hardware problem is just the first step. If you’re dealing with a hardware issue and not a software issue, the next step is determining what hardware problem you’re actually dealing with. If you purchased a laptop or pre-built desktop PC and it’s still under warranty, you don’t need to care about this. Have the manufacturer fix the PC for you — figuring it out is their problem. If you’ve built your own PC or you want to fix a computer that’s out of warranty, this is something you’ll need to do on your own.

Blue Screen 101: Search for the Error Message

This may seem like obvious advice, but searching for information about a blue screen’s error message can help immensely. Most blue screens of death you’ll encounter on modern versions of Windows will likely be caused by hardware failures. The blue screen of death often displays information about the driver that crashed or the type of error it encountered. For example, let’s say you encounter a blue screen that identifies “NV4_disp.dll” as the driver that caused the blue screen. A quick Google search will reveal that this is the driver for NVIDIA graphics cards, so you now have somewhere to start. It’s possible that your graphics card is failing if you encounter such an error message.

Check Hard Drive SMART Status

Hard drives have a built-in S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) feature. The idea is that the hard drive monitors itself and will notice if it starts to fail, providing you with some advance notice before the drive fails completely. This isn’t perfect, so your hard drive may fail even if SMART says everything is okay. If you see any sort of “SMART error” message, your hard drive is failing. You can use SMART analysis tools to view the SMART health status information your hard drives are reporting.

Test Your RAM

RAM failure can result in a variety of problems. If the computer writes data to RAM and the RAM returns different data because it’s malfunctioning, you may see application crashes, blue screens, and file system corruption. To test your memory and see if it’s working properly, use Windows’ built-in Memory Diagnostic tool. The Memory Diagnostic tool will write data to every sector of your RAM and read it back afterwards, ensuring that all your RAM is working properly.

Check Heat Levels

How hot is it inside your computer? Overheating can result in blue screens, crashes, and abrupt shutdowns. Your computer may be overheating because you’re in a very hot location, it’s ventilated poorly, a fan has stopped inside your computer, or it’s full of dust. Your computer monitors its own internal temperatures and you can access this information. It’s generally available in your computer’s BIOS, but you can also view it with system information utilities such as SpeedFan or Speccy. Check your computer’s recommended temperature level and ensure it’s within the appropriate range. If your computer is overheating, you may see problems only when you’re doing something demanding, such as playing a game that stresses your CPU and graphics card. Be sure to keep an eye on how hot your computer gets when it performs these demanding tasks, not only when it’s idle.

Stress Test Your CPU

You can use a utility like Prime95 to stress test your CPU. Such a utility will force your computer’s CPU to perform calculations without allowing it to rest, working it hard and generating heat. If your CPU is becoming too hot, you’ll start to see errors or system crashes.
Overclockers use Prime95 to stress test their overclock settings — if Prime95 experiences errors, they throttle back on their overclocks to ensure the CPU runs cooler and more stable. It’s a good way to check if your CPU is stable under load.

Stress Test Your Graphics Card

Your graphics card can also be stress tested. For example, if your graphics driver crashes while playing games, the games themselves crash, or you see odd graphical corruption, you can run a graphics benchmark utility like 3DMark. The benchmark will stress your graphics card and, if it’s overheating or failing under load, you’ll see graphical problems, crashes, or blue screens while running the benchmark. If the benchmark seems to work fine but you have issues playing a certain game, it may just be a problem with that game.

Swap it Out

Not every hardware problem is easy to diagnose. If you have a bad motherboard or power supply, their problems may only manifest through occasional odd issues with other components. It’s hard to tell if these components are causing problems unless you replace them completely. Ultimately, the best way to determine whether a component is faulty is to swap it out. For example, if you think your graphics card may be causing your computer to blue screen, pull the graphics card out of your computer and swap in a new graphics card. If everything is working well, it’s likely that your previous graphics card was bad.

This isn’t easy for people who don’t have boxes of components sitting around, but it’s the ideal way to troubleshoot. Troubleshooting is all about trial and error, and swapping components out allows you to pin down which component is actually causing the problem through a process of elimination.

This isn’t a complete guide to everything that could go wrong and how to identify it — someone could write a full textbook on identifying failing components and still not cover everything. But the tips above should give you some places to start dealing with the more common problems.

Image Credit: Justin Marty on Flickr

    Read the article

  • Ubuntu 12.04 wireless (wifi) not working, can not upgrade to 12.10, touchpad gestures not working. What to do?

    - by Ritwik
    I installed Ubuntu 12.04 LTS 3 days ago and since then the wireless feature and touchpad gestures have not been working. I've tried everything on the internet but am still unsuccessful. I can't upgrade to Ubuntu 12.10. These are the commands I tried. Please help me. EDIT: just realized USB 3.0 is also not working.

COMMAND lsb_release -r
OUTPUT
-------------------------------------------------------------------
Release: 12.04
-------------------------------------------------------------------
COMMAND lspci
OUTPUT
-------------------------------------------------------------------
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x8 Controller (rev 06)
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05)
00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 05)
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 05)
00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d5)
00:1c.1 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #2 (rev d5)
00:1c.2 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 (rev d5)
00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 05)
00:1f.0 ISA bridge: Intel Corporation HM86 Express LPC Controller (rev 05)
00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)
00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 05)
07:00.0 3D controller: NVIDIA Corporation GF117M [GeForce 610M/710M / GT 620M/625M/630M/720M] (rev a1)
08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 07)
09:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5229 PCI Express Card Reader (rev 01)
0f:00.0 Network controller: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter (rev 01)
-------------------------------------------------------------------
COMMAND sudo apt-get install linux-backports-modules-wireless-lucid-generic
OUTPUT
-------------------------------------------------------------------
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package linux-backports-modules-wireless-lucid-generic
-------------------------------------------------------------------
COMMAND cat /etc/lsb-release; uname -a
OUTPUT
-------------------------------------------------------------------
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.5 LTS"
Linux ritwik-PC 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
-------------------------------------------------------------------
COMMAND lspci -nnk | grep -iA2 net
OUTPUT
-------------------------------------------------------------------
08:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 07)
Subsystem: Hewlett-Packard Company Device [103c:225d]
Kernel driver in use: r8169
--
0f:00.0 Network controller [0280]: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter [168c:0036] (rev 01)
Subsystem: Hewlett-Packard Company Device [103c:217f]
-------------------------------------------------------------------
COMMAND lsusb
OUTPUT
-------------------------------------------------------------------
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 8087:8008 Intel Corp.
Bus 002 Device 002: ID 8087:8000 Intel Corp.
-------------------------------------------------------------------
COMMAND iwconfig
OUTPUT
-------------------------------------------------------------------
lo no wireless extensions.
eth0 no wireless extensions.
-------------------------------------------------------------------
COMMAND rfkill list all
OUTPUT
-------------------------------------------------------------------
0: hp-wifi: Wireless LAN
Soft blocked: no
Hard blocked: no
1: hp-bluetooth: Bluetooth
Soft blocked: no
Hard blocked: no
-------------------------------------------------------------------
COMMAND lsmod
OUTPUT
-------------------------------------------------------------------
Module Size Used by
snd_hda_codec_realtek 224215 1
bnep 18281 2
rfcomm 47604 0
bluetooth 180113 10 bnep,rfcomm
parport_pc 32866 0
ppdev 17113 0
nls_iso8859_1 12713 1
nls_cp437 16991 1
vfat 17585 1
fat 61512 1 vfat
snd_hda_intel 33719 3
snd_hda_codec 127706 2 snd_hda_codec_realtek,snd_hda_intel
snd_hwdep 17764 1 snd_hda_codec
snd_pcm 97275 2 snd_hda_intel,snd_hda_codec
snd_seq_midi 13324 0
snd_rawmidi 30748 1 snd_seq_midi
snd_seq_midi_event 14899 1 snd_seq_midi
snd_seq 61929 2 snd_seq_midi,snd_seq_midi_event
nouveau 775039 0
joydev 17693 0
snd_timer 29990 2 snd_pcm,snd_seq
snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq
ttm 76949 1 nouveau
uvcvideo 72627 0
snd 79041 15 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
videodev 98259 1 uvcvideo
drm_kms_helper 46978 1 nouveau
psmouse 98051 0
drm 241971 3 nouveau,ttm,drm_kms_helper
i2c_algo_bit 13423 1 nouveau
soundcore 15091 1 snd
snd_page_alloc 18529 2 snd_hda_intel,snd_pcm
v4l2_compat_ioctl32 17128 1 videodev
hp_wmi 18092 0
serio_raw 13211 0
sparse_keymap 13890 1 hp_wmi
mxm_wmi 13021 1 nouveau
video 19651 1 nouveau
wmi 19256 2 hp_wmi,mxm_wmi
mac_hid 13253 0
lp 17799 0
parport 46562 3 parport_pc,ppdev,lp
r8169 62190 0
-------------------------------------------------------------------
COMMAND sudo su
modprobe -v ath9k
OUTPUT
-------------------------------------------------------------------
insmod /lib/modules/3.2.0-67-generic/kernel/net/wireless/cfg80211.ko
insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath.ko
insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko
insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko
insmod /lib/modules/3.2.0-67-generic/kernel/net/mac80211/mac80211.ko
insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko
-------------------------------------------------------------------
COMMAND do-release-upgrade
OUTPUT
-------------------------------------------------------------------
Err Upgrade tool signature 404 Not Found [IP: 91.189.88.149 80]
Err Upgrade tool 404 Not Found [IP: 91.189.88.149 80]
Fetched 0 B in 0s (0 B/s)
WARNING:root:file 'quantal.tar.gz.gpg' missing
Failed to fetch
Fetching the upgrade failed. There may be a network problem.
-------------------------------------------------------------------
COMMAND sudo modprobe ath9k
dmesg | grep ath9k
NO OUTPUT FOR THEM
COMMAND dmesg | grep -e ath -e 80211
OUTPUT
-------------------------------------------------------------------
[ 13.232372] type=1400 audit(1408867538.399:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=975 comm="apparmor_parser"
[ 13.232615] type=1400 audit(1408867538.399:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=975 comm="apparmor_parser"
[ 15.186599] ath3k: probe of 3-4:1.0 failed with error -110
[ 15.186635] usbcore: registered new interface driver ath3k
[ 88.219329] cfg80211: Calling CRDA to update world regulatory domain
[ 88.351665] cfg80211: World regulatory domain updated:
[ 88.351667] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
[ 88.351670] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[ 88.351671] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
[ 88.351673] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
[ 88.351674] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[ 88.351675] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
-------------------------------------------------------------------
COMMAND sudo apt-get install touchpad-indicator
OUTPUT
-------------------------------------------------------------------
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed: gir1.2-gconf-2.0 python-pyudev
Suggested packages: python-qt4 python-pyside.qtcore
The following NEW packages will be installed: gir1.2-gconf-2.0 python-pyudev touchpad-indicator
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 84.1 kB of archives.
After this operation, 1,136 kB of additional disk space will be used.
Do you want to continue [Y/n]? Y
Get:1 http://ppa.launchpad.net/atareao/atareao/ubuntu/ precise/main touchpad-indicator all 0.9.3.12-1ubuntu1 [46.5 kB]
Get:2 http://archive.ubuntu.com/ubuntu/ precise/main gir1.2-gconf-2.0 amd64 3.2.5-0ubuntu2 [7,098 B]
Get:3 http://archive.ubuntu.com/ubuntu/ precise/main python-pyudev all 0.13-1 [30.5 kB]
Fetched 84.1 kB in 2s (31.6 kB/s)
Selecting previously unselected package gir1.2-gconf-2.0.
(Reading database ... 169322 files and directories currently installed.)
Unpacking gir1.2-gconf-2.0 (from .../gir1.2-gconf-2.0_3.2.5-0ubuntu2_amd64.deb) ...
Selecting previously unselected package python-pyudev.
Unpacking python-pyudev (from .../python-pyudev_0.13-1_all.deb) ...
Selecting previously unselected package touchpad-indicator.
Unpacking touchpad-indicator (from .../touchpad-indicator_0.9.3.12-1ubuntu1_all.deb) ...
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
Processing triggers for desktop-file-utils ...
Processing triggers for gnome-menus ...
Processing triggers for hicolor-icon-theme ...
Processing triggers for software-center ...
INFO:softwarecenter.db.update:no translation information in database needed
Setting up gir1.2-gconf-2.0 (3.2.5-0ubuntu2) ...
Setting up python-pyudev (0.13-1) ...
Setting up touchpad-indicator (0.9.3.12-1ubuntu1) ...
-------------------------------------------------------------------
Not able to find ( drivers/net/wireless/ath/ath9k/hw.c ) or ( drivers/net/wireless/ath/ath9k/hw.h )

    Read the article

  • What's the best way to expose a Model object in a ViewModel?

    - by Angel
    In a WPF MVVM application, I exposed my model object in my ViewModel by creating an instance of the Model class inside the ViewModel (which causes a dependency). Instead of creating separate VM properties, I wrap the Model properties inside my ViewModel properties. My model is just an Entity Framework generated proxy class: public partial class TblProduct { public TblProduct() { this.TblPurchaseDetails = new HashSet<TblPurchaseDetail>(); this.TblPurchaseOrderDetails = new HashSet<TblPurchaseOrderDetail>(); this.TblSalesInvoiceDetails = new HashSet<TblSalesInvoiceDetail>(); this.TblSalesOrderDetails = new HashSet<TblSalesOrderDetail>(); } public int ProductId { get; set; } public string ProductCode { get; set; } public string ProductName { get; set; } public int CategoryId { get; set; } public string Color { get; set; } public Nullable<decimal> PurchaseRate { get; set; } public Nullable<decimal> SalesRate { get; set; } public string ImagePath { get; set; } public bool IsActive { get; set; } public virtual TblCompany TblCompany { get; set; } public virtual TblProductCategory TblProductCategory { get; set; } public virtual TblUser TblUser { get; set; } public virtual ICollection<TblPurchaseDetail> TblPurchaseDetails { get; set; } public virtual ICollection<TblPurchaseOrderDetail> TblPurchaseOrderDetails { get; set; } public virtual ICollection<TblSalesInvoiceDetail> TblSalesInvoiceDetails { get; set; } public virtual ICollection<TblSalesOrderDetail> TblSalesOrderDetails { get; set; } } Here is my ViewModel: public class ProductViewModel : WorkspaceViewModel { #region Constructor public ProductViewModel() { StartApp(); } #endregion //Constructor #region Properties private IProductDataService _dataService; public IProductDataService DataService { get { if (_dataService == null) { if (IsInDesignMode) { _dataService = new ProductDataServiceMock(); } else { _dataService = new ProductDataService(); } } return _dataService; } } //Get and set Model object private TblProduct _product; public TblProduct Product { get { return _product ?? 
(_product = new TblProduct()); } set { _product = value; } } #region Public Properties public int ProductId { get { return Product.ProductId; } set { if (Product.ProductId == value) { return; } Product.ProductId = value; RaisePropertyChanged("ProductId"); } } public string ProductName { get { return Product.ProductName; } set { if (Product.ProductName == value) { return; } Product.ProductName = value; RaisePropertyChanged(() => ProductName); } } private ObservableCollection<TblProduct> _productRecords; public ObservableCollection<TblProduct> ProductRecords { get { return _productRecords; } set { _productRecords = value; RaisePropertyChanged("ProductRecords"); } } //Selected Product private TblProduct _selectedProduct; public TblProduct SelectedProduct { get { return _selectedProduct; } set { _selectedProduct = value; if (_selectedProduct != null) { this.ProductId = _selectedProduct.ProductId; this.ProductCode = _selectedProduct.ProductCode; } RaisePropertyChanged("SelectedProduct"); } } #endregion //Public Properties #endregion // Properties #region Commands private ICommand _newCommand; public ICommand NewCommand { get { if (_newCommand == null) { _newCommand = new RelayCommand(() => ResetAll()); } return _newCommand; } } private ICommand _saveCommand; public ICommand SaveCommand { get { if (_saveCommand == null) { _saveCommand = new RelayCommand(() => Save()); } return _saveCommand; } } private ICommand _deleteCommand; public ICommand DeleteCommand { get { if (_deleteCommand == null) { _deleteCommand = new RelayCommand(() => Delete()); } return _deleteCommand; } } #endregion //Commands #region Methods private void StartApp() { LoadProductCollection(); } private void LoadProductCollection() { var q = DataService.GetAllProducts(); this.ProductRecords = new ObservableCollection<TblProduct>(q); } private void Save() { if (SelectedOperateMode == OperateModeEnum.OperateMode.New) { //Pass the Model object into Dataservice for save DataService.SaveProduct(this.Product); } else if (SelectedOperateMode == OperateModeEnum.OperateMode.Edit) { //Pass the Model object into Dataservice for Update DataService.UpdateProduct(this.Product); } ResetAll(); LoadProductCollection(); } #endregion //Methods } Here is my Service class: class ProductDataService:IProductDataService { /// <summary> /// Context object of Entity Framework model /// </summary> private MaizeEntities Context { get; set; } public ProductDataService() { Context = new MaizeEntities(); } public IEnumerable<TblProduct> GetAllProducts() { using(var context=new R_MaizeEntities()) { var q = from p in context.TblProducts where p.IsDel == false select p; return new ObservableCollection<TblProduct>(q); } } public void SaveProduct(TblProduct _product) { using(var context=new R_MaizeEntities()) { _product.LastModUserId = GlobalObjects.LoggedUserID; _product.LastModDttm = DateTime.Now; _product.CompanyId = GlobalObjects.CompanyID; context.TblProducts.Add(_product); context.SaveChanges(); } } public void UpdateProduct(TblProduct _product) { using (var context = new R_MaizeEntities()) { context.TblProducts.Attach(_product); context.Entry(_product).State = EntityState.Modified; _product.LastModUserId = GlobalObjects.LoggedUserID; _product.LastModDttm = DateTime.Now; _product.CompanyId = GlobalObjects.CompanyID; context.SaveChanges(); } } public void DeleteProduct(int _productId) { using (var context = new R_MaizeEntities()) { var product = (from c in context.TblProducts where c.ProductId == _productId select c).First(); product.LastModUserId = 
GlobalObjects.LoggedUserID; product.LastModDttm = DateTime.Now; product.IsDel = true; context.SaveChanges(); } } } I exposed my model object in my ViewModel by creating an instance of it with the new keyword, and I also instantiated my DataService class in the VM. I know this creates a strong dependency. So: What's the best way to expose a Model object in a ViewModel? And what's the best way to use a DataService in a VM?
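
    One widely used way to remove both hard dependencies is constructor injection: the ViewModel declares the abstractions it needs, and something outside it (a composition root, a ViewModelLocator, or a DI container) supplies the concrete types. The sketch below is illustrative only — it reuses the question's IProductDataService, ProductDataService and ProductDataServiceMock names, and the composition-root comments are assumptions, not part of the original code:

      // Hedged sketch: constructor injection instead of newing up the service.
      public class ProductViewModel : WorkspaceViewModel
      {
          private readonly IProductDataService _dataService;

          // The caller decides which implementation to pass in:
          // the real service, a design-time mock, or a test double.
          public ProductViewModel(IProductDataService dataService)
          {
              if (dataService == null)
                  throw new ArgumentNullException("dataService");
              _dataService = dataService;
              StartApp();
          }

          // ...the rest of the ViewModel uses _dataService exactly as before...
      }

      // Hypothetical composition root (e.g. App.xaml.cs or a ViewModelLocator):
      // var vm = new ProductViewModel(new ProductDataService());
      // At design time or in unit tests:
      // var designVm = new ProductViewModel(new ProductDataServiceMock());

    The same idea can be applied to the model itself: rather than the ViewModel newing up TblProduct, the data service (or a small factory) can hand one over, which keeps the EF-generated type out of the ViewModel's construction logic.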

    Read the article

  • Why JSF Matters (to You)

    - by reza_rahman
          "Those who have knowledge, don’t predict. Those who predict, don’t have knowledge."                                                                                                    – Lao Tzu You may have noticed Thoughtworks recently crowned the likes AngularJS, etc imminent successors to server-side web frameworks. They apparently also deemed it necessary to single out JSF for righteous scorn. I have to say as I was reading the analysis I couldn't help but remember they also promptly jumped on the Ruby, Rails, Clojure, etc bandwagon a good few years ago seemingly similarly crowing these dynamic languages imminent successors to Java. I remember thinking then as I do now whether the folks at Thoughtworks are really that much smarter than me or if they are simply more prone to the Hipster buzz of the day. I'll let you make the final call on that one. I also noticed mention of "J2EE" in the context of JSF and had to wonder how up-to-date or knowledgeable the person writing the analysis actually was given that the term was basically retired almost a decade ago. There's one thing that I am absolutely sure about though - as a long time pretty happy user of JSF, I had no choice but to speak up on what I believe JSF offers. If you feel the same way, I would encourage you to support the team behind JSF whose hard work you may have benefited from over the years. True to his outspoken character PrimeFaces lead Cagatay Civici certainly did not mince words making the case for the JSF ecosystem - his excellent write-up is well worth a read. He specifically pointed out the practical problems in going whole hog with bare metal JavaScript, CSS, HTML for many development teams. I'll admit I had to smile when I read his closing sentence as well as the rather cheerful comments to the post from actual current JSF/PrimeFaces users that are apparently supposed to be on a gloomy death march. In a similar vein, OmniFaces developer Arjan Tijms did a great job pointing out the fact that despite the extremely competitive server-side Java Web UI space, JSF seems to manage to always consistently come out in either the number one or number two spot over many years and many data sources - do give his well-written message in the JAX-RS user forum a careful read. I don't think it's really reasonable to expect this to be the case for so many years if JSF was not at least a capable if not outstanding technology. If fact if you've ever wondered, Oracle itself is one of the largest JSF users on the planet. As Oracle's Shay Shmeltzer explains in a recent JSF Central interview, many of Oracle's strategic products such as ADF, ADF Mobile and Fusion Applications itself is built on JSF. There are well over 3,000 active developers working on these codebases. I don't think anyone can think of a more compelling reason to make sure that a technology is as effective as possible for practical development under real world conditions. Standing on the shoulders of the above giants, I feel like I can be pretty brief in making my own case for JSF: JSF is a powerful abstraction that brings the original Smalltalk MVC pattern to web development. This means cutting down boilerplate code to the bare minimum such that you really can think of just writing your view markup and then simply wire up some properties and event handlers on a POJO. The best way to see what this really means is to compare JSF code for a pretty small case to other approaches. 
You should then multiply the additional work for the typical enterprise project to try to understand what the productivity trade-offs are. This is reason alone for me to personally never take any other approach seriously as my primary web UI solution unless it can match the sheer productivity of JSF. Thanks to JSF's focus on components from the ground-up JSF has an extremely strong ecosystem that includes projects like PrimeFaces, RichFaces, OmniFaces, ICEFaces and of course ADF Faces/Mobile. These component libraries taken together constitute perhaps the largest widget set ever developed and optimized for a single web UI technology. To begin to grasp what this really means, just briefly browse the excellent PrimeFaces showcase and think about the fact that you can readily use the widgets on that showcase by just using some simple markup and knowing near to nothing about AJAX, JavaScript or CSS. JSF has the fair and legitimate advantage of being an open vendor neutral standard. This means that no single company, individual or insular clique controls JSF - openness, transparency, accountability, plurality, collaboration and inclusiveness is virtually guaranteed by the standards process itself. You have the option to choose between compatible implementations, escape any form of lock-in or even create your own compatible implementation! As you might gather from the quote at the top of the post, I am not a fan of crystal ball gazing and certainly don't want to engage in it myself. Who knows? However far-fetched it may seem maybe AngularJS is the only future we all have after all. If that is the case, so be it. Unlike what you might have been told, Java EE is about choice at heart and it can certainly work extremely well as a back-end for AngularJS. Likewise, you are also most certainly not limited to just JSF for working with Java EE - you have a rich set of choices like Struts 2, Vaadin, Errai, VRaptor 4, Wicket or perhaps even the new action-oriented web framework being considered for Java EE 8 based on the work in Jersey MVC... Please note that any views expressed here are my own only and certainly does not reflect the position of Oracle as a company.
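
    To make the boilerplate claim concrete, here is roughly what a trivial JSF 2 page and its backing bean look like. This is a minimal, hypothetical sketch — the greeting.xhtml page and GreetingBean names are invented for illustration:

      <!-- greeting.xhtml: the view is plain markup bound to a POJO by EL expressions -->
      <html xmlns="http://www.w3.org/1999/xhtml"
            xmlns:h="http://java.sun.com/jsf/html">
      <h:body>
        <h:form>
          <h:inputText value="#{greetingBean.name}"/>
          <h:commandButton value="Greet" action="#{greetingBean.greet}"/>
          <h:outputText value="#{greetingBean.message}"/>
        </h:form>
      </h:body>
      </html>

      // The backing bean: no servlet plumbing, just properties and an event handler.
      import javax.enterprise.context.RequestScoped;
      import javax.inject.Named;

      @Named @RequestScoped
      public class GreetingBean {
          private String name;
          private String message;
          public void greet() { message = "Hello, " + name + "!"; }
          public String getName() { return name; }
          public void setName(String name) { this.name = name; }
          public String getMessage() { return message; }
      }

    That is the entire round trip: no request parsing, no manual wiring of HTTP to the object model.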

    Read the article

  • AbstractMethodError on org.apache.xalan.processor.TransformerFactoryImpl

    - by JBristow
    With the following code: private Document transformDoc(Source source) throws TransformerException, IOException { Transformer xslTransformer = TransformerFactory.newInstance().newTransformer(new StreamSource(pdfTransformXslt.getInputStream())); xslTransformer.setParameter("http://apache.org/xml/features/nonvalidating/load-external-dtd", false); xslTransformer.setParameter("http://xml.org/sax/features/validation", false); JDOMResult result = new JDOMResult(); xslTransformer.transform(source, result); return result.getDocument(); } I'm getting the following error: java.lang.AbstractMethodError: org.apache.xalan.processor.TransformerFactoryImpl.setFeature(Ljava/lang/String;Z)V Why is this? Here's my Maven dependency tree: ------------------------------------------------------------------------ Building mc-hub-batch task-segment: [dependency:tree] ------------------------------------------------------------------------ snapshot com.billmelater:mc-test-support:2.0.0.11-SNAPSHOT: checking for updates from repository.jboss.org [dependency:tree {execution: default-cli}] com.billmelater:mc-hub-batch:jar:2.0.0.11-SNAPSHOT +- com.billmelater:mc-hub-core:jar:2.0.0.11-SNAPSHOT:compile | +- commons-lang:commons-lang:jar:2.4:compile | +- commons-collections:commons-collections:jar:3.2.1:compile | +- commons-beanutils:commons-beanutils:jar:1.8.0:compile | +- commons-digester:commons-digester:jar:2.0:compile | | +- (commons-beanutils:commons-beanutils:jar:1.8.0:compile - omitted for duplicate) | | \- (commons-logging:commons-logging:jar:1.1.1:compile - version managed from 1.0.4; omitted for duplicate) | \- (org.springframework.batch:spring-batch-core:jar:2.0.2.RELEASE:compile - omitted for duplicate) +- com.billmelater:mc-test-support:jar:2.0.0.11-SNAPSHOT:test | +- (com.billmelater:mc-hub-core:jar:2.0.0.11-SNAPSHOT:test - omitted for duplicate) | +- (org.springframework:spring:jar:2.5.6:test - omitted for duplicate) | +- org.springframework:spring-jdbc:jar:2.5.6.SEC01:test | | +- (commons-logging:commons-logging:jar:1.1.1:test - omitted for duplicate) | | +- (org.springframework:spring-beans:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6) | | +- (org.springframework:spring-context:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6) | | +- (org.springframework:spring-core:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6) | | \- (org.springframework:spring-tx:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6) | +- (org.dbunit:dbunit:jar:2.4.5:test - omitted for duplicate) | +- (log4j:log4j:jar:1.2.15:test - omitted for duplicate) | +- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.5.8; scope updated from test; omitted for duplicate) | +- (org.slf4j:slf4j-log4j12:jar:1.5.6:test - omitted for duplicate) | +- org.jboss.seam:jboss-seam:jar:2.2.0.GA:test | | +- xstream:xstream:jar:1.1.3:test | | +- (xpp3:xpp3_min:jar:1.1.3.4.O:compile - scope updated from test; omitted for duplicate) | | \- org.jboss.el:jboss-el:jar:1.0_02.CR4:test | +- (org.testng:testng:jar:jdk15:5.8:test - omitted for duplicate) | +- (org.hibernate:hibernate-core:jar:3.3.2.GA:test - version managed from 3.3.0.SP1; omitted for duplicate) | +- org.hibernate:hibernate-entitymanager:jar:3.4.0.GA:test | | +- (org.hibernate:ejb3-persistence:jar:1.0.2.GA:test - omitted for duplicate) | | +- (org.hibernate:hibernate-commons-annotations:jar:3.1.0.GA:test - omitted for duplicate) | | +- (org.hibernate:hibernate-annotations:jar:3.4.0.GA:test - omitted for duplicate) | | +- 
(org.hibernate:hibernate-core:jar:3.3.2.GA:test - version managed from 3.3.0.SP1; omitted for duplicate) | | +- (org.slf4j:slf4j-api:jar:1.5.6:test - version managed from 1.4.2; omitted for duplicate) | | +- (dom4j:dom4j:jar:1.6.1-jboss:test - version managed from 1.6.1; omitted for duplicate) | | +- (javax.transaction:jta:jar:1.0.1B:test - version managed from 1.1; omitted for duplicate) | | \- javassist:javassist:jar:3.4.GA:test | +- (org.hibernate:hibernate-validator:jar:3.1.0.GA:test - omitted for duplicate) | +- (org.apache.velocity:velocity:jar:1.6.2:test - omitted for duplicate) | \- (ojdbc:ojdbc:jar:14:test - omitted for duplicate) +- org.springframework:spring:jar:2.5.6:compile +- org.springframework.batch:spring-batch-core:jar:2.0.2.RELEASE:compile | +- org.springframework.batch:spring-batch-infrastructure:jar:2.0.2.RELEASE:compile | | +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | | +- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate) | | \- (stax:stax:jar:1.2.0:compile - omitted for duplicate) | +- org.aspectj:aspectjrt:jar:1.5.4:compile | +- org.aspectj:aspectjweaver:jar:1.5.4:compile | +- com.thoughtworks.xstream:xstream:jar:1.3:compile | | \- xpp3:xpp3_min:jar:1.1.4c:compile | +- org.codehaus.jettison:jettison:jar:1.0:compile | +- org.springframework:spring-aop:jar:2.5.6:compile | | +- aopalliance:aopalliance:jar:1.0:compile | | +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | | +- (org.springframework:spring-beans:jar:2.5.6:compile - omitted for duplicate) | | \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate) | +- org.springframework:spring-beans:jar:2.5.6:compile | | +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | | \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate) | +- org.springframework:spring-context:jar:2.5.6:compile | | +- (aopalliance:aopalliance:jar:1.0:compile - omitted for duplicate) | | +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | | +- (org.springframework:spring-beans:jar:2.5.6:compile - omitted for duplicate) | | \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate) | +- org.springframework:spring-core:jar:2.5.6:compile | | \- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | +- org.springframework:spring-tx:jar:2.5.6:compile | | +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | | +- (org.springframework:spring-beans:jar:2.5.6:compile - omitted for duplicate) | | +- (org.springframework:spring-context:jar:2.5.6:compile - omitted for duplicate) | | \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate) | \- stax:stax:jar:1.2.0:compile | \- stax:stax-api:jar:1.0.1:compile +- commons-dbcp:commons-dbcp:jar:1.2.2:compile | \- commons-pool:commons-pool:jar:1.3:compile +- org.hibernate:hibernate-core:jar:3.3.2.GA:compile | +- antlr:antlr:jar:2.7.7:compile (version managed from 2.7.6) | +- dom4j:dom4j:jar:1.6.1-jboss:compile (version managed from 1.6.1) | +- javax.transaction:jta:jar:1.0.1B:compile (version managed from 1.1) | \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate) +- org.hibernate:hibernate-validator:jar:3.1.0.GA:compile | +- (org.hibernate:hibernate-core:jar:3.3.2.GA:compile - version managed from 3.3.0.SP1; omitted for duplicate) | +- 
org.hibernate:hibernate-commons-annotations:jar:3.1.0.GA:compile | | \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate) | \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate) +- org.hibernate:hibernate-annotations:jar:3.4.0.GA:compile | +- org.hibernate:ejb3-persistence:jar:1.0.2.GA:compile | +- (org.hibernate:hibernate-commons-annotations:jar:3.1.0.GA:compile - omitted for duplicate) | +- (org.hibernate:hibernate-core:jar:3.3.2.GA:compile - version managed from 3.3.0.SP1; omitted for duplicate) | +- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate) | \- (dom4j:dom4j:jar:1.6.1-jboss:compile - version managed from 1.6.1; omitted for duplicate) +- ojdbc:ojdbc:jar:14:compile +- org.slf4j:slf4j-api:jar:1.5.6:compile +- org.slf4j:slf4j-log4j12:jar:1.5.6:compile | \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate) +- log4j:log4j:jar:1.2.15:compile +- org.apache.velocity:velocity:jar:1.6.2:compile | +- (commons-collections:commons-collections:jar:3.2.1:compile - omitted for duplicate) | +- (commons-lang:commons-lang:jar:2.4:compile - omitted for duplicate) | \- oro:oro:jar:2.0.8:compile +- org.testng:testng:jar:jdk15:5.8:test +- org.dbunit:dbunit:jar:2.4.5:test | +- junit:junit:jar:4.7:test (version managed from 3.8.2) | +- (org.slf4j:slf4j-api:jar:1.5.6:test - version managed from 1.4.2; omitted for duplicate) | \- (commons-collections:commons-collections:jar:3.2.1:test - omitted for duplicate) +- hsqldb:hsqldb:jar:1.8.0.7:test +- jboss:javassist:jar:3.3.ga:provided +- org.jdom:jdom:jar:1.1:compile +- jaxen:jaxen:jar:1.1.1:provided +- org.apache.xmlgraphics:fop:jar:0.95:compile | +- (org.apache.xmlgraphics:xmlgraphics-commons:jar:1.3.1:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for cycle) | | +- org.apache.xmlgraphics:batik-anim:jar:1.7:compile | | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | | \- (org.apache.xmlgraphics:batik-parser:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-css:jar:1.7:compile | | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | | \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-dom:jar:1.7:compile | | | +- (org.apache.xmlgraphics:batik-css:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-xml:jar:1.7:compile - omitted for duplicate) | | | +- (xalan:xalan:jar:2.6.0:compile - omitted for duplicate) | | | \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-parser:jar:1.7:compile | | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for 
duplicate) | | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | | \- (org.apache.xmlgraphics:batik-xml:jar:1.7:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-util:jar:1.7:compile | | \- xml-apis:xml-apis-ext:jar:1.3.04:compile | +- org.apache.xmlgraphics:batik-bridge:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-anim:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-css:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for cycle) | | +- (org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-parser:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for cycle) | | +- org.apache.xmlgraphics:batik-script:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-xml:jar:1.7:compile | | | \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | +- xalan:xalan:jar:2.6.0:compile | | \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile | | \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-gvt:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for cycle) | | +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for duplicate) | | \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-transcoder:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-svggen:jar:1.7:compile | | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | | \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-xml:jar:1.7:compile - omitted for duplicate) | | \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-extension:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-css:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | +- 
(org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-parser:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-ext:jar:1.7:compile | +- commons-logging:commons-logging:jar:1.1.1:compile | +- commons-io:commons-io:jar:1.3.1:compile | \- org.apache.avalon.framework:avalon-framework-api:jar:4.3.1:compile +- org.apache.xmlgraphics:xmlgraphics-commons:jar:1.3.1:compile | +- (commons-io:commons-io:jar:1.3.1:compile - omitted for duplicate) | \- (commons-logging:commons-logging:jar:1.1.1:compile - version managed from 1.0.4; omitted for duplicate) +- org.easymock:easymock:jar:2.0:test \- org.easymock:easymockclassextension:jar:2.2:test +- (org.easymock:easymock:jar:2.2:test - omitted for conflict with 2.0) \- cglib:cglib-nodep:jar:2.2:test (version managed from 2.1_3) Can anyone tell me how to clear out intellij's classpath too?
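
    For what it's worth, this error pattern usually means an old Xalan is winning on the classpath: the xalan:xalan:jar:2.6.0 pulled in transitively via fop/batik-bridge (visible in the tree above) predates JAXP 1.3, so its TransformerFactoryImpl never implements the abstract setFeature(String, boolean) that the Java 5+ javax.xml.transform.TransformerFactory declares — and any call to it raises AbstractMethodError. (Incidentally, the snippet passes feature URIs to Transformer.setParameter, which just defines unused stylesheet parameters; features belong on the factory via setFeature.) A hedged sketch of one common fix, excluding the stale Xalan where it enters the tree, or forcing a 2.7.x version that does implement setFeature:

      <!-- Sketch only: exclude the JAXP-1.2-era Xalan that fop drags in -->
      <dependency>
        <groupId>org.apache.xmlgraphics</groupId>
        <artifactId>fop</artifactId>
        <version>0.95</version>
        <exclusions>
          <exclusion>
            <groupId>xalan</groupId>
            <artifactId>xalan</artifactId>
          </exclusion>
        </exclusions>
      </dependency>

      <!-- ...or pin a newer Xalan that implements the JAXP 1.3 methods -->
      <dependency>
        <groupId>xalan</groupId>
        <artifactId>xalan</artifactId>
        <version>2.7.1</version>
      </dependency>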

    Read the article

  • Xapian gem failed to install on Mac OS X Snow Leopard + macports

    - by goodwill
    I have installed xapian-core + xapian-bindings with macports on snow leopard, then trying to install xapian gem fails: Building native extensions. This could take a while... ERROR: Error installing xapian: ERROR: Failed to build gem native extension. /opt/ruby-enterprise/bin/ruby extconf.rb ./configure --with-ruby checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... ./install-sh -c -d checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... yes checking how to create a ustar tar archive... gnutar checking build system type... i386-apple-darwin10.3.0 checking host system type... i386-apple-darwin10.3.0 checking for style of include used by make... GNU checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking for a sed that does not truncate output... /usr/bin/sed checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for ld used by gcc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm checking whether ln -s works... yes checking how to recognize dependent libraries... pass_all checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking for g++... g++ checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking dependency style of g++... gcc3 checking how to run the C++ preprocessor... g++ -E checking the maximum length of command line arguments... 196608 checking command to parse /usr/bin/nm output from gcc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking for dsymutil... dsymutil checking for nmedit... nmedit checking for -single_module linker flag... yes checking for -exported_symbols_list linker flag... yes checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fno-common checking if gcc PIC flag -fno-common works... yes checking if gcc static flag -static works... no checking if gcc supports -c -o file.o... yes checking whether the gcc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes checking dynamic linker characteristics... darwin10.3.0 dyld checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... 
yes checking whether to build shared libraries... yes checking whether to build static libraries... no checking whether we are using the GNU C++ compiler... (cached) yes checking whether g++ accepts -g... (cached) yes checking dependency style of g++... (cached) gcc3 checking for xapian-config... /opt/local/bin/xapian-config checking /opt/local/bin/xapian-config works... yes checking whether to enable maintainer-specific portions of Makefiles... no checking for ruby1.8... no checking for ruby... /opt/ruby-enterprise/bin/ruby checking /opt/ruby-enterprise/bin/ruby version... ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0], MBARI 0x6770, Ruby Enterprise Edition 2009.10 checking for /opt/ruby-enterprise/lib/ruby/1.8/i686-darwin10.2.0/ruby.h... yes checking ruby/io.h... no checking whether to use -fvisibility=hidden... yes configure: creating ./config.status config.status: creating Makefile config.status: creating xapian-version.h config.status: creating python/Makefile config.status: creating python/docs/Makefile config.status: creating php/Makefile config.status: creating php/docs/Makefile config.status: creating java/Makefile config.status: creating java/native/Makefile config.status: creating java/org/xapian/Makefile config.status: creating java/org/xapian/errors/Makefile config.status: creating java/org/xapian/examples/Makefile config.status: creating java-swig/Makefile config.status: creating tcl8/Makefile config.status: creating tcl8/docs/Makefile config.status: creating tcl8/pkgIndex.tcl config.status: creating csharp/Makefile config.status: creating csharp/docs/Makefile config.status: creating csharp/AssemblyInfo.cs config.status: creating ruby/Makefile config.status: creating ruby/docs/Makefile config.status: creating xapian-bindings.spec config.status: creating python/generate-python-exceptions config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands *** Building bindings for languages: ruby make make all-recursive Making all in ruby make all-recursive Making all in docs make all-am make[5]: Nothing to be done for `all-am'. /bin/sh ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I.. -I/opt/ruby-enterprise/lib/ruby/1.8/i686-darwin10.2.0 -I/opt/ruby-enterprise/lib/ruby/1.8/i686-darwin10.2.0 -fno-strict-aliasing -Wall -Wno-unused -Wno-uninitialized -fvisibility=hidden -I/opt/local/include -g -O2 -MT xapian_wrap.lo -MD -MP -MF .deps/xapian_wrap.Tpo -c -o xapian_wrap.lo xapian_wrap.cc ../libtool: line 393: /bin/sed: No such file or directory ../libtool: line 393: /bin/sed: No such file or directory ../libtool: line 792: /bin/sed: No such file or directory : ignoring unknown tag ../libtool: line 792: /bin/sed: No such file or directory *** Warning: inferring the mode of operation is deprecated. *** Future versions of Libtool will require --mode=MODE be specified. 
../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1156: /bin/sed: No such file or directory : compile: cannot determine name of library object from `' make[4]: *** [xapian_wrap.lo] Error 1 make[3]: *** [all-recursive] Error 1 make[2]: *** [all] Error 2 make[1]: *** [all-recursive] Error 1 make: *** [all] Error 2 extconf.rb:3:in `system!': unhandled exception from extconf.rb:6 Gem files will remain installed in /opt/ruby-enterprise/lib/ruby/gems/1.8/gems/xapian-1.0.15 for inspection. Results logged to /opt/ruby-enterprise/lib/ruby/gems/1.8/gems/xapian-1.0.15/gem_make.out Any idea pal?
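
    The repeated /bin/sed: No such file or directory lines are the actual failure: on OS X, sed lives in /usr/bin/sed, but the libtool script driving this build looks for it at /bin/sed. An untested sketch of two workarounds, assuming the build honours the SED environment variable when it regenerates its scripts:

      # Point the build at the sed that actually exists, then retry the gem:
      export SED=/usr/bin/sed
      gem install xapian

      # Blunter fallback: give the script the path it is hard-coded to expect.
      sudo ln -s /usr/bin/sed /bin/sed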

    Read the article

  • WinSat command line closes too fast

    - by Rob Cowell
    I'm trying to do some analysis under Windows 7 as to why I can't get a Windows Experience Index (WEI) rating due to disk issues. To this end, I'm trying to run winsat from the command line with: winsat disk -seq -read -drive c and winsat disk -ran -write -n 2 but the command window closes too quickly for me to read the results. I've tried opening a separate cmd window to run it in, but it still insists on launching its own window, which closes straight away. Any idea how I can see the output?
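
    winsat has to run elevated; started from a non-elevated prompt (or the Run box), Windows spawns its own console for it, which closes the moment the assessment finishes. Two hedged options, assuming you first open a command prompt with "Run as administrator" (the log file name below is an invented example):

      rem Keep the spawned console open after the command finishes:
      cmd /k winsat disk -seq -read -drive c

      rem Or capture the results to a file you can inspect afterwards:
      winsat disk -ran -write -n 2 > "%USERPROFILE%\winsat-results.txt" 2>&1

    Formal assessments also leave XML result files under %WINDIR%\Performance\WinSAT\DataStore, which can be read after the window has closed.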

    Read the article

  • How to resolve 0xc000007b error opening Adobe After Effects?

    - by user225170
    I have installed Adobe CS6 on Windows 8 and get the error "0xc000007b" every time I open Adobe After Effects. All other Adobe software, including Photoshop and Premiere Pro, works perfectly. I have looked online extensively but have not found any solutions. What can I try to resolve this error? OS: Windows 8 (64 bit) - CPU: Intel Core i3-3110M 2.40GHz - RAM: 6GB DDR3 - FX: AMD Radeon HD 7670M When I open Adobe After Effects CS6 with Dependency Walker, this message appears: "Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module. Error: Modules with different CPU types were found. Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module." What can I do to resolve this error?

    Read the article

  • Install Glibc2 using Yum

    - by Nerrve
    I'm trying to install glibc2 version 2.11, which is needed for OpenOffice 3.4 (https://issues.apache.org/ooo/show_bug.cgi?id=119393), but I can't seem to find the dependency with yum. I already have the following dependencies installed: glibc.i686 2.5-49.el5_5.7 installed glibc.x86_64 2.5-49.el5_5.7 installed glibc-common.x86_64 2.5-49.el5_5.7 installed glibc-devel.x86_64 2.5-49.el5_5.7 installed glibc-headers.x86_64 2.5-49.el5_5.7 installed libc-client.x86_64 2004g-2.2.1 installed and glibc.i686 2.5-81.el5_8.2 updates glibc.x86_64 2.5-81.el5_8.2 updates glibc-common.x86_64 2.5-81.el5_8.2 updates glibc-devel.i386 2.5-81.el5_8.2 updates glibc-devel.x86_64 2.5-81.el5_8.2 updates glibc-headers.x86_64 2.5-81.el5_8.2 updates glibc-utils.x86_64 2.5-81.el5_8.2 updates I ran the following to get the version, but it reports something different: [root@***** /]# ./lib64/libc.so.6 GNU C Library stable release version 2.5, by Roland McGrath et al. Can someone please help? Thanks! EDIT: I'm using CentOS, kernel 2.6.18-128.1.10.el5

    Read the article

  • Photoshop CS6 beta - functionality

    - by Biker John
    Is the Photoshop CS6 beta full-featured, or is it just a preview of SOME of the new functions? I want to make sure before I remove my CS5 version. Two statements on the official CS6 beta download page contradict each other and are making me unsure: Explore Photoshop CS6 beta for a sneak preview of some of the incredible performance enhancements, imaging magic, and creativity tools we are working on... AND Photoshop CS6 beta includes all the features in Photoshop CS6 and Photoshop CS6 Extended. Take this opportunity to try out the 3D image editing and quantitative image analysis capabilities of Photoshop Extended*, but note that—while these features will be included in the shipping version of Photoshop CS6 Extended—they will not be included in the shipping version of Photoshop CS6...

    Read the article

  • How can I get metrics such as incoming and outcoming traffic with Apache servers?

    - by hhh
    Suppose a network consisting of hubs A, B, C, D ... and X. I am looking for ways to visualize how users use the network - incoming traffic, outgoing traffic and other metrics. In the Apache logs I can see errors when something fails, but I have no realistic picture of the system in general, i.e. how it actually behaves. I am looking for some sort of flow analysis: pure data from which to create a graph, and then metrics with which to analyze the graph - I do not even know the right metrics yet, perhaps some dispersion metric. My goal is to create some sort of objective way to judge quality.
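
    As a starting point for the "pure data to create some graph" part, the standard access log already records per-client flows. A minimal sketch, assuming Apache's default common/combined log format (field 1 is the client, field 10 the response size in bytes) and a typical log path — adjust both for your setup:

      # Per-client request counts and bytes sent, sorted by traffic volume:
      awk '{ req[$1]++; bytes[$1] += $10 }
           END { for (c in req) print c, req[c], bytes[c] }' \
        /var/log/apache2/access.log | sort -k3 -rn | head

    Each output row (client, request count, bytes out) can be fed straight into a graphing tool; ready-made analyzers such as AWStats or GoAccess compute similar per-host metrics if you would rather not roll your own.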

    Read the article

< Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >