Search Results

Search found 6638 results on 266 pages for 'boost range'.


  • Unable to boot Ubuntu 13.10 (nVidia GTX 770m and Intel HD 4600)

    - by Raziel Gonzalez
    Ever since I bought this laptop I've been trying to install Ubuntu on it. It came with W8 preinstalled. Up to this point, I've been able to boot in UEFI mode with a black screen. I can tell it's trying to use the nVidia card (there's a led on the computer, depending on the color you can tell which GPU is using) and if I press crtl+alt+F1 I can go to console mode. Taking this advantage I tried to install bumblebee and after a successful install the led that indicates which GPU is being used change, indicating that it switched to the Intel HD 4600 graphics. After the installation I tried to initiate the graphic interface (startx) with no success. Xorg.0.log shows the error: [ 3706.779] X.Org X Server 1.14.3 Release Date: 2013-09-12 [ 3706.782] X Protocol Version 11, Revision 0 [ 3706.783] Build Operating System: Linux 3.2.0-37-generic x86_64 Ubuntu [ 3706.783] Current Operating System: Linux ubuntu 3.11.0-12-generic #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013 x86_64 [ 3706.783] Kernel command line: BOOT_IMAGE=/casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper nomodeset -- [ 3706.785] Build Date: 15 October 2013 09:23:37AM [ 3706.786] xorg-server 2:1.14.3-3ubuntu2 (For technical support please see http://www.ubuntu.com/support) [ 3706.786] Current version of pixman: 0.30.2 [ 3706.788] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 3706.788] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 3706.791] (==) Log file: "/var/log/Xorg.0.log", Time: Sat Nov 2 12:28:52 2013 [ 3706.792] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 3706.792] (==) No Layout section. Using the first Screen section. [ 3706.792] (==) No screen section available. Using defaults. [ 3706.792] (**) |-->Screen "Default Screen Section" (0) [ 3706.792] (**) | |-->Monitor "<default monitor>" [ 3706.792] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 3706.792] (==) Automatically adding devices [ 3706.792] (==) Automatically enabling devices [ 3706.792] (==) Automatically adding GPU devices [ 3706.792] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 3706.792] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" [ 3706.792] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 3706.792] (II) Loader magic: 0x7ff680918d20 [ 3706.792] (II) Module ABI versions: [ 3706.792] X.Org ANSI C Emulation: 0.4 [ 3706.792] X.Org Video Driver: 14.1 [ 3706.792] X.Org XInput driver : 19.1 [ 3706.792] X.Org Server Extension : 7.0 [ 3706.793] (--) PCI:*(0:0:2:0) 8086:0416:1462:10e8 rev 6, Mem @ 0xf7400000/4194304, 0xb0000000/268435456, I/O @ 0x0000f000/64 [ 3706.793] (II) Open ACPI successful (/var/run/acpid.socket) [ 3706.794] Initializing built-in extension Generic Event Extension [ 3706.795] Initializing built-in extension SHAPE [ 3706.796] Initializing built-in extension MIT-SHM [ 3706.797] Initializing built-in extension XInputExtension [ 3706.797] Initializing built-in extension XTEST [ 3706.798] Initializing built-in extension BIG-REQUESTS [ 3706.799] Initializing built-in extension SYNC [ 3706.799] Initializing built-in extension XKEYBOARD [ 3706.800] Initializing built-in extension XC-MISC [ 3706.801] Initializing built-in extension SECURITY [ 3706.802] Initializing built-in extension XINERAMA [ 3706.802] Initializing built-in extension XFIXES [ 3706.803] Initializing built-in extension RENDER [ 3706.804] Initializing built-in extension RANDR [ 3706.804] Initializing built-in extension COMPOSITE [ 3706.805] Initializing built-in extension DAMAGE [ 3706.806] Initializing built-in extension MIT-SCREEN-SAVER [ 3706.806] Initializing built-in extension DOUBLE-BUFFER [ 3706.807] Initializing built-in extension RECORD [ 3706.807] Initializing built-in extension DPMS [ 3706.808] Initializing built-in extension X-Resource [ 3706.809] Initializing built-in extension XVideo [ 3706.809] Initializing built-in extension XVideo-MotionCompensation [ 3706.810] Initializing built-in extension SELinux [ 3706.811] Initializing built-in extension XFree86-VidModeExtension [ 3706.811] Initializing built-in extension XFree86-DGA [ 3706.812] Initializing built-in extension XFree86-DRI [ 3706.812] Initializing built-in extension DRI2 [ 3706.812] (II) "glx" will be loaded by default. [ 3706.812] (WW) "xmir" is not to be loaded by default. Skipping. 
[ 3706.812] (II) LoadModule: "dri2" [ 3706.812] (II) Module "dri2" already built-in [ 3706.812] (II) LoadModule: "glamoregl" [ 3706.813] (II) Loading /usr/lib/xorg/modules/libglamoregl.so [ 3706.813] (II) Module glamoregl: vendor="X.Org Foundation" [ 3706.813] compiled for 1.14.2.901, module version = 0.5.1 [ 3706.813] ABI class: X.Org ANSI C Emulation, version 0.4 [ 3706.813] (II) LoadModule: "glx" [ 3706.813] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 3706.813] (II) Module glx: vendor="X.Org Foundation" [ 3706.813] compiled for 1.14.3, module version = 1.0.0 [ 3706.813] ABI class: X.Org Server Extension, version 7.0 [ 3706.813] (==) AIGLX enabled [ 3706.814] Loading extension GLX [ 3706.814] (==) Matched intel as autoconfigured driver 0 [ 3706.814] (==) Matched vesa as autoconfigured driver 1 [ 3706.814] (==) Matched modesetting as autoconfigured driver 2 [ 3706.814] (==) Matched fbdev as autoconfigured driver 3 [ 3706.814] (==) Assigned the driver to the xf86ConfigLayout [ 3706.814] (II) LoadModule: "intel" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so [ 3706.814] (II) Module intel: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.3, module version = 2.99.904 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "vesa" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 3706.814] (II) Module vesa: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.1, module version = 2.3.2 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "modesetting" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 3706.814] (II) Module modesetting: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.1, module version = 0.8.0 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "fbdev" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 3706.815] (II) Module fbdev: vendor="X.Org Foundation" [ 3706.815] compiled for 1.14.1, module version = 0.4.3 [ 3706.815] Module class: X.Org Video Driver [ 3706.815] ABI class: X.Org Video Driver, version 14.1 [ 3706.815] (II) intel: Driver for Intel(R) Integrated Graphics Chipsets: i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G, 915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM, Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45, 4 Series, G45/G43, Q45/Q43, G41, B43, HD Graphics, HD Graphics 2000, HD Graphics 3000, HD Graphics 2500, HD Graphics 4000, HD Graphics P4000, HD Graphics 4600, HD Graphics 5000, HD Graphics P4600/P4700, Iris(TM) Graphics 5100, HD Graphics 4400, HD Graphics 4200, Iris(TM) Pro Graphics 5200 [ 3706.815] (II) VESA: driver for VESA chipsets: vesa [ 3706.815] (II) modesetting: Driver for Modesetting Kernel Drivers: kms [ 3706.815] (II) FBDEV: driver for framebuffer: fbdev [ 3706.815] (--) using VT number 7 [ 3706.819] (WW) Falling back to old probe method for modesetting [ 3706.819] (EE) open /dev/dri/card0: No such file or directory [ 3706.819] (WW) Falling back to old probe method for fbdev [ 3706.819] (II) Loading sub module "fbdevhw" [ 3706.819] (II) LoadModule: "fbdevhw" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 3706.819] (II) Module fbdevhw: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 0.0.2 [ 3706.819] ABI 
class: X.Org Video Driver, version 14.1 [ 3706.819] (II) Loading sub module "vbe" [ 3706.819] (II) LoadModule: "vbe" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libvbe.so [ 3706.819] (II) Module vbe: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 1.1.0 [ 3706.819] ABI class: X.Org Video Driver, version 14.1 [ 3706.819] (II) Loading sub module "int10" [ 3706.819] (II) LoadModule: "int10" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libint10.so [ 3706.819] (II) Module int10: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 1.0.0 [ 3706.819] ABI class: X.Org Video Driver, version 14.1 [ 3706.819] (II) VESA(0): initializing int10 [ 3706.820] (EE) VESA(0): V_BIOS address 0x0 out of range [ 3706.820] (II) UnloadModule: "vesa" [ 3706.820] (II) UnloadSubModule: "int10" [ 3706.820] (II) Unloading int10 [ 3706.820] (II) UnloadSubModule: "vbe" [ 3706.820] (II) Unloading vbe [ 3706.820] (EE) Screen(s) found, but none have a usable configuration. [ 3706.820] (EE) Fatal server error: [ 3706.820] (EE) no screens found(EE) [ 3706.820] (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help. [ 3706.820] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information. [ 3706.820] (EE) [ 3706.827] (EE) Server terminated with error (1). Closing log file. I also saved the dmesg output to see if it can be of any help. In order to be able to get to this stage I had to boot with the nomodeset option and remove quiet and splash. Has anyone else got this same error? Any guidance? I've tried other Linux distros and so far the only one that is able to boot is openSUSE 12.3 without any issues (but only when I switch to legacy mode instead of UEFI).
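    For reference, the nomodeset workaround used above can also be made persistent once an install completes, at least until the NVIDIA/Optimus driver situation is sorted out. A minimal sketch for a stock Ubuntu GRUB setup (the exact parameter list is an assumption; adjust as needed):

        sudo nano /etc/default/grub
        # change the default kernel parameters, for example:
        #   GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"
        sudo update-grub    # regenerates /boot/grub/grub.cfg
        sudo reboot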

    Read the article

  • Deterministic/Consistent Unique Masking

    - by Dinesh Rajasekharan-Oracle
    One of the key requirements while masking data in large databases or multi-database environments is to consistently mask some columns, i.e. for a given input the output should always be the same. At the same time the masked output should not be predictable. Deterministic masking also eliminates the enormous amount of time spent in identifying data relationships, i.e. parent and child relationships among columns defined in the application tables. In this blog post I will explain different ways of consistently masking data across databases using Oracle Data Masking and Subsetting. Readers of this post should have minimal knowledge of Oracle Enterprise Manager 12c, Application Data Modeling, and Data Masking concepts. For more information on these concepts, please refer to the Oracle Data Masking and Subsetting documentation. Oracle Data Masking and Subsetting 12c provides four methods with which users can consistently yet irreversibly mask their inputs: 1. Substitute 2. SQL Expression 3. Encrypt 4. User Defined Function SUBSTITUTE The Substitute masking format replaces the original value with a value from a pre-created database table. As the method uses a hash-based algorithm in the back end, the mappings are consistent. For example, consider DEPARTMENT_ID in the EMPLOYEES table being replaced with FAKE_DEPARTMENT_ID from FAKE_TABLE. The substitute masking transformation ensures that all occurrences of DEPARTMENT_ID, say ‘101’, will be replaced with ‘502’, provided the same substitution table and column are used, i.e. FAKE_TABLE.FAKE_DEPARTMENT_ID. The following screen shot shows the usage of the Substitute masking format within a masking definition: Note that the uniqueness of the masked value depends on the number of unique values in the substitution column, i.e. if the original table contains 50000 unique values, then for the masked output to be unique and deterministic the substitution column should also contain 50000 unique values, without which only consistency is maintained but not uniqueness. SQL EXPRESSION SQL Expression replaces an existing value with the output of a specified SQL expression. For example, while masking an EMPLOYEES table, suppose the EMAIL_ID of an employee has to be in the format EMPLOYEE’s [email protected], where FIRST_NAME and LAST_NAME are the actual column names of the EMPLOYEES table; then the corresponding SQL Expression will look like %FIRST_NAME%||’.’||%LAST_NAME%||’@COMPANY.COM’. The advantage of this technique is that if you are masking FIRST_NAME and LAST_NAME of the EMPLOYEES table, then the corresponding EMAIL_ID will be replaced accordingly by the masking scripts. One of the interesting aspects of SQL Expressions is that you can use sub-expressions, which means that you can write a nested SQL statement and use it as a SQL Expression to address complex masking business use cases. A SQL Expression can also be used to consistently replace a value with a hashed value using Oracle’s PL/SQL function ORA_HASH. The following SQL Expression will help in the previous example by replacing the DEPARTMENT_IDs with a hashed number: ORA_HASH(%DEPARTMENT_ID%, 1000) The following screen shot shows the usage of the SQL Expression masking format within the masking definition: ORA_HASH takes three arguments: 1. Expression, which can be of any data type except LONG, LOB, and user-defined types [nested table types are allowed]. In the above example I used the original value as the expression. 2. Number of hash buckets, which can be a number between 0 and 4294967295. The default value is 4294967295.
You can also correlate the number of hash buckets to a range of numbers. In the example above the bucket value is specified as 1000, so the end result will be a hashed number between 0 and 1000. 3. Seed, which can be any number and which decides the consistency, i.e. for a given seed value the output will always be the same. The default seed is 0. In the above SQL Expression a seed is not specified, so it defaults to 0. If you have to use a non-default seed then the function will look like ORA_HASH(%DEPARTMENT_ID%, 1000, 1234). The uniqueness depends on the input and the number of hash buckets used. However, as ORA_HASH uses a 32-bit algorithm, considering the birthday paradox or pigeonhole principle there is a 0.5 probability of collision after 2^32-1 unique values. ENCRYPT The Encrypt masking format uses a blend of the 3DES encryption algorithm, hashing, and a regular expression to produce a deterministic and unique masked output. The format of the masked output corresponds to the specified regular expression. As this technique uses a key [string] to encrypt the data, the same string can be used to decrypt the data. The key also acts as a seed to maintain consistent outputs for a given input. The following screen shot shows the usage of the Encrypt masking format within the masking definition: Regular expressions may look complex to first-time users, but you will soon realize that it’s a simple language. There are many resources on the internet, in the Oracle documentation, the Oracle Learning Library, and My Oracle Support on writing regular expressions; out of all of them, the following My Oracle Support document helped me get started with regular expressions: Oracle SQL Support for Regular Expressions [Video] (Doc ID 1369668.1) USER DEFINED FUNCTION [UDF] A User Defined Function, or UDF, gives users the flexibility to code their own masking logic in PL/SQL, which can be called from a masking definition. The standard format of a UDF in Oracle Data Masking and Subsetting is: Function udf_func (rowid varchar2, column_name varchar2, original_value varchar2) returns varchar2; Where • rowid is the row identifier of the column that needs to be masked • column_name is the name of the column that needs to be masked • original_value is the column value that needs to be masked You can achieve deterministic masking by using Oracle’s built-in hash functions like ORA_HASH, DBMS_CRYPTO.HASH (MD4, MD5), and DBMS_UTILITY.GET_HASH_VALUE. Please refer to the Oracle Database documentation for more information on the Oracle hash functions. For example, the following masking UDF generates deterministic, unique hexadecimal values for a given string input: CREATE OR REPLACE FUNCTION RD_DUX (rid varchar2, column_name varchar2, orig_val VARCHAR2) RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE IS stext varchar2 (26); no_of_characters number(2); BEGIN no_of_characters:=6; stext:=substr(RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(orig_val),1)),0,no_of_characters); RETURN stext; END; The uniqueness depends on the input, the length of the output string, and the number of bits used by the hash algorithm. In the above function the MD4 hash is used [denoted by argument 1 in the DBMS_CRYPTO.HASH call], which is a 128-bit algorithm that can produce 2^128-1 unique hashed values; however, this is limited by the length of the retained hash string, which is 6 hexadecimal characters here, so only 16^6 unique values will be generated. Also do not forget about the birthday paradox/pigeonhole principle mentioned earlier in this post.
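    To see the determinism described above without setting up a full masking definition, ORA_HASH can be tried directly from SQL*Plus or SQL Developer; the literal ‘101’ below stands in for a DEPARTMENT_ID value, and the bucket count and seed are simply the example values used in this post:

        -- Same expression, bucket count and seed always return the same bucket
        SELECT ORA_HASH('101', 1000, 1234) AS masked_dept_id FROM dual;
        -- A different seed gives a different (but still consistent) mapping
        SELECT ORA_HASH('101', 1000, 5678) AS masked_dept_id FROM dual;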
Another example is to consistently replace characters or numbers while preserving the length and special characters, as shown below: CREATE OR REPLACE FUNCTION RD_DUS(rid varchar2,column_name varchar2,orig_val VARCHAR2) RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE IS stext varchar2(26); BEGIN DBMS_RANDOM.SEED(orig_val); stext:=TRANSLATE(orig_val,'ABCDEFGHIJKLMNOPQRSTUVWXYZ',DBMS_RANDOM.STRING('U',26)); stext:=TRANSLATE(stext,'abcdefghijklmnopqrstuvwxyz',DBMS_RANDOM.STRING('L',26)); stext:=TRANSLATE(stext,'0123456789',to_char(DBMS_RANDOM.VALUE(1,9))); stext:=REPLACE(stext,'.','0'); RETURN stext; END; The following screen shot shows the usage of a UDF within a masking definition: To summarize, Oracle Data Masking and Subsetting helps you to consistently mask data across databases using one or all of the methods described in this post. It saves the hassle of identifying the parent-child relationships defined in the application tables. Happy Masking
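    To sanity-check a UDF like RD_DUS outside of a masking job, it can be called directly; the ROWID and column-name arguments are simply passed through by the masking engine, so placeholder values are fine, and the sample input below is made up:

        -- The same input always yields the same masked output,
        -- with length, case and special characters preserved
        SELECT RD_DUS(NULL, 'PHONE_NUMBER', '080-555-1234') AS masked_1,
               RD_DUS(NULL, 'PHONE_NUMBER', '080-555-1234') AS masked_2
        FROM dual;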

    Read the article

  • Vectorization of matlab code for faster execution

    - by user3237134
    My code works in the following manner: 1.First, it obtains several images from the training set 2.After loading these images, we find the normalized faces,mean face and perform several calculation. 3.Next, we ask for the name of an image we want to recognize 4.We then project the input image into the eigenspace, and based on the difference from the eigenfaces we make a decision. 5.Depending on eigen weight vector for each input image we make clusters using kmeans command. Source code i tried: clear all close all clc % number of images on your training set. M=1200; %Chosen std and mean. %It can be any number that it is close to the std and mean of most of the images. um=60; ustd=32; %read and show images(bmp); S=[]; %img matrix for i=1:M str=strcat(int2str(i),'.jpg'); %concatenates two strings that form the name of the image eval('img=imread(str);'); [irow icol d]=size(img); % get the number of rows (N1) and columns (N2) temp=reshape(permute(img,[2,1,3]),[irow*icol,d]); %creates a (N1*N2)x1 matrix S=[S temp]; %X is a N1*N2xM matrix after finishing the sequence %this is our S end %Here we change the mean and std of all images. We normalize all images. %This is done to reduce the error due to lighting conditions. for i=1:size(S,2) temp=double(S(:,i)); m=mean(temp); st=std(temp); S(:,i)=(temp-m)*ustd/st+um; end %show normalized images for i=1:M str=strcat(int2str(i),'.jpg'); img=reshape(S(:,i),icol,irow); img=img'; end %mean image; m=mean(S,2); %obtains the mean of each row instead of each column tmimg=uint8(m); %converts to unsigned 8-bit integer. Values range from 0 to 255 img=reshape(tmimg,icol,irow); %takes the N1*N2x1 vector and creates a N2xN1 matrix img=img'; %creates a N1xN2 matrix by transposing the image. % Change image for manipulation dbx=[]; % A matrix for i=1:M temp=double(S(:,i)); dbx=[dbx temp]; end %Covariance matrix C=A'A, L=AA' A=dbx'; L=A*A'; % vv are the eigenvector for L % dd are the eigenvalue for both L=dbx'*dbx and C=dbx*dbx'; [vv dd]=eig(L); % Sort and eliminate those whose eigenvalue is zero v=[]; d=[]; for i=1:size(vv,2) if(dd(i,i)>1e-4) v=[v vv(:,i)]; d=[d dd(i,i)]; end end %sort, will return an ascending sequence [B index]=sort(d); ind=zeros(size(index)); dtemp=zeros(size(index)); vtemp=zeros(size(v)); len=length(index); for i=1:len dtemp(i)=B(len+1-i); ind(i)=len+1-index(i); vtemp(:,ind(i))=v(:,i); end d=dtemp; v=vtemp; %Normalization of eigenvectors for i=1:size(v,2) %access each column kk=v(:,i); temp=sqrt(sum(kk.^2)); v(:,i)=v(:,i)./temp; end %Eigenvectors of C matrix u=[]; for i=1:size(v,2) temp=sqrt(d(i)); u=[u (dbx*v(:,i))./temp]; end %Normalization of eigenvectors for i=1:size(u,2) kk=u(:,i); temp=sqrt(sum(kk.^2)); u(:,i)=u(:,i)./temp; end % show eigenfaces; for i=1:size(u,2) img=reshape(u(:,i),icol,irow); img=img'; img=histeq(img,255); end % Find the weight of each face in the training set. omega = []; for h=1:size(dbx,2) WW=[]; for i=1:size(u,2) t = u(:,i)'; WeightOfImage = dot(t,dbx(:,h)'); WW = [WW; WeightOfImage]; end omega = [omega WW]; end % Acquire new image % Note: the input image must have a bmp or jpg extension. % It should have the same size as the ones in your training set. 
% It should be placed on your desktop ed_min=[]; srcFiles = dir('G:\newdatabase\*.jpg'); % the folder in which your images exist for b = 1 : length(srcFiles) filename = strcat('G:\newdatabase\',srcFiles(b).name); Imgdata = imread(filename); InputImage=Imgdata; InImage=reshape(permute((double(InputImage)),[2,1,3]),[irow*icol,1]); temp=InImage; me=mean(temp); st=std(temp); temp=(temp-me)*ustd/st+um; NormImage = temp; Difference = temp-m; p = []; aa=size(u,2); for i = 1:aa pare = dot(NormImage,u(:,i)); p = [p; pare]; end InImWeight = []; for i=1:size(u,2) t = u(:,i)'; WeightOfInputImage = dot(t,Difference'); InImWeight = [InImWeight; WeightOfInputImage]; end noe=numel(InImWeight); % Find Euclidean distance e=[]; for i=1:size(omega,2) q = omega(:,i); DiffWeight = InImWeight-q; mag = norm(DiffWeight); e = [e mag]; end ed_min=[ed_min min(e)]; theta=6.0e+03; %disp(e) z(b,:)=InImWeight; end IDX = kmeans(z,5); clustercount=accumarray(IDX, ones(size(IDX))); disp(clustercount); Running time for 50 images: Elapsed time is 103.947573 seconds. QUESTIONS: 1. It works fine for M=50 (i.e. the training set contains 50 images) but not for M=1200 (i.e. the training set contains 1200 images). It does not show any error; there is simply no output. I waited for 10 minutes and still there was no output. I think it is stuck in an infinite loop. What is the problem? Where did I go wrong?
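    Since the question asks about vectorization, the usual first suspect in code like this is the grow-inside-a-loop pattern (S=[S temp], dbx=[dbx temp], omega=[omega WW]), which forces MATLAB to reallocate and copy the whole matrix on every iteration and scales badly as M grows. A minimal sketch of the preallocation and matrix-product idea, reusing the variable names from the question and assuming grayscale images of a known irow-by-icol size (illustrative only, not a drop-in rewrite):

        % Preallocate S instead of growing it inside the loop
        S = zeros(irow*icol, M);
        for i = 1:M
            img = imread(sprintf('%d.jpg', i));   % no eval/strcat needed
            S(:,i) = reshape(img', [], 1);        % transpose = permute([2 1]) for 2-D images
        end
        % Normalize every image at once (column-wise mean and std;
        % needs R2016b+ implicit expansion, otherwise wrap in bsxfun)
        S = (S - mean(S,1)) ./ std(S,0,1) * ustd + um;
        % Project all training faces onto the eigenfaces in one matrix multiply,
        % replacing the nested dot-product loops that build omega
        omega = u' * dbx;

    The same idea applies to the per-test-image loops: p, InImWeight and e can each be computed as a single matrix-vector product against u or omega instead of element-by-element dot calls.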

    Read the article

  • SQL Sentry First Impressions

    - by AjarnMark
    After struggling to defend my SQL Servers from a political attack recently, I realized that I needed better tools to back me up, and SQL Sentry is the leading candidate. A couple of weeks ago, seemingly from out of nowhere, complaints from the business users started coming in that one of the core internal applications was running dramatically slower than normal, and fingers were being pointed at the SQL Server.  Unfortunately, we don’t have a production DBA whose entire job is to monitor and maintain our SQL Servers.  The responsibility falls to me to do the best I can, investing only a small portion of my time, because there are so many other responsibilities to take care of, and our industry is still deep in recession.  I inherited these SQL Servers and have made significant improvements in process and procedure, but I had not yet made the time to take real baseline measurements or keep a really close eye on the performance.  Like many DBAs, I wrote several of my own tools and used the “built-in tools” like Profiler, PerfMon, and sp_who2 (did I mention most of our instances are SQL Server 2000?).  These have all served me well for in-the-moment troubleshooting and maintenance, but they really fell down on the job when I was called upon to “prove” that SQL Server performance was acceptable and more importantly had not degraded recently (i.e. historical comparisons).  I really didn’t have anything from a historical comparison perspective, but I was able to show that current performance was acceptable, and deflect attention back onto other components (which in fact turned out to be the real culprit). That experience dramatically illustrated the need for better monitoring tools.  Coincidentally, I had been talking recently to my boss about the mini nightmare of monitoring several critical and interdependent overnight jobs that operate on separate instances of SQL Server.  Among other tools, I had been using Idera’s SQL Job Manager which is a free tool and did a nice job of showing me job schedules and histories in a nice calendar view.  This worked fairly well, and for the money (did I mention it was free?) it couldn’t be beat.  But it is based on the stored job history in MSDB, and there were other performance problems that we ran into when we started changing the settings for how much job history to retain, in order to be able to look back a month or more in the calendar view.  Another coincidence (if you believe in such things) was that when we had some of those performance challenges, I posted a couple of questions to the #sqlhelp hashtag on Twitter and Greg Gonzalez (@SQLSensei) suggested I check out SQL Sentry’s Event Manager.  At the time, I just thought he worked there, but later found out that he founded the company.  When I took a quick look at the features & benefits, the one that really jumped out at me is Chaining and Queueing which sounded like it would really help with our “interdependent jobs on different servers” issue. I know that is a lot of background story and coincidences, but hopefully you have stuck with me so far, and now we have arrived at the point where last week I downloaded and installed the 30-day trial of the SQL Sentry Power Suite, which is Event Manager plus Performance Advisor.  And I must say that I really like what I see so far.  Here are a few highlights: Great Support.  I had two issues getting the trial setup and monitoring a handful of our servers.  
One of which was entirely my fault (missed a security setting in SQL 2008) and the other was mostly my fault (late change to some config settings that were apparently cached and did not get refreshed properly).  In both cases, the support staff at SQL Sentry were very responsive and rather quickly figured out what the cause and fix was for each of them.  This left me with a great impression of the company.  Kudos to them! Chaining and Queueing.  While I have not yet activated this feature, I am very excited about the possibilities.  We have jobs on three different instances of SQL Server that have to be run in a certain order, and each has to finish before the next can successfully begin, and I believe this feature will ensure just that.  It has been a real pain in the backside when one of those jobs runs just a little too long and does not finish before the job on another instance starts, thus triggering a chain reaction of either outright job failures, or worse, successful completion of completely invalid processing. Calendar View.  I really, really like the Event Manager calendar view where I can see all jobs and events across all instances and identify potential resource contention as well as windows of opportunity for maintenance activity.  Very well done, and based on Event Manager’s own database of accumulated historical information rather than querying the source instances every time. Performance Advisor Dashboard History View.  This view let’s me quickly select a date and time range and it displays graphs of key SQL Server and Windows metrics.  This is exactly the thing I needed to answer the “has performance changed recently” question at the beginning of this post. Reporting Services Subscription Jobs with Report Name.  This was a big and VERY pleasant surprise.  If you have ever looked at the list of SQL Server jobs that SQL Server Reporting Services creates when you make a Subscription, you will notice that they all have some sort of GUID as the name of the job.  This is really ugly, and really annoying because when you are just looking at the SQL Agent and Job Activity Monitor, if you see that Job X failed, you really do not have any indication in the name or the properties of the Job itself, as to what Report that was for.  But with SQL Sentry Event Manager you do.  The Jobs list in the Navigator pane in SQL Sentry, amazingly, displays the name of the Report that the Subscription Job is for.  And when you open it to see more details, it shows you the full Reporting Services path to that Report, so you can immediately track it down in the Report Manager in case you want to identify/notify the owner or edit the Subscription information.  I did not expect this at all, but I sure do like it.  HOORAY! That is just my first impressions from using the tools for a few days.  And I haven’t even gotten into how it showed me where I was completely mistaken about one aspect of my SQL Server disk configurations.  I’ll share that lesson in another blog entry.  But I have to say it again, the combination of Event Manager and Performance Advisor working together have really made me a fan.

    Read the article

  • 5 Best Practices - Laying the Foundation for WebCenter Projects

    - by Kellsey Ruppel
    Today’s guest post comes from Oracle WebCenter expert John Brunswick. John specializes in enterprise portal and content management solutions and actively contributes to the enterprise software business community and has authored a series of articles about optimal business involvement in portal, business process management and SOA development, examining ways of helping organizations move away from monolithic application development. We’re happy to have John join us today! Maximizing success with Oracle WebCenter portal requires a strategic understanding of Oracle WebCenter capabilities.  The following best practices enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration. They are inherently project agnostic, enabling a strong foundation for future growth and an expedient return on your investment in the platform.  If you are able to embrace even only a few of these practices, you will materially improve your deployment capability with WebCenter. 1. Segment Duties Around 3Cs - Content, Collaboration and Contextual Data "Agility" is one of the most common business benefits touted by modern web platforms.  It sounds good - who doesn't want to be Agile, right?  How exactly IT organizations go about supplying agility to their business counterparts often lacks definition - hamstrung by ambiguity. Ultimately, businesses want to benefit from reduced development time to deliver a solution to a particular constituent, which is augmented by as much self-service as possible to develop and manage the solution directly. All done in the absence of direct IT involvement. With Oracle WebCenter's depth in the areas of content management, pallet of native collaborative services, enterprise mashup capability and delegated administration, it is very possible to execute on this business vision at a technical level. To realize the benefits of the platform depth we can think of Oracle WebCenter's segmentation of duties along the lines of the 3 Cs - Content, Collaboration and Contextual Data.  All three of which can have their foundations developed by IT, then provisioned to the business on a per role basis. Content – Oracle WebCenter benefits from an extremely mature content repository.  Work flow, audit, notification, office integration and conversion capabilities for documents (HTML & PDF) make this a haven for business users to take control of content within external and internal portals, custom applications and web sites.  When deploying WebCenter portal take time to think of areas in which IT can provide the "harness" for content to reside, then allow the business to manage any content items within the site, using the content foundation to ensure compliance with business rules and process.  This frees IT to work on more mission critical challenges and allows the business to respond in short order to emerging market needs. Collaboration – Native collaborative services and WebCenter spaces are a perfect match for business users who are looking to enable document sharing, discussions and social networking.  The ability to deploy the services is granular and on the basis of roles scoped to given areas of the system - much like the first C “content”.  This enables business analysts to design the roles required and IT to provision with peace of mind that users leveraging the collaborative services are only able to do so in explicitly designated areas of a site. 
Bottom line - business will not need to wait for IT, but cannot go outside of the scope that has been defined based on their roles. Contextual Data – Collaborative capabilities are most powerful when included within the context of business data.  The ability to supply business users with decision shaping data that they can include in various parts of a portal or portals, just as they would with content items, is one of the most powerful aspects of Oracle WebCenter.  Imagine a discussion about new store selection for a retail chain that re-purposes existing information from business intelligence services about various potential locations and or custom backend systems - presenting it directly in the context of the discussion.  If there are some data sources that are preexisting in your enterprise take a look at how they can be made into discrete offerings within the portal, then scoped to given business user roles for inclusion within collaborative activities. 2. Think Generically, Execute Specifically Constructs.  Anyone who has spent much time around me knows that I am obsessed with this word.  Why? Because Constructs offer immense power - more than APIs, Web Services or other technical capability. Constructs offer organizations the ability to leverage a platform's native characteristics to offer substantial business functionality - without writing code.  This concept becomes more powerful with the additional understanding of the concepts from the platform that an organization learns over time.  Let's take a look at an example of where an Oracle WebCenter construct can substantially reduce the time to get a subscription-based site out the door and into the hands of the end consumer. Imagine a site that allows members to subscribe to specific disciplines to access information and application data around that various discipline.  A space is a collection of secured pages within Oracle WebCenter.  Spaces are not only secured, but also default content stored within it to be scoped automatically to that space. Taking this a step further, Oracle WebCenter’s Activity Stream surfaces events, discussions and other activities that are scoped to the given user on the basis of their space affiliations.  In order to have a portal that would allow users to "subscribe" to information around various disciplines - spaces could be used out of the box to achieve this capability and without using any APIs or low level technical work to achieve this. 3. Make Governance Work for You Imagine driving down the street without the painted lines on the road.  The rules of the road are so ingrained in our minds, we often do not think about the process, but seemingly mundane lane markers are critical enablers. Lane markers allow us to travel at speeds that would be impossible if not for the agreed upon direction of flow. Additionally and more importantly, it allows people to act autonomously - going where they please at any given time. The return on the investment for mobility is high enough for people to buy into globally agreed up governance processes. In Oracle WebCenter we can use similar enablers to lane markers.  Our goal should be to enable the flow of information and provide end users with the ability to arrive at business solutions as needed, not on the basis of cumbersome processes that cannot meet the business needs in a timely fashion. How do we do this? 
Just as with "Segmentation of Duties" Oracle WebCenter technologies offer the opportunity to compartmentalize various business initiatives from each other within the system due to constructs and security that are available to use within the platform. For instance, when a WebCenter space is created, any content added within that space by default will be secured to that particular space and inherits meta data that is associated with a folder created for the space. Oracle WebCenter content uses meta data to support a broad range of rich ECM functionality and can automatically impart retention, workflow and other policies automatically on the basis of what has been defaulted for that space. Depending on your business needs, this paradigm will also extend to sub sections of a space, offering some interesting possibilities to enable automated management around content. An example may be press releases within a particular area of an extranet that require a five year retention period and need to the reviewed by marketing and legal before release.  The underlying content system will transparently take care of this process on the basis of the above rules, enabling peace of mind over unstructured data - which could otherwise become overwhelming. 4. Make Your First Project Your Second Imagine if Michael Phelps was competing in a swimming championship, but told right before his race that he had to use a brand new stroke.  There is no doubt that Michael is an outstanding swimmer, but chances are that he would like to have some time to get acquainted with the new stroke. New technologies should not be treated any differently.  Before jumping into the deep end it helps to take time to get to know the new approach - even though you may have been swimming thousands of times before. To quickly get a handle on Oracle WebCenter capabilities it can be helpful to deploy a sandbox for the team to use to share project documents, discussions and announcements in an effort to help the actual deployment get under way, while increasing everyone’s knowledge of the platform and its functionality that may be helpful down the road. Oracle Technology Network has made a pre-configured virtual machine available for download that can be a great starting point for this exercise. 5. Get to Know the Community If you are reading this blog post you have most certainly faced a software decision or challenge that was solved on the basis of a small piece of missing critical information - which took substantial research to discover.  Chances were also good that somewhere, someone had already come across this information and would have been excited to share it. There is no denying the power of passionate, connected users, sharing key tips around technology.  The Oracle WebCenter brand has a rich heritage that includes industry-leading technology and practitioners.  With the new Oracle WebCenter brand, opportunities to connect with these experts has become easier. Oracle WebCenter Blog Oracle Social Enterprise LinkedIn WebCenter Group Oracle WebCenter Twitter Oracle WebCenter Facebook Oracle User Groups Additionally, there are various Oracle WebCenter related blogs by an excellent grouping of services partners.

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute. Ok, enough of the pre-amble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect: IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL DROP TABLE dbo.Test GO CREATE TABLE dbo.Test ( row_id NUMERIC IDENTITY NOT NULL,   col01 NVARCHAR(450) NOT NULL, col02 NVARCHAR(450) NOT NULL, col03 NVARCHAR(450) NOT NULL, col04 NVARCHAR(450) NOT NULL, col05 NVARCHAR(450) NOT NULL, col06 NVARCHAR(450) NOT NULL, col07 NVARCHAR(450) NOT NULL, col08 NVARCHAR(450) NOT NULL, col09 NVARCHAR(450) NOT NULL, col10 NVARCHAR(450) NOT NULL, CONSTRAINT [PK dbo.Test row_id] PRIMARY KEY CLUSTERED (row_id) ) ; The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row: WITH Numbers AS ( -- Generates numbers 1 - 450 inclusive SELECT TOP (450) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) INSERT dbo.Test WITH (TABLOCKX) SELECT REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n), REPLICATE(N'D', N.n), REPLICATE(N'E', N.n), REPLICATE(N'F', N.n), REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n), REPLICATE(N'J', N.n) FROM Numbers AS N ORDER BY N.n ASC ; Once those two scripts have run, the table contains 450 rows and 10 columns of data like this: Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example: -- Find the maximum length of the data in -- column 5 for a range of rows SELECT result = MAX(DATALENGTH(T.col05)) FROM dbo.Test AS T WHERE row_id BETWEEN 50 AND 100 ; But with a different query… -- Read all the data in column 1 SELECT result = MAX(DATALENGTH(T.col01)) FROM dbo.Test AS T ; …suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect. The Explanation If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row. Row Overflow SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  
You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is done on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved! Sneaky LOBs Anyway, that’s all very interesting but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it needs to follow the in-row pointer a navigate another chain of pages, just like retrieving a traditional LOB. And Finally… In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.  
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi
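    For readers following along with the demo scripts above, one quick way to confirm that columns have moved off-row is to look at the allocation units SQL Server creates for the table; a ROW_OVERFLOW_DATA allocation unit with used pages is the giveaway, and SET STATISTICS IO ON shows the matching lob logical reads. A rough sketch (the catalog-view join shown is the commonly used one and may need adjusting on your system):

        -- Which allocation units exist for dbo.Test, and how big are they?
        SELECT  p.index_id,
                au.type_desc,        -- IN_ROW_DATA, ROW_OVERFLOW_DATA, LOB_DATA
                au.total_pages,
                au.used_pages
        FROM    sys.partitions AS p
        JOIN    sys.allocation_units AS au
                ON au.container_id = p.partition_id
        WHERE   p.[object_id] = OBJECT_ID(N'dbo.Test', N'U');
        GO
        -- Reproduce the lob logical reads reported in the post
        SET STATISTICS IO ON;
        SELECT result = MAX(DATALENGTH(T.col01)) FROM dbo.Test AS T;
        SET STATISTICS IO OFF;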

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found.  After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context.  We label these activities Modes: Explore, compare, and comprehend are three of the nine recognizable modes.  Modes describe *how* people go about realizing insights.  (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity.  When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities, to which specific types of information visualizations better enable these scenarios for the types of data users will analyze. The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficult of communicating using only verbs.  So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics.  Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into.  We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes. Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sense making needs and activity in more specific, consistent, and complete fashion.  In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals.  For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject] clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides the perspective divide.  For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.  
The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can innovate, by expanding our product portfolio to meet unmet needs?”.  Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis.  From the perspective of analysis this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’?  And how do these compare to each other?”.  Translation also happens from the perspective of analysis to the perspective of data; in terms of availability, quality, completeness, format, volume, etc. By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects.  This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one.  (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.) Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain.  As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics. The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject.  Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (Data centric and blue), or people (User centric and orange).  This axis is shown explicitly above the Spectrum.  Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity.  This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity. Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets.  For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.  
Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.   The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible.  Accordingly, business analytics as a domain is structured around the fundamental assumption that sense making depends on analytical transformation of data.  Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data, and recombined with other Subjects.  The end goal of  ‘laddering’ successive transformations is to enable sense making from the business perspective, rather than the analytical perspective.Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods. Beginning with some motivating question such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business.  More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders.  These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer.  The fulfillment process will involve other types of Entities, such as the products or services the business provides.  The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included.  All of this may be interpreted through an understanding of the operational domain of the businesses supply chain (a Domain).   We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • How to Identify Which Hardware Component is Failing in Your Computer

    - by Chris Hoffman
    Concluding that your computer has a hardware problem is just the first step. If you’re dealing with a hardware issue and not a software issue, the next step is determining what hardware problem you’re actually dealing with. If you purchased a laptop or pre-built desktop PC and it’s still under warranty, you don’t need to care about this. Have the manufacturer fix the PC for you — figuring it out is their problem. If you’ve built your own PC or you want to fix a computer that’s out of warranty, this is something you’ll need to do on your own. Blue Screen 101: Search for the Error Message This may seem like obvious advice, but searching for information about a blue screen’s error message can help immensely. Most blue screens of death you’ll encounter on modern versions of Windows will likely be caused by hardware failures. The blue screen of death often displays information about the driver that crashed or the type of error it encountered. For example, let’s say you encounter a blue screen that identified “NV4_disp.dll” as the driver that caused the blue screen. A quick Google search will reveal that this is the driver for NVIDIA graphics cards, so you now have somewhere to start. It’s possible that your graphics card is failing if you encounter such an error message. Check Hard Drive SMART Status Hard drives have a built-in S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) feature. The idea is that the hard drive monitors itself and will notice if it starts to fail, providing you with some advance notice before the drive fails completely. This isn’t perfect, so your hard drive may fail even if SMART says everything is okay. If you see any sort of “SMART error” message, your hard drive is failing. You can use SMART analysis tools to view the SMART health status information your hard drives are reporting. Test Your RAM RAM failure can result in a variety of problems. If the computer writes data to RAM and the RAM returns different data because it’s malfunctioning, you may see application crashes, blue screens, and file system corruption. To test your memory and see if it’s working properly, use Windows’ built-in Memory Diagnostic tool. The Memory Diagnostic tool will write data to every sector of your RAM and read it back afterwards, ensuring that all your RAM is working properly. Check Heat Levels How hot is it inside your computer? Overheating can result in blue screens, crashes, and abrupt shutdowns. Your computer may be overheating because you’re in a very hot location, it’s ventilated poorly, a fan has stopped inside your computer, or it’s full of dust. Your computer monitors its own internal temperatures and you can access this information. It’s generally available in your computer’s BIOS, but you can also view it with system information utilities such as SpeedFan or Speccy. Check your computer’s recommended temperature level and ensure it’s within the appropriate range. If your computer is overheating, you may see problems only when you’re doing something demanding, such as playing a game that stresses your CPU and graphics card. Be sure to keep an eye on how hot your computer gets when it performs these demanding tasks, not only when it’s idle. Stress Test Your CPU You can use a utility like Prime95 to stress test your CPU. Such a utility will force your computer’s CPU to perform calculations without allowing it to rest, working it hard and generating heat. If your CPU is becoming too hot, you’ll start to see errors or system crashes. 
Overclockers use Prime95 to stress test their overclock settings — if Prime95 experiences errors, they throttle back on their overclocks to ensure the CPU runs cooler and more stably. It’s a good way to check if your CPU is stable under load. Stress Test Your Graphics Card Your graphics card can also be stress tested. For example, if your graphics driver crashes while playing games, the games themselves crash, or you see odd graphical corruption, you can run a graphics benchmark utility like 3DMark. The benchmark will stress your graphics card and, if it’s overheating or failing under load, you’ll see graphical problems, crashes, or blue screens while running the benchmark. If the benchmark seems to work fine but you have issues playing a certain game, it may just be a problem with that game. Swap it Out Not every hardware problem is easy to diagnose. If you have a bad motherboard or power supply, their problems may only manifest through occasional odd issues with other components. It’s hard to tell if these components are causing problems unless you replace them completely. Ultimately, the best way to determine whether a component is faulty is to swap it out. For example, if you think your graphics card may be causing your computer to blue screen, pull the graphics card out of your computer and swap in a new graphics card. If everything is working well, it’s likely that your previous graphics card was bad. This isn’t easy for people who don’t have boxes of components sitting around, but it’s the ideal way to troubleshoot. Troubleshooting is all about trial and error, and swapping components out allows you to pin down which component is actually causing the problem through a process of elimination. This isn’t a complete guide to everything that could go wrong and how to identify it — someone could write a full textbook on identifying failing components and still not cover everything. But the tips above should give you some places to start dealing with the more common problems. Image Credit: Justin Marty on Flickr
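    A quick way to apply the SMART advice above from code rather than a GUI tool: the sketch below reads the failure-prediction flag that Windows exposes through WMI. It is only an illustration; it assumes the standard MSStorageDriver_FailurePredictStatus WMI class, a reference to System.Management, and administrator rights, and a dedicated SMART utility will give far more detail.

    // Minimal sketch: read the SMART "predict failure" flag via WMI.
    // Assumes a reference to System.Management and admin rights; drivers vary
    // in how completely they populate this class.
    using System;
    using System.Management;

    class SmartCheck
    {
        static void Main()
        {
            var searcher = new ManagementObjectSearcher(
                @"\\.\root\wmi",
                "SELECT InstanceName, PredictFailure FROM MSStorageDriver_FailurePredictStatus");

            foreach (ManagementObject drive in searcher.Get())
            {
                bool failing = (bool)drive["PredictFailure"];
                Console.WriteLine("{0}: {1}",
                    drive["InstanceName"],
                    failing ? "SMART predicts failure - back up this drive now" : "SMART status OK");
            }
        }
    }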

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver. However, consider that without the sender imposing flow control, and a slow link to a peer, TCP will simply fill up its window with sent segments. Dealing with multiple TCP implementations filling their peer TCP's receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need to have some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also. Slow Start and Congestion Avoidance From RFC2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission" Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold ssthresh value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements. We send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments etc. When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip-time as opposed to each ACK, and so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset. It is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single max segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd exceeds ssthresh. 
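    To make the two growth regimes concrete, here is a small toy model of the congestion window (a sketch only: it is not how any real TCP stack is implemented, and the MSS, initial cwnd and ssthresh values are assumptions picked for illustration). It prints cwnd roughly doubling per RTT while below ssthresh, then growing by about one MSS per RTT afterwards.

    // Toy model: exponential cwnd growth in slow start, linear growth in
    // congestion avoidance. All constants are illustrative assumptions.
    using System;

    class CwndModel
    {
        static void Main()
        {
            const int mss = 1460;        // assumed maximum segment size
            int cwnd = 2 * mss;          // assumed initial congestion window
            int ssthresh = 64 * 1024;    // assumed slow start threshold

            for (int rtt = 1; rtt <= 12; rtt++)
            {
                string phase = cwnd < ssthresh ? "slow start" : "congestion avoidance";
                Console.WriteLine("RTT {0,2}: cwnd = {1,7} bytes ({2})", rtt, cwnd, phase);

                if (cwnd < ssthresh)
                    cwnd *= 2;           // one MSS per ACK, so roughly doubles each RTT
                else
                    cwnd += mss;         // roughly one MSS per RTT
            }
        }
    }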
Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since they often come from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out-of-order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. Retransmitting on this signal, without waiting for the retransmission timer to expire, is termed fast retransmit. Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would be excessive therefore, so fast recovery is used instead. Observing slow start and congestion avoidance The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd > ssthresh). #!/usr/sbin/dtrace -s #pragma D option quiet tcp:::connect-request / start[args[1]->cs_cid] == 0/ { start[args[1]->cs_cid] = 1; } tcp:::send / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh / { @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } tcp:::send / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd > args[3]->tcps_cwnd_ssthresh / { @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } As we can see the script only works on connections initiated since it is started (using the start[] associative array with the connection ID as index to mark it as a new connection, start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd > ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system. # dtrace -s tcp_slow_start.d ^C ALGORITHM RADDR RPORT #SEG Slow start 10.153.125.222 20 6 Slow start 138.3.237.7 80 14 Slow start 10.153.125.222 21 18 Congestion avoidance 10.153.125.222 20 1164 We see that in the case of the YouTube video, slow start was exclusively used. Most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In the case of the FTP session, the FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session on port 21 relied exclusively on slow start. For the default congestion control algorithm - "newreno" - on Solaris 11, slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently. For example "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than the MSS. And to finish, here's a neat oneliner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range. 
# dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }' dtrace: description 'tcp:::send ' matched 10 probes ^C 80 value ------------- Distribution ------------- count -1 | 0 0 |@@@@@@ 5 1 | 0 2 | 0 4 | 0 8 | 0 16 | 0 32 | 0 64 | 0 128 | 0 256 | 0 512 | 0 1024 | 0 2048 |@@@@@@@@@ 8 4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@ 23 8192 | 0

    Read the article

  • SpriteFont Exception, no such character?

    - by Michal Bozydar Pawlowski
    I have such spriteFont: <?xml version="1.0" encoding="utf-8"?> <!-- This file contains an xml description of a font, and will be read by the XNA Framework Content Pipeline. Follow the comments to customize the appearance of the font in your game, and to change the characters which are available to draw with. --> <XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics"> <Asset Type="Graphics:FontDescription"> <!-- Modify this string to change the font that will be imported. --> <FontName>Segoe UI</FontName> <!-- Size is a float value, measured in points. Modify this value to change the size of the font. --> <Size>20</Size> <!-- Spacing is a float value, measured in pixels. Modify this value to change the amount of spacing in between characters. --> <Spacing>0</Spacing> <!-- UseKerning controls the layout of the font. If this value is true, kerning information will be used when placing characters. --> <UseKerning>true</UseKerning> <!-- Style controls the style of the font. Valid entries are "Regular", "Bold", "Italic", and "Bold, Italic", and are case sensitive. --> <Style>Regular</Style> <!-- If you uncomment this line, the default character will be substituted if you draw or measure text that contains characters which were not included in the font. --> <!-- <DefaultCharacter>*</DefaultCharacter> --> <!-- CharacterRegions control what letters are available in the font. Every character from Start to End will be built and made available for drawing. The default range is from 32, (ASCII space), to 126, ('~'), covering the basic Latin character set. The characters are ordered according to the Unicode standard. See the documentation for more information. --> <CharacterRegions> <CharacterRegion> <Start>&#09;</Start> <End>&#09;</End> </CharacterRegion> <CharacterRegion> <Start>&#32;</Start> <End>&#1200;</End> </CharacterRegion> </CharacterRegions> </Asset> </XnaContent> It has the character regions (32-1200) And I get this exception: A first chance exception of type 'System.ArgumentException' occurred in Microsoft.Xna.Framework.Graphics.ni.dll The character '?' (0x0441) is not available in this SpriteFont. If applicable, adjust the font's start and end CharacterRegions to include this character. Parameter name: character Why? I'm drawing the string like this: spriteBatch.DrawString(font24, zasadyText, zasadyTextPos, kolorCzcionki1, -0.05f, Vector2.Zero, 1.0f, SpriteEffects.None, 0.5f) I even changed the spriteFont to cyrillic: <CharacterRegions> <CharacterRegion> <Start>&#09;</Start> <End>&#09;</End> </CharacterRegion> <CharacterRegion> <Start>&#0032;</Start> <End>&#0383;</End> </CharacterRegion> <CharacterRegion> <Start>&#1040;</Start> <End>&#1111;</End> </CharacterRegion> </CharacterRegions> </Asset> </XnaContent> and it still doesn't work. I got the (0x441 = char) exception -- EDIT -- Ok, I got the solution. It was a letter mistake in language. 
I had this: if (jezyk == "ru_RU") { font14 = Content.Load<SpriteFont>("ru_font14"); font24 = Content.Load<SpriteFont>("ru_font24"); font12 = Content.Load<SpriteFont>("ru_czcionkaFloty"); font10 = Content.Load<SpriteFont>("ru_font10"); font28 = Content.Load<SpriteFont>("ru_font28"); font20 = Content.Load<SpriteFont>("ru_font20"); } else { font14 = Content.Load<SpriteFont>("font14"); font24 = Content.Load<SpriteFont>("font24"); font12 = Content.Load<SpriteFont>("czcionkaFloty"); font10 = Content.Load<SpriteFont>("font10"); font28 = Content.Load<SpriteFont>("font28"); font20 = Content.Load<SpriteFont>("font20"); } and the culture name should be "ru-RU", not "ru_RU" - because the comparison never matched, the non-Cyrillic fonts were loaded, which is what caused the 0x441 exception.
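    For what it's worth, one way to avoid this class of bug is to not hand-type culture strings at all. The sketch below is only a suggestion built on the question's own asset names (ru_font14 and so on): it derives the "ru_" prefix from a CultureInfo, whose Name always uses a hyphen (e.g. "ru-RU"), so an underscore can never sneak into the comparison.

    // Hypothetical helper: pick the font asset prefix from CultureInfo instead of
    // comparing raw strings like "ru_RU" vs "ru-RU". Asset names follow the question.
    using System.Globalization;
    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.Graphics;

    static class FontLoader
    {
        public static SpriteFont Load(ContentManager content, string cultureName, string assetName)
        {
            // Tolerate either "ru-RU" or "ru_RU" by normalizing to the BCL form.
            var culture = CultureInfo.GetCultureInfo(cultureName.Replace('_', '-'));
            string prefix = culture.TwoLetterISOLanguageName == "ru" ? "ru_" : "";
            return content.Load<SpriteFont>(prefix + assetName);
        }
    }

    // Usage inside LoadContent: font14 = FontLoader.Load(Content, jezyk, "font14");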

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.  What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.  Supply Chain Management Systems in a Virtual Enterprise Historically enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.  Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.  Case in point, almost everyone has ordered from Amazon.com at one time or another. 
Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).  Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together. Disturbing the Peace with Acquisitions Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation. The Flip Side of Fulfillment Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.  The Clear Front Runner No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. 
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • General Purpose ASP.NET Data Source Control

    - by Ricardo Peres
OK, you already know about the ObjectDataSource control, so what’s wrong with it? Well, for one, it doesn’t pass any context to the SelectMethod, you only get the parameters supplied on the SelectParameters plus the desired ordering, starting page and maximum number of rows to display. Also, you must have two separate methods, one for actually retrieving the data, and the other for getting the total number of records (SelectCountMethod). Finally, you don’t get a chance to alter the supplied data before you bind it to the target control. I wanted something simple to use, and more similar to ASP.NET 4.5, where you can have the select method on the page itself, so I came up with CustomDataSource. Here’s how to use it (I chose a GridView, but it works equally well with any regular data-bound control): 1: <web:CustomDataSourceControl runat="server" ID="datasource" PageSize="10" OnData="OnData" /> 2: <asp:GridView runat="server" ID="grid" DataSourceID="datasource" DataKeyNames="Id" PageSize="10" AllowPaging="true" AllowSorting="true" /> The OnData event handler receives a DataEventArgs instance, which contains some properties that describe the desired paging location and size, and it’s where you return the data plus the total record count. Here’s a quick example: 1: protected void OnData(object sender, DataEventArgs e) 2: { 3: //just return some data 4: var data = Enumerable.Range(e.StartRowIndex, e.PageSize).Select(x => new { Id = x, Value = x.ToString(), IsPair = ((x % 2) == 0) }); 5: e.Data = data; 6: //the total number of records 7: e.TotalRowCount = 100; 8: } Here’s the code for the DataEventArgs: 1: [Serializable] 2: public class DataEventArgs : EventArgs 3: { 4: public DataEventArgs(Int32 pageSize, Int32 startRowIndex, String sortExpression, IOrderedDictionary parameters) 5: { 6: this.PageSize = pageSize; 7: this.StartRowIndex = startRowIndex; 8: this.SortExpression = sortExpression; 9: this.Parameters = parameters; 10: } 11:  12: public IEnumerable Data 13: { 14: get; 15: set; 16: } 17:  18: public IOrderedDictionary Parameters 19: { 20: get; 21: private set; 22: } 23:  24: public String SortExpression 25: { 26: get; 27: private set; 28: } 29:  30: public Int32 StartRowIndex 31: { 32: get; 33: private set; 34: } 35:  36: public Int32 PageSize 37: { 38: get; 39: private set; 40: } 41:  42: public Int32 TotalRowCount 43: { 44: get; 45: set; 46: } 47: } As you can guess, the StartRowIndex and PageSize receive the starting row and the desired page size, where the page size comes from the PageSize property on the markup. There’s also a SortExpression, which gets passed the sorted-by column and direction (if descending) and a dictionary containing all the values coming from the SelectParameters collection, if any. All of these are read only, and it is your responsibility to fill in the Data and TotalRowCount. 
The code for the CustomDataSource is very simple: 1: [NonVisualControl] 2: public class CustomDataSourceControl : DataSourceControl 3: { 4: public CustomDataSourceControl() 5: { 6: this.SelectParameters = new ParameterCollection(); 7: } 8:  9: protected override DataSourceView GetView(String viewName) 10: { 11: return (new CustomDataSourceView(this, viewName)); 12: } 13:  14: internal void GetData(DataEventArgs args) 15: { 16: this.OnData(args); 17: } 18:  19: protected virtual void OnData(DataEventArgs args) 20: { 21: EventHandler<DataEventArgs> data = this.Data; 22:  23: if (data != null) 24: { 25: data(this, args); 26: } 27: } 28:  29: [Browsable(false)] 30: [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)] 31: [PersistenceMode(PersistenceMode.InnerProperty)] 32: public ParameterCollection SelectParameters 33: { 34: get; 35: private set; 36: } 37:  38: public event EventHandler<DataEventArgs> Data; 39:  40: public Int32 PageSize 41: { 42: get; 43: set; 44: } 45: } Also, the code for the accompanying internal – as there is no need to use it from outside of its declaring assembly - data source view: 1: sealed class CustomDataSourceView : DataSourceView 2: { 3: private readonly CustomDataSourceControl dataSourceControl = null; 4:  5: public CustomDataSourceView(CustomDataSourceControl dataSourceControl, String viewName) : base(dataSourceControl, viewName) 6: { 7: this.dataSourceControl = dataSourceControl; 8: } 9:  10: public override Boolean CanPage 11: { 12: get 13: { 14: return (true); 15: } 16: } 17:  18: public override Boolean CanRetrieveTotalRowCount 19: { 20: get 21: { 22: return (true); 23: } 24: } 25:  26: public override Boolean CanSort 27: { 28: get 29: { 30: return (true); 31: } 32: } 33:  34: protected override IEnumerable ExecuteSelect(DataSourceSelectArguments arguments) 35: { 36: IOrderedDictionary parameters = this.dataSourceControl.SelectParameters.GetValues(HttpContext.Current, this.dataSourceControl); 37: DataEventArgs args = new DataEventArgs(this.dataSourceControl.PageSize, arguments.StartRowIndex, arguments.SortExpression, parameters); 38:  39: this.dataSourceControl.GetData(args); 40:  41: arguments.TotalRowCount = args.TotalRowCount; 42: arguments.MaximumRows = this.dataSourceControl.PageSize; 43: arguments.AddSupportedCapabilities(DataSourceCapabilities.Page | DataSourceCapabilities.Sort | DataSourceCapabilities.RetrieveTotalRowCount); 44: arguments.RetrieveTotalRowCount = true; 45:  46: if (!(args.Data is ICollection)) 47: { 48: return (args.Data.OfType<Object>().ToList()); 49: } 50: else 51: { 52: return (args.Data); 53: } 54: } 55: } As always, looking forward to hearing from you!
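    As a small complement to the article's example, here is what a paging- and sort-aware OnData handler might look like against an in-memory list, just to show how StartRowIndex, PageSize and SortExpression map onto an ordinary LINQ query. The data and the sort handling are made up for illustration; a real handler would query a repository and parse the sort expression properly.

    // Hypothetical OnData handler: page and sort an in-memory list using the
    // values supplied in DataEventArgs. Requires System.Linq in scope.
    protected void OnData(object sender, DataEventArgs e)
    {
        var all = Enumerable.Range(1, 100)
                            .Select(x => new { Id = x, Value = "Item " + x });

        // Naive sort handling for the single known column.
        if (!String.IsNullOrEmpty(e.SortExpression) && e.SortExpression.StartsWith("Id DESC"))
        {
            all = all.OrderByDescending(x => x.Id);
        }

        e.Data = all.Skip(e.StartRowIndex).Take(e.PageSize).ToList();
        e.TotalRowCount = 100;
    }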

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to Oracle BI applications blog! This blog will talk about various features, general roadmap, description of functionality and implementation steps related to Oracle BI applications. In the first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback on that. The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom has different needs. The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role. This blog starts with the key benefits and characteristics of Oracle BI applications. In a series of subsequent blogs, each of these points will be explained in detail. Why BI apps? Demonstrate the value of BI to a business user, show reports / dashboards / model that can answer their business questions as part of the sales cycle. Demonstrate technical feasibility of BI project and significantly lower risk and improve success Build Vs Buy benefit Don’t have to start with a blank sheet of paper. Help consolidate disparate systems Data integration in M&A situations Insulate BI consumers from changes in the OLTP Present OLTP data and highlight issues of poor data / missing data – and improve data quality and accuracy Prebuilt Integrations BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, Peoplesoft, JD Edwards, Siebel, SAP Co-developed with inputs from functional experts in BI and Applications teams. Out of the box dimensional model to source model mappings Multi source and Multi Instance support Rich Data Model    BI apps have a very rich dimensional data model built over 10 years that incorporates best practises from a BI modeling perspective as well as reflecting the source system complexities. Conformed dimensional model across all business subject areas allows cross functional reporting, e.g. customer / supplier 360 Over 360 fact tables across 7 product areas CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM - 56 Supported by 300 physical dimensions Support for extensive calendars; Gregorian, enterprise and ledger based Conformed data model and metrics for real time vs warehouse based reporting  Multi-tenant enabled Extensive BI related transformations BI apps ETL and data integration support various transformations required for dimensional models and reporting requirements. All these have been distilled into common patterns and abstracted logic which can be readily reused across different modules Slowly Changing Dimension support Hierarchy flattening support Row / Column Hybrid Hierarchy Flattening As Is vs. 
As Was hierarchy support Currency Conversion :-  Support for 3 corporate, CRM, ledger and transaction currencies UOM conversion Internationalization / Localization Dynamic Data translations Code standardization (Domains) Historical Snapshots Cycle and process lifecycle computations Balance Facts Equalization of GL accounting chartfields/segments Standardized values for categorizing GL accounts Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL Materialization of data only available through costly and complex APIs e.g. Fusion Payroll, EBS / Fusion Accruals Complex event Interpretation of source data – E.g. o    What constitutes a transfer o    Deriving supervisors via position hierarchy o    Deriving primary assignment in PSFT o    Categorizing and transposition of Payroll Balances to specific metrics to support side by side comparison of measures such as Fixed Salary, Variable Salary, Tax, Bonus, Overtime Payments. o    Counting of Events – E.g. converting events to fact counters so that for example the number of hires can easily be added up and compared alongside the total transfers and terminations. Multi pass processing of multiple sources e.g. headcount, salary, promotion, performance to allow side by side comparison. Adding value to data to aid analysis through banding, additional domain classifications and groupings to allow higher level analytical reporting and data discovery Calculation of complex measures examples: o    COGs, DSO, DPO, Inventory turns  etc o    Transfers within a Hierarchy or out of / into a hierarchy relative to view point in hierarchy. Configurability and Extensibility support  BI apps offer support for extensibility for various entities as automated extensibility or part of extension methodology Key Flex fields and Descriptive Flex support  Extensible attribute support (JDE)  Conformed Domains ETL Architecture BI apps offer a modular adapter architecture which allows support of multiple product lines into a single conformed model Multi Source Multi Technology Orchestration – creates a load plan taking into account task dependencies and the customer's deployment to generate a plan of multiple complex ETL tasks Plan optimization allowing parallel ETL tasks Oracle: Bit map indexes and partition management High availability support    Follow the sun support. 
TCO BI apps support several utilities / capabilities that help with overall total cost of ownership and ensure a rapid implementation Improved cost of ownership – lower cost to deploy On-going support for new versions of the source application Task based setups flows Data Lineage Functional setup performed in Web UI by Functional person Configuration Test to Production support Security BI apps support both data and object security enabling implementations to quickly configure the application as per the reporting security needs Fine grain object security at report / dashboard and presentation catalog level Data Security integration with source systems  Extensible to support external data security rules Extensive Set of KPIs Over 7000 base and derived metrics across all modules Time series calculations (YoY, % growth etc) Common Currency and UOM reporting Cross subject area KPIs (analyzing HR vs GL data, drill from GL to AP/AR, etc) Prebuilt reports and dashboards 3000+ prebuilt reports supporting a large number of industries Hundreds of role based dashboards Dynamic currency conversion at dashboard level Highly tuned Performance The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practises and advanced database features to enable the best possible performance. Optimized data model for BI and analytic queries Prebuilt aggregates & the ability for customers to create their own aggregates easily on warehouse facts allows for scalable end user performance Incremental extracts and loads Incremental Aggregate build Automatic table index and statistics management Parallel ETL loads Source system deletes handling Low latency extract with Golden Gate Micro ETL support Bitmap Indexes Partitioning support Modularized deployment, start small and add other subject areas seamlessly Source Specific Staging and Real Time Schema Support for source specific operational reporting schema for EBS, PSFT, Siebel and JDE Application Integrations The BI apps also allow for integration with source systems as well as other applications that provide value add through BI and enable BI consumption during operational decision making Embedded dashboards for Fusion, EBS and Siebel applications Action Link support Marketing Segmentation Sales Predictor Dashboard Territory Management External Integrations The BI apps data integration choices include support for loading external data External data enrichment choices : UNSPSC, Item class etc. Extensible Spend Classification Broad Deployment Choices Exalytics support Databases :  Oracle, Exadata, Teradata, DB2, MSSQL ETL tool of choice : ODI (coming), Informatica Extensible and Customizable Extensible architecture and Methodology to add custom and external content Upgradable across releases. Thanks for reading a long post, and be on the lookout for future posts. We will look forward to your valuable feedback on these topics as well as suggestions on what other topics you would like us to cover.

    Read the article

  • How to implement smooth flocking

    - by Craig
    I'm working on a simple survival game: avoid the big guy and chase the small guys to stay alive for as long as possible. I have taken the chase and evade example from MSDN and drawn 20 mice on the screen. I want the small guys to flock when they aren't evading. They are doing this, but it isn't as smooth as I would like it to be. How do I make the movement smoother? It's very jittery. Below is what I have going at the moment; the flocking code is within the IF statement, when it isn't set to evading. Any help would be greatly appreciated! :) namespace ChaseAndEvade { class MouseSprite { public enum MouseAiState { // evading the cat Evading, // the mouse can't see the "cat", and it's wandering around. Wander } // how fast can the mouse move? public float MaxMouseSpeed = 4.5f; // and how fast can it turn? public float MouseTurnSpeed = 0.20f; // MouseEvadeDistance controls the distance at which the mouse will flee from // cat. If the mouse is further than "MouseEvadeDistance" pixels away, he will // consider himself safe. public float MouseEvadeDistance = 100.0f; // this constant is similar to TankHysteresis. The value is larger than the // tank's hysteresis value because the mouse is faster than the tank: with a // higher velocity, small fluctuations are much more visible. public float MouseHysteresis = 60.0f; public Texture2D mouseTexture; public Vector2 mouseTextureCenter; public Vector2 mousePosition; public MouseAiState mouseState = MouseAiState.Wander; public float mouseOrientation; public Vector2 mouseWanderDirection; int separationImpact = 4; int cohesionImpact = 6; int alignmentImpact = 2; int sensorDistance = 50; public void UpdateMouse(Vector2 position, MouseSprite [] mice, int numberMice, int index) { Vector2 catPosition = position; int enemies = numberMice; // first, calculate how far away the mouse is from the cat, and use that // information to decide how to behave. If they are too close, the mouse // will switch to "active" mode - fleeing. if they are far apart, the mouse // will switch to "idle" mode, where it roams around the screen. // we use a hysteresis constant in the decision making process, as described // in the accompanying doc file. float distanceFromCat = Vector2.Distance(mousePosition, catPosition); // the cat is a safe distance away, so the mouse should idle: if (distanceFromCat > MouseEvadeDistance + MouseHysteresis) { mouseState = MouseAiState.Wander; } // the cat is too close; the mouse should run: else if (distanceFromCat < MouseEvadeDistance - MouseHysteresis) { mouseState = MouseAiState.Evading; } // if neither of those if blocks hit, we are in the "hysteresis" range, // and the mouse will continue doing whatever it is doing now. // the mouse will move at a different speed depending on what state it // is in. when idle it won't move at full speed, but when actively evading // it will move as fast as it can. this variable is used to track which // speed the mouse should be moving. float currentMouseSpeed; // the second step of the Update is to change the mouse's orientation based // on its current state. if (mouseState == MouseAiState.Evading) { // If the mouse is "active," it is trying to evade the cat. The evasion // behavior is accomplished by using the TurnToFace function to turn // towards a point on a straight line facing away from the cat. In other // words, if the cat is point A, and the mouse is point B, the "seek // point" is C. 
// C // B // A Vector2 seekPosition = 2 * mousePosition - catPosition; // Use the TurnToFace function, which we introduced in the AI Series 1: // Aiming sample, to turn the mouse towards the seekPosition. Now when // the mouse moves forward, it'll be trying to move in a straight line // away from the cat. mouseOrientation = ChaseAndEvadeGame.TurnToFace(mousePosition, seekPosition, mouseOrientation, MouseTurnSpeed); // set currentMouseSpeed to MaxMouseSpeed - the mouse should run as fast // as it can. currentMouseSpeed = MaxMouseSpeed; } else { // if the mouse isn't trying to evade the cat, it should just meander // around the screen. we'll use the Wander function, which the mouse and // tank share, to accomplish this. mouseWanderDirection and // mouseOrientation are passed by ref so that the wander function can // modify them. for more information on ref parameters, see // http://msdn2.microsoft.com/en-us/library/14akc2c7(VS.80).aspx ChaseAndEvadeGame.Wander(mousePosition, ref mouseWanderDirection, ref mouseOrientation, MouseTurnSpeed); // if the mouse is wandering, it should only move at 25% of its maximum // speed. currentMouseSpeed = .25f * MaxMouseSpeed; Vector2 separate = Vector2.Zero; Vector2 moveCloser = Vector2.Zero; Vector2 moveAligned = Vector2.Zero; // What the AI does when it sees other AIs for (int j = 0; j < enemies; j++) { if (index != j) { // Calculate a vector towards another AI Vector2 separation = mice[index].mousePosition - mice[j].mousePosition; // Only react if other AI is within a certain distance if ((separation.Length() < this.sensorDistance) & (separation.Length()> 0) ) { moveAligned += mice[j].mouseWanderDirection; float distance = Math.Abs(separation.Length()); if (distance == 0) distance = 1; moveCloser += mice[j].mousePosition; separation.Normalize(); separate += separation / distance; } } } if (moveAligned.LengthSquared() != 0) { moveAligned.Normalize(); } if (moveCloser.LengthSquared() != 0) { moveCloser.Normalize(); } moveCloser /= enemies; mice[index].mousePosition += (separate * separationImpact) + (moveCloser * cohesionImpact) + (moveAligned * alignmentImpact); } // The final step is to move the mouse forward based on its current // orientation. First, we construct a "heading" vector from the orientation // angle. To do this, we'll use Cosine and Sine to tell us the x and y // components of the heading vector. See the accompanying doc for more // information. Vector2 heading = new Vector2( (float)Math.Cos(mouseOrientation), (float)Math.Sin(mouseOrientation)); // by multiplying the heading and speed, we can get a velocity vector. the // velocity vector is then added to the mouse's current position, moving him // forward. mousePosition += heading * currentMouseSpeed; } } }
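    One common way to smooth this kind of jitter (a suggestion, not necessarily the only fix) is to stop adding the separation/cohesion/alignment terms straight onto mousePosition and instead treat them as a steering force applied to a velocity that persists between frames, easing the orientation toward that velocity. The sketch below assumes a new Vector2 mouseVelocity field on MouseSprite and reuses the fields from the question; the constants are just starting points to tune.

    // Sketch: apply the flocking terms as a clamped steering force on a
    // persistent velocity, and turn gradually instead of snapping.
    // Assumes: public Vector2 mouseVelocity; added to MouseSprite.
    void ApplyFlocking(Vector2 separate, Vector2 moveCloser, Vector2 moveAligned, float currentMouseSpeed)
    {
        Vector2 steering = (separate * separationImpact)
                         + (moveCloser * cohesionImpact)
                         + (moveAligned * alignmentImpact);

        const float maxSteering = 0.35f;                 // tune to taste
        if (steering.LengthSquared() > maxSteering * maxSteering)
            steering = Vector2.Normalize(steering) * maxSteering;

        mouseVelocity += steering;
        if (mouseVelocity.LengthSquared() > currentMouseSpeed * currentMouseSpeed)
            mouseVelocity = Vector2.Normalize(mouseVelocity) * currentMouseSpeed;

        // Ease the heading toward the velocity; WrapAngle keeps the difference
        // in the -Pi..Pi range so we never turn the long way round.
        float targetOrientation = (float)Math.Atan2(mouseVelocity.Y, mouseVelocity.X);
        mouseOrientation += MathHelper.WrapAngle(targetOrientation - mouseOrientation) * 0.15f;

        mousePosition += mouseVelocity;
    }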

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
    In this blog post I’ll show you how you can play back the IIS Logs in Visual Studio to automatically generate the web performance tests. You can also download the sample solution I am demo-ing in the blog post. Introduction Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production you could mine the information available in IIS logs to analyse the dense zones (most used pages) and performance test those pages rather than wasting time testing & tuning the least used pages in your application. What are IIS Logs To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits as well as dates, times and queries. If you are using IIS 7 and above you will find the log files in the following directory C:\inetpub\Logs\ Walkthrough 1. Download and Install Log Parser from the Microsoft download Centre. You should see the LogParser.dll in the install folder, the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the iis log files programmatically. By the way if you haven’t used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details… 2. Create a new test project in Visual Studio. Let’s call it IISLogsToWebPerfTestDemo.   3.  Delete the UnitTest1.cs class that gets created by default. Right click the solution and add a project of type class library, name it IISLogsToWebPerfTestEngine. Delete the default class Program.cs that gets created with the project. 4. Under the IISLogsToWebPerfTestEngine project add a reference to Microsoft.VisualStudio.QualityTools.WebTestFramework – c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll LogParser also called MSUtil - c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll 5. Right click the IISLogsToWebPerfTestEngine project and add a new class – IISLogReader.cs The IISLogReader class queries the iis logs using the log parser. 
using System; using System.Collections.Generic; using System.Text; using MSUtil; using LogQuery = MSUtil.LogQueryClassClass; using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass; using LogRecordSet = MSUtil.ILogRecordset; using Microsoft.VisualStudio.TestTools.WebTesting; using System.Diagnostics; namespace IisLogsToWebPerfTestEngine { // By making use of log parser it is possible to query the iis log using select queries public class IISLogReader { private string _iisLogPath; public IISLogReader(string iisLogPath) { _iisLogPath = iisLogPath; } public IEnumerable<WebTestRequest> GetRequests() { LogQuery logQuery = new LogQuery(); IISLogInputFormat iisInputFormat = new IISLogInputFormat(); // currently these columns give us suffient information to construct the web test requests string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath; LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat); // Apply a bit of transformation while (!recordSet.atEnd()) { ILogRecord record = recordSet.getRecord(); if (record.getValueEx("cs-method").ToString() == "GET") { string server = record.getValueEx("s-ip").ToString(); string path = record.getValueEx("cs-uri-stem").ToString(); string querystring = record.getValueEx("cs-uri-query").ToString(); StringBuilder urlBuilder = new StringBuilder(); urlBuilder.Append("http://"); urlBuilder.Append(server); urlBuilder.Append(path); if (!String.IsNullOrEmpty(querystring)) { urlBuilder.Append("?"); urlBuilder.Append(querystring); } // You could make substitutions by introducing parameterized web tests. WebTestRequest request = new WebTestRequest(urlBuilder.ToString()); Debug.WriteLine(request.UrlWithQueryString); yield return request; } recordSet.moveNext(); } Console.WriteLine(" That's it! Closing the reader"); recordSet.close(); } } }   6. Connect the dots by adding the project reference ‘IisLogsToWebPerfTestEngine’ to ‘IisLogsToWebPerfTest’. Right click the ‘IisLogsToWebPerfTest’ project and add a new class ‘WebTest1Coded.cs’ The WebTest1Coded.cs inherits from the WebTest class. By overriding the GetRequestMethod we can inject the log files to the IISLogReader class which uses Log parser to query the log file and extract the web requests to generate the web test request which is yielded back for play back when the test is run. namespace IisLogsToWebPerfTest { using System; using System.Collections.Generic; using System.Text; using Microsoft.VisualStudio.TestTools.WebTesting; using Microsoft.VisualStudio.TestTools.WebTesting.Rules; using IisLogsToWebPerfTestEngine; // This class is a coded web performance test implementation, that simply passes // the path of the iis logs to the IisLogReader class which does the heavy // lifting of reading the contents of the log file and converting them to tests. // You could have multiple such classes that inherit from WebTest and implement // GetRequestEnumerator Method and pass differnt log files for different tests. public class WebTest1Coded : WebTest { public WebTest1Coded() { this.PreAuthenticate = true; } public override IEnumerator<WebTestRequest> GetRequestEnumerator() { // substitute the highlighted path with the path of the iis log file IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log"); foreach (WebTestRequest request in reader.GetRequests()) { yield return request; } } } }   7. Its time to fire the test off and see the iis log playback as a web performance test. 
From the Test menu choose the Test View window; you should be able to see the WebTest1Coded test show up. Highlight the test and press Run Selection (you can also debug the test in case you face any failures during test execution). 8. Optionally you can create a Load Test by keeping ‘WebTest1Coded’ as the base test. Conclusion You have just helped your testing team; you have now become the coolest developer in your organization! Jokes apart, Log Parser and web performance tests together allow you to save a lot of time by not having to worry about what to test or how to record the tests. If you haven’t already, download the solution from here. You can take this to the next level by using LogParser to extract the log files as part of an end-of-day batch to a database. See the usage trends from this solution over a longer term and have your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don’t forget to share … Keep RocKiNg!
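    A possible next step in the spirit of the conclusion: the same Log Parser interop used in IISLogReader can also tell you which pages are the "dense zones" worth testing first. The sketch below is an assumption-laden example (log path and column names as in the walkthrough) that lists the ten most requested URLs.

    // Sketch: list the ten most-hit URLs in an IIS log using the same MSUtil
    // interop as the walkthrough, to decide which pages to performance test.
    using System;
    using MSUtil;
    using LogQuery = MSUtil.LogQueryClassClass;
    using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass;

    class DenseZones
    {
        static void Main()
        {
            var logQuery = new LogQuery();
            var iisFormat = new IISLogInputFormat();

            string query = @"SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits " +
                           @"FROM C:\Demo\iisLog1.log GROUP BY cs-uri-stem ORDER BY Hits DESC";

            ILogRecordset records = logQuery.Execute(query, iisFormat);
            while (!records.atEnd())
            {
                ILogRecord record = records.getRecord();
                Console.WriteLine("{0,6}  {1}", record.getValueEx("Hits"), record.getValueEx("cs-uri-stem"));
                records.moveNext();
            }
            records.close();
        }
    }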

    Read the article

  • JavaOne Session Report: “50 Tips in 50 Minutes for GlassFish Fans”

    - by Janice J. Heiss
    At JavaOne 2012 on Monday, Oracle engineer Chris Kasso and technology evangelist Arun Gupta presented a head-spinning session (CON4701) in which they offered 50 tips for GlassFish fans. Kasso and Gupta alternated back and forth, each presenting 10 tips at a time. An audience of (appropriately) about 50 attentive and appreciative developers was on hand in what has to be one of the most information-packed sessions ever at JavaOne! Aside: I experienced one of the quiet joys of JavaOne when, just before the session began, I spotted Java Champion and JavaOne Rock Star Adam Bien sitting nearby – Adam is someone I have been fortunate to know for many years.
    GlassFish is a freely available, commercially supported Java EE reference implementation. The session prioritized quantity of tips over depth of information and offered tips intended for both seasoned and new users, meant to increase the range of functional options available to GlassFish users. The focus was on lesser-known dimensions of GlassFish, and attendees were encouraged to pursue the tips that contained new information for them. All 50 tips can be accessed here. Below are several examples of the more elaborate tips and a final practical tip on how to get in touch with these folks.
    Tip #1: Using the login Command
    * To execute a remote command with asadmin you must provide the admin's user name and password.
    * The login command allows you to store the login credentials to be reused in subsequent commands.
    * Can be logged into multiple servers (distinguished by host and port).
    Example:
     % asadmin --host ouch login
     Enter admin user name [default: admin]>
     Enter admin password>
     Login information relevant to admin user name [admin]
     for host [ouch] and admin port [4848] stored at
     [/Users/ckasso/.asadminpass] successfully.
     Make sure that this file remains protected.
     Information stored in this file will be used by
     asadmin commands to manage the associated domain.
     Command login executed successfully.
     % asadmin --host ouch list-clusters
     c1 not running
     Command list-clusters executed successfully.
    Tip #4: Using the AS_DEBUG Env Variable
    * Environment variable to control client-side debug output.
    * Exposes: command processing info, the URL used to access the command (for example http://localhost:4848/__asadmin/uptime), and the raw response from the server.
    Example:
     % export AS_DEBUG=true
     % asadmin uptime
     CLASSPATH= ./../glassfish/modules/admin-cli.jar
     Commands: [uptime]
     asadmin extension directory: /work/gf-3.1.2/glassfish3/glassfish/lib/asadm
     ------- RAW RESPONSE  ---------
      Signature-Version: 1.0
      message: Up 7 mins 10 secs
      milliseconds_value: 430194
      keys: milliseconds
      milliseconds_name: milliseconds
      use-main-children-attribute: false
      exit-code: SUCCESS
     ------- RAW RESPONSE  ---------
    Tip #11: Using Password Aliases
    * Some resources require a password to access (e.g. DB, JMS, etc.).
    * The resource connector is defined in the domain.xml.
    Example: Suppose the DB resource you wish to access requires an entry like this in the domain.xml:
     <property name="password" value="secretp@ssword"/>
    But company policies do not allow you to store the password in the clear.
    * Use password aliases to avoid storing the password in the domain.xml.
    * Create a password alias:
     % asadmin create-password-alias DB_pw_alias
     Enter the alias password>
     Enter the alias password again>
     Command create-password-alias executed successfully.
    * The password is stored in the domain's encrypted keystore.
    * Now update the password value in the domain.xml:
     <property name="password" value="${ALIAS=DB_pw_alias}"/>
    Tip #21: How to Start GlassFish as a Service
    * Configuring a server to automatically start at boot can be tedious, and each platform does it differently.
    * The create-service command makes this easy.
      Windows: creates a Windows service
      Linux: /etc/init.d script
      Solaris: Service Management Facility (SMF) service
    * Must execute create-service with admin privileges.
    * Can be used for the DAS or instances.
    * Try it first with the --dry-run option.
    * There is an (unsupported) _delete-server.
    Example:
     # asadmin create-service domain1
     The Service was created successfully. Here are the details:
     Name of the service:application/GlassFish/domain1
     Type of the service:Domain
     Configuration location of the service:/work/gf-3.1.2.2/glassfish3/glassfish/domains
     Manifest file location on the system:/var/svc/manifest/application/GlassFish/domain1_work_gf-3.1.2.2_glassfish3_glassfish_domains/Domain-service-smf.xml.
     You have created the service but you need to start it yourself. Here are the most typical Solaris commands of interest:
     * /usr/bin/svcs  -a | grep domain1  // status
     * /usr/sbin/svcadm enable domain1 // start
     * /usr/sbin/svcadm disable domain1 // stop
     * /usr/sbin/svccfg delete domain1 // uninstall
    Tip #34: Posting a Command via REST
    * Use wget/curl to execute commands on the DAS.
    Example: Deploying an application
     % curl -s -S \
       -H 'Accept: application/json' -X POST \
       -H 'X-Requested-By: anyvalue' \
       -F id=@/path/to/application.war \
       -F force=true http://localhost:4848/management/domain/applications/application
    * Use @ before a file name to tell curl to send the file's contents.
    * The force option tells GlassFish to force the deployment in case the application is already deployed.
    Tip #46: Upgrading to a Newer Version
    * Upgrade applications and configuration from an earlier version.
    * Upgrade Tool: side-by-side upgrade
      – GUI: asupgrade
      – CLI: asupgrade --c
      – What happens?
        * Copies the older source domain -> target domain directory
        * asadmin start-domain --upgrade
    * Update Tool and pkg: in-place upgrade
      – GUI: updatetool, install all Available Updates
      – CLI: pkg image-update
      – Upgrade the domain
        * asadmin start-domain --upgrade
    Tip #50: How to reach us?
    * GlassFish Forum: http://www.java.net/forums/glassfish/glassfish
    * [email protected]
    * @glassfish
    * facebook.com/glassfish
    * youtube.com/GlassFishVideos
    * blogs.oracle.com/theaquarium
    Arun Gupta acknowledged that their method of presentation was experimental and actively solicited feedback about the session. The best way to reach them is on the GlassFish user forum. In addition, check out Gupta's new book Java EE 6 Pocket Guide.
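    Tip #34 above shows the REST deployment with curl. For readers who would rather script it, the snippet below is a rough equivalent in Python using the requests library. It is an illustrative sketch only: it assumes a default local DAS on port 4848 without secure admin enabled, and the WAR path is a placeholder, just as in the tip.
     import requests

     DAS = "http://localhost:4848"
     WAR_PATH = "/path/to/application.war"   # placeholder, as in the tip

     headers = {
         "Accept": "application/json",
         "X-Requested-By": "anyvalue",       # same header the curl example sends
     }

     with open(WAR_PATH, "rb") as war:
         response = requests.post(
             DAS + "/management/domain/applications/application",
             headers=headers,
             files={"id": war},              # equivalent of curl's -F id=@file
             data={"force": "true"},         # redeploy if the application already exists
         )

     print(response.status_code)
     print(response.json().get("message", ""))
    If the DAS has secure admin enabled, the same call would need HTTPS and admin credentials (requests' auth parameter); that part is not shown here.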

    Read the article

  • Can't get the L2TP IPSEC up and running

    - by Maciej Swic
    I have an Ubuntu 11.10 (oneiric) server running on a ReadyNAS. I'm planning to use this to accept ipsec+l2tp connections through a router. However, the connection is failing somewhere halfway through. Using Openswan IPsec U2.6.28/K3.0.0-12-generic and trying to connect with an iOS 5 iPhone 4S. This is how far I can get:
    auth.log:
     Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "PSK" Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "L2TP-PSK-NAT" Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "L2TP-PSK-noNAT" Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "passthrough-for-non-l2tp" Jan 19 13:54:11 ubuntu pluto[1990]: listening for IKE messages Jan 19 13:54:11 ubuntu pluto[1990]: NAT-Traversal: Trying new style NAT-T Jan 19 13:54:11 ubuntu pluto[1990]: NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19) Jan 19 13:54:11 ubuntu pluto[1990]: NAT-Traversal: Trying old style NAT-T Jan 19 13:54:11 ubuntu pluto[1990]: adding interface eth0/eth0 192.168.19.99:500 Jan 19 13:54:11 ubuntu pluto[1990]: adding interface eth0/eth0 192.168.19.99:4500 Jan 19 13:54:11 ubuntu pluto[1990]: adding interface lo/lo 127.0.0.1:500 Jan 19 13:54:11 ubuntu pluto[1990]: adding interface lo/lo 127.0.0.1:4500 Jan 19 13:54:11 ubuntu pluto[1990]: adding interface lo/lo ::1:500 Jan 19 13:54:11 ubuntu pluto[1990]: adding interface eth0/eth0 2001:470:28:81:a00:27ff:* Jan 19 13:54:11 ubuntu pluto[1990]: loading secrets from "/etc/ipsec.secrets" Jan 19 13:54:11 ubuntu pluto[1990]: loading secrets from "/var/lib/openswan/ipsec.secrets.inc" Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [RFC 3947] method set to=109 Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] method set to=110 Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [8f8d83826d246b6fc7a8a6a428c11de8] Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [439b59f8ba676c4c7737ae22eab8f582] Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [4d1e0e136deafa34c4f3ea9f02ec7285] Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [80d0bb3def54565ee84645d4c85ce3ee] Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [9909b64eed937c6573de52ace952fa6b] Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 110 Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 110 Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 110 Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [Dead Peer Detection] Jan 19 14:04:31 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: responding to Main Mode from unknown peer 95.*.*.233 Jan 19 14:04:31 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Jan 19 14:04:31 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: STATE_MAIN_R1: sent MR1, expecting MI2 Jan 19 14:04:33 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): both are NATed Jan 19 14:04:33 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Jan 19 14:04:33 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: STATE_MAIN_R2: sent MR2, expecting MI3 Jan 19 14:05:03 ubuntu pluto[1990]: ERROR: asynchronous network error report on eth0 (sport=500) for message to 95.*.*.233 port 500, complainant 95.*.*.233: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)]
    Router config: UDP 500, 1701 and 4500 forwarded to 192.168.19.99 (Ubuntu server for ipsec). Ipsec passthrough enabled.
    /etc/ipsec.conf
     # /etc/ipsec.conf - Openswan IPsec configuration file # This file: /usr/share/doc/openswan/ipsec.conf-sample # # Manual: ipsec.conf.5 version 2.0 # conforms to second version of ipsec.conf specification config setup nat_traversal=yes #charonstart=yes #plutostart=yes protostack=netkey conn PSK authby=secret forceencaps=yes pfs=no auto=add keyingtries=3 dpdtimeout=60 dpdaction=clear rekey=no left=192.168.19.99 leftnexthop=192.168.19.1 leftprotoport=17/1701 right=%any rightprotoport=17/%any rightsubnet=vhost:%priv,%no dpddelay=10 #dpdtimeout=10 #dpdaction=clear include /etc/ipsec.d/l2tp-psk.conf
    /etc/ipsec.d/l2tp-psk.conf
     conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT # # PreSharedSecret needs to be specified in /etc/ipsec.secrets as # YourIPAddress %any: "sharedsecret" authby=secret pfs=no auto=add keyingtries=3 # we cannot rekey for %any, let client rekey rekey=no # Set ikelifetime and keylife to same defaults windows has ikelifetime=8h keylife=1h # l2tp-over-ipsec is transport mode type=transport # left=192.168.19.99 # # For updated Windows 2000/XP clients, # to support old clients as well, use leftprotoport=17/%any leftprotoport=17/1701 # # The remote user. # right=%any # Using the magic port of "0" means "any one single port". This is # a work around required for Apple OSX clients that use a randomly # high port, but propose "0" instead of their port. rightprotoport=17/%any dpddelay=10 dpdtimeout=10 dpdaction=clear conn passthrough-for-non-l2tp type=passthrough left=192.168.19.99 leftnexthop=192.168.19.1 right=0.0.0.0 rightsubnet=0.0.0.0/0 auto=route
    /etc/ipsec.secrets
     include /var/lib/openswan/ipsec.secrets.inc %any %any: PSK "my-key" 192.168.19.99 %any: PSK "my-key"
    /etc/xl2tpd/xl2tpd.conf
     [global] debug network = yes debug tunnel = yes ipsec saref = no listen-addr = 192.168.19.99 [lns default] ip range = 192.168.19.201-192.168.19.220 local ip = 192.168.19.99 require chap = yes refuse chap = no refuse pap = no require authentication = no ppp debug = yes pppoptfile = /etc/ppp/options.xl2tpd length bit = yes
    /etc/ppp/options.xl2tpd
     pcp-accept-local ipcp-accept-remote noccp auth crtscts idle 1800 mtu 1410 mru 1410 defaultroute debug lock proxyarp connect-delay 5000 ipcp-accept-local
    /etc/ppp/chap-secrets
     # Secrets for authentication using CHAP # client server secret IP addresses maciekish * my-secret * * maciekish my-secret *
    I can't seem to find the problem. Other ipsec connections to other hosts work from the network I'm currently at.
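    A note on the failure above: the exchange dies when the server's reply to the client's UDP port 500 comes back "Connection refused", which usually points at NAT or port forwarding rather than the IPsec configuration itself. As a rough first check, the sketch below (illustrative only; the host and ports are taken from the question) uses connected UDP sockets to see whether the forwarded ports reach a listener. An immediate "refused" means the datagram arrived but nothing was listening; a timeout is inconclusive for UDP, since open-but-silent and filtered look the same.
     import socket

     HOST = "192.168.19.99"           # the Openswan/xl2tpd server from the question
     PORTS = (500, 4500, 1701)        # IKE, NAT-T and L2TP

     for port in PORTS:
         sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
         sock.settimeout(2.0)
         sock.connect((HOST, port))   # for UDP this only fixes the destination
         try:
             sock.send(b"probe")
             sock.recv(64)            # pluto/xl2tpd will not answer garbage, so expect a timeout
             print("%d/udp: got a reply (port is reachable)" % port)
         except ConnectionRefusedError:
             print("%d/udp: ICMP port unreachable - host reached, nothing listening" % port)
         except socket.timeout:
             print("%d/udp: no response (open-but-silent or filtered)" % port)
         finally:
             sock.close()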

    Read the article

  • Why does a WPF Style to show validation errors in a ToolTip work for a TextBox but fail for a ComboBox?

    - by Mike B
    I am using a typical Style to display validation errors as a tooltip from IDataErrorInfo for a TextBox, as shown below, and it works fine.
     <Style TargetType="{x:Type TextBox}"> <Style.Triggers> <Trigger Property="Validation.HasError" Value="true"> <Setter Property="ToolTip" Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/> </Trigger> </Style.Triggers> </Style>
    But when I try to do the same thing for a ComboBox like this, it fails:
     <Style TargetType="{x:Type ComboBox}"> <Style.Triggers> <Trigger Property="Validation.HasError" Value="true"> <Setter Property="ToolTip" Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/> </Trigger> </Style.Triggers> </Style>
    The error I get in the output window is:
     System.Windows.Data Error: 17 : Cannot get 'Item[]' value (type 'ValidationError') from '(Validation.Errors)' (type 'ReadOnlyObservableCollection`1'). BindingExpression:Path=(0)[0].ErrorContent; DataItem='ComboBox' (Name='ownerComboBox'); target element is 'ComboBox' (Name='ownerComboBox'); target property is 'ToolTip' (type 'Object') ArgumentOutOfRangeException:'System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.Parameter name: index'
    Oddly, it also attempts to make invalid database changes when I close the window if I change any ComboBox values (this is also when the binding error occurs)!!!
     Cannot insert the value NULL into column 'EmpFirstName', table 'OITaskManager.dbo.Employees'; column does not allow nulls. INSERT fails. The statement has been terminated.
    Simply by commenting the style out, everything works perfectly. How do I fix this? Just in case anyone needs it, the XAML for one of the ComboBoxes follows:
     <ComboBox ItemsSource="{Binding Path=Employees}" SelectedValuePath="EmpID" SelectedValue="{Binding Path=SelectedIssue.Employee2.EmpID, Mode=OneWay, ValidatesOnDataErrors=True}" ItemTemplate="{StaticResource LastNameFirstComboBoxTemplate}" Height="28" Name="ownerComboBox" Width="120" Margin="2" SelectionChanged="ownerComboBox_SelectionChanged" />
     <DataTemplate x:Key="LastNameFirstComboBoxTemplate"> <TextBlock> <TextBlock.Text> <MultiBinding StringFormat="{}{1}, {0}" > <Binding Path="EmpFirstName" /> <Binding Path="EmpLastName" /> </MultiBinding> </TextBlock.Text> </TextBlock> </DataTemplate>
    SelectionChanged: (I do plan to implement commanding before long, but as this is my first WPF project I have not gone full MVVM yet; I am trying to take things in small-to-medium-sized bites.)
     // This is done this way to maintain the DataContext Integrity // and avoid an error due to an Object being "Not New" in Linq-to-SQL private void ownerComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e) { Employee currentEmpl = ownerComboBox.SelectedItem as Employee; if (currentEmpl != null && currentEmpl != statusBoardViewModel.SelectedIssue.Employee2) { statusBoardViewModel.SelectedIssue.Employee2 = currentEmpl; } }

    Read the article

  • Python Unicode Decode Error with SUDS

    - by PylonsN00b
    OK so I have # -*- coding: utf-8 -*- at the top of my script and it worked for being able to pull data from the database that had funny chars(Ñ ,Õ,é,—,–,’,…) in it and store that data into variables...but I have run into other problems, see I pull my data, organize it, and then dump it into a variables like so: title = product[1] Where product[1] is from my database result set Then I load it up for Suds like so: array_of_inventory_item_submit = ca_client_inventory.factory.create('ArrayOfInventoryItemSubmit') for product in products: inventory_item_submit = ca_client_inventory.factory.create('InventoryItemSubmit') inventory_item_list = get_item_list(product) inventory_item_submit = [inventory_item_list] array_of_inventory_item_submit.InventoryItemSubmit.append(inventory_item_submit) #Call that service baby! ca_client_inventory.service.SynchInventoryItemList(accountID, array_of_inventory_item_submit) Where get_item_list sets product[1] to title and (including a whole bunch of other nodes): inventory_item_submit.Title = title So everything runs fine until I call ca_client_inventory.service.SynchInventoryItemList that contains array_of_inventory_item_submit which contains the title w/ the funky char...here is the error: Traceback (most recent call last): File "upload_all_inventory_ebay.py", line 421, in <module> ca_client_inventory.service.SynchInventoryItemList(accountID, array_of_inventory_item_submit) File "build/bdist.macosx-10.6-i386/egg/suds/client.py", line 539, in __call__ File "build/bdist.macosx-10.6-i386/egg/suds/client.py", line 592, in invoke File "build/bdist.macosx-10.6-i386/egg/suds/bindings/binding.py", line 118, in get_message File "build/bdist.macosx-10.6-i386/egg/suds/bindings/document.py", line 63, in bodycontent File "build/bdist.macosx-10.6-i386/egg/suds/bindings/document.py", line 105, in mkparam File "build/bdist.macosx-10.6-i386/egg/suds/bindings/binding.py", line 260, in mkparam File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 62, in process File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 243, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 298, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 298, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 243, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 198, in append File 
"build/bdist.macosx-10.6-i386/egg/suds/sax/element.py", line 251, in setText File "build/bdist.macosx-10.6-i386/egg/suds/sax/text.py", line 43, in __new__ UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 116: ordinal not in range(128) Now what? My guess is my script can take in these funky chars because I have # -*- coding: utf-8 -*- at the top but Suds does NOT have that at the top of its files. Do I really want to go and change the Suds files...we all know this is the least desired last possible solution...what can I do?

    Read the article

  • SlimDX: Lighting problem with Direct3D9

    - by Spi1988
    I am creating a simple application to get familiar with SlimDX library. I found some code written in MDX and I'm trying to convert it to run on SlimDX. I am having some problems with the light because everything is being shown as black. The code is: public partial class DirectTest : Form { private Device device= null; private float angle = 0.0f; Light light = new Light(); public DirectTest() { InitializeComponent(); this.Size = new Size(800, 600); this.SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.Opaque, true); } /// <summary> /// We will initialize our graphics device here /// </summary> public void InitializeGraphics() { PresentParameters pres_params = new PresentParameters() { Windowed = true, SwapEffect = SwapEffect.Discard }; device = new Device(new Direct3D(), 0, DeviceType.Hardware, this.Handle, CreateFlags.SoftwareVertexProcessing, pres_params); } private void SetupCamera() { device.SetRenderState(RenderState.CullMode, Cull.None); device.SetTransform(TransformState.World, Matrix.RotationAxis(new Vector3(angle / ((float)Math.PI * 2.0f), angle / ((float)Math.PI * 4.0f), angle / ((float)Math.PI * 6.0f)), angle / (float)Math.PI)); angle += 0.1f; device.SetTransform(TransformState.Projection, Matrix.PerspectiveFovLH((float)Math.PI / 4, this.Width / this.Height, 1.0f, 100.0f)); device.SetTransform(TransformState.View, Matrix.LookAtLH(new Vector3(0, 0, 5.0f), new Vector3(), new Vector3(0, 1, 0))); device.SetRenderState(RenderState.Lighting, false); } protected override void OnPaint(System.Windows.Forms.PaintEventArgs e) { device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.CornflowerBlue, 1.0f, 0); SetupCamera(); CustomVertex.PositionColored[] verts = new CustomVertex.PositionColored[3]; verts[0].Position = new Vector3(0.0f, 1.0f, 1.0f); verts[0].Normal = new Vector3(0.0f, 0.0f, -1.0f); verts[0].Color = System.Drawing.Color.White.ToArgb(); verts[1].Position = new Vector3(-1.0f, -1.0f, 1.0f); verts[1].Normal = new Vector3(0.0f, 0.0f, -1.0f); verts[1].Color = System.Drawing.Color.White.ToArgb(); verts[2].Position = new Vector3(1.0f, -1.0f, 1.0f); verts[2].Normal = new Vector3(0.0f, 0.0f, -1.0f); verts[2].Color = System.Drawing.Color.White.ToArgb(); light.Type = LightType.Point; light.Position = new Vector3(); light.Diffuse = System.Drawing.Color.White; light.Attenuation0 = 0.2f; light.Range = 10000.0f; device.SetLight(0, light); device.EnableLight(0, true); device.BeginScene(); device.VertexFormat = CustomVertex.PositionColored.format; device.DrawUserPrimitives<CustomVertex.PositionColored>(PrimitiveType.TriangleList, 1, verts); device.EndScene(); device.Present(); this.Invalidate(); } } } The Vertex Format that I am using is the following [StructLayout(LayoutKind.Sequential)] public struct PositionNormalColored { public Vector3 Position; public int Color; public Vector3 Normal; public static readonly VertexFormat format = VertexFormat.Diffuse | VertexFormat.Position | VertexFormat.Normal; } Any suggestions on what the problem might be? Thanks in Advance

    Read the article

  • DSOFramer closing Excel doc in another window. If unsaved data in file, DSOFramer fails to open with "Attempt to access invalid address"

    - by Steve
    I'm using Microsoft's DSOFramer control to allow me to embed an Excel file in my dialog so the user can choose his sheet, then select his range of cells; it's used with an import button on my dialog. The problem is that when I call the DSOFramer's OPEN function, if I have Excel open in another window, it closes the Excel document (but leaves Excel running). If the document it tries to close has unsaved data, DSOFramer fails to open and I get a message box: "Attempt to access invalid address". I built the source and stepped through, and it's making a call in its CDsoDocObject::CreateFromFile function, calling BindToObject on an object of class IMoniker. The HR is 0x8001010a "The message filter indicated that the application is busy". On that failure, it tries to InstantiateDocObjectServer by classid of CLSID Microsoft Excel Worksheet... this fails with an HRESULT of 0x80040154 "Class not registered". The InstantiateDocObjectServer just calls CoCreateInstance on the classid, first with CLSCTX_LOCAL_SERVER, then (if that fails) with CLSCTX_INPROC_SERVER. I know DSOFramer is a popular sample project for embedding Office apps in various dialogs and forms. I'm hoping someone else has had this problem and might have some insight on how I can solve this. I really don't want it to close any other open Excel documents, and I really don't want it to error out if it can't close the document due to unsaved data.
    Update 1: I've tried changing the classid that's passed in to "Excel.Application" (I know that class will resolve), but that didn't work. In CDsoDocObject, it tries to open key "HKEY_CLASSES_ROOT\CLSID{00024500-0000-0000-C000-000000000046}\DocObject", but fails. I've visually confirmed that the key is not present in my registry; the key is present for the GUID, but there's no DocObject subkey. It then produces an error message box: "The associated COM server does not support ActiveX document embedding". I get similar (different key, of course) results when I try to use the Excel.Workbook ProgID.
    Update 2: I tried starting a 2nd instance of Excel, hoping that my automation would bind to it (being the most recently invoked) instead of the problem Excel instance, but it didn't seem to do that. Results were the same. My problem seems to have boiled down to this: I'm calling BindToObject on an object of class IMoniker, and receiving 0x8001010A (RPC_E_SERVERCALL_RETRYLATER) "The message filter indicated that the application is busy". I've tried playing with the flags passed to BindToObject (via SetBindOptions), but nothing seems to make any difference.
    Update 3: It first tries to bind using an IMoniker class. If that fails, it calls CoCreateInstance for the clsid as a "fallback" method. This may work for other MS Office objects, but when it's Excel, the class is for the Worksheet. I modified the sample to CoCreateInstance _Application, then got the workbooks, then called Workbooks::Open for the target file, which returns a Worksheet object. I then returned that pointer and merged back with the original sample code path. All working now.

    Read the article

  • .NET Framework generates strange DCOM error

    - by Anders Oestergaard Jensen
    Hello, I am creating a simple application that enables merging of key-value pair fields in a Word and/or Excel document. Until now, the application has worked just fine. I am using the latest version of .NET Framework 4.0 (since it provides a nice wrapper API for Interop). My sample merging method looks like this:
     public byte[] ProcessWordDocument(string path, List<KeyValuePair<string, string>> kvs) { logger.InfoFormat("ProcessWordDocument: path = {0}", path); var localWordapp = new Word.Application(); localWordapp.Visible = false; Word.Document doc = null; try { doc = localWordapp.Documents.Open(path, ReadOnly: false); logger.Debug("Executing Find->Replace..."); foreach (Word.Range r in doc.StoryRanges) { foreach (KeyValuePair<string, string> kv in kvs) { r.Find.Execute(Replace: Word.WdReplace.wdReplaceAll, FindText: kv.Key, ReplaceWith: kv.Value, Wrap: Word.WdFindWrap.wdFindContinue); } } logger.Debug("Done! Saving document and cleaning up"); doc.Save(); doc.Close(); System.Runtime.InteropServices.Marshal.ReleaseComObject(doc); localWordapp.Quit(); System.Runtime.InteropServices.Marshal.ReleaseComObject(localWordapp); logger.Debug("Done."); return System.IO.File.ReadAllBytes(path); } catch (Exception ex) { // Logging... // doc.Close(); if (doc != null) { doc.Close(); System.Runtime.InteropServices.Marshal.ReleaseComObject(doc); } localWordapp.Quit(); System.Runtime.InteropServices.Marshal.ReleaseComObject(localWordapp); throw; } }
    The above C# snippet has worked fine (compiled and deployed onto a Windows Server 2008 x64 with the latest updates installed). But now, suddenly, I get the following strange error:
     System.Runtime.InteropServices.COMException (0x80080005): Retrieving the COM class factory for component with CLSID {000209FF-0000-0000-C000-000000000046} failed due to the following error: 80080005 Server execution failed (Exception from HRESULT: 0x80080005 (CO_E_SERVER_EXEC_FAILURE)). at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck) at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache) at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean skipCheckThis, Boolean fillCache) at System.Activator.CreateInstance(Type type, Boolean nonPublic) at Meeho.Integration.OfficeHelper.ProcessWordDocument(String path, List`1 kvs) in C:\meeho\src\webservices\Meeho.Integration\OfficeHelper.cs:line 30 at Meeho.IntegrationService.ConvertDocument(Byte[] template, String ext, String[] fields, String[] values) in C:\meeho\src\webservices\MeehoService\IntegrationService.asmx.cs:line 49 --
    I googled the COM error, but it turned up nothing of particular value. I even granted the right permissions for the COM DLLs using mmc -32, where I located the Word and Excel documents and set the execution rights to the Administrator. I could not, however, locate the DLLs by the exact COM CLSID given above. Very frustrating. Please, please, please help me, as the application is currently pulled out of production.
Anders EDIT: output from the Windows event log: Faulting application name: WINWORD.EXE, version: 12.0.6514.5000, time stamp: 0x4a89d533 Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000 Exception code: 0xc0000005 Fault offset: 0x00000000 Faulting process id: 0x720 Faulting application start time: 0x01cac571c4f82a7b Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\WINWORD.EXE Faulting module path: unknown Report Id: 041dd5f9-3165-11df-b96a-0025643cefe6 - 1000 2 100 0x80000000000000 2963 Application meeho3 - WINWORD.EXE 12.0.6514.5000 4a89d533 unknown 0.0.0.0 00000000 c0000005 00000000 720 01cac571c4f82a7b C:\Program Files (x86)\Microsoft Office\Office12\WINWORD.EXE unknown 041dd5f9-3165-11df-b96a-0025643cefe6

    Read the article

  • Openswan + xl2tpd connections time out after a while

    - by Halfgaar
    I have a non-NATed Openswan+xl2tpd server (Ubuntu 12.04), to which I connect with a Windows 8 client behind NAT. The client loses its connection after a while of doing nothing (between 30 and 60 minutes, but I didn't time it). The client is not configured to kill inactive connections, nor does it ever go into sleep mode. I also tried setting the kill-after-time to 24 hours, but that didn't help. The NAT router behind which the client is located is a Debian Linux box, and its router is a Cisco which connects us directly to the data center where the server is. None of our other connections, like SSH, get dropped due to inactivity (because of cheap routers). I did, however, try turning on the keepalives in /etc/ipsec.conf:
     config setup (...snip...) nat_traversal=yes force_keepalive=yes keep_alive=10
    but that didn't help. As you can see in the config later, dead peer detection's action is clear. That would be a first suggestion to fix, but I need clear, because people will be connecting from everywhere but the kitchen sink. Besides, as I said, in the test setup I have now, I can't see any device killing its connection. (edit: 'restart' also has the same effect) These are the logs from one time it happened:
     Jul 18 16:18:06 host xl2tpd[1918]: Maximum retries exceeded for tunnel 49070. Closing. Jul 18 16:18:06 host xl2tpd[1918]: Terminating pppd: sending TERM signal to pid 18359 Jul 18 16:18:06 host xl2tpd[1918]: Connection 4 closed to 89.188.x.y, port 1701 (Timeout) Jul 18 16:18:11 host xl2tpd[1918]: Unable to deliver closing message for tunnel 49070. Destroying anyway.
    and these from another:
     Jul 18 17:44:39 host xl2tpd[1918]: udp_xmit failed to 89.188.x.y:1701 with err=-1:Operation not permitted Jul 18 17:44:43 xl2tpd[1918]: last message repeated 4 times Jul 18 17:44:43 host xl2tpd[1918]: Maximum retries exceeded for tunnel 10918. Closing. Jul 18 17:44:43 host xl2tpd[1918]: udp_xmit failed to 89.188.x.y:1701 with err=-1:Operation not permitted Jul 18 17:44:43 host xl2tpd[1918]: Terminating pppd: sending TERM signal to pid 26338 Jul 18 17:44:43 host xl2tpd[1918]: Connection 6 closed to 89.188.x.y, port 1701 (Timeout) Jul 18 17:44:44 host xl2tpd[1918]: udp_xmit failed to 89.188.x.y:1701 with err=-1:Operation not permitted Jul 18 17:44:48 xl2tpd[1918]: last message repeated 3 times Jul 18 17:44:48 host xl2tpd[1918]: Unable to deliver closing message for tunnel 10918. Destroying anyway. Jul 18 17:44:59 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:44:59 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. Jul 18 17:45:09 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:45:09 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. Jul 18 17:45:19 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:45:19 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. Jul 18 17:45:29 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:45:29 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. Jul 18 17:45:39 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:45:39 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping.
Jul 18 17:45:49 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:45:49 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. Versions: Ubuntu 12.04 Openswan: 2.6.37-1 xl2tpd: 3.1+dfsg-1 kernel: 3.2.0-49-generic configs: /etc/ipsec.conf: version 2.0 # conforms to second version of ipsec.conf specification config setup nat_traversal=yes virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:!10.152.2.0/24 oe=off protostack=netkey force_keepalive=yes keep_alive=10 conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT authby=secret pfs=no auto=add keyingtries=2 rekey=no dpddelay=30 dpdtimeout=120 dpdaction=clear ikelifetime=8h keylife=1h type=transport left=%defaultroute leftprotoport=17/1701 right=%any rightprotoport=17/%any /etc/xl2tpd/xl2tpd.conf [global] ipsec saref = no [lns default] ip range = 10.152.2.2-10.152.2.254 local ip = 10.152.2.1 refuse chap = yes refuse pap = yes require authentication = yes ppp debug = no pppoptfile = /etc/ppp/options.xl2tpd length bit = yes /etc/ppp/options.xl2tpd: require-mschap-v2 refuse-mschap ms-dns 10.152.2.1 asyncmap 0 auth crtscts idle 1800 mtu 1200 mru 1200 lock hide-password local #debug name l2tpd proxyarp lcp-echo-interval 30 lcp-echo-failure 4

    Read the article

  • Optimizing Jaro-Winkler algorithm

    - by Pentium10
    I have this code for the Jaro-Winkler algorithm, taken from this website. I need to run it 150,000 times to get the distance between differences. It takes a long time, as I run it on an Android mobile device. Can it be optimized more?
     public class Jaro { /** * gets the similarity of the two strings using Jaro distance. * * @param string1 the first input string * @param string2 the second input string * @return a value between 0-1 of the similarity */ public float getSimilarity(final String string1, final String string2) { //get half the length of the string rounded up - (this is the distance used for acceptable transpositions) final int halflen = ((Math.min(string1.length(), string2.length())) / 2) + ((Math.min(string1.length(), string2.length())) % 2); //get common characters final StringBuffer common1 = getCommonCharacters(string1, string2, halflen); final StringBuffer common2 = getCommonCharacters(string2, string1, halflen); //check for zero in common if (common1.length() == 0 || common2.length() == 0) { return 0.0f; } //check for same length common strings returning 0.0f is not the same if (common1.length() != common2.length()) { return 0.0f; } //get the number of transpositions int transpositions = 0; int n=common1.length(); for (int i = 0; i < n; i++) { if (common1.charAt(i) != common2.charAt(i)) transpositions++; } transpositions /= 2.0f; //calculate jaro metric return (common1.length() / ((float) string1.length()) + common2.length() / ((float) string2.length()) + (common1.length() - transpositions) / ((float) common1.length())) / 3.0f; } /** * returns a string buffer of characters from string1 within string2 if they are of a given * distance seperation from the position in string1. * * @param string1 * @param string2 * @param distanceSep * @return a string buffer of characters from string1 within string2 if they are of a given * distance seperation from the position in string1 */ private static StringBuffer getCommonCharacters(final String string1, final String string2, final int distanceSep) { //create a return buffer of characters final StringBuffer returnCommons = new StringBuffer(); //create a copy of string2 for processing final StringBuffer copy = new StringBuffer(string2); //iterate over string1 int n=string1.length(); int m=string2.length(); for (int i = 0; i < n; i++) { final char ch = string1.charAt(i); //set boolean for quick loop exit if found boolean foundIt = false; //compare char with range of characters to either side for (int j = Math.max(0, i - distanceSep); !foundIt && j < Math.min(i + distanceSep, m - 1); j++) { //check if found if (copy.charAt(j) == ch) { foundIt = true; //append character found returnCommons.append(ch); //alter copied string2 for processing copy.setCharAt(j, (char)0); } } } return returnCommons; } }
    I should mention that in the whole process I make just one instance of the class, so only once: jaro = new Jaro(); If you are going to test and need examples that do not break the script, you will find them here, in another thread about Python optimization.
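    Since the question points at a Python thread for test data, here is what the usual restructuring looks like in Python: mark matches with boolean arrays instead of building StringBuffer copies, and count transpositions in a single pass. This is an illustrative sketch of the textbook Jaro metric (no Winkler prefix bonus, and the match window is the standard floor(max/2)-1 rather than the half-of-the-shorter-string used above), not a drop-in replacement for the Java class; porting the same shape back to Java (boolean[] flags, cached lengths, no mutable copy of string2) is where the savings usually come from on Android.
     def jaro(s1, s2):
         """Plain Jaro similarity in the range 0.0-1.0 (no Winkler prefix bonus)."""
         if not s1 or not s2:
             return 0.0
         if s1 == s2:
             return 1.0
         window = max(max(len(s1), len(s2)) // 2 - 1, 0)
         matched1 = [False] * len(s1)
         matched2 = [False] * len(s2)

         # Pass 1: count matches, using each character of s2 at most once.
         matches = 0
         for i, ch in enumerate(s1):
             lo = max(0, i - window)
             hi = min(i + window + 1, len(s2))
             for j in range(lo, hi):
                 if not matched2[j] and s2[j] == ch:
                     matched1[i] = matched2[j] = True
                     matches += 1
                     break
         if matches == 0:
             return 0.0

         # Pass 2: count transpositions among the matched characters.
         k = 0
         transpositions = 0
         for i, ch in enumerate(s1):
             if matched1[i]:
                 while not matched2[k]:
                     k += 1
                 if ch != s2[k]:
                     transpositions += 1
                 k += 1
         transpositions //= 2

         return (matches / len(s1)
                 + matches / len(s2)
                 + (matches - transpositions) / matches) / 3.0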

    Read the article

  • Why does my MacBook Pro have long ping times over Wi-Fi?

    - by randynov
    I have been having problems connecting with my Wi-Fi. It is weird, the ping times to the router (<30 feet away) seem to surge, often getting over 10 seconds before slowly coming back down. You can see the trend below. I'm on a MacBook Pro and have done the normal stuff (reset the PRAM and SMC, changed wireless channels, etc.). It happens across different routers, so I think it must be my laptop, but I don't know what it could be. The RSSI value hovers around -57, but I've seen the transmit rate flip between 0, 48 and 54. The signal strength is ~60% with 9% noise. Currently, there are 17 other wireless networks in range, but only one in the same channel. 1 - How can I figure out what's going on? 2 - How can I correct the situation? PING 192.168.1.1 (192.168.1.1): 56 data bytes 64 bytes from 192.168.1.1: icmp_seq=0 ttl=254 time=781.107 ms 64 bytes from 192.168.1.1: icmp_seq=1 ttl=254 time=681.551 ms 64 bytes from 192.168.1.1: icmp_seq=2 ttl=254 time=610.001 ms 64 bytes from 192.168.1.1: icmp_seq=3 ttl=254 time=544.915 ms 64 bytes from 192.168.1.1: icmp_seq=4 ttl=254 time=547.622 ms 64 bytes from 192.168.1.1: icmp_seq=5 ttl=254 time=468.914 ms 64 bytes from 192.168.1.1: icmp_seq=6 ttl=254 time=237.368 ms 64 bytes from 192.168.1.1: icmp_seq=7 ttl=254 time=229.902 ms 64 bytes from 192.168.1.1: icmp_seq=8 ttl=254 time=11754.151 ms 64 bytes from 192.168.1.1: icmp_seq=9 ttl=254 time=10753.943 ms 64 bytes from 192.168.1.1: icmp_seq=10 ttl=254 time=9754.428 ms 64 bytes from 192.168.1.1: icmp_seq=11 ttl=254 time=8754.199 ms 64 bytes from 192.168.1.1: icmp_seq=12 ttl=254 time=7754.138 ms 64 bytes from 192.168.1.1: icmp_seq=13 ttl=254 time=6754.159 ms 64 bytes from 192.168.1.1: icmp_seq=14 ttl=254 time=5753.991 ms 64 bytes from 192.168.1.1: icmp_seq=15 ttl=254 time=4754.068 ms 64 bytes from 192.168.1.1: icmp_seq=16 ttl=254 time=3753.930 ms 64 bytes from 192.168.1.1: icmp_seq=17 ttl=254 time=2753.768 ms 64 bytes from 192.168.1.1: icmp_seq=18 ttl=254 time=1753.866 ms 64 bytes from 192.168.1.1: icmp_seq=19 ttl=254 time=753.592 ms 64 bytes from 192.168.1.1: icmp_seq=20 ttl=254 time=517.315 ms 64 bytes from 192.168.1.1: icmp_seq=37 ttl=254 time=1.315 ms 64 bytes from 192.168.1.1: icmp_seq=38 ttl=254 time=1.035 ms 64 bytes from 192.168.1.1: icmp_seq=39 ttl=254 time=4.597 ms 64 bytes from 192.168.1.1: icmp_seq=21 ttl=254 time=18010.681 ms 64 bytes from 192.168.1.1: icmp_seq=22 ttl=254 time=17010.449 ms 64 bytes from 192.168.1.1: icmp_seq=23 ttl=254 time=16010.430 ms 64 bytes from 192.168.1.1: icmp_seq=24 ttl=254 time=15010.540 ms 64 bytes from 192.168.1.1: icmp_seq=25 ttl=254 time=14010.450 ms 64 bytes from 192.168.1.1: icmp_seq=26 ttl=254 time=13010.175 ms 64 bytes from 192.168.1.1: icmp_seq=27 ttl=254 time=12010.282 ms 64 bytes from 192.168.1.1: icmp_seq=28 ttl=254 time=11010.265 ms 64 bytes from 192.168.1.1: icmp_seq=29 ttl=254 time=10010.285 ms 64 bytes from 192.168.1.1: icmp_seq=30 ttl=254 time=9010.235 ms 64 bytes from 192.168.1.1: icmp_seq=31 ttl=254 time=8010.399 ms 64 bytes from 192.168.1.1: icmp_seq=32 ttl=254 time=7010.144 ms 64 bytes from 192.168.1.1: icmp_seq=33 ttl=254 time=6010.113 ms 64 bytes from 192.168.1.1: icmp_seq=34 ttl=254 time=5010.025 ms 64 bytes from 192.168.1.1: icmp_seq=35 ttl=254 time=4009.966 ms 64 bytes from 192.168.1.1: icmp_seq=36 ttl=254 time=3009.825 ms 64 bytes from 192.168.1.1: icmp_seq=40 ttl=254 time=16000.676 ms 64 bytes from 192.168.1.1: icmp_seq=41 ttl=254 time=15000.477 ms 64 bytes from 192.168.1.1: icmp_seq=42 ttl=254 time=14000.388 ms 64 bytes from 192.168.1.1: 
icmp_seq=43 ttl=254 time=13000.549 ms 64 bytes from 192.168.1.1: icmp_seq=44 ttl=254 time=12000.469 ms 64 bytes from 192.168.1.1: icmp_seq=45 ttl=254 time=11000.332 ms 64 bytes from 192.168.1.1: icmp_seq=46 ttl=254 time=10000.339 ms 64 bytes from 192.168.1.1: icmp_seq=47 ttl=254 time=9000.338 ms 64 bytes from 192.168.1.1: icmp_seq=48 ttl=254 time=8000.198 ms 64 bytes from 192.168.1.1: icmp_seq=49 ttl=254 time=7000.388 ms 64 bytes from 192.168.1.1: icmp_seq=50 ttl=254 time=6000.217 ms 64 bytes from 192.168.1.1: icmp_seq=51 ttl=254 time=5000.084 ms 64 bytes from 192.168.1.1: icmp_seq=52 ttl=254 time=3999.920 ms 64 bytes from 192.168.1.1: icmp_seq=53 ttl=254 time=3000.010 ms 64 bytes from 192.168.1.1: icmp_seq=54 ttl=254 time=1999.832 ms 64 bytes from 192.168.1.1: icmp_seq=55 ttl=254 time=1000.072 ms 64 bytes from 192.168.1.1: icmp_seq=58 ttl=254 time=1.125 ms 64 bytes from 192.168.1.1: icmp_seq=59 ttl=254 time=1.070 ms 64 bytes from 192.168.1.1: icmp_seq=60 ttl=254 time=2.515 ms
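    One observation on the capture above: the way the reported times count down by roughly one second per reply (11754, 10753, 9754, ...) usually means the replies were queued somewhere and then released in a burst, which is more typical of Wi-Fi power-save or driver buffering than of genuine 10-second round trips. To see when the surges happen, a simple logger like the sketch below (macOS/BSD ping flags assumed; adjust elsewhere) records a wall-clock timestamp next to each latency, so bursts are easy to spot and to correlate with whatever else the machine was doing at the time.
     import re
     import subprocess
     import time

     HOST = "192.168.1.1"
     TIME_RE = re.compile(r"time=([\d.]+) ms")

     while True:
         stamp = time.strftime("%H:%M:%S")
         proc = subprocess.run(
             ["ping", "-c", "1", "-W", "5000", HOST],   # -W is milliseconds on macOS/BSD ping
             capture_output=True, text=True,
         )
         match = TIME_RE.search(proc.stdout)
         latency = (match.group(1) + " ms") if match else "timeout"
         print("%s  %s" % (stamp, latency))
         time.sleep(1)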

    Read the article

< Previous Page | 250 251 252 253 254 255 256 257 258 259 260 261  | Next Page >