Search Results

Search found 14598 results on 584 pages for 'address'.


  • Lubuntu 13.10 unable to connect to cups localhost:631

    - by user142139
    I am using Lubuntu 13.10 (recently upgraded) and am trying to print to a network printer (HP photosmart 7960) through my router (US Robotics 5461). My printer is connected to the router via USB cable. Normally, I would use the cups configuration interface to set up the wireless connection to the printer. I was able to use the printer through the router wirelessly, using Ubuntu 12.04. Now, with my recently upgraded Lubuntu 13.10, I am unable to get the Cups config webpage (http://localhost:631) to come up. In Chromium, I get: This web page is not available. In Firefox, I get: Unable to connect. Firefox can't establish a connection to the server at localhost:631. The CUPS config file details are below. I have this website to help with the router connections for Linux: http://www.usr.com/support/5461/5461-files/printer_installation_linux/index.html My printer's address through the router is: http://192.168.2.1:1631/printers/My_Printer Can you tell me how to fix this? Or, what to add to the cups configuration file to make this work? Please help. Thanks psychicnut CUPS CONFIG FILE DETAILS: # Show general information in error_log. LogLevel warn MaxLogSize 0 SystemGroup lpadmin Listen /var/run/cups/cups.sock Listen /var/run/cups/cups.sock Listen 192.168.2.1:1631 Browsing Off BrowseLocalProtocols dnssd DefaultAuthType Basic WebInterface Yes <Location /> Order allow,deny </Location> <Location /admin> Order allow,deny </Location> <Location /admin/conf> AuthType Default Require user @SYSTEM Order allow,deny </Location> <Policy default> JobPrivateAccess default JobPrivateValues default SubscriptionPrivateAccess default SubscriptionPrivateValues default <Limit Create-Job Print-Job Print-URI Validate-Job> Order deny,allow </Limit> <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document> Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit Cancel-Job CUPS-Authenticate-Job> Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit All> Order deny,allow </Limit> </Policy> <Policy authenticated> JobPrivateAccess default JobPrivateValues default SubscriptionPrivateAccess default SubscriptionPrivateValues default <Limit Create-Job Print-Job Print-URI Validate-Job> AuthType Default Order deny,allow </Limit> <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document> AuthType Default Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default> AuthType Default Require user @SYSTEM Order 
deny,allow </Limit> <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit Cancel-Job CUPS-Authenticate-Job> AuthType Default Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit All> Order deny,allow </Limit> </Policy> JobPrivateAccess default JobPrivateValues default SubscriptionPrivateAccess default SubscriptionPrivateValues default
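
     A likely culprit is that cupsd has no localhost listener (and the Listen 192.168.2.1:1631 line points at the router's address, which the local daemon cannot bind to, so it may fail to start at all). A minimal sketch of the relevant cupsd.conf lines (an assumption about the fix, not the poster's actual file) would be:

         # /etc/cups/cupsd.conf -- listeners only; back the file up before editing
         Listen localhost:631            # restores the web interface at http://localhost:631
         Listen /var/run/cups/cups.sock  # keep the local socket (one entry is enough)
         # Listen 192.168.2.1:1631       # remove: this is the router's address, not a local one
         WebInterface Yes

     After saving, restart the service (sudo service cups restart) and retry http://localhost:631.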

    Read the article

  • L2TP connection fails!

    - by a.toraby
    I've installed l2tp-ipsec-vpn but when I try to connect to the vpn server I get error 500. Here are the logs: Jun 17 12:54:37.449 ipsec_setup: Stopping Openswan IPsec... Jun 17 12:54:38.858 Stopping xl2tpd: xl2tpd. Jun 17 12:54:38.859 xl2tpd[1511]: death_handler: Fatal signal 15 received Jun 17 12:54:38.872 ipsec_setup: Starting Openswan IPsec U2.6.37/K3.2.0-23-generic... Jun 17 12:54:39.027 ipsec__plutorun: Starting Pluto subsystem... Jun 17 12:54:39.033 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d Jun 17 12:54:39.037 recvref[30]: Protocol not available Jun 17 12:54:39.038 xl2tpd[2442]: This binary does not support kernel L2TP. Jun 17 12:54:39.038 xl2tpd[2444]: xl2tpd version xl2tpd-1.3.1 started on atp-ThinkPad-SL410 PID:2444 Jun 17 12:54:39.038 xl2tpd[2444]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc. Jun 17 12:54:39.038 xl2tpd[2444]: Forked by Scott Balmos and David Stipp, (C) 2001 Jun 17 12:54:39.038 xl2tpd[2444]: Inherited by Jeff McAdams, (C) 2002 Jun 17 12:54:39.039 xl2tpd[2444]: Forked again by Xelerance (www.xelerance.com) (C) 2006 Jun 17 12:54:39.039 xl2tpd[2444]: Listening on IP address 0.0.0.0, port 1701 Jun 17 12:54:39.040 Starting xl2tpd: xl2tpd. Jun 17 12:54:39.062 ipsec__plutorun: 002 added connection description "L2TP" Jun 17 12:55:30.753 104 "L2TP" #1: STATE_MAIN_I1: initiate Jun 17 12:55:30.754 010 "L2TP" #1: STATE_MAIN_I1: retransmission; will wait 20s for response Jun 17 12:55:30.754 010 "L2TP" #1: STATE_MAIN_I1: retransmission; will wait 40s for response Jun 17 12:55:30.754 003 "L2TP" #1: ignoring Vendor ID payload [MS NT5 ISAKMPOAKLEY 00000008] Jun 17 12:55:30.754 003 "L2TP" #1: received Vendor ID payload [RFC 3947] method set to=109 Jun 17 12:55:30.754 003 "L2TP" #1: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 109 Jun 17 12:55:30.755 003 "L2TP" #1: ignoring Vendor ID payload [FRAGMENTATION] Jun 17 12:55:30.755 003 "L2TP" #1: ignoring Vendor ID payload [MS-Negotiation Discovery Capable] Jun 17 12:55:30.755 003 "L2TP" #1: ignoring Vendor ID payload [IKE CGA version 1] Jun 17 12:55:30.755 106 "L2TP" #1: STATE_MAIN_I2: sent MI2, expecting MR2 Jun 17 12:55:30.755 010 "L2TP" #1: STATE_MAIN_I2: retransmission; will wait 20s for response Jun 17 12:55:30.755 003 "L2TP" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed Jun 17 12:55:30.755 108 "L2TP" #1: STATE_MAIN_I3: sent MI3, expecting MR3 Jun 17 12:55:30.756 004 "L2TP" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024} Jun 17 12:55:30.756 117 "L2TP" #2: STATE_QUICK_I1: initiate Jun 17 12:55:30.756 010 "L2TP" #2: STATE_QUICK_I1: retransmission; will wait 20s for response Jun 17 12:55:30.756 003 "L2TP" #2: ignoring informational payload, type IPSEC_RESPONDER_LIFETIME msgid=6b03ff69 Jun 17 12:55:30.756 003 "L2TP" #2: NAT-Traversal: received 2 NAT-OA. ignored because peer is not NATed Jun 17 12:55:30.756 003 "L2TP" #2: our client subnet returned doesn't match my proposal - us:192.168.1.3/32 vs them:109.162.174.235/32 Jun 17 12:55:30.757 003 "L2TP" #2: Allowing questionable proposal anyway [ALLOW_MICROSOFT_BAD_PROPOSAL] Jun 17 12:55:30.757 004 "L2TP" #2: STATE_QUICK_I2: sent QI2, IPsec SA established transport mode {ESP=>0x23af21f8 <0xdb4a87b6 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none} Jun 17 12:55:31.759 xl2tpd[2444]: Connecting to host x.x.x.x, port 1701 Jun 17 12:55:32.021 xl2tpd[2444]: Connection established to x.x.x.x, 1701. 
Local: 4720, Remote: 200 (ref=0/0). Jun 17 12:55:32.023 xl2tpd[2444]: Calling on tunnel 4720 Jun 17 12:55:32.454 xl2tpd[2444]: Call established with x.x.x.x, Local: 9667, Remote: 3, Serial: 1 (ref=0/0) Jun 17 12:55:32.456 xl2tpd[2444]: start_pppd: I'm running: Jun 17 12:55:32.456 xl2tpd[2444]: "/usr/sbin/pppd" Jun 17 12:55:32.457 xl2tpd[2444]: "passive" Jun 17 12:55:32.458 xl2tpd[2444]: "nodetach" Jun 17 12:55:32.458 xl2tpd[2444]: ":" Jun 17 12:55:32.459 xl2tpd[2444]: "file" Jun 17 12:55:32.459 xl2tpd[2444]: "/etc/ppp/L2TP.options.xl2tpd" Jun 17 12:55:32.460 xl2tpd[2444]: "ipparam" Jun 17 12:55:32.461 xl2tpd[2444]: "x.x.x.x" Jun 17 12:55:32.462 xl2tpd[2444]: "/dev/pts/1" Jun 17 12:55:32.583 pppd[2711]: Plugin passprompt.so loaded. Jun 17 12:55:32.583 pppd[2711]: pppd 2.4.5 started by root, uid 0 Jun 17 12:55:32.619 pppd[2711]: Using interface ppp0 Jun 17 12:55:32.620 pppd[2711]: Connect: ppp0 <--> /dev/pts/1 Jun 17 12:55:33.693 pppd[2711]: /usr/bin/L2tpIPsecVpn exited with code 0 Jun 17 12:55:34.454 [ERROR 404] Authentication failed: closing connection to 'L2TP' Jun 17 12:55:34.456 pppd[2711]: MS-CHAP authentication failed: E=691 Authentication failure Jun 17 12:55:34.457 pppd[2711]: CHAP authentication failed Jun 17 12:55:34.461 Stopping xl2tpd: xl2tpd. Jun 17 12:55:34.462 xl2tpd[2444]: death_handler: Fatal signal 15 received Jun 17 12:55:34.463 pppd[2711]: Modem hangup Jun 17 12:55:34.463 pppd[2711]: Connection terminated. Jun 17 12:55:34.474 ipsec_setup: Stopping Openswan IPsec... Jun 17 12:55:34.482 pppd[2711]: Exit. Jun 17 12:55:35.587 ipsec_setup: ERROR: Module xfrm4_mode_transport is in use Jun 17 12:55:35.665 ipsec_setup: ERROR: Module esp4 is in use I had this problem by ubuntu 11.10 though I can easily connect to the server from windows. I use ubuntu 12.0 64bit
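
     The MS-CHAP error E=691 in the log means the server rejected the username/password; the IPsec layer itself came up. As a hedged illustration (the account below is a placeholder, not taken from the poster's setup), pppd reads CHAP credentials from /etc/ppp/chap-secrets in this form:

         # client        server  secret          IP addresses
         "vpnuser"       *       "vpnpassword"   *

     Verify that the username and secret configured in the L2TP IPsec VPN applet (and used alongside /etc/ppp/L2TP.options.xl2tpd) exactly match what the server expects, including case.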

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

    The three-tier client/server model is a network architectural approach currently used in modern networking. This approach divides a network in to three distinct components. Three-Tier Client/Server Model Components Client Component Server Component Database Component The Client Component of the network typically represents any device on the network. A basic example of this would be computer or another network/web enabled devices that are connected to a network. Network clients request resources on the network, and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This process is done through the use of various software clients, and example of this can be seen through the use of a web browser client. The web browser request information from the Server Component located on the network and then renders the results for the user to process. The Server Components of the network return data based on specific client request back to the requesting client.  Server Components also inherit the attributes of a Client Component in that they are a device on the network and that they can also request information from other Server Components. However what differentiates a Client Component from a Server Component is that a Server Component response to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests and then interprets the request, processes the web pages, and then returns the processed data back to the web browser client so that it may render the data for the user to interpret. The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on a network, it can request information from other server components and database components, and it also listens for new requests so that it can return data when needed. The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another. Three-Tier Application Architecture Model Presentation Layer/Logic Business Layer/Logic Data Layer/Logic The Presentation Layer including its underlying logic is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presents the data back to the user.  Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows for segments of the Business Layer to be distributable and interchangeable because the Presentation Layer is not directly integrated in with Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows for the Presentation Layer and Business Layer to be stored on one or more different servers so that it can provide a higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site decides to take on the task of load balancing they must obtain a network device that sits in front of a one or machines in order to distribute the request across multiple servers. When a user comes in through the load balanced device they are redirected to a specific server based on a few factors. 
Common Load Balancing Factors Current Server Availability Current Server Response Time Current Server Priority The Business Layer and corresponding logic are business rules applied to data prior to it being sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data prior to being stored in the Data Access Layer. A good example of this would be when a user is trying to create multiple accounts under one email address. The Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer. The Server Component can be directly tied to this layer in that the server typically stores and process the Business Layer before it is returned to the end-user via the Presentation Layer. In addition the Server Component can also run automated process through the Business Layer on the data in the Data Access Layer so that additional business analysis can be derived from the data that has been already collected. The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typical in most modern applications data is stored in a database management system however data can also be in the form of files stored on a file server. In addition a database can take on one of several forms. Common Database Formats XML File Pipe Delimited File Tab Delimited File Comma Delimited File (CSV) Plain Text File Microsoft Access Microsoft SQL Server MySql Oracle Sybase The Database component of the Networking model can be directly tied to the Data Layer because this is where the Data Layer obtains the data to return back the Business Layer. The Database Component basically allows for a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall the saved data as needed so that the application does not have to worry about storing the data in memory. This prevents overhead that could be created when an application must retain all data in memory. As you can see the Three-Tier Client/Server Networking Model and the Three-Tiered Application Architecture Model rely very heavily on one another to function especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers are wonderful when an application has a need to distribute work across the network. Network Components and Application Layers Interaction Database components will store all data needed for the Data Access Layer to manipulate and return to the Business Layer Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer Client Component hosts the Presentation Layer that  interprets the data and present it to the user
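
     To make the mapping concrete, here is a brief, hypothetical C# sketch (the class names are illustrative, not part of the model itself) showing the email-uniqueness rule from the example above flowing through the three layers:

         using System;

         // Data Layer: talks to the Database Component and returns or stores raw records.
         public interface IAccountRepository
         {
             bool EmailExists(string email);   // query against the database tier
             void Save(Account account);       // persist to the database tier
         }

         // Business Layer: rules applied to data before it is stored or presented.
         public class AccountService
         {
             private readonly IAccountRepository _repository;
             public AccountService(IAccountRepository repository) { _repository = repository; }

             public void Register(Account account)
             {
                 // the "one account per email address" rule described above
                 if (_repository.EmailExists(account.Email))
                     throw new InvalidOperationException("An account already exists for this email.");
                 _repository.Save(account);
             }
         }

         // Presentation Layer: runs on the Client Component and only formats what it receives.
         public class AccountController
         {
             private readonly AccountService _service;
             public AccountController(AccountService service) { _service = service; }

             public string Register(string email)
             {
                 _service.Register(new Account { Email = email });
                 return "Account created for " + email;   // rendered for the user
             }
         }

         public class Account { public string Email { get; set; } }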

    Read the article

  • Can't correctly install Lazarus

    - by user206316
    I have a little problem with installing and running Lazarus. I just upgrade ubuntu from 13.04 to 13.10. When i had 13.04, i could install lazarus without any problems, but in 13.10 lazarus magicaly dissapeared, and when i tried install it from ubuntu software center, it said something like in my software resources lazarus-ide-0.9.30.4 doesnt exist. After some research on net i tried delete all files from earlier installations, download deb packages from sourceforge and install them, but when i want to instal fpc-src, error shows up with output: (Reading database ... 100% (Reading database ... 239063 files and directories currently installed.) Unpacking fpc-src (from .../Stiahnut/Lazarus/fpc-src.deb) ... dpkg: error processing /home/richi/Stiahnut/Lazarus/fpc-src.deb (--install): trying to overwrite '/usr/share/fpcsrc/2.6.2/rtl/nativent/tthread.inc', which is also in package fpc-source-2.6.2 2.6.2-5 dpkg-deb (subprocess): decompressing archive member: internal gzip write error: Broken pipe dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg-deb (subprocess): cannot copy archive member from '/home/richi/Stiahnut/Lazarus/fpc-src.deb' to decompressor pipe: failed to write (Broken pipe) when i started lazarus, it of course tell me that it cant find fpc compier and fpc sources. So, please, i really need program for school and i dont wanna reinstall os anymore or something like that :( (Ubuntu 13.10 64bit) P.S: im not skilled in linux so if u know some commands to fix it just write them for copy and paste :) P.P.S:Sorry for bad English, im Slovak xD P.P.P.S: Thank so much for any answers update: output from sudo dpkg -l | grep "^rc" richi@Richi-Ubuntu:~/lazarus1.0.12$ sudo dpkg -l | grep "^rc" rc account-plugin-generic-oauth 0.10bzr13.03.26-0ubuntu1.1 amd64 GNOME Control Center account plugin for single signon - generic OAuth rc appmenu-gtk:amd64 12.10.3daily13.04.03-0ubuntu1 amd64 Export GTK menus over DBus rc appmenu-gtk3:amd64 12.10.3daily13.04.03-0ubuntu1 amd64 Export GTK menus over DBus rc fp-compiler-2.6.0 2.6.0-9 amd64 Free Pascal - compiler rc fp-utils-2.6.0 2.6.0-9 amd64 Free Pascal - utilities rc lazarus-ide-0.9.30.4 0.9.30.4-4 amd64 IDE for Free Pascal - common IDE files rc lazarus-ide-1.0.10 1.0.10+dfsg-1 amd64 IDE for Free Pascal - common IDE files rc lcl-utils-0.9.30.4 0.9.30.4-4 amd64 Lazarus Components Library - command line build tools rc lcl-utils-1.0.10 1.0.10+dfsg-1 amd64 Lazarus Components Library - command line build tools rc libbamf3-1:amd64 0.4.0daily13.06.19~13.04-0ubuntu1 amd64 Window matching library - shared library rc libboost-filesystem1.49.0 1.49.0-4 amd64 filesystem operations (portable paths, iteration over directories, etc) in C++ rc libboost-signals1.49.0 1.49.0-4 amd64 managed signals and slots library for C++ rc libboost-system1.49.0 1.49.0-4 amd64 Operating system (e.g. 
diagnostics support) library rc libboost-thread1.49.0 1.49.0-4 amd64 portable C++ multi-threading rc libbrlapi0.5:amd64 4.4-8ubuntu4 amd64 braille display access via BRLTTY - shared library rc libcamel-1.2-40 3.6.4-0ubuntu1.1 amd64 Evolution MIME message handling library rc libcolumbus0-0 0.4.0daily13.04.16~13.04-0ubuntu1 amd64 error tolerant matching engine - shared library rc libdns95 1:9.9.2.dfsg.P1-2ubuntu2.1 amd64 DNS Shared Library used by BIND rc libdvbpsi7 0.2.2-1 amd64 library for MPEG TS and DVB PSI tables decoding and generating rc libebackend-1.2-5 3.6.4-0ubuntu1.1 amd64 Utility library for evolution data servers rc libedata-book-1.2-15 3.6.4-0ubuntu1.1 amd64 Backend library for evolution address books rc libedata-cal-1.2-18 3.6.4-0ubuntu1.1 amd64 Backend library for evolution calendars rc libgc1c3:amd64 1:7.2d-0ubuntu5 amd64 conservative garbage collector for C and C++ rc libgd2-xpm:amd64 2.0.36~rc1~dfsg-6.1ubuntu1 amd64 GD Graphics Library version 2 rc libgd2-xpm:i386 2.0.36~rc1~dfsg-6.1ubuntu1 i386 GD Graphics Library version 2 rc libgnome-desktop-3-4 3.6.3-0ubuntu1 amd64 Utility library for loading .desktop files - runtime files rc libgphoto2-2:amd64 2.4.14-2 amd64 gphoto2 digital camera library rc libgphoto2-2:i386 2.4.14-2 i386 gphoto2 digital camera library rc libgphoto2-port0:amd64 2.4.14-2 amd64 gphoto2 digital camera port library rc libgphoto2-port0:i386 2.4.14-2 i386 gphoto2 digital camera port library rc libgtksourceview-3.0-0:amd64 3.6.3-0ubuntu1 amd64 shared libraries for the GTK+ syntax highlighting widget rc libgweather-3-1 3.6.2-0ubuntu1 amd64 GWeather shared library rc libharfbuzz0:amd64 0.9.13-1 amd64 OpenType text shaping engine rc libibus-1.0-0:amd64 1.4.2-0ubuntu2 amd64 Intelligent Input Bus - shared library rc libical0 0.48-2 amd64 iCalendar library implementation in C (runtime) rc libimobiledevice3 1.1.4-1ubuntu6.2 amd64 Library for communicating with the iPhone and iPod Touch rc libisc92 1:9.9.2.dfsg.P1-2ubuntu2.1 amd64 ISC Shared Library used by BIND rc libkms1:amd64 2.4.46-1 amd64 Userspace interface to kernel DRM buffer management rc libllvm3.2:i386 1:3.2repack-7ubuntu1 i386 Low-Level Virtual Machine (LLVM), runtime library rc libmikmod2:amd64 3.1.12-5 amd64 Portable sound library rc libpackagekit-glib2-14:amd64 0.7.6-3ubuntu1 amd64 Library for accessing PackageKit using GLib rc libpoppler28:amd64 0.20.5-1ubuntu3 amd64 PDF rendering library rc libraw5:amd64 0.14.7-0ubuntu1.13.04.2 amd64 raw image decoder library rc librhythmbox-core6 2.98-0ubuntu5 amd64 support library for the rhythmbox music player rc libsdl-mixer1.2:amd64 1.2.12-7ubuntu1 amd64 Mixer library for Simple DirectMedia Layer 1.2, libraries rc libsnmp15 5.4.3~dfsg-2.7ubuntu1 amd64 SNMP (Simple Network Management Protocol) library rc libsyncdaemon-1.0-1 4.2.0-0ubuntu1 amd64 Ubuntu One synchronization daemon library rc libunity-core-6.0-5 7.0.0daily13.06.19~13.04-0ubuntu1 amd64 Core library for the Unity interface. 
rc libusb-0.1-4:i386 2:0.1.12-23.2ubuntu1 i386 userspace USB programming library rc libwayland0:amd64 1.0.5-0ubuntu1 amd64 wayland compositor infrastructure - shared libraries rc linux-image-3.8.0-19-generic 3.8.0-19.30 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-3.8.0-31-generic 3.8.0-31.46 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-19-generic 3.8.0-19.30 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-31-generic 3.8.0-31.46 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc screen-resolution-extra 0.15ubuntu1 all Extension for the GNOME screen resolution applet rc unity-common 7.0.0daily13.06.19~13.04-0ubuntu1 all Common files for the Unity interface.
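
     Since the dpkg error says the file is "also in package fpc-source-2.6.2", the SourceForge fpc-src.deb is colliding with an Ubuntu package. A hedged sequence of commands (assuming you are happy to use the Saucy repository packages rather than the SourceForge ones) would be:

         # remove the half-installed SourceForge package and the conflicting repo package
         sudo dpkg -r fpc-src
         sudo apt-get remove fpc-source-2.6.2
         # purge the leftover "rc" Lazarus/FPC entries shown above, then reinstall from the repos
         sudo dpkg --purge fp-compiler-2.6.0 fp-utils-2.6.0 lazarus-ide-0.9.30.4 lcl-utils-0.9.30.4
         sudo apt-get update
         sudo apt-get install fpc fpc-source lazarus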

    Read the article

  • Comments on Comments

    - by Joe Mayo
    I almost tweeted a reply to Capar Kleijne's question about comments on Twitter, but realized that my opinion exceeded 140 characters. The following is based upon my experience with extremes and approaches that I find useful in code comments. There are a couple extremes that I've seen and reasons why people go the distance in each approach. The most common extreme is no comments in the code at all.  A few bad reasons why this happens is because a developer is in a hurry, sloppy, or is interested in job preservation. The unfortunate result is that the code is difficult to understand and hard to maintain. The drawbacks to no comments in code are a primary reason why teachers drill the need for commenting code into our heads.  This viewpoint assumes the lack of comments are bad because the code is bad, but there is another reason for not commenting that is gaining more popularity. I've heard/and read that code should be self documenting. Following this thought pattern, if code is well written with meaningful names, there should not be a reason for comments.  An addendum to this argument is that comments are often neglected and get out-of-date, but the code is what is kept up-to-date. Presumably, if code contained very good naming, it would be easy to maintain.  This is a noble perspective and I like the practice of meaningful naming of identifiers. However, I think it's also an extreme approach that doesn't cover important cases.  i.e. If an identifier is named badly (subjective differences in opinion) or not changed appropriately during maintenance, then the badly named identifier is no more useful than a stale comment. These were the two no-comment extremes, so let's look at the too many comments extreme. On a regular basis, I'll see cases where the code is over-commented; not nearly as often as the no-comment scenarios, but still prevalent.  These are examples of where every single line in the code is commented.  These comments make the code harder to read because they get in the way of the algorithm.  In most cases, the comments parrot what each line of code does.  If a developer understands the language, then most statements are immediately intuitive.  i.e. what use is it to say that I'm assigning foo to bar when it's clear what the code is doing. I think that over-commenting code is a waste of time that slows down initial development and maintenance.  Understandably, the developer's intentions are admirable because they've had it beaten into their heads that they must comment. However, I think it's an extreme and prefer a more moderate approach. I don't think the extremes do justice to code because each can make maintenance harder.  No comments on bad code is obviously a problem, but the other two extremes are subtle and require qualification to address properly. The problem I see with the code-as-documentation approach is that it doesn't lift the developer out of the algorithm to identify dependencies, intentions, and hacks. Any developer can read code and follow an algorithm, but they still need to know where it fits into the big picture of the application. Because of indirections with language features like interfaces, delegates, and virtual members, code can become complex.  Occasionally, it's useful to point out a nuance or reason why a piece of code is there. i.e. 
If you're building an app that communicates via HTTP, you'll have certain headers to include for the endpoint, and it could be useful to point out why the code for setting those header values is there and how they affect the application. An argument against this could be that you should extract that code into a separate method with a meaningful name to describe the scenario.  My problem with such an approach would be that your code base becomes even more difficult to navigate and work with because you have all of this extra code just to make the code more meaningful. My opinion is that a simple, well-stated comment explaining the reasons and intention for the code is more natural and convenient to the initial developer and maintainer.  I just don't agree with the approach of going out of the way to avoid making a comment.  I'm also concerned that some developers would take this approach as an excuse to not comment their bad code. Another area where I like comments is documentation comments.  Java has them, and so do C# and VB.  They're convenient because we can build automated tools that extract these comments.  These extracted comments are often much better than no documentation at all.  The "go read the code" answer doesn't always fulfill the need for a quick summary of an API. To summarize, I think that the extremes of no comments and too many comments are less than desirable approaches. I prefer documentation comments to explain each class and member (API level) and code comments as necessary to supplement well-written code. Joe
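
     As a small, hypothetical C# illustration of that balance: an XML documentation comment at the API level plus a single intent comment, rather than a comment per line.

         using System;

         public static class OrderNumbers
         {
             /// <summary>
             /// Builds the display form of an order number, e.g. "ORD-000042".
             /// </summary>
             /// <param name="id">The numeric order id from the data layer.</param>
             /// <returns>The padded, prefixed value shown to users.</returns>
             public static string Format(int id)
             {
                 // Six-digit padding is a (hypothetical) requirement of the downstream
                 // fulfilment system -- the comment records the *why*, not the *what*.
                 return "ORD-" + id.ToString("D6");
             }
         }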

    Read the article

  • Azure – Part 6 – Blob Storage Service

    - by Shaun
    When migrate your application onto the Azure one of the biggest concern would be the external files. In the original way we understood and ensure which machine and folder our application (website or web service) is located in. So that we can use the MapPath or some other methods to read and write the external files for example the images, text files or the xml files, etc. But things have been changed when we deploy them on Azure. Azure is not a server, or a single machine, it’s a set of virtual server machine running under the Azure OS. And even worse, your application might be moved between thses machines. So it’s impossible to read or write the external files on Azure. In order to resolve this issue the Windows Azure provides another storage serviec – Blob, for us. Different to the table service, the blob serivce is to be used to store text and binary data rather than the structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are comprised of blocks, each of which is identified by a block ID and each block can be a maximum of 4 MB in size. Page Blobs are are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages. The maximum size for a page blob is 1 TB.   In the managed library the Azure SDK allows us to communicate with the blobs through these classes CloudBlobClient, CloudBlobContainer, CloudBlockBlob and the CloudPageBlob. Similar with the table service managed library, the CloudBlobClient allows us to reach the blob service by passing our storage account information and also responsible for creating the blob container is not exist. Then from the CloudBlobContainer we can save or load the block blobs and page blobs into the CloudBlockBlob and the CloudPageBlob classes.   Let’s improve our exmaple in the previous posts – add a service method allows the user to upload the logo image. In the server side I created a method name UploadLogo with 2 parameters: email and image. Then I created the storage account from the config file. I also add the validation to ensure that the email passed in is valid. 1: var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString"); 2: var accountContext = new DynamicDataContext<Account>(storageAccount); 3:  4: // validation 5: var accountNumber = accountContext.Load() 6: .Where(a => a.Email == email) 7: .ToList() 8: .Count; 9: if (accountNumber <= 0) 10: { 11: throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email)); 12: } Then there are three steps for saving the image into the blob service. First alike the table service I created the container with a unique name and create it if it’s not exist. 1: // create the blob container for account logos if not exist 2: CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient(); 3: CloudBlobContainer container = blobStorage.GetContainerReference("account-logo"); 4: container.CreateIfNotExist(); Then, since in this example I will just send the blob access URL back to the client so I need to open the read permission on that container. 
1: // configure blob container for public access 2: BlobContainerPermissions permissions = container.GetPermissions(); 3: permissions.PublicAccess = BlobContainerPublicAccessType.Container; 4: container.SetPermissions(permissions); And at the end I combine the blob resource name from the input file name and Guid, and then save it to the block blob by using the UploadByteArray method. Finally I returned the URL of this blob back to the client side. 1: // save the blob into the blob service 2: string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString()); 3: CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName); 4: blob.UploadByteArray(image); 5:  6: return blob.Uri.ToString(); Let’s update a bit on the client side application and see the result. Here I just use my simple console application to let the user input the email and the file name of the image. If it’s OK it will show the URL of the blob on the server side so that we can see it through the web browser. Then we can see the logo I’ve just uploaded through the URL here. You may notice that the blob URL was based on the container name and the blob unique name. In the document of the Azure SDK there’s a page for the rule of naming them, but I think the simple rule would be – they must be valid as an URL address. So that you cannot name the container with dot or slash as it will break the ADO.Data Service routing rule. For exmaple if you named the blob container as Account.Logo then it will throw an exception says 400 Bad Request.   Summary In this short entity I covered the simple usage of the blob service to save the images onto Azure. Since the Azure platform does not support the file system we have to migrate our code for reading/writing files to the blob service before deploy it to Azure. In order to reducing this effort Microsoft provided a new approch named Drive, which allows us read and write the NTFS files just likes what we did before. It’s built up on the blob serivce but more properly for files accessing. I will discuss more about it in the next post.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
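
     For completeness, a minimal sketch of the console client described above; the proxy name AccountServiceClient and the UploadLogo signature are assumptions based on the text, not code from the original post:

         using System;
         using System.IO;

         class Program
         {
             static void Main()
             {
                 Console.Write("Email: ");
                 string email = Console.ReadLine();
                 Console.Write("Image file: ");
                 string path = Console.ReadLine();

                 byte[] image = File.ReadAllBytes(path);

                 // AccountServiceClient / UploadLogo are hypothetical names for the
                 // service method described above (email + image in, blob URL out).
                 using (var client = new AccountServiceClient())
                 {
                     string blobUrl = client.UploadLogo(email, image);
                     Console.WriteLine("Logo stored at: " + blobUrl);
                 }
             }
         }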

    Read the article

  • Add Hotmail & Live Email Accounts to Outlook 2010

    - by Matthew Guay
    Microsoft has recently been promoting upcoming updates to their Hotmail service, promising to make it an even better webmail service. But Microsoft’s revamped Outlook 2010 is already here. Here’s how to integrate Hotmail with Outlook. Outlook 2010 works with a wide variety of email accounts, including POP3, IMAP, and Exchange accounts.  The only problem with POP3 and IMAP accounts is that they only sync email, but not your calendar and contacts like Exchange does.  Hotmail, however, lets you sync your email, contacts, and calendar with Outlook with the Hotmail Connector.  This lets you keep all of your PIM data accessible from everywhere.  Let’s look at how we can set this up on our account. Getting Started The easiest way to add Hotmail to Outlook is to first install the Outlook Hotmail Connector (link below).  Make sure Outlook is closed first, and then proceed with the installation as usual. If you enter your Hotmail account into the New Account setup in Outlook before installing the Hotmail Connector, Outlook will prompt you to download the Hotmail Connector.  However, you’ll have to exit Outlook before you can install the Connector, and then will have to re-enter your information when you restart Outlook, so it’s easier to just install it first. Add Your Hotmail Account to Outlook Now you’re ready to add your Hotmail account to Outlook.  If this is the first time you’ve run Outlook 2010, you’ll be greeted with the following screen.  Click Next to proceed with setup. Then select Yes and click Next again. If you’ve already got an email account setup in Outlook, you can add a new account by clicking File and then selecting Add account. Now, enter your Hotmail account information, and click Next. Outlook will search for your account settings and automatically setup your account with the Hotmail connector we previously installed. If you entered your password incorrectly previously, you may see the following popup.  Re-enter your password and click OK, and Outlook will re-verify your settings. Once everything’s finished and setup, you’ll see the following completion screen.  Click Finish to complete the setup and check out your Hotmail in Outlook. Welcome to your Hotmail account in Outlook 2010.  You’ll notice a small notification at the bottom of the window notifying you that you’re connected to Windows Live Hotmail.  Now your email will synchronize with your Hotmail account, and your Outlook calendar and contacts will be synced with your Live calendar and contacts, respectively.  This is the closest you can get to full Exchange without an Exchange account, and in our experience it works great.  In fact, Hotmail Sync seems to work faster than IMAP sync for us. Setup Hotmail With POP3 Access If you need to access your Hotmail email account but don’t want to install the Outlook Connector, then you can add it with POP3 sync.  We recommend going with the Outlook Connector for the best experience, but if you can’t install it (eg. you’re not allowed to install applications on your work PC) then this is a good alternative. To do this, follow our tutorial on setting up a Gmail POP3 account in Outlook. Although the article concentrates on Gmail, the settings are essentially the same. The only thing you’ll want to change is the Incoming and Outgoing mail server. 
Incoming mail server – pop3.live.com
Outgoing mail server – smtp.live.com
User name – your Hotmail or Live email address
Incoming Server (POP3) – 995
Outgoing Server (SMTP) – 587
Also, check This server requires an encrypted connection. Just as in the Gmail example, select TLS for the type of encrypted connection.  Then, on the bottom, make sure to uncheck the box to Remove messages from the server after a number of days.  This way your messages will still be accessible from your Hotmail account online. Conclusion Even though Hotmail is generally not as popular as Gmail, it works great with Outlook integration.  If you're a heavy user of Windows Live services, or want to try them out, Outlook Connector is the easiest way to keep your desktop activity synced with the cloud.  If you're just one of the millions of Hotmail users who want to access their old Hotmail account alongside their other accounts, this method works great for you too. If you're using Outlook 2003 or 2007, check out our article on using Hotmail from Microsoft Outlook. Links Download Outlook Hotmail Connector 32-bit Download Outlook Hotmail Connector 64-bit – note, only for users of Office 2010 x64

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM is in relation to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM is protecting the cached rights on the local machine and if that cache has become invalid in anyway, this error is thrown. Offline rights and security First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution and demonstrates our methodology to ensure that solutions address both security and usability and puts the balance of these two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, which is the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed however and can be as low as under an hour to as long as years. It is also possible to switch off the ability to access content offline which can be useful when content is very sensitive and requires a tight leash. So when a user is online, transparently in the background, the Oracle IRM Desktop communicates with the server and updates the users rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops and allows for users rights to be kept up to date without their intervention. This allows us to support some very important scenarios which are key to a successful IRM solution. A user doesn't have to make any decision when going offline, they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work because people forget to do this and will therefore be unable to legitimately access their content offline. If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and they happen to make a connection to the internet 3 days into that offline period, the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk is reduced when setting long offline periods, because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week offline period, you cannot effect change, but you take that risk in giving the 7 day offline period in the first place. If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important, consider the scenario where a senior executive downloads all their email but doesn't open any of it. Disconnects the laptop and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized their experience is a good one and the document opens. 
More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams. So what about problems accessing the offline rights database? So onto the core issue... when these rights are cached to your machine they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the installation of the IRM Desktop and the Windows user account. Why? Well, what you do not want to happen is for someone to get their rights for content and then copy these files across hundreds of other machines, therefore getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines. This is because people typically access IRM content on more than one computer: their work desktop, a laptop and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache. So what happens if these files are corrupted in some way? That's when you will see the error, Unable to Connect to Offline database. The most common instance of seeing this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMware Workstation for example, makes changes to the unique information of that virtual machine and as such invalidates the offline database. How do you solve the problem? The resolution, however, is simple. You just delete all of the offline database files on the machine and they will be recreated with working encryption when the Oracle IRM Desktop next starts. However this does mean that the IRM server will think you have your rights cached to more than one computer and you will need to re-request your rights, even though you are only going to be accessing them on one, because it still thinks the old cache is valid. So be aware, it is good practice to increase the server limit from the default of 1 to say 3 or 4. This is done using the Enterprise Manager instance of IRM. To delete these offline files I have a simple .bat file you can use: Download DeleteOfflineDBs.bat Note that this uses pskill to stop the irmBackground.exe process from running. This is part of the IRM Desktop and holds open a lock to the offline database. Either kill this from Task Manager or use pskill as part of the script.
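
     The downloadable script is not reproduced here, but as a rough sketch it only has two jobs: stop the background process and delete the cache. The cache path below is a placeholder; check where your IRM Desktop version keeps its offline database.

         @echo off
         rem Stop the process holding the offline database lock (pskill is from Sysinternals;
         rem taskkill /IM irmBackground.exe /F works as well).
         pskill irmBackground.exe
         rem Placeholder path -- substitute the offline database folder for your installation.
         del /q "%LOCALAPPDATA%\Oracle IRM\offline\*.*"
         rem The IRM Desktop recreates the database with a fresh key the next time it starts.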

    Read the article

  • Nerdstock 2012: A photo review of Microsoft TechEd North America 2012

    - by The Un-T Guy
    Not only could I not fathom that I would ever be attending a tech event of the magnitude of TechEd, neither could any of my co-workers.  As the least technical person in the history of Information Technology ever, I felt as though I were walking into the belly of the beast, fearing I’d not be allowed out until I could write SSIS packages, program in Visual Basic, or at least arm wrestle a DBA.  Most of my fears were unrealized.   But I made it.  I was here.  I even got to wear the Mark of the Geek neck package with schedule, eyeglass cleaners, name badge (company name obfuscated so they don’t fire me), and a pen.  The name  badge was seemingly the key element, as every vendor in the place wanted to scan it to capture name, email address, and numbers to show their bosses back home.  It also let me eat the food and drink the coffee so that’s a fair trade.   A recurring theme throughout the presentations and vendor demos was “the Cloud” and BYOD (bring your own device).  The below was a common site throughout the week, as attendees from all over the world brought their own devices and were able to (seemingly) seamlessly connect to the Worldwide Innerwebs.  Apparently proof that Microsoft and the event organizers were practicing what they were preaching.   “Cavernous” is one way to describe the downstairs facility itself.  “Freaking cavernous” might be more accurate.  Work sessions were held in classrooms on the second and third floors but the real action was happening downstairs.  Microsoft bookstore, blogger hub (shoutout to Geekswithblogs.net), The Wall (sans Pink Floyd, sadly), couches, recharging stations…   …a game zone with pool and air hockey tables, pinball machines, foosball…   …vintage video games…           …and a even giant chess board.  Looked like this guy was opening with the Kaspersky parry.   The blend of technology and fantasy even went so far as to bring childhood favorites to life.  Assuming, of course, your childhood was pre-video games (like mine) and you were stuck with electric football and Rock ‘em Sock ‘em robots:   And, lest the “combatants” become unruly or – God forbid – afternoon snacks were late, Orange County’s finest was on the scene to keep the peace.  On a high-tech mode of transport, of course.   She wasn’t the only one to think this was a swell way to transition from one concourse to the next.  Given the level of support provided by the entire Orange County Convention Center staff, I knew they had to have some secret.   Here’s one entrance to the vendor zone/”Technical Learning Center.”  Couldn’t help but think of them as the remora attached to the Whale Shark that is Microsoft…   …or perhaps planets orbiting the sun. Microsoft is just that huge and it seemed like every vendor in the industry looks forward to partnering with the tech behemoth.   Aside from the free stuff from the vendors, probably the most popular place in the house was the dining area.  Amazing spreads every day, multiple times a day.  While no attendance numbers were available at press time, literally thousands of attendees were fed, and fed well, every day.  And lest you think my post from earlier in the week exaggerated about the backpacks…   …or that I’m exaggerating about the lunch crowds.  This represents only about between 25-30% of the lunch crowd – it was all my camera could capture at once.  No one went away hungry.   
The only thing missing was a a vat of Red Bull but apparently organizers went old school, with probably 100 urns of the original energy drink – coffee – all around the venue.   Of course, following lunch and afternoon sessions, some preferred the even older school method of re-energizing.  There were rumors that Microsoft was serving graham crackers and milk in this area.  But they were only rumors.   Cannot overstate the wonderful service provided by the Orange County Convention Center staff.  Coffee, soft drinks, juice, and water were available always.  Buffet meals were delicious with a wide range of healthy options available, in addition to hundreds (at least) special meal requests supported every day.  Ever tried to keep up with an estimated 9,000 hungry and thirsty IT-ers?  These folks did.  Kudos to all of the staff and many thanks!   And while I occasionally poke fun at the Whale Shark, if nothing else this experience convinced me of one thing:  Microsoft knows how to put on a professional event.  Hundreds of informative, professionally delivered sessions, covering a wide range of topics set at varying levels of expertise (some that even I was able to follow), social activities, vendor partnerships…they brought everything you could ask for to inform, educate, and inspire an entire IT industry.   So as I depart the belly of the beast, I can both take pride in the fact that I survived the week and marvel at the brilliance surrounding me.  The IT industry – or at least the segment associated with Microsoft – is in good, professional hands.  And what won’t fit in their hands can be toted in the Microsoft provided backpacks.  Win-win.   Until New Orleans…

    Read the article

  • Restore Your PC from Windows Home Server

    - by Mysticgeek
    If your computer crashes or you get a virus infection that makes it unrecoverable, doing a clean install can be a hassle, let alone getting your data back. If you’re backing up your computers to Windows Home Server, you can completely restore them to the last successful backup. Note: For this process to work you need to verify the PC you want to restore is connected to your network via Ethernet. If you have it connected wirelessly it won’t work. Restore a PC from Windows Home Server On the computer you want to restore, pop in the Windows Home Server Home Computer Restore disc and boot from it. If you don’t have one already made, you can easily make one following these instructions. We have also included the link to the restore disc below. Boot from the CD then select if your machine has 512MB or RAM or more. The disc will initialize… Then choose your language and keyboard settings. Hopefully if everything goes correctly, your network card will be detected and you can continue. However, if it doesn’t like in our example, click on the Show Details button. In the Detect Hardware screen click on the Install Drivers button. Now you will need to have a USB flash drive with the correct drivers on it. It has to be a flash drive or a floppy (if you happen to still have one of those) because you can’t take out the Restore CD. If you want to make sure you have the correct drivers on the USB flash drive, open the Windows Home Server Console on another computer on your network. In the Computers and Backup section right-click on the computer you want to restore and select View Backups. Select the backup you want to restore from and click the Open button in the Restore or view Files section. Now drag the entire contents of the folder named Windows Home Server Drivers for Restore to the USB flash drive. Back to the machine you’re trying to restore, insert the USB flash drive with the correct drivers and click the Scan button. Wait a few moments while the drivers are found then click Ok then Continue.   The Restore Computer Wizard starts up… Enter in your home server password and click Next. Select the computer you want to restore. If it isn’t selected by default you can pull it up from the dropdown list under Another Computer. Make certain you’re selecting the correct machine. Now select the backup you want to restore. In this example we only have one but chances are you’ll have several. If you have several backups to choose from, you might want to check out the details for them. Now you can select the disk from backup and and restore it to the destination volume. You might need to initialize a disk, change a drive letter, or other disk management tasks, if so, then click on Run Disk Manger. For example we want to change the destination drive letter to (C:).   After you’ve made all the changes to the destination disk you can continue with the restore process. If everything looks correct, confirm the restore configuration. If you need to make any changes at this point, you can still go back and make them. Now Windows Home Server will restore your drive. The amount of time it takes will vary depend on the amount of data you have to restore, network connection speed, and hardware. You are notified when the restore successfully completes. Click Finish and the PC will reboot and be restored and should be working correctly. All the updates, programs, and files will be back that were saved to the last successful backup. Anything you might have installed after that backup will be gone. 
If you have your computers set to back up every night, then hopefully it won't be a big issue.   Conclusion Backing up the computers on your network to Windows Home Server is a valuable tool in your backup strategy. Sometimes you may only need to restore a couple of files, and we've covered how to restore them from backups on WHS and that works really well. If the unthinkable happens and you need to restore the entire computer, WHS makes that easy too.  Download Windows Home Server Home Computer Restore CD

    Read the article

  • Look Inside WebLogic Server Embedded LDAP with an LDAP Explorer

    - by james.bayer
    Today a question came up on our internal WebLogic Server mailing lists about an issue deleting a Group from WebLogic Server.  The group had a special character in the name. The WLS console refused to delete the group with the message a java.net.MalformedURLException and another message saying “Errors must be corrected before proceeding.” as shown below. The group aa:bb is the one with the issue.  Click to enlarge. WebLogic Server includes an embedded LDAP server that can be used for managing users and groups for “reasonably small environments (10,000 or fewer users)”.  For organizations scaling larger or using more high-end features, I recommend looking at one of Oracle’s very popular enterprise directory services products like Oracle Internet Directory or Oracle Directory Server Enterprise Edition.  You can configure multiple authenicators in WebLogic Server so that you can use multiple directories at the same time. I am not sure WebLogic Server supports special characters in group names for the Embedded LDAP server, but in this case both the console and WLST reported the same issue deleting the group with the special character in the name.  Here’s the WLST output: wls:/hotspot_domain/serverConfig/SecurityConfiguration/hotspot_domain/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator> cmo.removeGroup('aa:bb') Traceback (innermost last): File "<console>", line 1, in ? weblogic.security.providers.authentication.LDAPAtnDelegateException: [Security:090296]invalid URL ldap:///ou=people,ou=myrealm,dc=hotspot_domain??sub?(&(objectclass=person)(wlsMemberOf=cn=aa:bb,ou=groups,ou=myrealm,dc=hotspot_domain)) at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.advance(LDAPAtnGroupMembersNameList.java:254) at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.<init>(LDAPAtnGroupMembersNameList.java:119) at weblogic.security.providers.authentication.LDAPAtnDelegate.listGroupMembers(LDAPAtnDelegate.java:1392) at weblogic.security.providers.authentication.LDAPAtnDelegate.removeGroup(LDAPAtnDelegate.java:1989) at weblogic.security.providers.authentication.DefaultAuthenticatorImpl.removeGroup(DefaultAuthenticatorImpl.java:242) at weblogic.security.providers.authentication.DefaultAuthenticatorMBeanImpl.removeGroup(DefaultAuthenticatorMBeanImpl.java:407) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at weblogic.management.jmx.modelmbean.WLSModelMBean.invoke(WLSModelMBean.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449) at java.security.AccessController.doPrivileged(Native Method) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.invoke(WLSMBeanServerInterceptorBase.java:447) at weblogic.management.mbeanservers.internal.JMXContextInterceptor.invoke(JMXContextInterceptor.java:263) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449) at java.security.AccessController.doPrivileged(Native Method) at 
weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.invoke(WLSMBeanServerInterceptorBase.java:447) at weblogic.management.mbeanservers.internal.SecurityInterceptor.invoke(SecurityInterceptor.java:444) at weblogic.management.jmx.mbeanserver.WLSMBeanServer.invoke(WLSMBeanServer.java:323) at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder$11$1.run(JMXConnectorSubjectForwarder.java:663) at java.security.AccessController.doPrivileged(Native Method) at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder$11.run(JMXConnectorSubjectForwarder.java:661) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363) at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder.invoke(JMXConnectorSubjectForwarder.java:654) at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427) at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72) at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265) at java.security.AccessController.doPrivileged(Native Method) at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1367) at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788) at javax.management.remote.rmi.RMIConnectionImpl_WLSkel.invoke(Unknown Source) at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:667) at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:522) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146) at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:518) at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:207) at weblogic.work.ExecuteThread.run(ExecuteThread.java:176) Caused by: java.net.MalformedURLException at netscape.ldap.LDAPUrl.readNextConstruct(LDAPUrl.java:651) at netscape.ldap.LDAPUrl.parseUrl(LDAPUrl.java:277) at netscape.ldap.LDAPUrl.<init>(LDAPUrl.java:114) at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.advance(LDAPAtnGroupMembersNameList.java:224) ... 41 more It’s fairly clear that in order to work that the : character needs to be URL encoded to %3A or similar.  But all is not lost, there is another way.  You can configure an LDAP Explorer like JXplorer to WebLogic Server Embedded LDAP and browse/edit the entries. Follow the instructions here, being sure to change the authentication credentials to the Embedded LDAP server to some value you know, as by default they are some unknown value.  You’ll need to reboot the WebLogic Server Admin Server after making this change. Now configure JXplorer to connect as described in the documentation.  I’ve circled the important inputs.  In this example, my domain name is “hotspot_domain” which listens on the localhost listen address and port 7001.  The cn=Admin user name is a constant identifier for the Administrator of the embedded LDAP and that does not change, but you need to know what it is so you can enter it into the tool you use. Once you connect successfully, you can explore the entries and in this case delete the group that is no longer desired.
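For what it’s worth, the encoding itself is easy to check from WLST, since the embedded Jython can call straight into the JDK. A minimal sketch, assuming you only want to see the encoded form (it does not make the DefaultAuthenticator accept such group names):

    # Run inside WLST; java.net.URLEncoder is part of the JDK
    from java.net import URLEncoder
    groupName = 'aa:bb'
    print URLEncoder.encode(groupName, 'UTF-8')   # prints aa%3Abb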

    Read the article

  • Integrate BING API for Search inside ASP.Net web application

    - by sreejukg
    As you might already know, Bing is the Microsoft Search engine and is getting popular day by day. Bing offers APIs that can be integrated into your website to increase your website functionality. At this moment, there are two important APIs available. They are Bing Search API Bing Maps The Search API enables you to build applications that utilize Bing’s technology. The API allows you to search multiple source types such as web; images, video etc. and supports various output prototypes such as JSON, XML, and SOAP. Also you will be able to customize the search results as you wish for your public facing website. Bing Maps API allows you to build robust applications that use Bing Maps. In this article I am going to describe, how you can integrate Bing search into your website. In order to start using Bing, First you need to sign in to http://www.bing.com/toolbox/bingdeveloper/ using your windows live credentials. Click on the Sign in button, you will be asked to enter your windows live credentials. Once signed in you will be redirected to the Developer page. Here you can create applications and get AppID for each application. Since I am a first time user, I don’t have any applications added. Click on the Add button to add a new application. You will be asked to enter certain details about your application. The fields are straight forward, only thing you need to note is the website field, here you need to enter the website address from where you are going to use this application, and this field is optional too. Of course you need to agree on the terms and conditions and then click Save. Once you click on save, the application will be created and application ID will be available for your use. Now we got the APP Id. Basically Bing supports three protocols. They are JSON, XML and SOAP. JSON is useful if you want to call the search requests directly from the browser and use JavaScript to parse the results, thus JSON is the favorite choice for AJAX application. XML is the alternative for applications that does not support SOAP, e.g. flash/ Silverlight etc. SOAP is ideal for strongly typed languages and gives a request/response object model. In this article I am going to demonstrate how to search BING API using SOAP protocol from an ASP.Net application. For the purpose of this demonstration, I am going to create an ASP.Net project and implement the search functionality in an aspx page. Open Visual Studio, navigate to File-> New Project, select ASP.Net empty web application, I named the project as “BingSearchSample”. Add a Search.aspx page to the project, once added the solution explorer will looks similar to the following. Now you need to add a web reference to the SOAP service available from Bing. To do this, from the solution explorer, right click your project, select Add Service Reference. Now the new service reference dialog will appear. In the left bottom of the dialog, you can find advanced button, click on it. Now the service reference settings dialog will appear. In the bottom left, you can find Add Web Reference button, click on it. The add web reference dialog will appear now. Enter the URL as http://api.bing.net/search.wsdl?AppID=<YourAppIDHere>&version=2.2 (replace <yourAppIDHere> with the appID you have generated previously) and click on the button next to it. This will find the web service methods available. You can change the namespace suggested by Bing, but for the purpose of this demonstration I have accepted all the default settings. 
Click on the Add Reference button once you are done. The web reference to the Search service will now be added to your project, and you can find it under the Solution Explorer. In the Search.aspx page that you created earlier, place a textbox, a button and a grid view. For the purpose of this demonstration, I have given them the identifiers (ID) txtSearch, btnSearch and gvSearch respectively. The idea is to search the text entered in the text box using the Bing service and show the results in the grid view. In the design view, Search.aspx looks as follows. In the search.aspx.cs page, add a using statement that points to net.bing.api. I have added the following code for the button click event handler. The code is very straightforward: it just calls the service with your AppID, a query to search and a source to search against. Let us run this page and see the output when I enter Microsoft in my textbox. If you want to search a specific site, you can include the site name in the query parameter. For example, the following query will search for the word Microsoft on the www.microsoft.com website. searchRequest.Query = “site:www.microsoft.com Microsoft”; The output of this query is as follows. Integrating the Bing Search API into your website is easy and there is no limit on how much you can customize the interface. No Bing branding is required, so I believe this is a great option for web developers when they plan for site search.
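The button click handler itself only survives as a screenshot in the original post, so here is a rough sketch of what it typically looks like. The type and member names come from the proxy that Add Web Reference generates, so treat them as assumptions and adjust them to whatever your generated net.bing.api namespace actually contains:

    // Sketch only - proxy type names depend on the generated web reference.
    // Assumes "using net.bing.api;" at the top of search.aspx.cs.
    protected void btnSearch_Click(object sender, EventArgs e)
    {
        SearchRequest searchRequest = new SearchRequest();
        searchRequest.AppId = "<YourAppIDHere>";                   // AppID created on the Bing developer site
        searchRequest.Query = txtSearch.Text;
        searchRequest.Sources = new SourceType[] { SourceType.Web };

        BingService service = new BingService();
        SearchResponse response = service.Search(searchRequest);

        gvSearch.DataSource = response.Web.Results;                // Title, Description, Url per result
        gvSearch.DataBind();
    }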

    Read the article

  • How to configure a zone cluster on Solaris Cluster 4.0

    - by JuergenS
    This is a short overview on how to configure a zone cluster on Solaris Cluster 4.0. This is a little bit different as in Solaris Cluster 3.2/3.3 because Solaris Cluster 4.0 is only running on Solaris 11. The name of the zone cluster must be unique throughout the global Solaris Cluster and must be configured on a global Solaris Cluster. Please read all the requirements for zone cluster in Solaris Cluster Software Installation Guide for SC4.0. For Solaris Cluster 3.2/3.3 please refer to my previous blog Configuration steps to create a zone cluster in Solaris Cluster 3.2/3.3. A. Configure the zone cluster into the already running global clusterCheck if zone cluster can be created # cluster show-netprops to change number of zone clusters use # cluster set-netprops -p num_zoneclusters=12 Note: 12 zone clusters is the default, values can be customized! Create config file (zc1config) for zone cluster setup e.g: Configure zone cluster # clzc configure -f zc1config zc1 Note: If not using the config file the configuration can also be done manually # clzc configure zc1 Check zone configuration # clzc export zc1 Verify zone cluster # clzc verify zc1 Note: The following message is a notice and comes up on several clzc commands Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"... Install the zone cluster # clzc install zc1 Note: Monitor the consoles of the global zone to see how the install proceed! (The output is different on the nodes) It's very important that all global cluster nodes have installed the same set of ha-cluster packages! Boot the zone cluster # clzc boot zc1 Login into non-global-zones of zone cluster zc1 on all nodes and finish Solaris installation. # zlogin -C zc1 Check status of zone cluster # clzc status zc1 Login into non-global-zones of zone cluster zc1 and configure the shell environment for root (for PATH: /usr/cluster/bin, for MANPATH: /usr/cluster/man) # zlogin -C zc1 If using additional name service configure /etc/nsswitch.conf of zone cluster non-global zones. hosts: cluster files netmasks: cluster files Configure /etc/inet/hosts of the zone cluster zones Enter all the logical hosts of non-global zones B. Add resource groups and resources to zone cluster Create a resource group in zone cluster # clrg create -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Note1: Use command # cluster status for zone cluster resource group overview. Note2: You can also run all commands for zone cluster in global cluster by adding the option -Z to the command. e.g: # clrg create -Z zc1 -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Set up the logical host resource for zone cluster In the global zone do: # clzc configure zc1 clzc:zc1 add net clzc:zc1:net set address=<zone-logicalhost-ip> clzc:zc1:net end clzc:zc1 commit clzc:zc1 exit Note: Check that logical host is in /etc/hosts file In zone cluster do: # clrslh create -g app-rg -h <zone-logicalhost> <zone-logicalhost>-rs Set up storage resource for zone cluster Register HAStoragePlus # clrt register SUNW.HAStoragePlus Example1) ZFS storage pool In the global zone do: Configure zpool eg: # zpool create <zdata> mirror cXtXdX cXtXdX and # clzc configure zc1 clzc:zc1 add dataset clzc:zc1:dataset set name=zdata clzc:zc1:dataset end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p zpools=zdata app-hasp-rs Example2) HA filesystem In the global zone do: Configure SVM diskset and SVM devices. 
and # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/data clzc:zc1:fs set special=/dev/md/datads/dsk/d0 clzc:zc1:fs set raw=/dev/md/datads/rdsk/d0 clzc:zc1:fs set type=ufs clzc:zc1:fs add options [logging] clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/data app-hasp-rs Example3) Global filesystem as loopback file system In the global zone configure global filesystem and it to /etc/vfstab on all global nodes e.g.: /dev/md/datads/dsk/d0 /dev/md/datads/dsk/d0 /global/fs ufs 2 yes global,logging and # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/zone/fs (zc-lofs-mountpoint) clzc:zc1:fs set special=/global/fs (globalcluster-mountpoint) clzc:zc1:fs set type=lofs clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: (Create scalable rg if not already done) # clrg create -p desired_primaries=2 -p maximum_primaries=2 app-scal-rg # clrs create -g app-scal-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/zone/fs hasp-rs More details of adding storage available in the Installation Guide for zone cluster Switch resource group and resources online in the zone cluster # clrg online -eM app-rg # clrg online -eM app-scal-rg Test: Switch of the resource group in the zone cluster # clrg switch -n zonehost2 app-rg # clrg switch -n zonehost2 app-scal-rg Add supported dataservice to zone cluster Documentation for SC4.0 is available here Example output: Appendix: To delete a zone cluster do: # clrg delete -Z zc1 -F + Note: Zone cluster uninstall can only be done if all resource groups are removed in the zone cluster. The command 'clrg delete -F +' can be used in zone cluster to delete the resource groups recursively. # clzc halt zc1 # clzc uninstall zc1 Note: If clzc command is not successful to uninstall the zone, then run 'zoneadm -z zc1 uninstall -F' on the nodes where zc1 is configured # clzc delete zc1
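For reference, the zc1config command file mentioned in step A is not shown above. The following is only a sketch of what such a file might contain; the zonepath, physical hosts, hostnames, addresses and interface names are placeholders, and the SC4.0 installation guide is the authority on which properties your setup actually needs:

    create
    set zonepath=/zones/zc1
    add node
    set physical-host=phys-node1
    set hostname=zc1-host1
    add net
    set address=192.168.10.11
    set physical=net0
    end
    end
    add node
    set physical-host=phys-node2
    set hostname=zc1-host2
    add net
    set address=192.168.10.12
    set physical=net0
    end
    end
    commit
    exit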

    Read the article

  • Social Business Forum Milano: Day 2

    - by me
    @YourService. The business world has flipped and small business can capitalize  by Frank Eliason (twitter: @FrankEliason ) Technology and social media tools have made it easier than ever for companies to communicate with consumers. They can listen and join in on conversations, solve problems, get instant feedback about their products and services, and more. So why, then, are most companies not doing this? Instead, it seems as if customer service is at an all time low, and that the few companies who are choosing to focus on their customers are experiencing a great competitive advantage. At Your Service explains the importance of refocusing your business on your customers and your employees, and just how to do it. Explains how to create a culture of empowered employees who understand the value of a great customer experience Advises on the need to communicate that experience to their customers and potential customers Frank Eliason, recognized by BusinessWeek as the 'most famous customer service manager in the US, possibly in the world,' has built a reputation for helping large businesses improve the way they connect with customers and enhance their relationships Quotes from the Audience: Bertrand Duperrin ?@bduperrin social service is not about shutting up the loudest cutsomers ! #sbf12 @frankeliason Paolo Pelloni ?@paolopelloniGautam Ghosh ?@GautamGhosh RT @cecildijoux: #sbf12 @frankeliason you need to change things and fix the approach it's not about social media it's about driving change  Peter H. Reiser ?@peterreiser #sbf12 Company Experience = Product Experience + Customer Interactions + Employee Experience @yourservice Engage or lose! Socialize, mobilize, conversify: engage your employees to improve business performance Christian Finn (twitter: @cfinn) First Christian was presenting the flying monkey   Then he outlined the four principals to fix the Intranet: 1. Socalize the Intranet 2. Get Thee to a Single Repository 3. Mobilize the Intranet 4. Conversationalize Your Processes Quotes from the Audience: Oscar Berg ?@oscarberg Engaged employees think their work bring out the best of their ideas @cfinn #sbf12 http://pic.twitter.com/68eddp48 John Stepper ?@johnstepper I like @cfinn's "conversify your processes" A nice related concept to "narrating your work", part of working out loud. http://johnstepper.com/2012/05/26/working-out-loud-your-personal-content-strategy/ Oscar Berg ?@oscarberg Organizations are talent markets - socializing your intranet makes this market function better @cfinn #sbf12 For profit, productivity, and personal benefit: creating a collaborative culture at Deutsche Bank John Stepper (twitter:@johnstepper) Driving adoption of collaboration + social media platforms at Deutsche Bank. John shared some great best practices on how to deploy an enterprise wide  community model  in a large company. He started with the most important question What is the commercial value of adding social ? Then he talked about the success of Community of Practices deployment and outlined some key use cases including the relevant measures to proof the ROI of the investment. Examples:  Community of practice -> measure: systematic collection of value stories  Self-service website  -> measure: based on representative models Optimizing asset inventory - > measure: Actual counts  This use case was particular interesting.  It is a crowd sourced spending/saving of infrastructure model.  User can cancel IT services they don't need (as example Software xx).  
5% of the saving goes to social responsibility projects. John then outlined some best practices on how to address the WIIFM (What's In It For Me) question for individual users:
- change from hierarchy to graph
- working out loud = observable work + narrating your work
- add social skills to career objectives - example: building a purposeful social network course/training as part of the job development curriculum
Last but not least, John gave some important tips on how to get senior management buy-in by establishing management-sponsored, division-level collaboration boards which define clear use cases and measures. These divisional use cases are then implemented using a common social platform. Thanks John - I learned a lot from your presentation!
Quotes from the Audience:
Ana Silva @AnaDataGirl #sbf12 what's in it for individuals at Deutsche Bank? Shapping their reputations in a big org says @johnstepper #e20
Ana Silva @AnaDataGirl Any reason why not? MT @magatorlibero #sbf12 is Deutsche B. experience on applying social inside company applicable to Italian people?
Oscar Berg @oscarberg Your career is not a ladder, it is a network that opens up opportunities - @johnstepper #sbf12
Oscar Berg @oscarberg @johnstepper: Institutionalizing collaboration is next - collaboration woven into the fabric of daily work #sbf12
Ana Silva @AnaDataGirl #sbf12 @johnstepper talking about how Deutsche Bank is using #socbiz to build purposeful CoP & save money

    Read the article

  • Identity Management Monday at Oracle OpenWorld

    - by Tanu Sood
    What a great start to Oracle OpenWorld! Did you catch Larry Ellison’s keynote last evening? As expected, it was a packed house and the keynote received a tremendous response both from the live audience as well as the online community as evidenced by the frequent spontaneous applause in house and the twitter buzz. Here’s but a sampling of some of the tweets that flowed in: @paulvallee: I freaking love that #oracle has been born again in it's interest in core tech #oow (so good for #pythian) @rwang0: MyPOV: #oracle just leapfrogged the competition on the tech front across the board. All they need is the content delivery network #oow12 @roh1: LJE more astute & engaging this year. Nice announcements this year with 12c the MTDB sounding real good. #oow12 @brooke: Cool to see @larryellison interrupted multiple times by applause from the audience. Great speaker. #OOW And there’s lot more to come this week. Identity Management sessions kick-off today. Here’s a quick preview of what’s in store for you today for Identity Management: CON9405: Trends in Identity Management 10:45 a.m. – 11:45 a.m., Moscone West 3003 Hear directly from subject matter experts from Kaiser Permanente and SuperValu who would share the stage with Amit Jasuja, Senior Vice President, Oracle Identity Management and Security, to discuss how the latest advances in Identity Management that made it in Oracle Identity Management 11g Release 2 are helping customers address emerging requirements for securely enabling cloud, social and mobile environments. CON9492: Simplifying your Identity Management Implementation 3:15 p.m. – 4:15 p.m., Moscone West 3008 Implementation experts from British Telecom, Kaiser Permanente and UPMC participate in a panel to discuss best practices, key strategies and lessons learned based on their own experiences. Attendees will hear first-hand what they can do to streamline and simplify their identity management implementation framework for a quick return-on-investment and maximum efficiency. This session will also explore the architectural simplifications of Oracle Identity Governance 11gR2, focusing on how these enhancements simply deployments. CON9444: Modernized and Complete Access Management 4:45 p.m. – 5:45 p.m., Moscone West 3008 We have come a long way from the days of web single sign-on addressing the core business requirements. Today, as technology and business evolves, organizations are seeking new capabilities like federation, token services, fine grained authorizations, web fraud prevention and strong authentication. This session will explore the emerging requirements for access management, what a complete solution is like, complemented with real-world customer case studies from ETS, Kaiser Permanente and TURKCELL and product demonstrations. HOL10478: Complete Access Management Monday, October 1, 1:45 p.m. – 2:45 p.m., Marriott Marquis - Salon 1/2 And, get your hands on technology today. Register and attend the Hands-On-Lab session that demonstrates Oracle’s complete and scalable access management solution, which includes single sign-on, authorization, federation, and integration with social identity providers. Further, the session shows how to securely extend identity services to mobile applications and devices—all while leveraging a common set of policies and a single instance. Product Demonstrations The latest technology in Identity Management is also being showcased in the Exhibition Hall so do find some time to visit our product demonstrations there. 
Experts will be at hand to answer any questions.
Exhibition hall hours: Monday, October 1, 9:30 a.m.–6:00 p.m. (dedicated hours 9:30 a.m.–10:45 a.m.); Tuesday, October 2, 9:45 a.m.–6:00 p.m. (dedicated hours 2:15 p.m.–2:45 p.m.); Wednesday, October 3, 9:45 a.m.–4:00 p.m. (dedicated hours 2:15 p.m.–3:30 p.m.).
Demos and locations:
Access Management: Complete and Scalable Access Management - Moscone South, Right - S-218
Access Management: Federating and Leveraging Social Identities - Moscone South, Right - S-220
Access Management: Mobile Access Management - Moscone South, Right - S-219
Access Management: Real-Time Authorizations - Moscone South, Right - S-217
Access Management: Secure SOA and Web Services Security - Moscone South, Right - S-223
Identity Governance: Modern Administration and Tooling - Moscone South, Right - S-210
Identity Management Monitoring with Oracle Enterprise Manager - Moscone South, Right - S-212
Oracle Directory Services Plus: Performant, Cloud-Ready - Moscone South, Right - S-222
Oracle Identity Management: Closed-Loop Access Certification - Moscone South, Right - S-221
We recommend you keep the Focus on Identity Management document handy. And don’t forget, if you are not on site, you can catch all the keynotes LIVE from the comfort of your desk on YouTube.com/Oracle. Keep the conversation going on @oracleidm. Use #OOW and #IDM and get engaged today. Photo Courtesy: @OracleOpenWorld

    Read the article

  • Silverlight 5 &ndash; What&rsquo;s New? (Including Screenshots &amp; Code Snippets)

    - by mbcrump
    Silverlight 5 is coming next year (2011) and this blog post will tell you what you need to know before the beta ships. First, let me address people saying that it is dead after PDC 2010. I believe that it’s best to see what the market is doing, not the vendor. Below is a list of companies that are developing Silverlight 4 applications shown during the Silverlight Firestarter. Some of the companies have shipped and some haven’t. It’s just great to see the actual company names that are working on Silverlight instead of “people are developing for Silverlight”. The next thing that I wanted to point out was that HTML5, WPF and Silverlight can co-exist. In case you missed Scott Gutherie’s keynote, they actually had a slide with all three stacked together. This shows Microsoft will be heavily investing in each technology.  Even I, a Silverlight developer, am reading Pro HTML5. Microsoft said that according to the Silverlight Feature Voting site, 21k votes were entered. Microsoft has implemented about 70% of these votes in Silverlight 5. That is an amazing number, and I am crossing my fingers that Microsoft bundles Silverlight with Windows 8. Let’s get started… what’s new in Silverlight 5? I am going to show you some great application and actual code shown during the Firestarter event. Media Hardware Video Decode – Instead of using CPU to decode, we will offload it to GPU. This will allow netbooks, etc to play videos. Trickplay – Variable Speed Playback – Pitch Correction (If you speed up someone talking they won’t sound like a chipmunk). Power Management – Less battery when playing video. Screensavers will no longer kick in if watching a video. If you pause a video then screensaver will kick in. Remote Control Support – This will allow users to control playback functions like Pause, Rewind and Fastforward. IIS Media Services 4 has shipped and now supports Azure. Data Binding Layout Transitions – Just with a few lines of XAML you can create a really rich experience that is not using Storyboards or animations. RelativeSource FindAncestor – Ancestor RelativeSource bindings make it much easier for a DataTemplate to bind to a property on a container control. Custom Markup Extensions – Markup extensions allow code to be run at XAML parse time for both properties and event handlers. This is great for MVVM support. Changing Styles during Runtime By Binding in Style Setters – Changing Styles at runtime used to be a real pain in Silverlight 4, now it’s much easier. Binding in style setters allows bindings to reference other properties. XAML Debugging – Below you can see that we set a breakpoint in XAML. This shows us exactly what is going on with our binding.  WCF & RIA Services WS-Trust Support – Taken from Wikipedia: WS-Trust is a WS-* specification and OASIS standard that provides extensions to WS-Security, specifically dealing with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess the presence of, and broker trust relationships between participants in a secure message exchange. You can reduce network latency by using a background thread for networking. Supports Azure now.  Text and Printing Improved text clarity that enables better text rendering. Multi-column text flow, Character tracking and leading support, and full OpenType font support.  Includes a new Postscript Vector Printing API that provides control over what you print . Pivot functionality baked into Silverlight 5 SDK. 
Graphics Immediate mode graphics support that will enable you to use the GPU and 3D graphics supports. Take a look at what was shown in the demos below. 1) 3D view of the Earth – not really a real-world application though. A doctor’s portal. This demo really stood out for me as it shows what we can do with the 3D / GPU support. Out of Browser OOB applications can now create and manage childwindows as shown in the screenshot below.  Trusted OOB applications can use P/Invoke to call Win32 APIs and unmanaged libraries.  Enterprise Group Policy Support allow enterprises to lock down or up the sandbox capabilities of Silverlight 5 applications. In this demo, he tore the “notes” off of the application and it appeared in a new window. See the black arrow below. In this demo, he connected a USB Device which fired off a local Win32 application that provided the data off the USB stick to Silverlight. Another demo of a Silverlight 5 application exporting data right into Excel running inside of browser. Testing They demoed Coded UI, which is available now in the Visual Studio Feature Pack 2. This will allow you to create automated testing without writing any code manually. Performance: Microsoft has worked to improve the Silverlight startup time. Silverlight 5 provides 64-bit browser support.  Silverlight 5 also provides IE9 Hardware acceleration.   I am looking forward to Silverlight 5 and I hope you are too. Thanks for reading and I hope you visit again soon.  Subscribe to my feed CodeProject
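Going back to the custom markup extensions mentioned under Data Binding, here is a minimal sketch of one. IMarkupExtension<T> is the interface Silverlight 5 adds in the System.Xaml namespace; the extension name and what it returns are made up purely for illustration.

    using System;
    using System.Xaml;

    // Hypothetical extension: returns an upper-cased string at XAML parse time.
    public class UpperCaseExtension : IMarkupExtension<object>
    {
        public string Text { get; set; }

        public object ProvideValue(IServiceProvider serviceProvider)
        {
            // Runs when the XAML is parsed, before any binding is evaluated
            return (Text ?? string.Empty).ToUpperInvariant();
        }
    }

    // Usage in XAML, assuming xmlns:local points at this assembly:
    //   <TextBlock Text="{local:UpperCase Text=hello}" />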

    Read the article

  • $.fadeTo/fadeOut() operations on Table Rows in IE fail

    - by Rick Strahl
    Here’s a a small problem that one of customers ran into a few days ago: He was playing around with some of the sample code I’ve put out for one of my simple jQuery demos which deals with providing a simple pulse behavior plug-in: $.fn.pulse = function(time) { if (!time) time = 2000; // *** this == jQuery object that contains selections $(this).fadeTo(time, 0.20, function() { $(this).fadeTo(time, 1); }); return this; } it’s a very simplistic plug-in and it works fine for simple pulse animations. However he ran into a problem where it didn’t work when working with tables – specifically pulsing a table row in Internet Explorer. Works fine in FireFox and Chrome, but IE not so much. It also works just fine in IE as long as you don’t try it on tables or table rows specifically. Applying against something like this (an ASP.NET GridView): var sel = $("#gdEntries>tbody>tr") .not(":first-child") // no header .not(":last-child") // no footer .filter(":even") .addClass("gridalternate"); // *** Demonstrate simple plugin sel.pulse(2000); fails in IE. No pulsing happens in any version of IE. After some additional experimentation with single rows and various ways of selecting each and still failing, I’ve come to the conclusion that the various fade operations in jQuery simply won’t work correctly in IE (any version). So even something as ‘elemental’ as this: var el = $("#gdEntries>tbody>tr").get(0);$(el).fadeOut(2000); is not working correctly. The item will stick around for 2 seconds and then magically disappear. Likewise: sel.hide().fadeIn(5000); also doesn’t fade in although the items become immediately visible in IE. Go figure that behavior out. Thanks to a tweet from red_square and a link he provided here is a grid that explains what works and doesn’t in IE (and most last gen browsers) regarding opacity: http://www.quirksmode.org/js/opacity.html It appears from this link that table and row elements can’t be made opaque, but td elements can. This means for the row selections I can force each of the td elements to be selected and then pulse all of those. Once you have the rows it’s easy to explicitly select all the columns in those rows with .find(“td”). Aha the following actually works: var sel = $("#gdEntries>tbody>tr") .not(":first-child") // no header .not(":last-child") // no footer .filter(":even") .addClass("gridalternate"); // *** Demonstrate simple plugin sel.find("td").pulse(2000); A little unintuitive that, but it works. Stay away from <table> and <tr> Fades The moral of the story is – stay away from TR, TH and TABLE fades and opacity. If you have to do it on tables use the columns instead and if necessary use .find(“td”) on your row(s) selector to grab all the columns. I’ve been surprised by this uhm relevation, since I use fadeOut in almost every one of my applications for deletion of items and row deletions from grids are not uncommon especially in older apps. But it turns out that fadeOut actually works in terms of behavior: It removes the item when the timeout’s done and because the fade is relatively short lived and I don’t extensively test IE code any more I just never noticed that the fade wasn’t happening. Note – this behavior or rather lack thereof appears to be specific to table table,tr,th elements. I see no problems with other elements like <div> and <li> items. Chalk this one up to another of IE’s shortcomings. Incidentally I’m not the only one who has failed to address this in my simplistic plug-in: The jquery-ui pulsate effect also fails on the table rows in the same way. 
sel.effect("pulsate", { times: 3 }, 2000); and it also works with the same workaround. If you’re already using jquery-ui definitely use this version of the plugin which provides a few more options… Bottom line: be careful with table based fade operations and remember that if you do need to fade – fade on columns.© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  

    Read the article

  • To ref or not to ref

    - by nmarun
    So the question is what is the point of passing a reference type along with the ref keyword? I have an Employee class as below: 1: public class Employee 2: { 3: public string FirstName { get; set; } 4: public string LastName { get; set; } 5:  6: public override string ToString() 7: { 8: return string.Format("{0}-{1}", FirstName, LastName); 9: } 10: } In my calling class, I say: 1: class Program 2: { 3: static void Main() 4: { 5: Employee employee = new Employee 6: { 7: FirstName = "John", 8: LastName = "Doe" 9: }; 10: Console.WriteLine(employee); 11: CallSomeMethod(employee); 12: Console.WriteLine(employee); 13: } 14:  15: private static void CallSomeMethod(Employee employee) 16: { 17: employee.FirstName = "Smith"; 18: employee.LastName = "Doe"; 19: } 20: }   After having a look at the code, you’ll probably say, Well, an instance of a class gets passed as a reference, so any changes to the instance inside the CallSomeMethod, actually modifies the original object. Hence the output will be ‘John-Doe’ on the first call and ‘Smith-Doe’ on the second. And you’re right: So the question is what’s the use of passing this Employee parameter as a ref? 1: class Program 2: { 3: static void Main() 4: { 5: Employee employee = new Employee 6: { 7: FirstName = "John", 8: LastName = "Doe" 9: }; 10: Console.WriteLine(employee); 11: CallSomeMethod(ref employee); 12: Console.WriteLine(employee); 13: } 14:  15: private static void CallSomeMethod(ref Employee employee) 16: { 17: employee.FirstName = "Smith"; 18: employee.LastName = "Doe"; 19: } 20: } The output is still the same: Ok, so is there really a need to pass a reference type using the ref keyword? I’ll remove the ‘ref’ keyword and make one more change to the CallSomeMethod method. 1: class Program 2: { 3: static void Main() 4: { 5: Employee employee = new Employee 6: { 7: FirstName = "John", 8: LastName = "Doe" 9: }; 10: Console.WriteLine(employee); 11: CallSomeMethod(employee); 12: Console.WriteLine(employee); 13: } 14:  15: private static void CallSomeMethod(Employee employee) 16: { 17: employee = new Employee 18: { 19: FirstName = "Smith", 20: LastName = "John" 21: }; 22: } 23: } In line 17 you’ll see I’ve ‘new’d up the incoming Employee parameter and then set its properties to new values. The output tells me that the original instance of the Employee class does not change. Huh? But an instance of a class gets passed by reference, so why did the values not change on the original instance or how do I keep the two instances in-sync all the times? Aah, now here’s the answer. In order to keep the objects in sync, you pass them using the ‘ref’ keyword. 1: class Program 2: { 3: static void Main() 4: { 5: Employee employee = new Employee 6: { 7: FirstName = "John", 8: LastName = "Doe" 9: }; 10: Console.WriteLine(employee); 11: CallSomeMethod(ref employee); 12: Console.WriteLine(employee); 13: } 14:  15: private static void CallSomeMethod(ref Employee employee) 16: { 17: employee = new Employee 18: { 19: FirstName = "Smith", 20: LastName = "John" 21: }; 22: } 23: } Viola! Now, to prove it beyond doubt, I said, let me try with another reference type: string. 1: class Program 2: { 3: static void Main() 4: { 5: string name = "abc"; 6: Console.WriteLine(name); 7: CallSomeMethod(ref name); 8: Console.WriteLine(name); 9: } 10:  11: private static void CallSomeMethod(ref string name) 12: { 13: name = "def"; 14: } 15: } The output was as expected, first ‘abc’ and then ‘def’ - proves the 'ref' keyword works here as well. Now, what if I remove the ‘ref’ keyword? 
The output should still be the same as above, right, since string is a reference type? 1: class Program 2: { 3: static void Main() 4: { 5: string name = "abc"; 6: Console.WriteLine(name); 7: CallSomeMethod(name); 8: Console.WriteLine(name); 9: } 10:  11: private static void CallSomeMethod(string name) 12: { 13: name = "def"; 14: } 15: } Wrong, the output shows ‘abc’ printed twice. Wait a minute… how could this be? This is because string is an immutable type: any time you modify an instance of a string, a new string object at a new memory address is created. So the assignment name = "def" simply points the local parameter at a new string, and the effect is similar to ‘new’ing up the Employee instance inside CallSomeMethod in the absence of the ‘ref’ keyword – the caller’s variable still references the original value. Verdict: the ref keyword came to the rescue and saved the planet… again!

    Read the article

  • World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

    - by Brian
    Oracle produced a World Record batch throughput for single system results on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and the Oracle Database 11g Release 2. The workload includes both online and batch workload. The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne Long and Short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute). In order to obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were executed each in separate Oracle Solaris Containers which enabled optimal system resources distribution and performance together with scalable and manageable virtualization. One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and the Oracle Database 11g Release 2 utilized only 55% of the available CPU power. The Oracle DB server in a Shared Server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance. This configuration with SPARC T4-2 server has achieved 33% more Users/core, 47% more UBEs/min and 78% more Users/rack unit than the IBM Power 770 server. The SPARC T4-2 server with 2 processors ran the JD Edwards "Day-in-the-Life" benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server with twice as many processors supported only 12,000 concurrent online users while concurrently executing mixed batch workloads at only 65 UBEs per minute. This benchmark demonstrates more than 2x cost savings by consolidating the complete solution in a single SPARC T4-2 server compared to earlier published results of 10,000 users and 67 UBEs per minute on two SPARC T4-2 and SPARC T4-1. The Oracle DB server used mirrored (RAID 1) volumes for the database providing high availability for the data without impacting performance. Performance Landscape JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark Consolidated Online with Batch Workload System Rack Units BatchRate(UBEs/m) Online Users Users /Units Users /Core Version SPARC T4-2 (2 x SPARC T4, 2.85 GHz) 3 95.5 8,000 2,667 500 9.0.2 IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores) 8 65 12,000 1,500 375 9.0.2 Batch Rate (UBEs/m) — Batch transaction rate in UBEs per minute Configuration Summary Hardware Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz 256 GB memory 4 x 300 GB 10K RPM SAS internal disk 2 x 300 GB internal SSD 2 x Sun Storage F5100 Flash Arrays Software Configuration: Oracle Solaris 10 Oracle Solaris Containers JD Edwards EnterpriseOne 9.0.2 JD Edwards EnterpriseOne Tools (8.98.4.2) Oracle WebLogic Server 11g (10.3.4) Oracle HTTP Server 11g Oracle Database 11g Release 2 (11.2.0.1) Benchmark Description JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. 
Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE – Universal Business Engine workload of 61 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the user’s transactions response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE processes workload runs from the JD Enterprise Application server. Oracle's UBE processes come as three flavors: Short UBEs < 1 minute engage in Business Report and Summary Analysis, Mid UBEs > 1 minute create a large report of Account, Balance, and Full Address, Long UBEs > 2 minutes simulate Payroll, Sales Order, night only jobs. The UBE workload generates large numbers of PDF files reports and log files. The UBE Queues are categorized as the QBATCHD, a single threaded queue for large and medium UBEs, and the QPROCESS queue for short UBEs run concurrently. Oracle's UBE process performance metric is Number of Maximum Concurrent UBE processes at transaction rate, UBEs/minute. Key Points and Best Practices Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Servers 11g Release 1 coupled with two Oracle Web Tier HTTP server instances and one Oracle Database 11g Release 2 database on a single SPARC T4-2 server were hosted in separate Oracle Solaris Containers bound to four processor sets to demonstrate consolidation of multiple applications, web servers and the database with best resource utilizations. Interrupt fencing was configured on all Oracle Solaris Containers to channel the interrupts to processors other than the processor sets used for the JD Edwards Application server, Oracle WebLogic servers and the database server. A Oracle WebLogic vertical cluster was configured on each WebServer Container with twelve managed instances each to load balance users' requests and to provide the infrastructure that enables scaling to high number of users with ease of deployment and high availability. The database log writer was run in the real time RT class and bound to a processor set. The database redo logs were configured on the raw disk partitions. The Oracle Solaris Container running the Enterprise Application server completed 61 Short UBEs, 4 Long UBEs concurrently as the mixed size batch workload. The mixed size UBEs ran concurrently from the Enterprise Application server with the 8,000 online users driven by the LoadRunner. See Also SPARC T4-2 Server oracle.com OTN JD Edwards EnterpriseOne oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Oracle Fusion Middleware oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.

    Read the article

  • Developer Training – Difficult Questions and Alternative Perspective – Part 3

    - by pinaldave
    Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary- Part 5 Congratulations!  You are now a fully trained developer!  You spent hours in a classroom, watching webinars, and reading materials.  You are now more educated and more prepared than ever before.  Now what? Stay or Quit The simple answer is that you now have two options – stay where you are or move on to a new job.  Even though you might now be smarter than you have ever felt before, this can still be a tough decision to make.  You feel extra trained and ready for a promotion or a raise, but you and your employer might not see eye to eye on this issue.  The logical conclusion is to go on a job hunt, but that might not be the most ethical thing to do. Click Image to Enlarge Manager’s Perspective Click Image to Enlarge Try to see the issue from your manager’s perspective.  You feel that you have just spent a lot of time and energy getting trained, and you should be rewarded.  But they have invested their time and energy in you.  They might see the training as a way to help you complete the goals they require from you, or as a way to help you complete tasks that will ultimately end in a reward or promotion. Moral Compass As in most cases, honesty is the best policy.  Be open with your manager about your expectations, and ask them to explain their goals.  When there is open and honest communication, everyone can walk away happy.  If you’re unable to discuss with your manager for one reason or another, just try to keep the company policy in mind and follow your own moral compass.  If all else fails, and your company is unwilling to make allowances for your new value, offer to pay the company back for the training before moving on your way. Whether you stay at your old job or move on to a new one, you are still faced with the question of what you’re going to do with all your new knowledge.  If you feel comfortable, offer to train others around you who are interested in the same subject.  This can look very good on your resume, and if you are working in a team environment it is sure to help you in the long run! What Next? You can even offer to train other trainers at the company – managers, those above you, or even report back to your original trainer about how your education is helping you in the work place.  Obviously this should be completely voluntary on the trainer’s part.  Taking advice from a “newbie” may not be their favorite idea, but it could also show the company that you are open to expanding your horizons and being helpful to everyone around you. Last in Line for Opportunity Click Image to Enlarge At this time, let us address a subject related to training and what to do with it – what if you are always overlooked for training?  This can as thorny a problem as receiving training in the first place.  The best advice is to let your supervisors know that you are always open to training and very interested in certain topics.  If you are consistently passed over, be patient.  Your turn will probably come, but the company as a whole has to focus on other problems at the moment.  If you feel that there are more personal issues at play, be sure to bring this up with your supervisor in a calm and professional manager so that everything can be worked out best for both parties. 
You, Yourself and Your Future! If all else fails, offer to pay for training yourself.  Perhaps money problems are at the root of being passed over.  Even if there are other reasons, offering to pay your own way shows your dedication and could work out well for you in the long run.  Always remember – in life you have to go out and make your own way, you cannot always sit and wait for things to land in your lap. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1  , our Migration Checklist continues: Step 5: Update statistics It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database: sp_updatestats The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database.  However, a word of caution: running the sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistics of every table in the database. You may therefore want to change the compatibility mode before you run the command. Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from the SSMS. Step 6: Set database options You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or the Management Studio since you can change the properties. However, this is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database’s status programmatically in such cases. Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (default for SQL Server 2005 and latter databases), TORN_PAGE_DETECTION (default when restoring a pre-SQL Server 2005 database) or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page’s contents and writes it to the page header before storing it in disk. When the page is read from the disk, the checksum is computed again and compared with the checksum stored in the header.  Torn page detection works much like the same way in that it stores a bit in the page header for every 512 byte sector. When data is read from the page, the torn page bits stored in the header is compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header.  This may not be something you would want to set unless there is a very specific reason.  Microsoft suggests using the CHECKSUM page verify option as this offers more protection. Step 7: Map database users to logins A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters. 
Depending on what you want to do, you may be using less than four though. The first parameter, @Action, can take three values. When you specify @Action = ‘Report’, the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be ‘Update_One’. In this case, you will only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called “bob” and there is already a SQL Server login with the same name and you want to map the user to the login, you will execute a query like the following: sp_change_users_login         @Action = ‘Update_One’,         @UserNamePattern = ‘bob’,         @LoginName = ‘bob’ If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login and the value of the @Action parameter will be ‘Auto_Fix’. If the login already exists, it will be automatically mapped to the user account. Unfortunately sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server. You will need to follow a manual process to re-map the database user accounts.  Continues…
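Pulling steps 5 through 7 together, a scripted pass over a newly restored database might look like the sketch below. The database name, user name and password are placeholders – adjust them before running anything like this for real:

    -- Hypothetical database name used for illustration
    USE SalesDB;
    GO

    -- Step 5: refresh statistics and make sure the auto-statistics options are on
    EXEC sp_updatestats;
    ALTER DATABASE SalesDB SET AUTO_CREATE_STATISTICS ON;
    ALTER DATABASE SalesDB SET AUTO_UPDATE_STATISTICS ON;

    -- Step 6: use checksum page verification
    ALTER DATABASE SalesDB SET PAGE_VERIFY CHECKSUM;

    -- Step 7: list orphaned users, then fix one of them
    EXEC sp_change_users_login @Action = 'Report';
    EXEC sp_change_users_login @Action = 'Auto_Fix',
         @UserNamePattern = 'bob',
         @Password = 'use-a-strong-password-here';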

    Read the article

  • doubleTwist is an iTunes Alternative that Supports Several Devices

    - by Mysticgeek
    There are a lot of iTunes users out there, but unfortunately you can’t use it with all of your portable devices. Today we take a look at doubleTwist, which allows you to sync your media with a multitude of portable devices and easily share it as well. Note: You can run doubleTwist on Windows or Mac, and here we take a look at the Windows version. Install & Setup doubleTwist Download and install doubleTwist using the defaults in the wizard… Installation takes several moments and you’ll see the progress while it finishes up. After installation is complete, sign up for an account if you don’t already have one. If you do have an account you can login right away. Enter in your username, email address, and password then click Sign Up.   You’ll get an confirmation email and need to activate the account before you can sign in. Once you’re all signed up, launch doubleTwist and you’ll be ready to start using it. doubleTwist Music The default music store is Amazon MP3 store which might appeal to those of you who are tired of the iTunes music store. A lot of times the music is cheaper and available at higher bit rates. You can start searching for music in the Amazon Music Store and previewing songs. To purchase anything though you will need to sign into your Amazon account.   Under Playlists it allows you to import your playlists from iTunes and Windows Media Player, which is a handy feature if you don’t want to set them up again. Of course you can play your songs through the music player on your desktop. Devices One of the coolest things about doubleTwist is that it supports a lot of different portable media devices including iPod, BlackBerry, Windows Mobile, Android, PSP, Smartphones, and much more. Unfortunately for Zune users…there isn’t any support for the Zune of Zune HD yet. Here we have a Creative Zen attached and can sync songs, pictures, and podcasts. An HTC-S620 Smartphone running Windows Mobile… Even a simple USB drive will be recognized and you can transfer your media to it as well.   Podcasts Finding your favorite audio and video podcasts is easy with the search feature. You can easily manage and subscribe to podcasts in the subscriptions section.   You can watch the video podcasts directly in doubleTwist. Sharing Media Also you can share digital media with your friends or add it to Flickr and YouTube. You can send any pictures, videos, or music in your library to other people by dragging it over. You can email users individually… Or access contacts from your Gmail and Yahoo accounts. There is a limit to how much you can send of video podcasts… only the first 10 minutes. The person you send it to will get a link in their email that points to your My Feed page on the doubleTwist site.   There they can access the media you sent…in this example it’s a video podcast but you can share any media. Other Features Under My Profile you can change your avatar and personal information.   In Preferences you can choose where media is stored, its startup actions, podcast subscriptions, and manage device syncing. Conclusion It’s still in beta stage so expect some bugs, but overall doubleTwist is a solid media player that is easy to use with a clean interface. It’s simple and doesn’t try to do too much so is fairly easy on system resources. The main annoyance is it tries to catalog all of your media out of the box. Which may be alright for some users with smaller media collections, but very irritating to advanced users with large collections. 
Also there is currently no support for the Zune, but according to their forums, it’s on the way. At the time of this writing it’s in public beta and can be downloaded for XP, Vista, Windows 7 (32 & 64 bit), and Mac OS X. If you’re looking for an iTunes alternative that works with several different portable devices, you might want to give doubleTwist a try. Download doubleTwist Public Beta See If Your Media Device is Supported by doubleTwist

    Read the article

  • The way I think about Diagnostic tools

    - by Daniel Moth
    Every piece of software has issues, or as we like to call them, "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is to be able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools, and what helps further is knowing all the features that they offer and how to use them. The following classification is how I like to think of diagnostics. Note that, as with any attempt to bucketize anything, you run into overlapping areas and blurry lines. Nevertheless, I will continue sharing my generalizations ;-) It is important to identify at the outset if you are dealing with a performance or a correctness issue. If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. Just because you don't need one all the time, it typically costs more, and you are not as familiar with it as you are with the debugger doesn't mean you shouldn't invest in one rather than exclusively using the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps). If you have a correctness issue, then you have several options - that's next :-) This is how I think of identifying a correctness issue. Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which is their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop, and there is much more in Visual Studio 11 Code Analysis. Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools that, instead of statically analyzing your code, analyze what your code does dynamically at runtime. Whether you have to set up some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete, or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU Race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool. Do you want to find the issue yourself at design time, aided by a tool? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11, and searching with my favorite search engine yielded this article based on the Developer Preview. Do you want to find the issue yourself through code execution? Use a debugger - let’s break this down further next.
This is how I think of debugging: There is post-mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements, to trace events (e.g. ETW), to taking dumps. In all cases, you are left with some artifact that you examine after the fact (after code execution) to discern what took place, hoping it will help you find the bug. Learn how to debug dump files in Visual Studio. There is live debugging. I will elaborate on this in a separate post, but this is where you inspect the state of your program during its execution and try to find what the problem is. There is also a hybrid of live plus post-mortem debugging. This is, for example, what tools like IntelliTrace offer. If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels, where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see what areas your tool doesn't help at all with, and evaluate whether it should or whether it should continue to stay clear of them. Even though the classification helps us think about this space, the reality is that the best tools are either excellent in only one of these areas or, more often, very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand-off points from one to the other. Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth welcome at the original blog.
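
    To make the post-mortem flavor concrete, here is a minimal, hypothetical sketch (not from the original post, and written in JavaScript/Node purely for illustration) of printf-style tracing: the program appends to a trace file during execution, and that file is the artifact you study after the fact, as opposed to pausing the live process under a debugger.

        // Minimal post-mortem tracing sketch; the trace() helper and log file
        // name are invented for this example.
        const fs = require('fs');

        function trace(message) {
          // Append a timestamped line to a file that survives the process,
          // so it can be inspected after the program finishes or crashes.
          fs.appendFileSync('app.trace.log', new Date().toISOString() + ' ' + message + '\n');
        }

        function divide(a, b) {
          trace('divide called with a=' + a + ', b=' + b);
          if (b === 0) {
            trace('divide: b is zero, about to fail');
            throw new Error('division by zero');
          }
          return a / b;
        }

        try {
          divide(10, 0);
        } catch (e) {
          trace('unhandled error: ' + e.message);
          throw e; // after the crash, app.trace.log is the post-mortem artifact
        }

    Live debugging of the same bug would instead mean attaching a debugger, setting a breakpoint inside divide, and inspecting a and b while the program is paused - no artifact is left behind to study later.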

    Read the article

  • Adding UCM as a search source in Windows Explorer

    - by kyle.hatlestad
    A customer recently pointed out to me that Windows 7 supports federated search within Windows Explorer. This means you can perform searches against external sources such as Google, Flickr, and YouTube right from within Explorer. While we do have the Desktop Integration Suite which offers searching within Explorer, I thought it would be interesting to look into this method, which does not require any client software to implement. Basically, federated searching hooks up in Windows Explorer through the OpenSearch protocol. A Search Connector Descriptor file is run and it installs the search provider. The file is an .osdx file, which is an OpenSearch Description document. It describes the search provider you are hooking up to along with the URL for the query. If those results can come back as an RSS or ATOM feed, then you're all set. So the first step is to install the RSS Feeds component from the UCM Samples page on OTN. If you're on 11g, I've found the RSS Feeds component works just fine on that version too. Next, you want to perform a Quick Search with a particular search term and then copy the RSS link address for that search result. Here is what an example URL might look like: http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=%28+%3cqsch%3eoracle%3c%2fqsch%3e+%29&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat=Universal&SearchProviders=server& Now you want to create a new text file and start out with this information: <?xml version="1.0" encoding="UTF-8"?><OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/"> <ShortName></ShortName> <Description></Description> <Url type="application/rss+xml" template=""/> <Url type="text/html" template=""/> </OpenSearchDescription> Enter a ShortName and Description. The ShortName will be the value used when displaying the search provider in Explorer. In the template attribute for the first Url element, enter the URL copied previously. You will then need to convert the ampersand symbols to '&amp;' to make them XML compliant. Finally, you'll want to switch out the search term with '{searchTerms}'. For the second Url element, you can do the same thing, except you want to copy the UCM search results URL from the page of results. That URL will look something like: http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&SortField=dInDate&SortOrder=Desc&ResultCount=20&QueryText=%3Cqsch%3Eoracle%3C%2Fqsch%3E&listTemplateId=&ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection=&MiniSearchText=oracle Again, convert the ampersand symbols and replace the search term with '{searchTerms}'. When complete, save the file with the .osdx extension.
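
    This escaping and placeholder substitution is mechanical, so if you build several of these descriptors it is easy to script. Below is a minimal, hypothetical sketch (the helper name and the simplified sample URL are invented for illustration) that XML-escapes the ampersands in a copied UCM feed URL and swaps the search term for the {searchTerms} placeholder:

        // Hypothetical helper: turn a copied UCM RSS/search URL into an
        // OpenSearch template value by escaping '&' as '&amp;' and replacing
        // the literal search term with the {searchTerms} placeholder.
        function toOpenSearchTemplate(copiedUrl, searchTerm) {
          return copiedUrl
            .split('&').join('&amp;')
            .split(encodeURIComponent(searchTerm)).join('{searchTerms}')
            .split(searchTerm).join('{searchTerms}'); // catch unencoded occurrences too
        }

        // Simplified, made-up example URL; a real UCM feed URL is much longer.
        var copied = 'http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=oracle&SortOrder=Desc';
        console.log(toOpenSearchTemplate(copied, 'oracle'));
        // -> http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&amp;feedName=search_results&amp;QueryText={searchTerms}&amp;SortOrder=Desc
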
The completed file should look like: <?xml version="1.0" encoding="UTF-8"?> <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/"> <ShortName>Universal Content Management</ShortName> <Description>OpenSearch for UCM via Windows 7 Search Federation.</Description> <Url type="application/rss+xml" template="http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&amp;feedName=search_results&amp;QueryText=%28+%3Cqsch%3E{searchTerms}%3C%2fqsch%3E+%29&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=200&amp;SearchQueryFormat=Universal"/> <Url type="text/html" template="http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=20&amp;QueryText=%3Cqsch%3E{searchTerms}%3C%2Fqsch%3E&amp;listTemplateId=&amp;ftx=1&amp;SearchQueryFormat=Universal&amp;TargetedQuickSearchSelection=&amp;MiniSearchText={searchTerms}"/> </OpenSearchDescription> After you save the file, simply double-click it to create the provider. It will ask if you want to add the search connector to Windows. Click Add and it will add it to the Searches folder in your user folder as well as your Favorites. Now just click on the search icon and, in the upper right search box, enter your term. As you are typing, it begins executing searches and the results will come back in Explorer. Now when you double-click on an item, it will try to download the web-viewable rendition for viewing. You also have the ability to save the search, just as you would in UCM. And there is a link to Search On Website which will launch your browser and go directly to the search results page there. And with some tweaks to the RSS component, you can make the results a bit more interesting. It supports the Media RSS standard, so you can pass along the thumbnail of the documents in the results. To enable this, edit the rss_resources.htm file in the RSS Feeds component. In the std_rss_feed_begin resource include, add the namespace 'xmlns:media="http://search.yahoo.com/mrss/"' to the rss definition: <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:media="http://search.yahoo.com/mrss/"> Next, in the rss_channel_item_with_thumb include, add this element between the closing </images> element and the <description> element: <media:thumbnail url="<$if strIndexOf(thumbnailUrl, "@t") > 0 or strIndexOf(thumbnailUrl, "@g") > 0 or strIndexOf(thumbnailUrl, "@p") > 0$><$rssHttpHost$><$thumbnailUrl$><$elseif dGif$><$HttpWebRoot$>images/docgifs/<$dGif$><$endif$>" /> This and lots of other tweaks can be done to the RSS component to help extend it for optimum use in Explorer. Hopefully this can get you started. *Note: This post also applies to Universal Records Management (URM).

    Read the article

  • JavaScript Class Patterns

    - by Liam McLennan
    To write object-oriented programs we need objects, and likely lots of them. JavaScript makes it easy to create objects: var liam = { name: "Liam", age: Number.MAX_VALUE }; But JavaScript does not provide an easy way to create similar objects. Most object-oriented languages include the idea of a class, which is a template for creating objects of the same type. From one class many similar objects can be instantiated. Many patterns have been proposed to address the absence of a class concept in JavaScript. This post will compare and contrast the most significant of them. Simple Constructor Functions Classes may be missing but JavaScript does support special constructor functions. By prefixing a call to a constructor function with the ‘new’ keyword we can tell the JavaScript runtime that we want the function to behave like a constructor and instantiate a new object containing the members defined by that function. Within a constructor function the ‘this’ keyword references the new object being created - so a basic constructor function might be: function Person(name, age) { this.name = name; this.age = age; this.toString = function() { return this.name + " is " + age + " years old."; }; } var john = new Person("John Galt", 50); console.log(john.toString()); Note that by convention the name of a constructor function is always written in Pascal Case (the first letter of each word is capitalized). This is to distinguish constructor functions from other functions. It is important that constructor functions are called with the ‘new’ keyword and that non-constructor functions are not. There are two problems with the constructor function pattern shown above: It makes inheritance difficult The toString() function is redefined for each new object created by the Person constructor. This is sub-optimal because the function should be shared between all of the instances of the Person type. Constructor Functions with a Prototype JavaScript functions have a special property called prototype. When an object is created by calling a JavaScript constructor, all of the properties of the constructor’s prototype become available to the new object. In this way many Person objects can be created that can access the same prototype. An improved version of the above example can be written: function Person(name, age) { this.name = name; this.age = age; } Person.prototype = { toString: function() { return this.name + " is " + this.age + " years old."; } }; var john = new Person("John Galt", 50); console.log(john.toString()); In this version a single instance of the toString() function will now be shared between all Person objects. Private Members The short version is: there aren’t any. If a variable is defined with the var keyword within the constructor function, then its scope is that function. Other functions defined within the constructor function will be able to access the private variable, but anything defined outside the constructor (such as functions on the prototype property) won’t have access to the private variable. Any members assigned to ‘this’ within the constructor are automatically public. Some people work around this by prefixing properties with an underscore and then, by convention, not calling those properties from outside the object.
function Person(name, age) { this.name = name; this.age = age; } Person.prototype = { _getName: function() { return this.name; }, toString: function() { return this._getName() + " is " + this.age + " years old."; } }; var john = new Person("John Galt", 50); console.log(john.toString()); Note that the _getName() function is only private by convention – it is in fact a public function. Functional Object Construction Because of the weirdness involved in using constructor functions, some JavaScript developers prefer to eschew them completely. They theorize that it is better to work with JavaScript’s functional nature than to try to force it to behave like a traditional class-oriented language. When using the functional approach, objects are created by returning them from a factory function. An excellent side effect of this pattern is that variables defined within the factory function are accessible to the new object (due to closure) but are inaccessible from anywhere else. The Person example implemented using the functional object construction pattern is: var personFactory = function(name, age) { var privateVar = 7; return { toString: function() { return name + " is " + age * privateVar / privateVar + " years old."; } }; }; var john2 = personFactory("John Lennon", 40); console.log(john2.toString()); Note that the ‘new’ keyword is not used for this pattern, and that the toString() function has access to the name, age and privateVar variables because of closure. This pattern can be extended to provide inheritance and, unlike the constructor function pattern, it supports private variables. However, when working with JavaScript code bases you will find that the constructor function pattern is more common – probably because it is a better approximation of mainstream class-oriented languages like C# and Java. Inheritance Both of the above patterns can support inheritance (a minimal prototype-based sketch follows below), but for now, favour composition over inheritance. Summary When JavaScript code exceeds simple browser automation, object orientation can provide a powerful paradigm for controlling complexity. Both of the patterns presented in this article work – the choice is a matter of style. Only one question still remains: who is John Galt?
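
    Since the article defers inheritance, here is a minimal, hedged sketch (not from the original post; the Student type and its fields are invented for illustration) of prototype-based inheritance built on the constructor-plus-prototype version of Person shown above:

        // Hypothetical subtype of Person, invented for this illustration.
        function Student(name, age, school) {
          Person.call(this, name, age); // run the Person constructor against the new object
          this.school = school;
        }

        // Chain the prototypes so Student instances see Person.prototype members.
        Student.prototype = Object.create(Person.prototype);
        Student.prototype.constructor = Student;

        // Extend, rather than replace, the inherited behaviour.
        Student.prototype.toString = function() {
          return Person.prototype.toString.call(this) + " Studies at " + this.school + ".";
        };

        var jane = new Student("Jane Doe", 20, "State University");
        console.log(jane.toString()); // "Jane Doe is 20 years old. Studies at State University."

    With the functional pattern the equivalent is usually composition: a studentFactory would call personFactory internally and wrap or extend the object it returns.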

    Read the article
