
  • PHP/MySQL Interview - How would you have answered?

    - by martincarlin87
    I was asked this interview question so thought I would post it here to see how other users would answer:

    Please write some code which connects to a MySQL database (any host/user/pass), retrieves the current date & time from the database, compares it to the current date & time on the local server (i.e. where the application is running), and reports on the difference. The reporting aspect should be a simple HTML page, so that in theory this script can be put on a web server, set to point to a particular database server, and it would tell us whether the two servers’ times are in sync (or close to being in sync).

    This is what I put:

        // Connect to database server
        $dbhost = 'localhost';
        $dbuser = 'xxx';
        $dbpass = 'xxx';
        $dbname = 'xxx';
        $conn = mysql_connect($dbhost, $dbuser, $dbpass) or die (mysql_error());

        // Select database
        mysql_select_db($dbname) or die(mysql_error());

        // Retrieve the current time from the database server
        $sql = 'SELECT NOW() AS db_server_time';

        // Execute the query
        $result = mysql_query($sql) or die(mysql_error());

        // Since query has now completed, get the time of the web server
        $php_server_time = date("Y-m-d h:m:s");

        // Store query results in an array
        $row = mysql_fetch_array($result);

        // Retrieve time result from the array
        $db_server_time = $row['db_server_time'];

        echo $db_server_time . '<br />';
        echo $php_server_time;

        if ($php_server_time != $db_server_time) {
            // Server times are not identical
            echo '<p>Database server and web server are not in sync!</p>';

            // Convert the time stamps into seconds since 01/01/1970
            $php_seconds = strtotime($php_server_time);
            $sql_seconds = strtotime($db_server_time);

            // Subtract smaller number from biggest number to avoid getting a negative result
            if ($php_seconds > $sql_seconds) {
                $time_difference = $php_seconds - $sql_seconds;
            } else {
                $time_difference = $sql_seconds - $php_seconds;
            }

            // convert the time difference in seconds to a formatted string displaying hours, minutes and seconds
            $nice_time_difference = gmdate("H:i:s", $time_difference);

            echo '<p>Time difference between the servers is ' . $nice_time_difference;
        } else {
            // Timestamps are exactly the same
            echo '<p>Database server and web server are in sync with each other!</p>';
        }

    Yes, I know that I have used the deprecated mysql_* functions but that aside, how would you have answered, i.e. what changes would you make and why? Are there any factors I have omitted which I should take into consideration? The interesting thing is that my results always seem to be an exact number of minutes apart when executed on my hosting account:

        2012-12-06 11:47:07
        2012-12-06 11:12:07
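
    A hedged sketch of one possible answer, modernised to PDO (host and credentials are placeholders). One detail worth flagging in the code above: in PHP's date(), "m" is the month and "h" is 12-hour time, so the format string "Y-m-d h:m:s" stamps the month into the minutes position - which would neatly explain results that are always an exact number of minutes apart. "H:i:s" is the intended form.

        <?php
        // Sketch only: PDO replaces the removed mysql_* API; credentials are placeholders.
        $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

        $db_time  = $pdo->query('SELECT NOW()')->fetchColumn();
        $php_time = date('Y-m-d H:i:s'); // note H:i:s, not h:m:s ("m" is the month!)

        $diff = abs(strtotime($php_time) - strtotime($db_time));

        echo '<p>DB: ' . $db_time . ' / Web: ' . $php_time . '</p>';
        echo $diff === 0
            ? '<p>Database server and web server are in sync!</p>'
            : '<p>Servers differ by ' . gmdate('H:i:s', $diff) . '</p>';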

  • Optimizing division/exponential calculation

    - by Saltheart
    I've inherited a Visual Studio/VB.Net numerical simulation project that has a likely inefficient calculation. Profiling indicates that the function is called a lot (1 million times plus) and that about 50% of the overall calculation time is spent within this function. Here is the problematic portion:

        Result = (A * (E ^ C)) / (D ^ C * B)

    (where A-C are local double variables and D & E are global double variables). Result is then compared to a threshold, which might offer room for additional improvements as well, but I'll leave those for another day. Any thoughts or help would be appreciated.

    Steve
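
    Since (A * E^C) / (D^C * B) is algebraically equal to (A / B) * (E / D)^C, one exponentiation can replace two; and because D and E are globals, E / D can be hoisted out of the hot path entirely. A sketch of the idea (assumes D is non-zero; floating-point results can differ in the last bits, so verify against the threshold comparison):

        ' Compute once, outside the hot function, since D and E are globals:
        Dim EOverD As Double = E / D

        ' Inside the function: one exponentiation instead of two.
        Result = (A / B) * EOverD ^ C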

  • Android 2.1 switch loop JRE 1.7

    - by Defuzer
    Hello, how do I use a switch statement in my Android project? I want to target Android 2.1. I need JRE 1.7 for this, but I want to use Android 2.1. I use a switch like this:

        switch ((CHAR[Math.abs(intGen.nextInt() % 2)])) {
            case "+":
                result = random2 + random3;
                break;
            case "-":
                result = random2 + random3;
                break;
        }

    LogCat: Cannot switch on a value of type String for source level below 1.7. Only convertible int values or enum variables are permitted
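
    The error comes from the compiler: switching on a String requires source level 1.7, which the Android 2.1-era toolchain does not provide. The usual workaround is to switch on a char (or use if/else) instead, which compiles at any source level. A sketch, assuming CHAR holds the one-character operator strings from the posted code:

        char op = CHAR[Math.abs(intGen.nextInt() % 2)].charAt(0);
        switch (op) {
            case '+':
                result = random2 + random3;
                break;
            case '-':
                result = random2 - random3; // note: the posted code adds in both branches
                break;
        }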

  • Is this possible in sql server 2005?

    - by chandru_cp
    These are my queries:

        select ClientName, ClientMobNo from Clients
        select DriverName, DriverMobNo from Drivers

    This gives me two result tables, but I want to combine both result tables into a single table. I tried UNION and UNION ALL but they don't give me what I want.

    Note: there is no relationship between the two tables. There may be 200 clients and 100 drivers.
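
    One way to stack both result sets into a single table is UNION ALL with aligned column names plus a discriminator column recording the source table (a sketch; if side-by-side pairing of rows is wanted instead, ROW_NUMBER() plus a FULL OUTER JOIN on the row number would be the SQL Server 2005 route):

        SELECT ClientName AS Name, ClientMobNo AS MobNo, 'Client' AS PersonType
        FROM Clients
        UNION ALL
        SELECT DriverName, DriverMobNo, 'Driver'
        FROM Drivers;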

  • How to send data many times to web browser in response to one request?

    - by robert_d
    I have a form on my web page that allows submitting many queries to my website; every query is on a separate line in a TextArea. Because waiting for all queries to complete takes too long, I would like to update the web page after every query completes - send the result of one query to the web browser and append the new result to the old results already on the page. How do I do this in ASP.NET MVC 2? I will be grateful for helpful responses.
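
    One low-tech approach that works in MVC 2 is a non-buffered action that flushes a fragment of HTML to the browser as each query finishes; a hedged sketch (ExecuteQuery is a hypothetical helper), with per-line AJAX requests being the more structured alternative:

        // Controller action returning void: stream partial results as they arrive.
        public void RunQueries(string queries)
        {
            Response.BufferOutput = false;           // don't hold the whole page back
            foreach (var query in queries.Split('\n'))
            {
                var result = ExecuteQuery(query);    // hypothetical per-line query runner
                Response.Write("<div>" + Server.HtmlEncode(result) + "</div>");
                Response.Flush();                    // push this chunk to the browser now
            }
        }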

  • Mistake in the Asc() VB function?

    - by Alexander
    Hello! Could you please tell me why the Asc() function returns an incorrect result?

        Dim TestChar = Chr(128)
        Dim CharInt = Asc(TestChar) ' on Windows 7 x64, Asc(TestChar) returns 136 instead of 128

    I executed this code on another computer and the result was 128. Thanks.
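
    Chr() and Asc() round-trip through the system ANSI code page, so for values in the 128-255 range the result depends on the machine's locale settings. A sketch of the code-page-independent alternative:

        ' ChrW/AscW work on Unicode code points, so the round trip is
        ' the identity on every machine, regardless of ANSI code page.
        Dim TestChar = ChrW(128)
        Dim CharInt = AscW(TestChar) ' always 128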

  • Get top rated item using AVG mysql

    - by user1876234
    I want to find the top rated item using the AVG function in MySQL. Right now my query looks like this:

        SELECT a.title, AVG(d.rating) as rating
        FROM in8ku_content a
        JOIN in8ku_content_ratings d ON a.id = d.article_id
        ORDER BY rating DESC

    The problem is that it takes the AVG over all items together, so the result is not accurate. What should be changed here to get the correct result?

    Tables:

        in8ku_content [id, title]
        in8ku_content_ratings [id, article_id, rating]
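
    The query is missing a GROUP BY, so AVG() collapses every rating into one global average. Grouping per article and keeping the first row should give the top-rated item (a sketch):

        SELECT a.title, AVG(d.rating) AS rating
        FROM in8ku_content a
        JOIN in8ku_content_ratings d ON a.id = d.article_id
        GROUP BY a.id, a.title
        ORDER BY rating DESC
        LIMIT 1;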

  • FsUnit `should equal` fails on `Some []`

    - by SHiNKiROU
    When I run this FsUnit test with NUnit 2.6.3,

        let f xs = Some (List.map ((+) 2) xs)

        [<Test>]
        let test() =
            f [] |> should equal (Some [])

    I get:

        Result Message:  Expected: <Some([])> But was: <Some([])>
        Result StackTrace: at FsUnit.TopLevelOperators.should[a,a](FSharpFunc`2 f, a x, Object y)

    The test fails even though the Expected and Actual in the message are the same. What happened?
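
    The usual cause is that the two Some([]) values have different generic element types: f [] produces an int list option, while the bare (Some []) on the expected side is inferred independently (effectively an obj list once boxed by should equal), so the runtime equality check fails even though both values print identically. One common fix is to pin the expected value's type (a sketch):

        [<Test>]
        let test() =
            f [] |> should equal (Some ([] : int list))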

  • Getting the first row of the mysql resource string?

    - by mrNepal
    Here is my problem. I need more than one row from the database: I need the first row for a certain task and then need to go through the whole list again to create a record set.

        $query = "SELECT * FROM mytable";
        $result = mysql_query($query);
        $firstrow = // extract first row from result
        // and display some field from it
        while ($row = mysql_fetch_assoc($result)) {
            // display all of them
        }

    Now, how do I extract just the first row?
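
    One option is to fetch the first row, use it, then rewind the result pointer with mysql_data_seek() so the loop still sees every row (a sketch in the same mysql_* style as the question):

        $result = mysql_query("SELECT * FROM mytable");

        $firstrow = mysql_fetch_assoc($result); // row 0
        echo $firstrow['some_field'];           // use whatever field is needed

        mysql_data_seek($result, 0);            // rewind to row 0 for the full pass
        while ($row = mysql_fetch_assoc($result)) {
            // display all of them
        }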

  • populate href of link for next and previous

    - by sea_1987
    Hi There,

    I am struggling a lot with some PHP. I need to implement a next and previous link. Basically I have a search function on my site that returns multiple results, and clicking on a result navigates to that result's page. I then want to be able to click "next" on that page and be taken to the next result in the sequence that was returned by the user's original search. Is this possible, and how? I have no clue.
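
    One common pattern is to store the ordered list of result IDs in the session when the search runs, then look up the current ID's neighbours on each result page. A hedged sketch ($searchResults and $currentId are hypothetical names):

        session_start();

        // After running the search, remember the ordered result IDs:
        $ids = array();
        foreach ($searchResults as $r) {
            $ids[] = $r['id'];
        }
        $_SESSION['result_ids'] = $ids;

        // On an individual result page, find the neighbours of the current ID:
        $ids = $_SESSION['result_ids'];
        $pos = array_search($currentId, $ids);

        $prev = ($pos !== false && $pos > 0) ? $ids[$pos - 1] : null;
        $next = ($pos !== false && $pos < count($ids) - 1) ? $ids[$pos + 1] : null;

        if ($next !== null) {
            echo '<a href="result.php?id=' . urlencode($next) . '">Next</a>';
        }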

  • Regular expression to process key value pairs

    - by user677680
    I am attempting to write a regular expression to process a string of key/value(s) pairs formatted like so:

        KEY/VALUE KEY/VALUE VALUE KEY/VALUE

    A key can have multiple values separated by a space. I want to match a key's values together, so the result on the above string would be:

        VALUE
        VALUE VALUE
        VALUE

    I currently have the following as my regex:

        [A-Z0-9]+/([A-Z0-9 ]+)(?:(?!^[A-Z0-9]+/))

    but this returns VALUE KEY as the first result.
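
    The greedy [A-Z0-9 ]+ happily swallows the following KEY; making it lazy and cutting it off with a lookahead for the next "KEY/" (or end of input) fixes that. A sketch using .NET's Regex (the pattern itself is portable):

        using System;
        using System.Text.RegularExpressions;

        class KeyValues
        {
            static void Main()
            {
                string input = "KEY/VALUE KEY/VALUE VALUE KEY/VALUE";
                // Lazy value match, stopped by a lookahead for the next key or end of input.
                string pattern = @"[A-Z0-9]+/([A-Z0-9 ]+?)(?=\s[A-Z0-9]+/|$)";
                foreach (Match m in Regex.Matches(input, pattern))
                    Console.WriteLine(m.Groups[1].Value);
                // Prints: VALUE, then VALUE VALUE, then VALUE.
            }
        }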

  • process.waitFor()

    - by ashwani66476
    Hello Team, I am using the below code in my application:

        Process process = Runtime.getRuntime().exec(
                "perl " + perlScript + " " + project + " " + fileName);
        ...
        result = process.waitFor();

    and this result gives the code 2 every time while running the application. What could be the reason for this return code?

    Thanks In Advance
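
    The value from waitFor() is the exit code of perl itself, so 2 usually means perl (or the script) failed - commonly an unopenable script path or an explicit exit(2). Draining stderr makes the reason visible and also prevents the child blocking on a full pipe. A sketch:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class PerlRunner {
            static int runPerl(String perlScript, String project, String fileName) throws Exception {
                Process process = Runtime.getRuntime().exec(
                        new String[] { "perl", perlScript, project, fileName });

                // Read perl's stderr: it usually contains the reason for a non-zero exit.
                BufferedReader err = new BufferedReader(
                        new InputStreamReader(process.getErrorStream()));
                String line;
                while ((line = err.readLine()) != null) {
                    System.err.println("perl: " + line);
                }
                return process.waitFor(); // perl's own exit code
            }
        }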

  • "=null" and select statement!

    - by user329820
    Hi, I have asked this question before in this forum and was told that it will return an empty result set. I want to know: if the column contains NULL values, will it still return an empty result set? Also, ANSI_NULLS is OFF. Thanks.

        SELECT 'A' FROM T WHERE A = NULL;
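
    With the default SET ANSI_NULLS ON, A = NULL evaluates to UNKNOWN for every row (NULL compares equal to nothing, not even NULL), so the statement returns an empty result set regardless of the column's contents. With ANSI_NULLS OFF, SQL Server's non-standard behaviour does let A = NULL match rows holding NULL, but that setting is deprecated; the portable test is IS NULL:

        -- Matches rows where A actually holds NULL, under either setting:
        SELECT 'A' FROM T WHERE A IS NULL;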

  • Problem with Javascript object and accessing property which exists

    - by Newbie
    I have something like this:

        var test = {};

        function blah() {
            test[2] = 'filled';
        }

        console.log(test);    // result: test -> 2:"filled"
        console.log(test[2]); // result: undefined

    I don't understand why I'm getting 'undefined' in the second instance when according to the first instance, the property of that object clearly exists! Does anyone have any ideas? Thanks
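
    The likely explanation: console.log(obj) in most browsers shows a live view of the object that is re-read when you expand it in the console, so an assignment that happens later (here, whenever blah() eventually runs) appears retroactively in the first log. Logging a snapshot shows the real state at the moment of the call (a sketch):

        var test = {};

        console.log(JSON.stringify(test)); // "{}" - nothing there yet
        console.log(test[2]);              // undefined - blah() has not run

        function blah() {
            test[2] = 'filled';
        }
        blah();

        console.log(JSON.stringify(test)); // {"2":"filled"}
        console.log(test[2]);              // "filled"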

  • writing javascript div in html

    - by user333060
    I have put code in my page source to show live Twitter search results on my webpage. Although it shows the search results, when I open the source code of my webpage it doesn't show the tweet text in the source. It dynamically loads it, I guess. Is there a way to fetch the content of the div and write it with a function like document.write, etc.?
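
    "View source" shows the original HTML the server sent, not the DOM the widget builds afterwards, so the tweets will never appear there. The live content can still be read from the DOM once the widget has rendered; a hedged sketch (the element ids are hypothetical):

        // Wait until the widget has (probably) injected its markup, then read the DOM.
        window.setTimeout(function () {
            var container = document.getElementById('twitter-results'); // hypothetical id
            if (container) {
                document.getElementById('copy').innerHTML = container.innerHTML;
            }
        }, 5000);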

  • .Net Regex question!

    - by Tsury
    Suppose I have the following string:

        string str = "<tag>text</tag>";

    and I would like to change 'tag' to 'newTag' so the result would be:

        "<newTag>text</newTag>"

    What is the best way to do it? I tried to search for <[/]*tag but then I don't know how to keep the optional [/] in my result...
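
    One way is to capture the optional "/" and re-emit it in the replacement, so a single pattern handles both the opening and the closing tag (a sketch):

        using System;
        using System.Text.RegularExpressions;

        class TagRename
        {
            static void Main()
            {
                string str = "<tag>text</tag>";
                // Capture the optional slash and splice it back in with $1.
                string result = Regex.Replace(str, @"<(/?)tag>", "<$1newTag>");
                Console.WriteLine(result); // <newTag>text</newTag>
            }
        }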

  • else or return?

    - by Ram
    Which one of the following two is best with respect to performance and standard practice? How does .NET internally handle these two code snippets?

    Code 1:

        if (result) {
            process1();
        } else {
            process2();
        }

    Code 2:

        if (result) {
            process1();
            return;
        }
        process2();

  • Ubuntu 10.04 not detecting multiple monitors

    - by user28837
    I have 2 graphics cards, the output from the lspci: 01:00.0 VGA compatible controller: ATI Technologies Inc RV770 [Radeon HD 4850] 02:00.0 VGA compatible controller: ATI Technologies Inc RV710 [Radeon HD 4350] I have one monitor connected to the 4850 and 2 connected to the 4350. However when I go into System Preferences Monitors the only monitor shown is the one connected to the 4850. Is there something I need to enable for it to be able to use the other card? How do I get this to work. Thanks. As per request: X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-25-server i686 Ubuntu Current Operating System: Linux jeff-desktop 2.6.32-22-generic-pae #33-Ubuntu SMP Wed Apr 28 14:57:29 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-22-generic-pae root=UUID=852e1013-4ed6-40fd-a462-c29087888383 ro quiet splash Build Date: 23 April 2010 05:11:50PM xorg-server 2:1.7.6-2ubuntu7 (Bryce Harrington <[email protected]>) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Tue May 11 08:24:52 2010 (==) Using config file: "/etc/X11/xorg.conf" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" (==) No Layout section. Using the first Screen section. (**) |-->Screen "Default Screen" (0) (**) | |-->Monitor "<default monitor>" (==) No device specified for screen "Default Screen". Using the first device section listed. (**) | |-->Device "Default Device" (==) No monitor specified for screen "Default Screen". Using a default monitor configuration. (==) Automatically adding devices (==) Automatically enabling devices (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. Entry deleted from font path. (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/75dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, /usr/share/fonts/X11/75dpi, /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, built-ins (==) ModulePath set to "/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. (II) Loader magic: 0x81f0e80 (II) Module ABI versions: X.Org ANSI C Emulation: 0.4 X.Org Video Driver: 6.0 X.Org XInput driver : 7.0 X.Org Server Extension : 2.0 (++) using VT number 7 (--) PCI:*(0:1:0:0) 1002:9442:174b:e104 ATI Technologies Inc RV770 [Radeon HD 4850] rev 0, Mem @ 0xc0000000/268435456, 0xfe7e0000/65536, I/O @ 0x0000a000/256, BIOS @ 0x????????/131072 (--) PCI: (0:2:0:0) 1002:954f:1462:1618 ATI Technologies Inc RV710 [Radeon HD 4350] rev 0, Mem @ 0xd0000000/268435456, 0xfe8e0000/65536, I/O @ 0x0000b000/256, BIOS @ 0x????????/131072 (WW) Open ACPI failed (/var/run/acpid.socket) (No such file or directory) (II) "extmod" will be loaded by default. (II) "dbe" will be loaded by default. (II) "glx" will be loaded. This was enabled by default and also specified in the config file. (II) "record" will be loaded by default. (II) "dri" will be loaded by default. (II) "dri2" will be loaded by default. 
(II) LoadModule: "glx" (II) Loading /usr/lib/xorg/extra-modules/modules/extensions/libglx.so (II) Module glx: vendor="FireGL - ATI Technologies Inc." compiled for 7.5.0, module version = 1.0.0 (II) Loading extension GLX (II) LoadModule: "extmod" (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so (II) Module extmod: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension MIT-SCREEN-SAVER (II) Loading extension XFree86-VidModeExtension (II) Loading extension XFree86-DGA (II) Loading extension DPMS (II) Loading extension XVideo (II) Loading extension XVideo-MotionCompensation (II) Loading extension X-Resource (II) LoadModule: "dbe" (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so (II) Module dbe: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension DOUBLE-BUFFER (II) LoadModule: "record" (II) Loading /usr/lib/xorg/modules/extensions/librecord.so (II) Module record: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.13.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension RECORD (II) LoadModule: "dri" (II) Loading /usr/lib/xorg/modules/extensions/libdri.so (II) Module dri: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 ABI class: X.Org Server Extension, version 2.0 (II) Loading extension XFree86-DRI (II) LoadModule: "dri2" (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so (II) Module dri2: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.1.0 ABI class: X.Org Server Extension, version 2.0 (II) Loading extension DRI2 (II) LoadModule: "fglrx" (II) Loading /usr/lib/xorg/extra-modules/modules/drivers/fglrx_drv.so (II) Module fglrx: vendor="FireGL - ATI Technologies Inc." compiled for 1.7.1, module version = 8.72.11 Module class: X.Org Video Driver (II) Loading sub module "fglrxdrm" (II) LoadModule: "fglrxdrm" (II) Loading /usr/lib/xorg/extra-modules/modules/linux/libfglrxdrm.so (II) Module fglrxdrm: vendor="FireGL - ATI Technologies Inc." 
compiled for 1.7.1, module version = 8.72.11 (II) ATI Proprietary Linux Driver Version Identifier:8.72.11 (II) ATI Proprietary Linux Driver Release Identifier: 8.723.1 (II) ATI Proprietary Linux Driver Build Date: Apr 8 2010 21:40:29 (II) Primary Device is: PCI 01@00:00:0 (WW) Falling back to old probe method for fglrx (II) Loading PCS database from /etc/ati/amdpcsdb (--) Assigning device section with no busID to primary device (WW) fglrx: No matching Device section for instance (BusID PCI:0@2:0:0) found (--) Chipset Supported AMD Graphics Processor (0x9442) found (WW) fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found (WW) fglrx: No matching Device section for instance (BusID PCI:0@2:0:1) found (**) ChipID override: 0x954F (**) Chipset Supported AMD Graphics Processor (0x954F) found (II) AMD Video driver is running on a device belonging to a group targeted for this release (II) AMD Video driver is signed (II) fglrx(0): pEnt->device->identifier=0x9428aa0 (II) pEnt->device->identifier=(nil) (II) fglrx(0): === [atiddxPreInit] === begin (II) Loading sub module "vgahw" (II) LoadModule: "vgahw" (II) Loading /usr/lib/xorg/modules/libvgahw.so (II) Module vgahw: vendor="X.Org Foundation" compiled for 1.7.6, module version = 0.1.0 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): Creating default Display subsection in Screen section "Default Screen" for depth/fbbpp 24/32 (**) fglrx(0): Depth 24, (--) framebuffer bpp 32 (II) fglrx(0): Pixel depth = 24 bits stored in 4 bytes (32 bpp pixmaps) (==) fglrx(0): Default visual is TrueColor (==) fglrx(0): RGB weight 888 (II) fglrx(0): Using 8 bits per RGB (==) fglrx(0): Buffer Tiling is ON (II) Loading sub module "fglrxdrm" (II) LoadModule: "fglrxdrm" (II) Reloading /usr/lib/xorg/extra-modules/modules/linux/libfglrxdrm.so ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 10, (OK) ukiOpenByBusid: ukiOpenMinor returns 10 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 10, (OK) ukiOpenByBusid: ukiOpenMinor returns 10 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 11, (OK) ukiOpenByBusid: ukiOpenMinor returns 11 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 (--) fglrx(0): Chipset: "ATI Radeon HD 4800 Series" (Chipset = 0x9442) (--) fglrx(0): (PciSubVendor = 0x174b, PciSubDevice = 0xe104) (==) fglrx(0): board vendor info: third party graphics adapter - NOT original ATI (--) fglrx(0): Linear framebuffer (phys) at 0xc0000000 (--) fglrx(0): MMIO registers at 0xfe7e0000 (--) fglrx(0): I/O port at 0x0000a000 (==) fglrx(0): ROM-BIOS at 0x000c0000 (II) fglrx(0): AC Adapter is used (II) fglrx(0): Primary V_BIOS segment is: 0xc000 (II) Loading sub module "vbe" (II) LoadModule: "vbe" (II) Loading /usr/lib/xorg/modules/libvbe.so (II) Module vbe: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.1.0 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): VESA BIOS detected (II) fglrx(0): VESA VBE Version 3.0 (II) fglrx(0): VESA VBE Total Mem: 16384 kB (II) fglrx(0): VESA VBE OEM: ATI ATOMBIOS (II) fglrx(0): VESA VBE OEM Software Rev: 11.13 (II) fglrx(0): VESA VBE OEM Vendor: (C) 1988-2005, 
ATI Technologies Inc. (II) fglrx(0): VESA VBE OEM Product: RV770 (II) fglrx(0): VESA VBE OEM Product Rev: 01.00 (II) fglrx(0): ATI Video BIOS revision 9 or later detected (--) fglrx(0): Video RAM: 524288 kByte, Type: GDDR3 (II) fglrx(0): PCIE card detected (--) fglrx(0): Using per-process page tables (PPPT) as GART. (WW) fglrx(0): board is an unknown third party board, chipset is supported (--) fglrx(0): Chipset: "ATI Radeon HD 4300/4500 Series" (Chipset = 0x954f) (--) fglrx(0): (PciSubVendor = 0x1462, PciSubDevice = 0x1618) (==) fglrx(0): board vendor info: third party graphics adapter - NOT original ATI (--) fglrx(0): Linear framebuffer (phys) at 0xd0000000 (--) fglrx(0): MMIO registers at 0xfe8e0000 (--) fglrx(0): I/O port at 0x0000b000 (==) fglrx(0): ROM-BIOS at 0x000c0000 (II) fglrx(0): AC Adapter is used (II) fglrx(0): Invalid ATI BIOS from int10, the adapter is not VGA-enabled (II) fglrx(0): ATI Video BIOS revision 9 or later detected (--) fglrx(0): Video RAM: 524288 kByte, Type: DDR2 (II) fglrx(0): PCIE card detected (--) fglrx(0): Using per-process page tables (PPPT) as GART. (WW) fglrx(0): board is an unknown third party board, chipset is supported (II) fglrx(0): Using adapter: 1:0.0. (II) fglrx(0): [FB] MC range(MCFBBase = 0xf00000000, MCFBSize = 0x20000000) (II) fglrx(0): Interrupt handler installed at IRQ 31. (II) fglrx(0): Using adapter: 2:0.0. (II) fglrx(0): [FB] MC range(MCFBBase = 0xf00000000, MCFBSize = 0x20000000) (II) fglrx(0): RandR 1.2 support is enabled! (II) fglrx(0): RandR 1.2 rotation support is enabled! (==) fglrx(0): Center Mode is disabled (II) Loading sub module "fb" (II) LoadModule: "fb" (II) Loading /usr/lib/xorg/modules/libfb.so (II) Module fb: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 ABI class: X.Org ANSI C Emulation, version 0.4 (II) Loading sub module "ddc" (II) LoadModule: "ddc" (II) Module "ddc" already built-in (II) fglrx(0): Finished Initialize PPLIB! 
(II) Loading sub module "ddc" (II) LoadModule: "ddc" (II) Module "ddc" already built-in (II) fglrx(0): Connected Display0: DFP on external TMDS [tmds2] (II) fglrx(0): Display0 EDID data --------------------------- (II) fglrx(0): Manufacturer: DEL Model: a038 Serial#: 810829397 (II) fglrx(0): Year: 2008 Week: 51 (II) fglrx(0): EDID Version: 1.3 (II) fglrx(0): Digital Display Input (II) fglrx(0): Max Image Size [cm]: horiz.: 53 vert.: 30 (II) fglrx(0): Gamma: 2.20 (II) fglrx(0): DPMS capabilities: StandBy Suspend Off (II) fglrx(0): Supported color encodings: RGB 4:4:4 YCrCb 4:4:4 (II) fglrx(0): Default color space is primary color space (II) fglrx(0): First detailed timing is preferred mode (II) fglrx(0): redX: 0.640 redY: 0.330 greenX: 0.300 greenY: 0.600 (II) fglrx(0): blueX: 0.150 blueY: 0.060 whiteX: 0.312 whiteY: 0.329 (II) fglrx(0): Supported established timings: (II) fglrx(0): 720x400@70Hz (II) fglrx(0): 640x480@60Hz (II) fglrx(0): 640x480@75Hz (II) fglrx(0): 800x600@60Hz (II) fglrx(0): 800x600@75Hz (II) fglrx(0): 1024x768@60Hz (II) fglrx(0): 1024x768@75Hz (II) fglrx(0): 1280x1024@75Hz (II) fglrx(0): Manufacturer's mask: 0 (II) fglrx(0): Supported standard timings: (II) fglrx(0): #0: hsize: 1152 vsize 864 refresh: 75 vid: 20337 (II) fglrx(0): #1: hsize: 1280 vsize 1024 refresh: 60 vid: 32897 (II) fglrx(0): #2: hsize: 1920 vsize 1080 refresh: 60 vid: 49361 (II) fglrx(0): Supported detailed timing: (II) fglrx(0): clock: 148.5 MHz Image Size: 531 x 298 mm (II) fglrx(0): h_active: 1920 h_sync: 2008 h_sync_end 2052 h_blank_end 2200 h_border: 0 (II) fglrx(0): v_active: 1080 v_sync: 1084 v_sync_end 1089 v_blanking: 1125 v_border: 0 (II) fglrx(0): Serial No: Y183D8CF0TFU (II) fglrx(0): Monitor name: DELL S2409W (II) fglrx(0): Ranges: V min: 50 V max: 76 Hz, H min: 30 H max: 83 kHz, PixClock max 170 MHz (II) fglrx(0): EDID (in hex): (II) fglrx(0): 00ffffffffffff0010ac38a055465430 (II) fglrx(0): 3312010380351e78eeee91a3544c9926 (II) fglrx(0): 0f5054a54b00714f8180d1c001010101 (II) fglrx(0): 010101010101023a801871382d40582c (II) fglrx(0): 4500132a2100001e000000ff00593138 (II) fglrx(0): 3344384346305446550a000000fc0044 (II) fglrx(0): 454c4c205332343039570a20000000fd (II) fglrx(0): 00324c1e5311000a2020202020200059 (II) fglrx(0): End of Display0 EDID data -------------------- (II) fglrx(0): Output DFP2 has no monitor section (II) fglrx(0): Output DFP_EXTTMDS has no monitor section (II) fglrx(0): Output CRT1 has no monitor section (II) fglrx(0): Output CRT2 has no monitor section (II) fglrx(0): Output DFP2 disconnected (II) fglrx(0): Output DFP_EXTTMDS connected (II) fglrx(0): Output CRT1 disconnected (II) fglrx(0): Output CRT2 disconnected (II) fglrx(0): Using exact sizes for initial modes (II) fglrx(0): Output DFP_EXTTMDS using initial mode 1920x1080 (II) fglrx(0): DPI set to (96, 96) (II) fglrx(0): Adapter ATI Radeon HD 4800 Series has 2 configurable heads and 1 displays connected. 
(==) fglrx(0): QBS disabled (==) fglrx(0): PseudoColor visuals disabled (II) Loading sub module "ramdac" (II) LoadModule: "ramdac" (II) Module "ramdac" already built-in (==) fglrx(0): NoAccel = NO (==) fglrx(0): NoDRI = NO (==) fglrx(0): Capabilities: 0x00000000 (==) fglrx(0): CapabilitiesEx: 0x00000000 (==) fglrx(0): OpenGL ClientDriverName: "fglrx_dri.so" (==) fglrx(0): UseFastTLS=0 (==) fglrx(0): BlockSignalsOnLock=1 (--) Depth 24 pixmap format is 32 bpp (II) Loading extension ATIFGLRXDRI (II) fglrx(0): doing swlDriScreenInit (II) fglrx(0): swlDriScreenInit for fglrx driver ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 17, (OK) ukiOpenByBusid: ukiOpenMinor returns 17 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 17, (OK) ukiOpenByBusid: ukiOpenMinor returns 17 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 (II) fglrx(0): [uki] DRM interface version 1.0 (II) fglrx(0): [uki] created "fglrx" driver at busid "PCI:1:0:0" (II) fglrx(0): [uki] added 8192 byte SAREA at 0x2000 (II) fglrx(0): [uki] mapped SAREA 0x2000 to 0xb6996000 (II) fglrx(0): [uki] framebuffer handle = 0x3000 (II) fglrx(0): [uki] added 1 reserved context for kernel (II) fglrx(0): swlDriScreenInit done (II) fglrx(0): Kernel Module Version Information: (II) fglrx(0): Name: fglrx (II) fglrx(0): Version: 8.72.11 (II) fglrx(0): Date: Apr 8 2010 (II) fglrx(0): Desc: ATI FireGL DRM kernel module (II) fglrx(0): Kernel Module version matches driver. (II) fglrx(0): Kernel Module Build Time Information: (II) fglrx(0): Build-Kernel UTS_RELEASE: 2.6.32-22-generic-pae (II) fglrx(0): Build-Kernel MODVERSIONS: yes (II) fglrx(0): Build-Kernel __SMP__: yes (II) fglrx(0): Build-Kernel PAGE_SIZE: 0x1000 (II) fglrx(0): [uki] register handle = 0x00004000 (II) fglrx(0): DRI initialization successfull! (II) fglrx(0): FBADPhys: 0xf00000000 FBMappedSize: 0x01068000 (II) fglrx(0): FBMM initialized for area (0,0)-(1920,2240) (II) fglrx(0): FBMM auto alloc for area (0,0)-(1920,1920) (front color buffer - assumption) (II) fglrx(0): Largest offscreen area available: 1920 x 320 (==) fglrx(0): Backing store disabled (II) Loading extension FGLRXEXTENSION (==) fglrx(0): DPMS enabled (II) fglrx(0): Initialized in-driver Xinerama extension (**) fglrx(0): Textured Video is enabled. 
(II) LoadModule: "glesx" (II) Loading /usr/lib/xorg/extra-modules/modules/glesx.so (II) Module glesx: vendor="X.Org Foundation" compiled for 1.7.1, module version = 1.0.0 (II) Loading extension GLESX (II) Loading sub module "xaa" (II) LoadModule: "xaa" (II) Loading /usr/lib/xorg/modules/libxaa.so (II) Module xaa: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.2.1 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): GLESX enableFlags = 94 (II) fglrx(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles Solid Horizontal and Vertical Lines Driver provided ScreenToScreenBitBlt replacement Driver provided FillSolidRects replacement (II) fglrx(0): GLESX is enabled (II) LoadModule: "amdxmm" (II) Loading /usr/lib/xorg/extra-modules/modules/amdxmm.so (II) Module amdxmm: vendor="X.Org Foundation" compiled for 1.7.1, module version = 1.0.0 (II) Loading extension AMDXVOPL (II) fglrx(0): UVD2 feature is available (II) fglrx(0): Enable composite support successfully (II) fglrx(0): X context handle = 0x1 (II) fglrx(0): [DRI] installation complete (==) fglrx(0): Silken mouse enabled (==) fglrx(0): Using HW cursor of display infrastructure! (II) fglrx(0): Disabling in-server RandR and enabling in-driver RandR 1.2. (--) RandR disabled (II) Found 2 VGA devices: arbiter wrapping enabled (II) Initializing built-in extension Generic Event Extension (II) Initializing built-in extension SHAPE (II) Initializing built-in extension MIT-SHM (II) Initializing built-in extension XInputExtension (II) Initializing built-in extension XTEST (II) Initializing built-in extension BIG-REQUESTS (II) Initializing built-in extension SYNC (II) Initializing built-in extension XKEYBOARD (II) Initializing built-in extension XC-MISC (II) Initializing built-in extension SECURITY (II) Initializing built-in extension XINERAMA (II) Initializing built-in extension XFIXES (II) Initializing built-in extension RENDER (II) Initializing built-in extension RANDR (II) Initializing built-in extension COMPOSITE (II) Initializing built-in extension DAMAGE ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 18, (OK) ukiOpenByBusid: ukiOpenMinor returns 18 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 18, (OK) ukiOpenByBusid: ukiOpenMinor returns 18 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 (II) AIGLX: Loaded and initialized /usr/lib/dri/fglrx_dri.so (II) GLX: Initialized DRI GL provider for screen 0 (II) fglrx(0): Enable the clock gating! 
(II) fglrx(0): Setting screen physical size to 507 x 285 (II) XKB: reuse xkmfile /var/lib/xkb/server-B20D7FC79C7F597315E3E501AEF10E0D866E8E92.xkm (II) config/udev: Adding input device Power Button (/dev/input/event1) (**) Power Button: Applying InputClass "evdev keyboard catchall" (II) LoadModule: "evdev" (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so (II) Module evdev: vendor="X.Org Foundation" compiled for 1.7.6, module version = 2.3.2 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 7.0 (**) Power Button: always reports core events (**) Power Button: Device: "/dev/input/event1" (II) Power Button: Found keys (II) Power Button: Configuring as keyboard (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Power Button (/dev/input/event0) (**) Power Button: Applying InputClass "evdev keyboard catchall" (**) Power Button: always reports core events (**) Power Button: Device: "/dev/input/event0" (II) Power Button: Found keys (II) Power Button: Configuring as keyboard (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Logitech USB-PS/2 Optical Mouse (/dev/input/event3) (**) Logitech USB-PS/2 Optical Mouse: Applying InputClass "evdev pointer catchall" (**) Logitech USB-PS/2 Optical Mouse: always reports core events (**) Logitech USB-PS/2 Optical Mouse: Device: "/dev/input/event3" (II) Logitech USB-PS/2 Optical Mouse: Found 12 mouse buttons (II) Logitech USB-PS/2 Optical Mouse: Found scroll wheel(s) (II) Logitech USB-PS/2 Optical Mouse: Found relative axes (II) Logitech USB-PS/2 Optical Mouse: Found x and y relative axes (II) Logitech USB-PS/2 Optical Mouse: Configuring as mouse (**) Logitech USB-PS/2 Optical Mouse: YAxisMapping: buttons 4 and 5 (**) Logitech USB-PS/2 Optical Mouse: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "Logitech USB-PS/2 Optical Mouse" (type: MOUSE) (II) Logitech USB-PS/2 Optical Mouse: initialized for relative axes. 
(II) config/udev: Adding input device Logitech USB-PS/2 Optical Mouse (/dev/input/mouse1) (II) No input driver/identifier specified (ignoring) (II) config/udev: Adding input device Logitech USB Multimedia Keyboard (/dev/input/event4) (**) Logitech USB Multimedia Keyboard: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Multimedia Keyboard: always reports core events (**) Logitech USB Multimedia Keyboard: Device: "/dev/input/event4" (II) Logitech USB Multimedia Keyboard: Found keys (II) Logitech USB Multimedia Keyboard: Configuring as keyboard (II) XINPUT: Adding extended input device "Logitech USB Multimedia Keyboard" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Logitech USB Multimedia Keyboard (/dev/input/event5) (**) Logitech USB Multimedia Keyboard: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Multimedia Keyboard: always reports core events (**) Logitech USB Multimedia Keyboard: Device: "/dev/input/event5" (II) Logitech USB Multimedia Keyboard: Found keys (II) Logitech USB Multimedia Keyboard: Configuring as keyboard (II) XINPUT: Adding extended input device "Logitech USB Multimedia Keyboard" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device KEYBOARD (/dev/input/event6) (**) KEYBOARD: Applying InputClass "evdev keyboard catchall" (**) KEYBOARD: always reports core events (**) KEYBOARD: Device: "/dev/input/event6" (II) KEYBOARD: Found keys (II) KEYBOARD: Configuring as keyboard (II) XINPUT: Adding extended input device "KEYBOARD" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device KEYBOARD (/dev/input/event7) (**) KEYBOARD: Applying InputClass "evdev keyboard catchall" (**) KEYBOARD: always reports core events (**) KEYBOARD: Device: "/dev/input/event7" (II) KEYBOARD: Found 14 mouse buttons (II) KEYBOARD: Found scroll wheel(s) (II) KEYBOARD: Found relative axes (II) KEYBOARD: Found keys (II) KEYBOARD: Configuring as mouse (II) KEYBOARD: Configuring as keyboard (**) KEYBOARD: YAxisMapping: buttons 4 and 5 (**) KEYBOARD: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "KEYBOARD" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (EE) KEYBOARD: failed to initialize for relative axes. 
(II) config/udev: Adding input device KEYBOARD (/dev/input/mouse2) (II) No input driver/identifier specified (ignoring) (II) config/udev: Adding input device Macintosh mouse button emulation (/dev/input/event2) (**) Macintosh mouse button emulation: Applying InputClass "evdev pointer catchall" (**) Macintosh mouse button emulation: always reports core events (**) Macintosh mouse button emulation: Device: "/dev/input/event2" (II) Macintosh mouse button emulation: Found 3 mouse buttons (II) Macintosh mouse button emulation: Found relative axes (II) Macintosh mouse button emulation: Found x and y relative axes (II) Macintosh mouse button emulation: Configuring as mouse (**) Macintosh mouse button emulation: YAxisMapping: buttons 4 and 5 (**) Macintosh mouse button emulation: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "Macintosh mouse button emulation" (type: MOUSE) (II) Macintosh mouse button emulation: initialized for relative axes. (II) config/udev: Adding input device Macintosh mouse button emulation (/dev/input/mouse0) (II) No input driver/identifier specified (ignoring) (II) fglrx(0): Restoring Recent Mode via PCS is not supported in RANDR 1.2 capable environments

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing Search Platform, there are times when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search on your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find; this typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search again, these facets need to continually refresh. The whole experience allows users to explore alternate brands and price ranges, or find products they hadn't initially thought of or were looking for, in a bid to enhance cross-sell in the retail environment.

    The second advantage of this type of search, from a business perspective, is the ability to harvest the search results to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing profiling data, you can still achieve the same result by recording the search facets used within the search sequence. Consider a faceted search on eBay using the search term "server". By creating a search profile from a click-through of Computer & Networking -> Servers -> Dell -> New, and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sell or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in.

    This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat, using the SolrNet C# library to interface to the Java platform. Also, I am not going to talk about configuring Solr indexing – but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, applying an XSLT transform to the file to make it conform to the Solr schema, and using a simple HTTP POST to send it to the search engine for indexing.
    Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases means the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library, as sequence components must be strongly named to be deployed. Once you are inside your new project, add a new class file and add references to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base class Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method.

    As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation is the object in which we receive our search parameters, the cache allows access to the Commerce Server cache so we can store regularly accessed information, and the response object is the object upon which we return the result of our search. Inside this method is simply where we are going to inject the logic for our third party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here; I would however highly recommend looking at the SolrNet wiki, as it has some great explanations of how the API works.

    What you will find, however, is that there are some further extensions required when attempting to integrate a custom search provider. Firstly, out of the box the CommerceQueryOperation you receive in this method when conducting a search against a catalog is specifically geared towards a SQL Full Text search, with properties such as a Where clause. To make the operation you receive more relevant, you will need to create another class, this time derived from Microsoft.Commerce.Contracts.Messages.CommerceSearchCriteria, and within it detail the properties you require to submit as parameters to the Solr search API. My example looks like this:

        [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")]
        public class CommerceCatalogSolrSearch : CommerceSearchCriteria
        {
            private Dictionary<string, string> _facetQueries;

            public CommerceCatalogSolrSearch()
            {
                _facetQueries = new Dictionary<String, String>();
            }

            public Dictionary<String, String> FacetQueries
            {
                get { return _facetQueries; }
                set { _facetQueries = value; }
            }

            public String SearchPhrase { get; set; }
            public int PageIndex { get; set; }
            public int PageSize { get; set; }
            public IEnumerable<String> Facets { get; set; }
            public string Sort { get; set; }

            public new int FirstItemIndex
            {
                get { return (PageIndex - 1) * PageSize; }
            }

            public int LastItemIndex
            {
                get { return FirstItemIndex + PageSize; }
            }
        }

    To allow you to construct a CommerceQueryOperation call within the API, you will also need to construct another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the search criteria you have just created and expose the properties you want set.
    My message builder looks like this:

        public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder
        {
            private CommerceCatalogSolrSearch _solrSearch;

            public CommerceCatalogSolrSearchBuilder()
            {
                _solrSearch = new CommerceCatalogSolrSearch();
            }

            public String SearchPhrase
            {
                get { return _solrSearch.SearchPhrase; }
                set { _solrSearch.SearchPhrase = value; }
            }

            public int PageIndex
            {
                get { return _solrSearch.PageIndex; }
                set { _solrSearch.PageIndex = value; }
            }

            public int PageSize
            {
                get { return _solrSearch.PageSize; }
                set { _solrSearch.PageSize = value; }
            }

            public Dictionary<String, String> FacetQueries
            {
                get { return _solrSearch.FacetQueries; }
                set { _solrSearch.FacetQueries = value; }
            }

            public String[] Facets
            {
                get { return _solrSearch.Facets.ToArray(); }
                set { _solrSearch.Facets = value; }
            }

            public override CommerceSearchCriteria ToSearchCriteria()
            {
                return _solrSearch;
            }
        }

    Once you have these two classes in place, you can safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the CommerceCatalogSolrSearch operation you have just created, e.g.

        public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation)
        {
            var searchCriteria = operation as CommerceQueryOperation;
            if (searchCriteria == null)
                throw new Exception("No search criteria present");

            var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria;
            if (local == null)
                throw new Exception("Unexpected Search Criteria in Operation");

            return local;
        }

    Now that you have all of your search parameters present, you can go off and call the external search platform API. You will of course get proprietary objects returned, so the next step in the process is to convert the results coming back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method for the interface you get a single Translate method which takes a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake I have hard-coded the mappings in this example; however, best practice would dictate that you map the objects using your MetadataDefinitions.xml file.

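    As an aside, since the mechanics of the Solr call itself are deliberately glossed over above, here is a rough sketch of what it might look like with SolrNet. This is an illustration under assumptions, not the post's actual code: it presumes a SearchProduct document type, that SolrNet was initialised at startup (e.g. Startup.Init<SearchProduct>("http://localhost:8983/solr")), and that criteria is the CommerceCatalogSolrSearch retrieved above.

        // Hypothetical sketch: resolve the SolrNet operations instance registered at startup.
        var solr = ServiceLocator.Current.GetInstance<ISolrOperations<SearchProduct>>();

        var options = new QueryOptions
        {
            Start = criteria.FirstItemIndex, // paging offset derived from PageIndex/PageSize
            Rows = criteria.PageSize,
            Facet = new FacetParameters
            {
                Queries = criteria.Facets
                    .Select(f => (ISolrFacetQuery) new SolrFacetFieldQuery(f))
                    .ToList()
            }
        };

        var results = solr.Query(new SolrQuery(criteria.SearchPhrase), options);
        // results enumerates the matching SearchProduct documents, and results.FacetFields
        // carries the facet counts - the inputs that the translators below convert.
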
    Once complete, your translator would look something like the following:

        public class SolrEntityTranslator : IToCommerceEntityTranslator
        {
            #region IToCommerceEntityTranslator Members

            public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn)
            {
                if (source.GetType().Equals(typeof (SearchProduct)))
                {
                    var searchResult = (SearchProduct) source;

                    destinationCommerceEntity.Id = searchResult.ProductId;
                    destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title);
                    destinationCommerceEntity.ModelName = "Product";
                }
            }

            #endregion
        }

    Once you have a translator in place, you can safely map the results of your search platform into Commerce Entities and attach them to the CommerceResponse object in a fashion similar to this:

        foreach (SearchProduct result in matchingProducts)
        {
            var destinationEntity = new CommerceEntity(_returnModelName);

            Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties);
            response.CommerceEntities.Add(destinationEntity);
        }

    In Solr I actually have two objects being returned – a product and a collection of facets – so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the Translator helper class separately. When all of this is pieced together, you have successfully completed the extensibility point coding: you will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it, and identify its signature. Next you need to put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full Text component. You will also need to add your translators to the Translators node of your Channel.Config, and lastly add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file.

    Your configuration is now complete, and you should be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third party search platform and return CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied, then simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search response first). Now to call your code you simply request it as per any other CommerceQuery operation, but taking into account that you may be receiving multiple types of CommerceEntity returned:

        public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount)
        {
            var products = new List<Product>();
            var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();

            query.SearchCriteria.PageIndex = recordIndex;
            query.SearchCriteria.PageSize = recordsPerPage;
            query.SearchCriteria.SearchPhrase = searchPhrase;
            query.SearchCriteria.FacetQueries = facetQueries;

            totalItemCount = 0;
            CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest());
            var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;

            // No results. Return the empty list
            if (queryResponse != null && queryResponse.CommerceEntities.Count == 0)
                return new KeyValuePair<FacetCollection, List<Product>>();

            totalItemCount = (int)queryResponse.TotalItemCount;

            // Prepare a multi-operation to retrieve the product variants
            var multiOperation = new CommerceMultiOperation();

            // Add products to results
            foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product"))
            {
                var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition);
                productQuery.SearchCriteria.Model.Id = product.Id;
                productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;

                var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);

                productQuery.RelatedOperations.Add(variantQuery);

                multiOperation.Add(productQuery);
            }

            CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest());
            foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses)
            {
                if (queryOpResponse.CommerceEntities.Count() > 0)
                    products.Add(queryOpResponse.CommerceEntities[0]);
            }

            // Get the facet collection
            FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();

            return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products);
        }

    ..And that is it – simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third party search platform, whilst still maintaining a unified API in the remainder of your code. This logic stands for any extensibility within Commerce Server which requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year’s conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February ( but due to NDA restrictions I could not share them with the developer community at large ) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference.

    First, let’s travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources in such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high profile start-ups ( i.e. Twitter ) had publicly announced their intent to use Rails.

    Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially, by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer.

    In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft’s Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, “If it’s not MVC, it’s crap”. Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline. So over the past few years some horror stories have begun to bubble to the surface of software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC.
    These large-scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision: that Microsoft was promoting MVC as “the future”. These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in less stable, less scalable and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which has been under active development for more than 18 months, and revert back to WebForms.

    The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war, because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel “dirty”. The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result many new software developers entering the market are gravitating to environments where the development model seems more simple and intuitive ( i.e. PHP or Ruby ).

    At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting Sharepoint as a platform for building internal enterprise web applications. Sharepoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, Sharepoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines.

    The introduction of ASP.NET MVC also made some of the third party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers.
And even after MVC was introduced, these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX compared to what was possible in MVC. And since many developers were comfortable and satisfied with these third-party solutions, demand remained strong and the third-party web control market continued to prosper despite the availability of MVC.

While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it’s a native application running on a smartphone or tablet, a web browser taking advantage of HTML5 and JavaScript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust JavaScript libraries like jQuery and Knockout for browser-based, client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC.

In response to these new industry trends, Microsoft did what it always does – it immediately poured resources into developing a solution which would ensure it remains relevant and competitive in the web space. This work culminated in a new framework branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band, with a dependency only on ASP.NET 4.0, it can be used immediately in a variety of production scenarios.

So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the “Future of ASP.NET”. In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of “ASP.NET Web Stack”, coined a couple of years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on CodePlex a few months back, will eventually stick.

Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The “One ASP.NET” strategy will promote the use of all frameworks – WebForms, MVC, and Web API – even within the same web project. 
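To make the Web API programming model concrete, here is a minimal sketch of a convention-based controller. The class name and data are illustrative, not from any shipping product; this assumes the Web API bits that currently ship in the MVC4 package (System.Web.Http):

    using System.Collections.Generic;
    using System.Web.Http;

    // Convention-based routing: with the default "api/{controller}/{id}" route,
    // GET /api/orders maps to GetAllOrders() and GET /api/orders/1 maps to GetOrder(1).
    public class OrdersController : ApiController
    {
        private static readonly List<string> Orders =
            new List<string> { "TPS Report", "Red Stapler" };

        // GET api/orders
        public IEnumerable<string> GetAllOrders()
        {
            return Orders;
        }

        // GET api/orders/{id}
        public string GetOrder(int id)
        {
            if (id < 0 || id >= Orders.Count)
            {
                // embraces native HTTP: a missing resource is a 404, not a SOAP fault
                throw new HttpResponseException(System.Net.HttpStatusCode.NotFound);
            }
            return Orders[id];
        }
    }

Note there is no page, no view, and no markup here – content negotiation serializes the return value to JSON or XML based on the client’s Accept header, which is exactly the service-oriented, client-agnostic model described above.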
    Basically, the message is to utilize the appropriate aspects of each framework to solve your business problems. Instead of steering developers to a fork in the road, the plan is to educate them that “hybrid” applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with the client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and SharePoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again.

And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward (MVC4 may in fact be the last significant iteration of this framework). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn’t, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft spends the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense.

So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry’s trend towards web services and client-side development. We decided to utilize a “hybrid” model which would provide compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework which is actually built on top of MVC2 (we chose to leverage MVC because it had the most intuitive, lightweight REST implementation in the .NET stack). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke (which is expected to be released in Q4 2012) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the “One ASP.NET” strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.
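As promised earlier: a minimal sketch of how MVC-style separation of concerns can be achieved in WebForms, using a Model-View-Presenter approach. All of the names below are illustrative – this is not code from DotNetNuke or any shipping framework – but it shows the discipline involved: the presenter holds the page logic and has no dependency on System.Web, so it can be unit tested against a fake view, leaving the ASPX code-behind as a thin adapter.

    // The view contract: the ASPX page implements this, and so can a test double.
    public interface IBalanceView
    {
        string AccountId { get; }
        void ShowBalance(decimal balance);
        void ShowError(string message);
    }

    // A hypothetical data-access seam, injected so tests can stub it out.
    public interface IAccountRepository
    {
        decimal? GetBalance(string accountId);
    }

    // The presenter carries the logic and can be exercised by plain unit tests.
    public class BalancePresenter
    {
        private readonly IBalanceView _view;
        private readonly IAccountRepository _repository;

        public BalancePresenter(IBalanceView view, IAccountRepository repository)
        {
            _view = view;
            _repository = repository;
        }

        public void Load()
        {
            var balance = _repository.GetBalance(_view.AccountId);
            if (balance.HasValue)
                _view.ShowBalance(balance.Value);
            else
                _view.ShowError("Unknown account: " + _view.AccountId);
        }
    }

The code-behind then simply implements IBalanceView and calls presenter.Load() from Page_Load. Keeping the page itself free of business logic is the “diligence and discipline” part.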

    Read the article

  • TLS (STARTTLS) Failure After 10.6 Upgrade to Open Directory Master

    - by Thomas Kishel
    Hello,

Environment: Mac OS X 10.6.3 install/import of a Mac OS X 10.5.8 Open Directory Master server. After that upgrade, LDAP+TLS fails on our Mac OS X 10.5, 10.6, CentOS, Debian, and FreeBSD clients (Apache2 and PAM).

Testing using ldapsearch:

ldapsearch -ZZ -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid

... fails with:

ldap_start_tls: Protocol error (2)

Testing adding "-d 9" fails with:

res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>

Testing without requiring STARTTLS, or with LDAPS:

ldapsearch -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid
ldapsearch -H ldaps://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid

... succeeds with:

# donaldr, users, darkhorse.com
dn: uid=donaldr,cn=users,dc=darkhorse,dc=com
uid: donaldr

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
result: 0 Success

(We are specifying "TLS_REQCERT never" in /etc/openldap/ldap.conf.)

Testing with openssl:

openssl s_client -connect gnome.darkhorse.com:636 -showcerts -state

... succeeds:

CONNECTED(00000003)
SSL_connect:before/connect initialization
SSL_connect:SSLv2/v3 write client hello A
SSL_connect:SSLv3 read server hello A
depth=1 /C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
verify error:num=19:self signed certificate in certificate chain
verify return:0
SSL_connect:SSLv3 read server certificate A
SSL_connect:SSLv3 read server done A
SSL_connect:SSLv3 write client key exchange A
SSL_connect:SSLv3 write change cipher spec A
SSL_connect:SSLv3 write finished A
SSL_connect:SSLv3 flush data
SSL_connect:SSLv3 read finished A
---
Certificate chain
 0 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
   i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
 1 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
   i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
---
Server certificate
-----BEGIN CERTIFICATE-----
<deleted for brevity>
-----END CERTIFICATE-----
subject=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
issuer=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
---
No client certificate CA names sent
---
SSL handshake has read 2640 bytes and written 325 bytes
---
New, TLSv1/SSLv3, Cipher is AES256-SHA
Server public key is 1024 bit
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol   : TLSv1
    Cipher     : AES256-SHA
    Session-ID : D3F9536D3C64BAAB9424193F81F09D5C53B7D8E7CB5A9000C58E43285D983851
    Session-ID-ctx:
    Master-Key : E224CC065924DDA6FABB89DBCC3E6BF89BEF6C0BD6E5D0B3C79E7DE927D6E97BF12219053BA2BB5B96EA2F6A44E934D3
    Key-Arg    : None
    Start Time : 1271202435
    Timeout    : 300 (sec)
    Verify return code: 0 (ok)

So we believe that the slapd daemon is reading our certificate and writing it to LDAP clients.

Apple Server Admin adds ProgramArguments ("-h ldaps:///") to /System/Library/LaunchDaemons/org.openldap.slapd.plist and TLSCertificateFile, TLSCertificateKeyFile, TLSCACertificateFile, and TLSCertificatePassphraseTool to /etc/openldap/slapd_macosxserver.conf when enabling SSL in the LDAP section of the Open Directory service. While that appears to be enough for LDAPS, it appears that it is not enough for TLS. 
Comparing our 10.6 and 10.5 slapd.conf and slapd_macosxserver.conf configuration files yields no clues. Replacing our certificate (generated with a self-signed CA) with an Apple Server Admin generated self-signed certificate results in no change in ldapsearch results.

Setting -d to 256 in /System/Library/LaunchDaemons/org.openldap.slapd.plist logs:

4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 EXT oid=1.3.6.1.4.1.1466.20037
4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 RESULT tag=120 err=2 text=unsupported extended operation

Any debugging advice much appreciated.

-- Tom Kishel
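For readers who want to reproduce this failure from code rather than from ldapsearch: on a Windows client with .NET available, System.DirectoryServices.Protocols can issue the same StartTLS extended operation (OID 1.3.6.1.4.1.1466.20037) that ldapsearch -ZZ sends. A rough sketch, using the hostname from the question; the certificate callback deliberately accepts anything, mirroring "TLS_REQCERT never", and is for diagnosis only:

    using System;
    using System.DirectoryServices.Protocols;

    class StartTlsProbe
    {
        static void Main()
        {
            var connection = new LdapConnection("gnome.darkhorse.com:389");
            connection.SessionOptions.ProtocolVersion = 3;
            // For diagnosis only: accept any server certificate, like TLS_REQCERT never.
            connection.SessionOptions.VerifyServerCertificate = (conn, cert) => true;
            try
            {
                // Sends the StartTLS extended operation over the plain LDAP port.
                connection.SessionOptions.StartTransportLayerSecurity(null);
                Console.WriteLine("StartTLS succeeded.");
            }
            catch (Exception ex)
            {
                // A server that rejects the extended operation fails here, matching
                // the "unsupported extended operation" entries in the slapd log.
                Console.WriteLine("StartTLS failed: " + ex.Message);
            }
        }
    }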

    Read the article

  • Understanding the value of Customer Experience & Loyalty for the Telecommunications Industry

    - by raul.goycoolea
    Worried by economic woes and market forces, especially in mature markets, communications service providers (CSPs) increasingly focus on improving customer experience. In fact, it seems difficult to find a major message by a C-level executive in the developed world that does not include something on "meeting and exceeding customers' needs". Yet in customer satisfaction studies by prominent firms, CSPs frequently fall short of the leadership demonstrated by other industries that take customer-centric approaches to their bottom-line strategies.

Consider the following: despite the continued impact of the global economic crisis, in July 2010 Apple Computer posted record revenue and net quarterly profit. Those who attribute the results primarily to the iPhone 4 launch should note that Apple also shipped around 30% more Macintosh computers than in the same period the previous year. Even sales of the iPod line increased by 8% in a highly commoditized, shrinking media player market. Finally, Apple began selling iPads during the quarter, with total sales of more than 3 million units.

What does Apple have that the others lack? Well, some great products (and services) to be sure, but it also excels at customer service and support, marketing, and distribution, and has one of the strongest brands globally. Its products are useful, simple to use, easy to acquire and augment, high quality, and considered very cool. They also evoke such an emotional response from many of Apple's customers that they turn up their noses at competitive products.

In other words, Apple appears to have mastered virtually every aspect of customer experience and the resultant loyalty of its customer base – even in difficult financial times. Through that unwavering customer focus, Apple continues to drive its revenues and profits to new heights. Other customer loyalty leaders like Wal-Mart, Google, Toyota and Honda are also doing well by focusing on customer experience as an essential driver of profitability. Service providers should note this performance and ask themselves how they might leverage the same principles to increase their own profitability. After all, that is what customer experience and loyalty are all about: profitability.

To successfully manage all the critical touch points of customer experience, CSPs must shun the one-size-fits-all approach. They can no longer afford to view customer service fundamentally as an act of altruism – a mentality that dates back to the industry's civil service days, when CSPs were typically government organizations that were critical to economic development and public safety.

As regulators and public officials have pushed, and continue to push, service providers to new heights of reliability – using incentives and punishments – most CSPs already have some of the fundamental building blocks of customer service in place. Yet despite that history and experience, service providers still lag other industries in providing what is seen as good customer service.

As we observed in the TMF's 2009 Insights Research report, Customer Experience Management: Driving Loyalty & Profitability, there has been a resurgence of interest among CSPs. More and more of them have stated ambitions to catch up with other industries, and they are realizing that good customer service is a powerful strategy for increasing business performance and profitability, not an act of goodwill.

CSPs are recognizing the connection between customer experience and profitability, as demonstrated in many studies. 
For example, according to research by Bain & Company, a 5 percent improvement in customer retention rates can yield as much as a 75 percent increase in profits for companies across a range of industries.

After decades of customer experience strategy formulation, Bain partner and business author Frederick Reichheld considers "would you recommend us to a friend?" to be the ultimate question for a customer. How many times have you or your friends recommended an iPod, iPhone or a Mac? What do your children recommend to their peers? Their peers to them?

There are certain steps service providers have to take to create more personalized relationships with their customers, as well as to reduce churn and increase profitability, all while becoming leaner and more agile. First, they have to define customer experience. We define it as the result of the sum of observations, perceptions, thoughts and feelings arising from interactions and relationships between customers and their service provider(s). Virtually every customer touch point – whether directly or indirectly linked to service providers and their partners – contributes to customer perception, satisfaction, loyalty, and ultimately profitability.

Gaining leadership in customer experience and satisfaction will not be a simple task, as it is affected by virtually every customer-facing aspect of the service provider, and in turn impacts the service provider deeply – especially on the all-important bottom line. The scope of issues affecting customer experience is complex and dynamic. With new services, devices and applications extending the basis of customer experience to domains beyond the direct control of the service provider, it is likely to increase in complexity and dynamism.

Customer loyalty = increased profits

As stated earlier, customer experience programs are not fundamentally altruistic exercises, but a strategic means of improving competitiveness and profitability in the short and long term. Loyalty is essential to deriving long-term profits from customers.

Some of the earliest loyalty programs date back to the 1930s, when packaged goods companies offered embedded coupons for rewards to buyers, and eventually retail chains began offering reward programs to frequent shoppers. These programs continued for decades but were leapfrogged in the 1980s by more aggressive programs from the airlines.

This movement was led by American Airlines, which launched the first full-scale loyalty marketing program of the modern era with the AAdvantage frequent flyer scheme. It was the first to reward frequent fliers with notional air miles that could be accumulated and later redeemed for free travel.

Figure 1: Opportunities example of customer-loyalty-driven profit

Other airlines and travel providers were quick to grasp the incredible value of providing customers with an incentive to use their company exclusively. 
Within a few years, dozens of travel industry companies launched similar initiatives, and now loyalty programs are achieving near-ubiquity in many service industries, especially those in which it is difficult to differentiate offerings by product attributes.

The belief is that increased profitability will result from customer retention efforts because:

•    The cost of acquisition occurs only at the beginning of a relationship: the longer the relationship, the lower the amortized cost;
•    Account maintenance costs decline as a percentage of total costs, or as a percentage of revenue, over the lifetime of the relationship;
•    Long-term customers tend to be less inclined to switch and less price sensitive, which can result in stable unit sales volume and increases in dollar-sales volume;
•    Long-term customers may initiate word-of-mouth promotions and referrals, which cost the company nothing and arguably are the most effective form of advertising;
•    Long-term customers are more likely to buy ancillary products and higher-margin supplemental products;
•    Long-term customers tend to be satisfied with their relationship with the company and are less likely to switch to competitors, making it difficult for new entrants to break in or for competitors to gain market share;
•    Regular customers tend to be less expensive to service, as they are familiar with the processes involved, require less 'education', and are consistent in their order placement;
•    Increased customer retention and loyalty makes the employees' jobs easier and more satisfying. In turn, happy employees feed back into higher customer satisfaction in a virtuous circle.

Figure 2: The virtuous circle of customer loyalty

Figure 2 represents a high-level example of a virtuous cycle driven by customer satisfaction and loyalty, depicting how superiority in product and service offerings, as well as strong customer support by competent employees, leads to higher sales and ultimately profitability. As stated above, this is not a new concept, but succeeding with it is difficult; it has eluded many a company driven to achieve profitability goals. Of course, for this circle to be virtuous, the customer relationship(s) must be profitable.

Trying to maintain the loyalty of unprofitable customers is not a viable business strategy. It is, therefore, important that marketers can assess the profitability of each customer (or customer segment), and either improve or terminate relationships that are not profitable. This means each customer's 'relationship costs' must be understood and compared to their 'relationship revenue'.

Customer lifetime value (CLV) is the most commonly used metric here, as it is generally accepted as a representation of exactly how much each customer is worth in monetary terms, and therefore a determinant of exactly how much a service provider should be willing to spend to acquire or retain that customer. CLV models make several simplifying assumptions and often involve the following inputs:

•    Churn rate represents the percentage of customers who end their relationship with a company in a given period;
•    Retention rate is calculated by subtracting the churn rate percentage from 100;
•    Period/horizon equates to the units of time into which a customer relationship can be divided for analysis. A year is the most commonly used period for this purpose. Customer lifetime value is a multi-period calculation, often projecting three to seven years into the future. In practice, analysis beyond this point is viewed as too speculative to be reliable. 
The model horizon is the number of periods used in the calculation;
•    Periodic revenue is the amount of revenue collected from a customer in a given period (though this is often extended across multiple periods into the future to understand lifetime value), such as usage revenue, revenues anticipated from cross- and upselling, and often some weighting for referrals by a loyal customer to others;
•    Retention cost describes the amount of money the service provider must spend, in a given period, to retain an existing customer. Again, this is often forecast across multiple periods. Retention costs include customer support, billing, promotional incentives and so on;
•    Discount rate means the cost of capital used to discount future revenue from a customer. Discounting is an advanced method used in more sophisticated CLV calculations;
•    Profit margin is the projected profit as a percentage of revenue for the period. This may be reflected as a percentage of gross or net profit. Again, this is generally projected across the model horizon to understand lifetime value.

(A worked sketch of a CLV calculation built from these inputs appears at the end of this article.)

A strong focus on managing these inputs can help service providers realize stronger customer relationships and profits, but there are some obstacles to overcome in achieving accurate calculations of CLV, such as the complexity of allocating costs across the customer base. There are many costs that serve all customers, and these must be properly allocated across the base; a simple proportional allocation across the whole base, or a segment of it, may not accurately reflect the true cost of serving a given customer. This is made worse by the fragmentation of customer information, which is likely to be spread across a variety of product or operations groups and may be difficult to aggregate due to different representations.

In addition, there is the complexity of account relationships and structures to take into consideration. Complex account structures may not be understood or properly represented. For example, a profitable customer may have a separate account for a second home or another family member, which may appear to be unprofitable. If the service provider cannot relate the two accounts, CLV is not properly represented, and any resultant cancellation of the apparently unprofitable account may result in the customer churning from the profitable one.

In summary, if service providers are to realize strong customer relationships and their attendant profits, there must be a very strong focus on data management. This needs to be coupled with analytics that help business managers, and those who work in customer-facing functions, offer highly personalized solutions to customers while maintaining profitability for the service provider.

It's clear that acquiring new customers is expensive. Advertising costs, campaign management expenses, promotional service pricing and discounting, and equipment subsidies make a serious dent in a new customer's profitability. That is especially true given the rising subsidies for smartphone users, which service providers hope will result in greater profits from data services in the future. The situation is made worse by falling prices and greater competition in mature markets.

Customer acquisition through industry consolidation isn't cheap either. A North American service provider spent about $2,000 per subscriber in its acquisition of a smaller company earlier this year. 
While this has allowed it to leapfrog to become the largest mobile service provider in the country, it required a total investment of more than $28 billion (including assumption of the acquiree's debt). While many operating cost synergies clearly made this deal more attractive to the acquiring company, this is certainly an expensive way to acquire customers: the cost per subscriber in this case is not out of line with the prices others have paid for acquisitions.

While growth by acquisition certainly increases overall revenues, it often creates tremendous challenges for profitability. Organic growth through increased customer loyalty and retention is a more effective driver of profit, as well as a stronger predictor of future profitability. Service providers, especially those in mature markets, are increasingly recognizing this and taking steps toward creating a more personalized, flexible and satisfying experience for their customers.

In summary, the clearest path to profitability for companies in virtually all industries is through customer retention and maximization of lifetime value. Service providers would do well to recognize this and focus attention on profitable customer relationships.
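As promised above, here is a minimal sketch of a multi-period CLV calculation using the inputs defined earlier (churn/retention rate, horizon, periodic revenue, retention cost, discount rate and profit margin). The figures are invented for illustration only; a real model would add cross-sell weighting, referral value and proper per-segment cost allocation:

    using System;

    class ClvSketch
    {
        static void Main()
        {
            // Illustrative inputs, following the definitions in the article.
            double churnRate = 0.20;                 // 20% of customers leave each period
            double retentionRate = 1.0 - churnRate;  // retention = 100% - churn
            int horizon = 5;                         // model horizon, in yearly periods
            double periodicRevenue = 600.0;          // revenue collected per period
            double retentionCost = 50.0;             // spend per period to keep the customer
            double discountRate = 0.10;              // cost of capital used for discounting
            double profitMargin = 0.35;              // profit as a fraction of revenue

            double clv = 0.0;
            for (int t = 1; t <= horizon; t++)
            {
                // Probability the customer is still active in period t.
                double survival = Math.Pow(retentionRate, t);

                // Expected profit contribution for the period.
                double periodProfit = (periodicRevenue * profitMargin - retentionCost) * survival;

                // Discount back to present value and accumulate.
                clv += periodProfit / Math.Pow(1.0 + discountRate, t);
            }

            Console.WriteLine("Estimated CLV over {0} periods: {1:C}", horizon, clv);
        }
    }

The output of a model like this is precisely the ceiling described above: the most a provider should rationally spend to acquire or retain that customer.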

    Read the article

  • Cannot connect to MySQL Server on RHEL 5.7

    - by Jeffrey Wong
    I have a standard MySQL Server running on Red Hat 5.7. I have edited /etc/my.cnf to specify the bind address as my server's public IP address:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
# symbolic-links=0

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
bind-address=171.67.88.25
port=3306

And I have also restarted my firewall:

sudo /sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 3306 -j ACCEPT
/sbin/service iptables save

The network administrator has already opened port 3306 for this box. When connecting from a remote computer (running Ubuntu 10.10; the server is running RHEL 5.7), I issue:

mysql -u jeffrey -p --host=171.67.88.25 --port=3306 --socket=/var/lib/mysql/mysql.sock

but receive:

ERROR 2003 (HY000): Can't connect to MySQL server on '171.67.88.25' (113)

I've noticed that the socket file /var/lib/mysql/mysql.sock is blank. Should this be the case?

UPDATE

The result of netstat -an | grep 3306:

tcp   0   0 0.0.0.0:3306   0.0.0.0:*   LISTEN

Result of sudo netstat -tulpen:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   User Inode PID/Program name
tcp   0      0      127.0.0.1:2208   0.0.0.0:*        LISTEN  0    7602  3168/hpiod
tcp   0      0      0.0.0.0:3306     0.0.0.0:*        LISTEN  27   7827  3298/mysqld
tcp   0      0      0.0.0.0:111      0.0.0.0:*        LISTEN  0    5110  2802/portmap
tcp   0      0      0.0.0.0:8787     0.0.0.0:*        LISTEN  0    8431  3326/rserver
tcp   0      0      0.0.0.0:915      0.0.0.0:*        LISTEN  0    5312  2853/rpc.statd
tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  0    7655  3188/sshd
tcp   0      0      127.0.0.1:631    0.0.0.0:*        LISTEN  0    7688  3199/cupsd
tcp   0      0      127.0.0.1:25     0.0.0.0:*        LISTEN  0    8025  3362/sendmail: acce
tcp   0      0      127.0.0.1:2207   0.0.0.0:*        LISTEN  0    7620  3173/python
udp   0      0      0.0.0.0:909      0.0.0.0:*                0    5300  2853/rpc.statd
udp   0      0      0.0.0.0:912      0.0.0.0:*                0    5309  2853/rpc.statd
udp   0      0      0.0.0.0:68       0.0.0.0:*                0    4800  2598/dhclient
udp   0      0      0.0.0.0:36177    0.0.0.0:*                70   8314  3476/avahi-daemon:
udp   0      0      0.0.0.0:5353     0.0.0.0:*                70   8313  3476/avahi-daemon:
udp   0      0      0.0.0.0:111      0.0.0.0:*                0    5109  2802/portmap
udp   0      0      0.0.0.0:631      0.0.0.0:*                0    7691  3199/cupsd

Result of sudo /sbin/iptables -L -v -n:

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target               prot opt in out source     destination
 6373 2110K RH-Firewall-1-INPUT  all  --  *  *   0.0.0.0/0  0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target               prot opt in out source     destination
    0     0 RH-Firewall-1-INPUT  all  --  *  *   0.0.0.0/0  0.0.0.0/0

Chain OUTPUT (policy ACCEPT 1241 packets, 932K bytes)
 pkts bytes target               prot opt in out source     destination

Chain RH-Firewall-1-INPUT (2 references)
 pkts bytes target  prot opt in out source     destination
  572  861K ACCEPT  all  --  lo *   0.0.0.0/0  0.0.0.0/0
    1    28 ACCEPT  icmp --  *  *   0.0.0.0/0  0.0.0.0/0   icmp type 255
    0     0 ACCEPT  esp  --  *  *   0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT  ah   --  *  *   0.0.0.0/0  0.0.0.0/0
   46  6457 ACCEPT  udp  --  *  *   0.0.0.0/0  224.0.0.251 udp dpt:5353
    0     0 ACCEPT  udp  --  *  *   0.0.0.0/0  0.0.0.0/0   udp dpt:631
    0     0 ACCEPT  tcp  --  *  *   0.0.0.0/0  0.0.0.0/0   tcp dpt:631
  782  157K ACCEPT  all  --  *  *   0.0.0.0/0  0.0.0.0/0   state RELATED,ESTABLISHED
    2   120 ACCEPT  tcp  --  *  *   0.0.0.0/0  0.0.0.0/0   state NEW tcp dpt:22
    0     0 ACCEPT  tcp  --  *  *   0.0.0.0/0  0.0.0.0/0   state NEW tcp dpt:443
    0     0 ACCEPT  tcp  --  *  *   0.0.0.0/0  0.0.0.0/0   state NEW tcp dpt:23
    0     0 ACCEPT  tcp  --  *  *   0.0.0.0/0  0.0.0.0/0   state NEW tcp dpt:80
 4970 1086K REJECT  all  --  *  *   0.0.0.0/0  0.0.0.0/0   reject-with icmp-host-prohibited

Result of nmap -P0 -p3306 171.67.88.25:

Host is up (0.027s latency).
PORT     STATE    SERVICE
3306/tcp filtered mysql

Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds

Solution

When everything else fails, go GUI! Run system-config-securitylevel and add port 3306. All done!

(This works, most likely, because the tool inserts the ACCEPT rule into the RH-Firewall-1-INPUT chain ahead of its final catch-all REJECT. The hand-written rule above was appended to the INPUT chain, after the jump into RH-Firewall-1-INPUT, so inbound packets hit the icmp-host-prohibited REJECT before the appended rule was ever evaluated – which is also why the client saw error 113, "No route to host", and nmap reported the port as "filtered". Inserting the rule into RH-Firewall-1-INPUT with iptables -I would presumably have worked as well.)

    Read the article

  • C#/.NET Little Wonders: Interlocked CompareExchange()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

Two posts ago, I discussed the Interlocked Add(), Increment(), and Decrement() methods (here) for adding and subtracting values in a thread-safe, lightweight manner.  Then, last post I talked about the Interlocked Read() and Exchange() methods (here) for safely and efficiently reading and setting 32 or 64 bit values (or references).  This week, we’ll round out the discussion by talking about the Interlocked CompareExchange() method and how it can be put to use to exchange a value if the current value is what you expected it to be.

Dirty reads can lead to bad results

Many of the uses of Interlocked that we’ve explored so far have centered around either reading, setting, or adding values.  But what happens if you want to do something more complex, such as setting a value based on the previous value in some manner?

Perhaps you are creating an application that reads a current balance, applies a deposit, and then saves the new modified balance, where of course you’d want that to happen atomically.  If you read the balance, and the previous balance changes before you save the new one, you’ll have an issue!  Think about it: if we read the current balance as $400, and we are applying a new deposit of $50.75, but meanwhile someone else deposits $200 and sets the total to $600, and then we write a total of $450.75, we’ve lost $200!

Now, certainly for int and long values we can use Interlocked.Add() to handle these cases, and it works well for that.  But what if we want to work with doubles, for example?  Let’s say we wanted to add the numbers from 0 to 99,999 in parallel.  We could do this by spawning several parallel tasks to continuously add to a total:

double total = 0;

Parallel.For(0, 100000, next =>
{
    total += next;   // note: += is not atomic, so this version loses updates
});

Were this run on one thread using a standard for loop, we’d expect an answer of 4,999,950,000 (the sum of all numbers from 0 to 99,999).  But when we run this in parallel as written above, we’ll likely get something far off.  The result of one of my runs, for example, was 1,281,880,740.  That is way off!  If this were banking software we’d be in big trouble with our clients.  So what happened?  The += operator is not atomic: it will read in the current value, add the result, then store it back into the total.  At any point in all of this, another thread could read a “dirty” current total and accidentally “skip” our add.

So, to clean this up, we could use a lock to guarantee concurrency:

double total = 0.0;
object locker = new object();

Parallel.For(0, 100000, next =>
{
    lock (locker)
    {
        total += next;
    }
});

Which will give us the correct result of 4,999,950,000.  One thing to note is that locking can be heavy, especially if the operation being locked over is trivial, or the life of the lock is a high percentage of the work being performed concurrently.  In the case above, the lock consumes pretty much all of the time of each parallel task – and the task being locked on is relatively trivial.

Now, let me put in a disclaimer here before we go further: for most uses, lock is more than sufficient for your needs, and is often the simplest solution!  So, if lock is sufficient for most needs, why would we ever consider another solution? 
The problem with locking is that it can suspend execution of your thread while it waits for the signal that the lock is free.  Moreover, if the operation being locked over is trivial, the lock can add a very high level of overhead.  This is why things like Interlocked.Increment() perform so well: instead of locking just to perform an increment, we perform the increment with an atomic, lockless method.

As with all things performance related, it’s important to profile before jumping to the conclusion that you should optimize everything in your path.  If your profiling shows that locking is causing a high level of waiting in your application, then it’s time to consider lighter alternatives such as Interlocked.

CompareExchange() – exchange the existing value if it equals some value

So let’s look at how we could use CompareExchange() to solve our problem above.  The general syntax of CompareExchange() is:

T CompareExchange<T>(ref T location, T newValue, T expectedValue)

If the value in location == expectedValue, then newValue is exchanged.  Either way, the value in location (before the exchange) is returned.

Actually, CompareExchange() is not one method, but a family of overloaded methods that can take int, long, float, double, pointers, or references.  It cannot take other value types (that is, you can’t CompareExchange() two DateTime instances directly).  Also keep in mind that the version that takes any reference type (the generic overload) only checks for reference equality; it does not call any overridden Equals().

So how does this help us?  Well, we can grab the current total, and exchange the new value if the total hasn’t changed.  That would look like this:

// grab the snapshot
double current = total;

// if the total hasn't changed since I grabbed the snapshot, then
// set it to the new total
Interlocked.CompareExchange(ref total, current + next, current);

So what the code above says is: if the amount in total (1st arg) is the same as the amount in current (3rd arg), then set total to current + next (2nd arg).  This check-and-exchange pair is atomic (and thus thread-safe).

This works if total is the same as our snapshot in current, but the problem is: what happens if they aren’t the same?  Well, we know that in either case we will get the previous value of total (before the exchange) back as a result.  Thus, we can test this against our snapshot to see if it was the value we expected:

// if the value returned is != current, then our snapshot must be out of date
// which means we didn't (and shouldn't) apply current + next
if (Interlocked.CompareExchange(ref total, current + next, current) != current)
{
    // ooops, total was not equal to our snapshot in current, what should we do???
}

So what do we do if we fail?  That’s up to you and the problem you are trying to solve.  It’s possible you would decide to abort the whole transaction, or perhaps do a lightweight spin and try again.  Let’s try that:

double current = total;

// make first attempt...
if (Interlocked.CompareExchange(ref total, current + i, current) != current)
{
    // if we fail, go into a spin wait, spin, and try again until we succeed
    var spinner = new SpinWait();

    do
    {
        spinner.SpinOnce();
        current = total;
    }
    while (Interlocked.CompareExchange(ref total, current + i, current) != current);
}

This is not trivial code, but it illustrates a possible use of CompareExchange(). 
What we are doing is first checking to see if we succeed on the first try, and if so, great!  If not, we create a SpinWait and then repeat the process of SpinOnce(), grab a fresh snapshot, and retry until CompareExchange() succeeds.  You may wonder why we don’t use a simple do-while here; the reason is that it’s more efficient to create the SpinWait only once we absolutely know we need one.

Though not as simple (or maintainable) as a plain lock, this will perform better in many situations.  Comparing an unlocked (and wrong) version, a version using lock, and the Interlocked version of the code, we get the following average times for multiple iterations of adding the sum of 100,000 numbers:

Unlocked money average time: 2.1 ms
Locked money average time: 5.1 ms
Interlocked money average time: 3 ms

So Interlocked.CompareExchange(), while heavier to code, came in lighter than the lock, offering a good compromise of safety and performance when we need to reduce contention.

CompareExchange() – it’s not just for adding stuff…

So that was one simple use of CompareExchange() in the context of adding double values – which meant we couldn’t have used the simpler Interlocked.Add() – but it has other uses as well.  If you think about it, this really works any time you want to create something new based on a current value without using a full lock.  For example, you could use it to create a simple lazy instantiation implementation.  In this case, we want to set the lazy instance only if the previous value was null:

public static class Lazy<T> where T : class, new()
{
    private static T _instance;

    public static T Instance
    {
        get
        {
            // if current is null, we need to create new instance
            if (_instance == null)
            {
                // attempt create, it will only set if previous was null
                Interlocked.CompareExchange(ref _instance, new T(), (T)null);
            }

            return _instance;
        }
    }
}

So, if _instance == null, this will create a new T() and attempt to exchange it with _instance.  If _instance is not null, then it does nothing and we discard the new T() we created.

This is a way to create lazy instances of a type where we are more concerned about locking overhead than about creating an accidental duplicate which is not used.  In fact, the BCL implementation of Lazy<T> offers a similar thread-safety choice for Publication thread safety, where it will not guarantee only one instance was created, but it will guarantee that all readers get the same instance.

Another possible use would be in concurrent collections.  Let’s say, for example, that you are creating your own brand new super stack that uses a linked list paradigm and is “lock free”.  We could use Interlocked.CompareExchange() to do a lockless Push(), which could be more efficient in multi-threaded applications where several threads are pushing and popping on the stack concurrently.

Yes, there are already concurrent collections in the BCL (in .NET 4.0 as part of the TPL), but it’s a fun exercise! 
So let’s assume we have a node like this:

public sealed class Node<T>
{
    // the data for this node
    public T Data { get; set; }

    // the link to the next instance
    internal Node<T> Next { get; set; }
}

Then, perhaps, our stack’s Push() operation might look something like this (a matching TryPop() sketch appears at the end of this post):

public sealed class SuperStack<T>
{
    private volatile Node<T> _head;

    public void Push(T value)
    {
        var newNode = new Node<T> { Data = value, Next = _head };

        if (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next)
        {
            var spinner = new SpinWait();

            do
            {
                spinner.SpinOnce();
                newNode.Next = _head;
            }
            while (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next);
        }
    }

    // ...
}

Notice a similar paradigm here as with adding our doubles before.  What we are doing is creating the new Node with the data to push, and with a Next value being the original node referenced by _head.  This will create our stack behavior (LIFO – Last In, First Out).  Now, we have to set _head to refer to the newNode, but we must first make sure it hasn’t changed!

So we check to see if _head has the same value we saved in our snapshot as newNode.Next, and if so, we set _head to newNode.  This is all done atomically, and the result is _head’s original value.  As long as the original value was what we assumed it was with newNode.Next, then we are good and we set it without a lock!  If not, we SpinWait and try again.

Once again, this is much lighter than locking in highly parallelized code with lots of contention.  If I compare the method above with a similar class using lock, I get the following results for pushing 100,000 items:

Locked SuperStack average time: 6 ms
Interlocked SuperStack average time: 4.5 ms

So, once again, we can get more efficient than a lock, though there is the cost of added code complexity.  Fortunately for you, most of the concurrent collections you’d ever need are already created for you in the System.Collections.Concurrent (here) namespace – for more information, see my Little Wonders – The Concurrent Collections Part 1 (here), Part 2 (here), and Part 3 (here).

Summary

We’ve seen before how the Interlocked class can be used to safely and efficiently add, increment, decrement, read, and exchange values in a multi-threaded environment.  In addition to these, Interlocked CompareExchange() can be used to perform more complex logic without the need of a lock when lock contention is a concern.

The added efficiency, though, comes at the cost of more complex code.  As such, the standard lock is often sufficient for most thread-safety needs.  But if profiling indicates you spend a lot of time waiting for locks, or if you just need a lock for something simple such as an increment, decrement, read, exchange, etc., then consider using the Interlocked class’s methods to reduce wait.

Technorati Tags: C#, CSharp, .NET, Little Wonders, Interlocked, CompareExchange, threading, concurrency
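Addendum: as promised above, here is what the matching lock-free TryPop() might look like, added inside the SuperStack<T> class.  This is my own sketch in the same spirit as the Push() above, not part of the original listing; it retries with a SpinWait whenever another thread wins the race to swap _head:

public bool TryPop(out T value)
{
    var spinner = new SpinWait();

    while (true)
    {
        // snapshot the head; if the stack is empty, report failure
        var oldHead = _head;
        if (oldHead == null)
        {
            value = default(T);
            return false;
        }

        // only unlink oldHead if _head still refers to it
        if (Interlocked.CompareExchange(ref _head, oldHead.Next, oldHead) == oldHead)
        {
            value = oldHead.Data;
            return true;
        }

        // another thread pushed or popped first; spin briefly and retry
        spinner.SpinOnce();
    }
}

It is the same compare-exchange-until-success pattern as before.  For production code, of course, reach for the battle-tested ConcurrentStack<T> in the BCL instead.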

    Read the article

< Previous Page | 113 114 115 116 117 118 119 120 121 122 123 124  | Next Page >