Search Results

Search found 1914 results on 77 pages for 'mongrel cluster'.


  • Why isn't Passenger respecting my custom log format?

    - by Millisami
    I need to change the log format of my Rails app. I put the logger file in the lib directory and required it in the development.rb environment file: require 'hodel_3000_compliant_logger' config.logger = Hodel3000CompliantLogger.new(config.log_path) With that in place, the development.log output should look like this: Jun 28 03:05:13 millisami-notebook rails[18243]: Memory usage: 86888 | PID: 18243 I get exactly this log format when I start my app with script/server (Mongrel). But when I run the app via Passenger, the format being logged is Rails' default. Why doesn't Passenger write to the log file the way Mongrel does?

    Read the article

  • Can anyone tell me what the authors mean on this line?

    - by Anirudh Goel
    I was going through this link: FAT16 Basics to Assemble Clusters. I have read about the structures involved in defining a directory entry in FAT. When giving the example for a FAT16 file, it says the data cluster is 0x03 for the example file MyFile.txt, which means that if we logically compute the data cluster we can reach the first node, which happens to be cluster no. 3. What I fail to understand is what the author is trying to say in the next line: "What we can see in the File Allocation Table at this moment?" How did we suddenly get to the File Allocation Table? Weren't we already there when we were going through the information for MyFile.txt? I can't see why the author suddenly jumped to offset 00000200 and is checking whether the clusters are empty. It would be great if someone could help me understand. Thanks

    Read the article

  • How to find all clusters of forest on map?

    - by Damir
    How can I find all clusters of forest on a map? I have a simple cell class (Type is an enum {RIVER, FOREST, GRASS, HILL}): class Cell { public: Type type; int x; int y; }; and the map is a vector<Cell> grid. Can anyone suggest an algorithm to build list<list<Cell>> clusters, where each inner list contains the FOREST cells of one cluster (a cluster is a set of connected cells; connections can be in eight directions: up, down, left, right, up_right, up_left, down_left, down_right)? I need to find every forest cluster on the map and put each one in its own list<Cell>.
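    A standard way to do this is a flood fill: scan the grid, and whenever you hit a FOREST cell that has not been visited yet, run a breadth-first search that collects every FOREST cell reachable through the eight neighbour offsets. A minimal sketch of the idea follows (shown in Ruby for brevity and assuming the grid is indexed as a hash keyed by [x, y]; translating it onto the Cell/vector<Cell> structures above is mechanical).

        # Flood-fill sketch: cells is assumed to be a Hash mapping [x, y] -> :forest / :grass / :river / :hill
        NEIGHBOURS = [[-1, -1], [-1, 0], [-1, 1], [0, -1], [0, 1], [1, -1], [1, 0], [1, 1]]

        def forest_clusters(cells)
          visited  = {}
          clusters = []
          cells.each_key do |pos|
            next if cells[pos] != :forest || visited[pos]
            cluster = []
            queue   = [pos]
            visited[pos] = true
            until queue.empty?
              x, y = queue.shift
              cluster << [x, y]
              NEIGHBOURS.each do |dx, dy|
                npos = [x + dx, y + dy]
                next unless cells[npos] == :forest && !visited[npos]
                visited[npos] = true
                queue << npos
              end
            end
            clusters << cluster
          end
          clusters
        end

    Each inner array is one connected forest cluster, and the whole pass runs in time proportional to the number of cells, since every cell is enqueued at most once.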

    Read the article

  • Running Ruby app on Apache

    - by TandemAdam
    I have been learning Ruby lately, and I want to upload a test web application to my server, but I can't figure out how to get it to run on my shared hosting. My hosting details: shared hosting with JustHost (see here for the list of features); OS: Linux; Apache: 2.2.11; cPanel: 11.25.0-STABLE; no SSH access; can install Ruby gems; can't install Apache modules; can "Manage Ruby on Rails Applications" through cPanel; the Mongrel gem is installed. I built the following simple HelloWorld Ruby Rack app using Sinatra: #!/usr/bin/ruby require 'rubygems' require 'sinatra' get '/hi' do "Hello World!" end I just can't figure out how to "start" the application. Do I need to tell Mongrel (or maybe Apache) that the application exists somehow? How do I start this app running? I am happy to provide more info if needed.
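    If the host can run Rack applications at all (for example through Passenger, or a rackup-compatible server such as the installed Mongrel), the usual missing piece is a rackup file telling the server what to run. A minimal sketch, assuming the Sinatra code above is saved as hello.rb next to it:

        # config.ru -- rackup file (sketch; assumes the app above lives in hello.rb in the same directory)
        require 'rubygems'
        require File.join(File.dirname(__FILE__), 'hello')

        # Classic-style Sinatra apps register themselves as Sinatra::Application
        run Sinatra::Application

    For a quick local test you can also just execute the file (ruby hello.rb): classic Sinatra starts its own server on port 4567 when the file is run directly. On shared Apache hosting, though, it is whatever Rack handler the host supports (Passenger, or a Mongrel instance proxied by Apache) that actually "starts" the app from the rackup file.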

    Read the article

  • Why does Rails with Passenger/nginx only work in development mode? No logs available

    - by Michael W.
    Hey folks, I have a serious problem with one of our webservers... after an internal alpha test with a mongrel/haproxy cluster that worked well, we wanted to use nginx with Passenger for our first production server (customers will access this server). However, I can only run the Rails app in development mode with Passenger/nginx. The app itself runs perfectly with mongrel or webrick in production mode. My biggest problem is that I can't find ANY information in the nginx or Rails logs (I only get log output when I use mongrel or webrick). Permissions are correct. passenger-status shows that the app is running, but I always get the static 500.html error page... It would be so nice if you guys could give me a hint and help me solve the problem. I put the config at the bottom of the post... This exact config works with rails_env development; but I'd like to use production mode ;-) Thank you very much for your help! Version: Ubuntu 8.04.2 64bit / nginx-0.7.64 (compiled and installed via passenger-2.2.11) cat /opt/nginx/conf/nginx.conf user www-data; worker_processes 4; error_log logs/error.log; #pid logs/nginx.pid; events { worker_connections 1024; } http { passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11; passenger_ruby /usr/bin/ruby1.8; passenger_log_level 3; include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name <<servername>>; root /srv/app01/public; passenger_enabled on; } }

    Read the article

  • Rails: How can I log all requests which take more than 4s to execute?

    - by Fedyashev Nikita
    I have a web app hosted in a cloud environment which can be expanded to multiple web nodes to serve higher load. What I need is to catch the situation when we start getting more and more HTTP requests (assets are stored remotely). How can I do that? The problem, as I see it, is that if we have more requests than the mongrel cluster can handle, the queue will grow. And in our Rails app we can only start counting after mongrel receives the request from the balancer. Any recommendations?
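    One way to measure the part the app can actually see (time spent once a mongrel has picked the request up, not time spent queued at the balancer) is a small Rack middleware that times every request and logs anything over four seconds. A sketch, with the class name and threshold being arbitrary choices:

        # Sketch: Rack middleware that logs requests slower than 4 seconds.
        # Hypothetical name; enable it with e.g. config.middleware.use 'SlowRequestLogger' in environment.rb.
        require 'logger'

        class SlowRequestLogger
          THRESHOLD = 4.0 # seconds

          def initialize(app, logger = Logger.new($stdout))
            @app    = app
            @logger = logger
          end

          def call(env)
            started = Time.now
            status, headers, body = @app.call(env)
            elapsed = Time.now - started
            if elapsed > THRESHOLD
              @logger.warn("SLOW REQUEST #{'%.2f' % elapsed}s: #{env['REQUEST_METHOD']} #{env['PATH_INFO']}")
            end
            [status, headers, body]
          end
        end

    If the real concern is queueing in front of mongrel, the usual trick is to have the balancer stamp each request on the way in (for example an X-Request-Start style header added by nginx or haproxy) and compare that timestamp with the time the Rails app starts processing.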

    Read the article

  • LSI 9285-8e and Supermicro SC837E26-RJBOD1 duplicate enclosure ID and slot numbers

    - by Andy Shinn
    I am working with 2 x Supermicro SC837E26-RJBOD1 chassis connected to a single LSI 9285-8e card in a Supermicro 1U host. There are 28 drives in each chassis for a total of 56 drives in 28 RAID1 mirrors. The problem I am running in to is that there are duplicate slots for the 2 chassis (the slots list twice and only go from 0 to 27). All the drives also show the same enclosure ID (ID 36). However, MegaCLI -encinfo lists the 2 enclosures correctly (ID 36 and ID 65). My question is, why would this happen? Is there an option I am missing to use 2 enclosures effectively? This is blocking me rebuilding a drive that failed in slot 11 since I can only specify enclosure and slot as parameters to replace a drive. When I do this, it picks the wrong slot 11 (device ID 46 instead of device ID 19). Adapter #1 is the LSI 9285-8e, adapter #0 (which I removed due to space limitations) is the onboard LSI. Adapter information: Adapter #1 ============================================================================== Versions ================ Product Name : LSI MegaRAID SAS 9285-8e Serial No : SV12704804 FW Package Build: 23.1.1-0004 Mfg. Data ================ Mfg. Date : 06/30/11 Rework Date : 00/00/00 Revision No : 00A Battery FRU : N/A Image Versions in Flash: ================ BIOS Version : 5.25.00_4.11.05.00_0x05040000 WebBIOS Version : 6.1-20-e_20-Rel Preboot CLI Version: 05.01-04:#%00001 FW Version : 3.140.15-1320 NVDATA Version : 2.1106.03-0051 Boot Block Version : 2.04.00.00-0001 BOOT Version : 06.253.57.219 Pending Images in Flash ================ None PCI Info ================ Vendor Id : 1000 Device Id : 005b SubVendorId : 1000 SubDeviceId : 9285 Host Interface : PCIE ChipRevision : B0 Number of Frontend Port: 0 Device Interface : PCIE Number of Backend Port: 8 Port : Address 0 5003048000ee8e7f 1 5003048000ee8a7f 2 0000000000000000 3 0000000000000000 4 0000000000000000 5 0000000000000000 6 0000000000000000 7 0000000000000000 HW Configuration ================ SAS Address : 500605b0038f9210 BBU : Present Alarm : Present NVRAM : Present Serial Debugger : Present Memory : Present Flash : Present Memory Size : 1024MB TPM : Absent On board Expander: Absent Upgrade Key : Absent Temperature sensor for ROC : Present Temperature sensor for controller : Absent ROC temperature : 70 degree Celcius Settings ================ Current Time : 18:24:36 3/13, 2012 Predictive Fail Poll Interval : 300sec Interrupt Throttle Active Count : 16 Interrupt Throttle Completion : 50us Rebuild Rate : 30% PR Rate : 30% BGI Rate : 30% Check Consistency Rate : 30% Reconstruction Rate : 30% Cache Flush Interval : 4s Max Drives to Spinup at One Time : 2 Delay Among Spinup Groups : 12s Physical Drive Coercion Mode : Disabled Cluster Mode : Disabled Alarm : Enabled Auto Rebuild : Enabled Battery Warning : Enabled Ecc Bucket Size : 15 Ecc Bucket Leak Rate : 1440 Minutes Restore HotSpare on Insertion : Disabled Expose Enclosure Devices : Enabled Maintain PD Fail History : Enabled Host Request Reordering : Enabled Auto Detect BackPlane Enabled : SGPIO/i2c SEP Load Balance Mode : Auto Use FDE Only : No Security Key Assigned : No Security Key Failed : No Security Key Not Backedup : No Default LD PowerSave Policy : Controller Defined Maximum number of direct attached drives to spin up in 1 min : 10 Any Offline VD Cache Preserved : No Allow Boot with Preserved Cache : No Disable Online Controller Reset : No PFK in NVRAM : No Use disk activity for locate : No Capabilities ================ RAID Level Supported : RAID0, RAID1, RAID5, RAID6, 
RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span Supported Drives : SAS, SATA Allowed Mixing: Mix in Enclosure Allowed Mix of SAS/SATA of HDD type in VD Allowed Status ================ ECC Bucket Count : 0 Limitations ================ Max Arms Per VD : 32 Max Spans Per VD : 8 Max Arrays : 128 Max Number of VDs : 64 Max Parallel Commands : 1008 Max SGE Count : 60 Max Data Transfer Size : 8192 sectors Max Strips PerIO : 42 Max LD per array : 16 Min Strip Size : 8 KB Max Strip Size : 1.0 MB Max Configurable CacheCade Size: 0 GB Current Size of CacheCade : 0 GB Current Size of FW Cache : 887 MB Device Present ================ Virtual Drives : 28 Degraded : 0 Offline : 0 Physical Devices : 59 Disks : 56 Critical Disks : 0 Failed Disks : 0 Supported Adapter Operations ================ Rebuild Rate : Yes CC Rate : Yes BGI Rate : Yes Reconstruct Rate : Yes Patrol Read Rate : Yes Alarm Control : Yes Cluster Support : No BBU : No Spanning : Yes Dedicated Hot Spare : Yes Revertible Hot Spares : Yes Foreign Config Import : Yes Self Diagnostic : Yes Allow Mixed Redundancy on Array : No Global Hot Spares : Yes Deny SCSI Passthrough : No Deny SMP Passthrough : No Deny STP Passthrough : No Support Security : No Snapshot Enabled : No Support the OCE without adding drives : Yes Support PFK : Yes Support PI : No Support Boot Time PFK Change : Yes Disable Online PFK Change : No PFK TrailTime Remaining : 0 days 0 hours Support Shield State : Yes Block SSD Write Disk Cache Change: Yes Supported VD Operations ================ Read Policy : Yes Write Policy : Yes IO Policy : Yes Access Policy : Yes Disk Cache Policy : Yes Reconstruction : Yes Deny Locate : No Deny CC : No Allow Ctrl Encryption: No Enable LDBBM : No Support Breakmirror : No Power Savings : Yes Supported PD Operations ================ Force Online : Yes Force Offline : Yes Force Rebuild : Yes Deny Force Failed : No Deny Force Good/Bad : No Deny Missing Replace : No Deny Clear : No Deny Locate : No Support Temperature : Yes Disable Copyback : No Enable JBOD : No Enable Copyback on SMART : No Enable Copyback to SSD on SMART Error : Yes Enable SSD Patrol Read : No PR Correct Unconfigured Areas : Yes Enable Spin Down of UnConfigured Drives : Yes Disable Spin Down of hot spares : No Spin Down time : 30 T10 Power State : Yes Error Counters ================ Memory Correctable Errors : 0 Memory Uncorrectable Errors : 0 Cluster Information ================ Cluster Permitted : No Cluster Active : No Default Settings ================ Phy Polarity : 0 Phy PolaritySplit : 0 Background Rate : 30 Strip Size : 64kB Flush Time : 4 seconds Write Policy : WB Read Policy : Adaptive Cache When BBU Bad : Disabled Cached IO : No SMART Mode : Mode 6 Alarm Disable : Yes Coercion Mode : None ZCR Config : Unknown Dirty LED Shows Drive Activity : No BIOS Continue on Error : No Spin Down Mode : None Allowed Device Type : SAS/SATA Mix Allow Mix in Enclosure : Yes Allow HDD SAS/SATA Mix in VD : Yes Allow SSD SAS/SATA Mix in VD : No Allow HDD/SSD Mix in VD : No Allow SATA in Cluster : No Max Chained Enclosures : 16 Disable Ctrl-R : Yes Enable Web BIOS : Yes Direct PD Mapping : No BIOS Enumerate VDs : Yes Restore Hot Spare on Insertion : No Expose Enclosure Devices : Yes Maintain PD Fail History : Yes Disable Puncturing : No Zero Based Enclosure Enumeration : No PreBoot CLI Enabled : Yes LED Show Drive Activity : Yes Cluster Disable : Yes SAS Disable : No Auto Detect BackPlane Enable 
: SGPIO/i2c SEP Use FDE Only : No Enable Led Header : No Delay during POST : 0 EnableCrashDump : No Disable Online Controller Reset : No EnableLDBBM : No Un-Certified Hard Disk Drives : Allow Treat Single span R1E as R10 : No Max LD per array : 16 Power Saving option : Don't Auto spin down Configured Drives Max power savings option is not allowed for LDs. Only T10 power conditions are to be used. Default spin down time in minutes: 30 Enable JBOD : No TTY Log In Flash : No Auto Enhanced Import : No BreakMirror RAID Support : No Disable Join Mirror : No Enable Shield State : Yes Time taken to detect CME : 60s Exit Code: 0x00 Enclosure information: # /opt/MegaRAID/MegaCli/MegaCli64 -encinfo -a1 Number of enclosures on adapter 1 -- 3 Enclosure 0: Device ID : 36 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port B Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 65 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11820 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 48 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 1: Device ID : 65 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port A Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 36 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11760 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 47 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 2: Device ID : 252 Number of Slots : 8 Number of Power Supplies : 0 Number of Fans : 0 Number of Temperature Sensors : 0 Number of Alarms : 0 Number of SIM Modules : 1 Number of Physical Drives : 0 Status : Normal Position : 1 Connector Name : Unavailable Enclosure type : SGPIO Failed in first Inquiry commnad FRU Part Number : 
N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : Unavailable Inquiry data : Vendor Identification : LSI Product Identification : SGPIO Product Revision Level : N/A Vendor Specific : Exit Code: 0x00 Now, notice that each slot 11 device shows an enclosure ID of 36, I think this is where the discrepancy happens. One should be 36. But the other should be on enclosure 65. Drives in slot 11: Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 5, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 48 WWN: Sequence Number: 11 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : YES Device Firmware Level: A5C0 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8a53 Connected Port Number: 1(path0) Inquiry Data: MJ1311YNG6YYXAHitachi HDS5C3030ALA630 MEAOA5C0 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 19, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 19 WWN: Sequence Number: 4 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : NO Device Firmware Level: A580 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8e53 Connected Port Number: 0(path0) Inquiry Data: MJ1313YNG1VA5CHitachi HDS5C3030ALA630 MEAOA580 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Update 06/28/12: I finally have some new information about (what we think) the root cause of this problem so I thought I would share. After getting in contact with a very knowledgeable Supermicro tech, they provided us with a tool called Xflash (doesn't appear to be readily available on their FTP). When we gathered some information using this utility, my colleague found something very strange: root@mogile2 test]# ./xflash.dat -i get avail Initializing Interface. Expander: SAS2X36 (SAS2x36) 1) SAS2X36 (SAS2x36) (50030480:00EE917F) (0.0.0.0) 2) SAS2X36 (SAS2x36) (50030480:00E9D67F) (0.0.0.0) 3) SAS2X36 (SAS2x36) (50030480:0112D97F) (0.0.0.0) This lists the connected enclosures. You see the 3 connected (we have since added a 3rd and a 4th which is not yet showing up) with their respective SAS address / WWN (50030480:00EE917F). 
Now we can use this address to get information on the individual enclosures: [root@mogile2 test]# ./xflash.dat -i 5003048000EE917F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00EE917F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 5003048000E9D67F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00E9D67F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 500304800112D97F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:0112D97F Enclosure Logical Id: 50030480:0112D97F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 Did you catch it? The first 2 enclosures logical ID is partially masked out where the 3rd one (which has a correct unique enclosure ID) is not. We pointed this out to Supermicro and were able to confirm that this address is supposed to be set during manufacturing and there was a problem with a certain batch of these enclosures where the logical ID was not set. We believe that the RAID controller is determining the ID based on the logical ID and since our first 2 enclosures have the same logical ID, they get the same enclosure ID. We also confirmed that 0000007F is the default which comes from LSI as an ID. The next pointer that helps confirm this could be a manufacturing problem with a run of JBODs is the fact that all 6 of the enclosures that have this problem begin with 00E. I believe that between 00E8 and 00EE Supermicro forgot to program the logical IDs correctly and neglected to recall or fix the problem post production. Fortunately for us, there is a tool to manage the WWN and logical ID of the devices from Supermicro: ftp://ftp.supermicro.com/utility/ExpanderXtools_Lite/. Our next step is to schedule a shutdown of these JBODs (after data migration) and reprogram the logical ID and see if it solves the problem. Update 06/28/12 #2: I just discovered this FAQ at Supermicro while Google searching for "lsi 0000007f": http://www.supermicro.com/support/faqs/faq.cfm?faq=11805. I still don't understand why, in the last several times we contacted Supermicro, they would have never directed us to this article :\

    Read the article

  • Installing UCMA 3.0 and Creating a Communications Server "14" Trusted Application Pool

    A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, Ill walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it. Note: These instructions will change slightly as the bits get updated for the eventual Beta release I will update this post as soon as I get a chance to run this setup on a more recent build. Im doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images: DC Domain Controller ExchangeUM Exchange Server 2010 CS-SE Microsoft Communications Server 2010 Standard Edition TS Development machine Ill walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine that is joined to the Fabrikam domain.   Im also running Visual Studio 2010 on this VM because I intend to use it as a development machine.  In a future post, Ill walk through installing just the UCMA 3.0 run time to build a true production UCMA application server. Im making a couple of assumptions here: You have an existing CS 2010 site and cluster configured(well look at this in a future post) Youre starting with a fully patched Windows Server 2008 R2 machine The machine is joined to your domain This walkthrough was done in my Fabrikam VM environment but can easily be modified for your own environment. Installing the UCMA 3.0 SDK Lets start by installing the UCMA 3.0 SDK.  Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package.  Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK. Install Communications Server Core Components UCMA 3.0 introduces a new concept called Auto-provisioning, which is most easily explained from the developer point of view.  Remember what your app.config looked it in UCMA 2.0?  You had to store the application GRUU, the trusted contact SIP Uri, the port for your application, and the name of the certificate authority. Thats all gone with auto-provisioning all you need in your app.config is your ApplicationId, e.g.: urn:application:MyApplication. How does CS 2010 do this? All of the applications configuration data is associated with the applications id.  UCMA also queries a replicated copy of the Central Management Database to retrieve the applications configuration data and also the configuration data for any endpoints. In this step, well run Bootstrapper.exe to install the CS Core components, this checked for the following components and installs them if they are not already present: VcRedist Sqlexpress Sqlnativeclient Sqlbackcompat Ucmaredist OcsCore.msi Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command: Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache" Create a New Trusted Application Pool The next step is to create a new trusted application pool for the new server.  
Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command: New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name> Verify that the new server was added to the CS topology by running the following PowerShell command: (Get-CsTopology -AsXml).ToString() > Topology.xml This created a file called Topology.xml in the directory that you ran the command from.  Open the file and find the Clusters section and look for a node for the new server. The Cluster Fqdn is the name of your server, and note the name of the Site that this Cluster is a part of. <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true"> <ClusterId SiteId="UcMarketing2" Number="5" /> <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com"> <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" /> </Machine> </Cluster> Configure CS Management Store Replication At this point, we have the CS Core components installed and the server configured as a trusted application pool.  We now need to set up replication so that the Central Management Store replicates down to the new server. From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server: Enable-CSReplica The Replica service is enabled, but hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology: Get-CSManagementStoreReplicationStatus You can see in the screenshot below that the UpToDate property of the new server is still False Run the following PowerShell command to force the replication to run: Invoke-CSManagementStoreReplicationStatus Run Get-CSManagementStoreReplicationStatus again to verify that the new service is now up to date Request and Set a New Certificate The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate: Request-CSCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority> Setting the -Verbose switch on the cmdlet creates an Xml file with its output. Open the Xml file and copy the thumbprint of the generated certificate. 
<?xml version="1.0" encoding="utf-8"?> <Action Name="Request-CsCertificate" Time="20100512T212258"> <Action Name="Request-CsCertificate" Time="20100512T212258"> <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info> <Action Time="20100512T212258"> <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info> <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info> <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info> <Info Time="20100512T212259">The certificate was issued.</Info> <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info> <Complete Time="20100512T212259" /> </Action> <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info> <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259"> <Info Time="20100512T212259">0 updates</Info> <Complete Time="20100512T212259" /> </Action> <Info Title="command status" Time="20100512T212259">Command has completed</Info> </Action> </Action> Run the following PowerShell command to set the certificate: Set-CsCertificate -Type Default -Thumbprint <Thumbprint> Wrapping Up You now have a new UCMA 3.0 application server in your Communications Server 2010 server topology.  You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell.  Well take a look at how to do that in another post. Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Interview with Tomas Ulin at the MySQL Innovation Day

    - by Monica Kumar
    MySQL Innovation Day held on June 5, 2012 was a great event for the MySQL engineers, users and customers to gather, share and network. I was able to get a few minutes with Tomas Ulin, Vice President of MySQL Engineering at Oracle, to ask him some questions. Here are the highlights of my interview with Tomas. Monica: This was the first MySQL Innovation Day, correct?  Why now, what was the strategy behind hosting this kind of event? Tomas: In the last year, we have rolled out an incredible number of MySQL events worldwide – some targeted at developers that are new to MySQL and others for the MySQL savvy. At the MySQL Innovation Day, our first event of this kind,, we had a number of our key engineers presenting lightning talks delivering previews of key new features as well as discussing roadmap. Our goal is to keep an open dialogue with the MySQL community. In fact, we are hosting a two-day conference, another first, for the MySQL community called MySQL Connect on Sept. 29-30 in San Francisco. If you attended the MySQL Innovation Day and liked what we did, you are going to love MySQL Connect. We’ll have a lot more of our engineers and many users and community members presenting hour long sessions and hands on labs. Our engineers will be presenting new MySQL features as well offer previews of upcoming enhancements. Monica: What's the big take-away from today's MySQL Innovation Day? Tomas: I hope the most important takeaway for attendees was to see that Oracle has been driving, and continues to drive MySQL innovation with a steady stream of new great GA and Development Milestone releases. Monica: What were attendees most interested in? What feedback did they have? Tomas: Feedback from attendees was incredibly positive and encouraging. In particular, they liked the interaction with the MySQL engineers and were also excited about the new early access features in MySQL 5.6 and MySQL Cluster 7.3. In addition, sessions delivered by MySQL users like Facebook, Pinterest and Twitter were very well received. For example, Pinterest talked about using MySQL to scale from 0 to billions of page views/month, Twitter talked about “Scaling twitter with MySQL” and Facebook discussed the many options to implement MySQL master failover solutions. The presentations are already available for download while some of the session videos will be made available on the MySQL Innovation Day web page shortly. Monica: How would you distinguish the use of MySQL vs. Oracle Database? What key factors should customers consider? Tomas: MySQL and Oracle Database complement each other. They are very different products, best suited to different use cases. Customers can choose world-class solutions from Oracle to fulfill a variety of needs. MySQL is a great choice for enterprise web-based, custom and embedded apps. Oracle Database is the leading choice for enterprise packaged applications such as ERP, CRM as well as high-end data warehousing and business intelligence applications. Monica: What are the highlights of the current MySQL 5.6 Development Milestone Release and early access features for MySQL Cluster 7.3? Tomas: MySQL 5.6 development milestone release builds on MySQL 5.5 by improving: Optimizer for better Performance, Scalability Performance Schema for better instrumentation InnoDB for better transactional throughput Replication for higher availability, data integrity NoSQL options for more flexibility We announced some new early access features in MySQL 5.6, including binary log group commit. 
We also announced early access features in MySQL Cluster 7.3 including support for foreign key constraints. Monica: How do people get these releases? Tomas: You can access development milestone releases by going to: http://dev.mysql.com/downloads/mysqlThen select the “Development Release” tab. The MySQL Cluster 7.3 and other early access features can be downloaded at: http://labs.mysql.com Monica: What's coming up next for MySQL? Tomas: Our development team is working in overdrive, cranking out new features with community feedback. Don’t miss the MySQL Connect conference being held in San Francisco on Sept. 29 and 30th. My team and I will be there. I hope you can join us! Monica: Thank you for your time, Tomas. I look forward to seeing you at the MySQL Connect conference. To our followers, I hope you found this interview informative. I welcome your comments. Please stay tuned here for more updates on MySQL. Note: Monica Kumar is Senior Director of product marketing for Linux, Virtualization and MySQL at Oracle.

    Read the article

  • FAT Volume and CE

    - by Kate Moss' Open Space
    Whenever we format a disk volume, it is a good idea to name the label so it will be easier to categorize. To label a volume, we can use the LABEL command or the UI, depending on your preference. Windows CE does provide a FAT driver that supports various formats (FAT12, FAT16, FAT32, ExFAT and TFAT - transaction-safe FAT) and many features to let you scan and even defrag the volume, but not labeling. Any time you format a volume in CE and then mount it on a PC, the label is always empty! Of course, you can always label the volume on the PC, even if it was formatted in CE. So it looks like CE does not care about the volume label at all; it neither reports the label to the OS nor changes the label on FAT. So how can we set the volume label in CE? To answer this question, we need to know how FAT stores the volume label. Here are some on-line resources that are handy for parsing FAT: http://en.wikipedia.org/wiki/File_Allocation_Table http://www.pjrc.com/tech/8051/ide/fat32.html http://www.microsoft.com/whdc/system/platform/firmware/fatgen.mspx You can refer to PUBLIC\COMMON\OAK\DRIVERS\FSD\FATUTIL\MAIN\bootsec.h and dosbpb.h or the above links for the fields we discuss here. The first sector of a FAT volume (it is not necessarily the first sector of a disk) is the FAT boot sector and BPB (BIOS Parameter Block). At offset 43, bgbsVolumeLabel (or bsVolumeLabel on FAT16) stores the volume label, but note that the spec also indicates "FAT file system drivers should make sure that they update this field when the volume label file in the root directory has its name changed or created.". So we can't simply update bgbsVolumeLabel; we also need to create a volume label file in the root directory. The volume label file is not a real file but just a file entry in the root directory with zero file length and a very special file attribute, ATTR_VOLUME_ID (defined in public\common\oak\drivers\fsd\fatutil\MAIN\fatutilp.h). Locating and accessing the boot sector is quite straightforward: as long as we know the starting sector of the FAT volume, that's it. But where is the root directory? The layout of a typical FAT volume is: boot sector (Volume ID in the figure), followed by reserved sectors (1 on FAT12/16 and 32 on FAT32), then the FAT chain table(s) (can be 1 or 2), after that the root directory (FAT12/16 only; not shown in the figure), and then the beginning of the files and directories. In FAT12/16, the root directory is placed right after the FAT, so it is not hard to calculate its offset in the volume. But in FAT32 this rule no longer holds: the first cluster of the root directory is determined by BGBPB_RootDirStrtClus (at offset 44 in the boot sector). This field is usually 0x00000002 (that is how CE initializes the root directory after formatting a volume; note we should never assume it is always true), which means the first data cluster, but unlike the contiguous root directory of FAT12/16, it can be fragmented just like a regular file. So to access the FAT32 root directory we need to hop from one cluster to another by traversing the FAT table. Let's trace the code now. Although the source of the FAT driver is not available in the CE Shared Source program, the formatter, Fatutil.dll, is available in public\common\oak\drivers\fsd\fatutil\MAIN\formatdisk.cpp. Be aware that the public code only provides the formatter for FAT12/16/32; for ExFAT it is still not available. FormatVolumeInternal is the main worker function. With the knowledge here, you should be able to trace the code easily.
    But I would like to discuss the following code pieces:     dwReservedSectors = (fo.dwFatVersion == 32) ? 32 : 1;     dwRootEntries = (fo.dwFatVersion == 32) ? 0 : fo.dwRootEntries; Note that dwReservedSectors is 32 in FAT32 and 1 in FAT12/16. Root entries is another difference mentioned in the previous paragraph: 0 for FAT32 (dynamically allocated) and a fixed size on FAT12/16 (usually 512, defined in DEFAULT_ROOT_ENTRIES in public\common\sdk\inc\fatutil.h). And then here:   memset(pBootSec->bsVolumeLabel, 0x20, sizeof(pBootSec->bsVolumeLabel)); it sets the volume label to a blank (space-filled) string. Now let's carry on to the next section - writing the root directory.     if (fo.dwFatVersion == 32) {         if (!(fo.dwFlags & FATUTIL_FORMAT_TFAT)) {             dwRootSectors = dwSectorsPerCluster;         }         else {             DIRENTRY    dirEntry;             DWORD       offset;             int               iVolumeNo;             memset(pbBlock, 0, pdi->di_bytes_per_sect);             memset(&dirEntry, 0, sizeof(DIRENTRY));                         dirEntry.de_attr = ATTR_VOLUME_ID;             // the first one is volume label             memcpy(dirEntry.de_name, "TFAT       ", sizeof (dirEntry.de_name));             memcpy(pbBlock, &dirEntry, sizeof(dirEntry));              ...             // Skip the next step of zeroing out clusters             dwCurrentSec += dwSectorsPerCluster;             dwRootSectors = 0;         }     }     // Each new root directory sector needs to be zeroed.     memset(pbBlock, 0, cbSizeBlk);     iRootSec=0;     while ( iRootSec < dwRootSectors) { Basically, the code zeroes out each entry in the root directory depending on dwRootSectors. In FAT12/16, dwRootSectors is calculated as the number of sectors needed for the root entries (512 entries in most cases), and in FAT32 it just zeroes out one cluster. Please note that if it is a TFAT volume, it initializes the root directory with special volume label entries for its own purposes. Despite the unusual TFAT initialization process, it does provide an example of how to create a volume label entry. With some minor modifications, we can assign the volume label in the FAT formatter; just remember to also sync it with bsVolumeLabel or bgbsVolumeLabel in the boot sector.
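    As a small illustration of the first half (reading the label straight out of the boot sector), here is a sketch that dumps the 11-byte label field of a FAT12/16 volume at offset 43, as described above. It is shown in Ruby against a raw volume image, it assumes the image really starts with the FAT boot sector, and it deliberately ignores the matching ATTR_VOLUME_ID entry in the root directory that a real labeling tool would also have to create or update.

        # Sketch: read the FAT12/16 volume label (bsVolumeLabel, offset 43, 11 bytes) from a raw volume image
        VOLUME_LABEL_OFFSET = 43
        VOLUME_LABEL_LENGTH = 11

        def read_fat16_label(image_path)
          File.open(image_path, 'rb') do |f|
            f.seek(VOLUME_LABEL_OFFSET)
            f.read(VOLUME_LABEL_LENGTH).rstrip # the field is space-padded
          end
        end

        puts read_fat16_label('fat16.img') # 'fat16.img' is a hypothetical raw image of the volume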

    Read the article

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit’s end with File Server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian Knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a logical solution. My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will be acting as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these each are the depreciated nodes of a cluster, the main nodes will be on the most powerful first machine. My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-geebies, too bad. It has a 6TB (usable) RAID-10 array, and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all VMs. This is, right now: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually, I will be adding additional VMs to handle Exchange, Sharepoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, “wait, what about a SAN or a NAS for the file servers?”, well too bad. What exists on the main machine is what I have to deal with. I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM-host of the main machine, and attached one to each file server VM. The end result is that I want my FS01 to sit on the heavy lifter, along with its iSCSI “drive”, and FS02 will sit on the secondary machine with its own iSCSI “drive” there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. As such, the clustered FS will utterly duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or the FS VM) goes toes-up, the other has got a full copy of the data on its own iSCSI drive. My problem occurs when I try to apply the file server role within the failover cluster manager. Actually, it is even before that -- it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the entire process fails for a random disk of the pair. That is, one will get online, but the other will remain offline because it does not have the correct “owner node”. I mean, really -- WTF? Of course it doesn’t have the right owner node, both drives are showing the same node name!! I cannot seem to have one drive show up with one node name as owner, and the other drive show up with the other node name as owner. And because both drives are not “online”, I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place! 
I’ve got more to add, but my work is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, the storage on each machine, but a transparent failover in case one physical machine fails. Essentially, a failover FS that doesn’t care which machine fails -- the storage contents are replicated equally on each machine. Am I even heading in the right direction?

    Read the article

  • Today in the OTN Lounge (Thursday October 4, 2012)

    - by Bob Rhubart
    Here's a quick rundown of today's activities in the OTN Lounge: OTN Lounge hours today: 8:00 am - 2:00pm 9:00 am - 1:00 pm RAC Attack Learn about Oracle Real Application Clustering (RAC) in this collaborative event. You'll work with experts from the IOUG RAC SIG to get an Oracle Database 11gR2 RAC cluster running inside a virtual machine. For more information: RAC attack at Oracle Open World (Pythian Blog) RAC Attack - Oracle Cluster Database at Home/Events (WikiBooks) The OTN Lounge is located in the Howard St. Tent, between 3rd and 4th, directly between Moscone North and Moscone South. Access to the OTN Lounge requires an Oracle OpenWorld or JavaOne conference badge.

    Read the article

  • Is RAC One Node Certified for E-Business Suite?

    - by Steven Chan
    Oracle Real Application Clusters (RAC) is a cluster database with a shared cache architecture that supports the transparent deployment of a single database across a pool of servers.  RAC is certified with both Oracle E-Business Suite Release 11i and 12.  We publish best-practices documentation for specific combinations of EBS + RAC versions.  For example, if you were planning on implementing RAC for EBS 12, you would use this documentation:Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 (Note 823587.1)Many of the largest E-Business Suite users in the world run RAC today, including Oracle; see this Oracle R12 case study for details.A number of customers have recently asked whether RAC One Node can be used with the E-Business Suite.  From the RAC website:Oracle RAC One Node is a new option available with Oracle Database 11g Release 2. Oracle RAC One Node is a single instance of an Oracle RAC-enabled database running on one node in a cluster.

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests, and parses them out to “worker nodes'”. This is useful in cases such as scientific simulations, drug research, MatLab work and where other large compute loads are required. It’s not the immediate-result type computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical to the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC Cluster, such as the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster that you can read more about here. The issue with HPC (from any vendor) that some organization have is the amount of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower, and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic. Implementation: Recently, Microsoft announced an internal partnership between the HPC group (Now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding in Compute Nodes in Windows Azure, using a “Broker Node”.  You can then purchase time for adding machines, and then stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC, but need to “burst” that load count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations. References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx  Links for further research on HPC, includes Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx 

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you’ll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder. Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB—Calvin Sun, Oracle InnoDB is the default storage engine for Oracle’s MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability. Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning—Dimitri Kravtchuk, Oracle This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no “silver bullet.” But understanding the configuration setting’s impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others—the discussion is open, just moderated by a speaker. Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations—Calvin Sun, Oracle Many top Web properties rely on Oracle’s MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database—that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column. Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster—Andrew Morgan and John Duncan, Oracle Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. 
It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js. Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning—Inaam Rana, Oracle The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance. Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP—Nizameddin Ordulu, Facebook and Inaam Rana, Oracle Data compression is an important capability of the InnoDB storage engine for Oracle’s MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!

    Read the article

  • A tour of the GlassFish 3.1.2 DCOM support

    - by alexismp
    While we've mentioned the DCOM support in GlassFish 3.1.2 several times before, you'll probably find Byron's DCOM blog entry to be useful if you're using Windows as a deployment platform for your GlassFish cluster. Byron discusses how DCOM is used to communicate with remote Windows nodes participating in a GlassFish cluster, what Java libraries were used to wrap around DCOM, and what new asadmin commands were added (in particular validate-dcom), as well as some tips to make this all work in your specific environment. In addition to this blog post, you should consider reading the official product documentation: • Considerations for Using DCOM for Centralized Administration • Setting Up DCOM and Testing the DCOM Set Up

    Read the article

  • OPN Developer Services for Solaris Developers

    - by user13333379
    Independent Software Vendors (ISVs) who develop applications for Solaris 11 can exploit a number of interesting services as long as they are OPN Members with a Gold (or above) status and a Solaris Knowledge specialization: Free access to a Solaris development cloud with preconfigured Solaris developer zones - apply for the Oracle Exastack Remote Labs to get free access to Solaris development environments for SPARC and x86. Free access to patches and support information through MOS for Oracle Solaris, Oracle Solaris Studio and Oracle Solaris Cluster, including updates for development systems - apply for the Oracle Solaris Development Initiative. Free email developer support for all questions around Oracle Solaris, Oracle Solaris Studio, Oracle Solaris Cluster and Oracle technologies integrating with Solaris 11 - apply for the Solaris Adoption Technical Assistance.

    Read the article

  • MAAS/Juju architecture and issues

    - by Massimo
    I recently deployed a MAAS/Juju environment based on a six-node cluster, using Ubuntu 12.04 LTS, to run a "proof of concept". I can appreciate how interesting the architecture is, and I'd like to understand whether I can really base my future business on this technology. Therefore I'd like to understand more about the limitations, constraints and requirements involved. In particular I'd like to understand:
    1) How can MAAS be made reliable, i.e. deployed over two or more physical nodes able to fail over?
    2) Can the same physical cluster host multiple Juju environments, assuming OpenStack/virtualization is not used?
    3) Can the same physical node host different charms of the same Juju environment, deployed in a standard, reliable way by Juju?

    Read the article

  • Today in the OTN Lounge (Monday October 1, 2012)

    - by Bob Rhubart
    Here's a quick rundown of today's activities in the OTN Lounge (OTN Lounge hours today: 8:00 am - 7:00 pm):
    9:00 am - 1:00 pm: RAC Attack. Learn about Oracle Real Application Clusters (RAC) in this collaborative event. You'll work with experts from the IOUG RAC SIG to get an Oracle Database 11gR2 RAC cluster running inside a virtual machine. For more information: RAC attack at Oracle Open World (Pythian Blog), RAC Attack - Oracle Cluster Database at Home/Events (WikiBooks).
    4:00 pm - 8:00 pm: Oracle Social Network Developer Challenge Office Hours. Find information, expertise, and a collaborative work environment for those participating in the OSN Developer Challenge. Click here for more information.
    The OTN Lounge is located in the Howard St. Tent, between 3rd and 4th, directly between Moscone North and Moscone South. Access to the OTN Lounge requires an Oracle OpenWorld or JavaOne conference badge.

    Read the article

  • Introducing the Hardware Sales Consultant (Presales) Team in Greece

    - by fboufis
    Hello World, and welcome to the blog of the Oracle Hardware Presales Team in Athens. The team is responsible for a cluster of six (6) countries, which includes Greece, Cyprus, Malta, The Former Yugoslav Republic of Macedonia, Albania and Kosovo. We handle the complete hardware & systems software portfolio, namely:
    • Engineered Systems: purpose-built and general-purpose solutions
    • Servers: SPARC (M & T-Series) & x86 (X-Series) servers
    • Operating Systems: Oracle Solaris & Oracle Linux
    • Virtualization Technologies: Oracle VM, Solaris Zones & Dynamic Domains
    • Storage: NAS (ZFSSA), SAN (Axiom) & Tape (StorageTek)
    • Systems Software: High Availability (Solaris Cluster) & Systems Management (Ops Center)
    and a multitude of other products, all of which will be the main topic of our blog. We design and propose solutions based on these products and assist both customers and partners in integrating those solutions into existing datacenters. We will be happy to support you in your projects, provide information and discuss your business issues, so do not hesitate to contact us.
    Filippos Boufis – Oracle Hardware Principal Sales Consultant

    Read the article

  • Why doesn't Ubuntu detect my second hard drive?

    - by user93179
    I am new to Linux and to Ubuntu, and I was wondering about the following. I have two hard drives set up on SATA ports (non-RAID, at least I don't think they are). I installed Ubuntu onto the drives fresh, without any previous versions or Windows at all. However, when I got Ubuntu 12.04 LTS working, all I see is 1 x 120 gigabyte hard drive. Also, not sure if this is important or not, my hard drives are SSDs. My computer specs are:

        Asus P9Z77-V-LK
        Nvidia Geforce GTX 660 TI
        Intel i5 3570k 3.4

    /proc/partitions shows:

        major minor  #blocks    name
        8     0      117220824  sda
        8     1      117219328  sda1
        8     16     117220824  sdb
        8     17     96256      sdb1
        8     18     108780544  sdb2
        8     19     8342528    sdb3
        11    0      1048575    sr0

    and ls -l /sys/block/ | grep -v /virtual/:

        lrwxrwxrwx 1 root root 0 Sep 27 17:26 sda -> ../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda
        lrwxrwxrwx 1 root root 0 Sep 27 17:26 sdb -> ../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sdb
        lrwxrwxrwx 1 root root 0 Sep 27 22:26 sdc -> ../devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.1/1-1.1:1.0/host6/target6:0:0/6:0:0:0/block/sdc
        lrwxrwxrwx 1 root root 0 Sep 27 22:04 sr0 -> ../devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0/block/sr0

    sudo file -s /dev/sd*:

        /dev/sda:  x86 boot sector; partition 1: ID=0x7, starthead 32, startsector 2048, 234438656 sectors, code offset 0xc0, OEM-ID " ?", Bytes/sector 190, sectors/cluster 124, reserved sectors 191, FATs 6, root entries 185, sectors 64514 (volumes 32 MB), physical drive 0x7e, dos 32 MB), FAT (32 bit), sectors/FAT 749, reserved3 0x800000, serial number 0x35361a2b, unlabeled
        /dev/sdb2: Linux rev 1.0 ext4 filesystem data, UUID=387761ac-5eba-4d0f-93ba-746a82fb541d (needs journal recovery) (extents) (large files) (huge files)
        /dev/sdb3: data
        /dev/sdc:  x86 boot sector; partition 1: ID=0xc, active, starthead 0, startsector 8064, 30473088 sectors, code offset 0xc0
        /dev/sdc1: x86 boot sector, code offset 0x58, OEM-ID "SYSLINUX", sectors/cluster 64, reserved sectors 944, Media descriptor 0xf8, heads 128, hidden sectors 8064, sectors 30473088 (volumes 32 MB), FAT (32 bit), sectors/FAT 3720, Backup boot sector 8, serial number 0xf90c12e9, label: "KINGSTON "
        /dev/sda1: x86 boot sector, code offset 0x52, OEM-ID "NTFS    ", sectors/cluster 8, reserved sectors 0, Media descriptor 0xf8, heads 255, hidden sectors 2048, dos 32 MB), FAT (32 bit), sectors/FAT 749, reserved3 0x800000, serial number 0x35361a2b, unlabeled

    Any help would be greatly appreciated! Thanks.

    Another thing I noticed is that when I use GParted to locate my drives, it seems that sda1 is my second drive, the one that is not detected when I boot up, and my Ubuntu + FAT boot files are installed on sdb1.

    Read the article

  • MySQL User Group Meeting in Madrid, Spain

    - by Lenka Kasparova
    We are pleased to announce another MySQL User Group meeting, scheduled for June 5 in Madrid, Spain. Keith Hollman, MySQL Principal Sales Consultant, will be talking about MySQL & Oracle strategy and about MySQL Cluster. A small demo of MySQL Cluster will be part of the presentation.
    Details about the event:
    Date: June 5, 2014
    Time: 7:00 pm - 8:30 pm
    Place: Edificio Telefonica, Gran via 28, Madrid (entrance via C/ Valverde 2)
    We are looking forward to seeing you in Madrid! See more information & registration.

    Read the article

  • Hadoop: Only the master node does the work

    - by user287722
    I've set up a Hadoop 2.2 cluster with 1 master node (namenode and secondary namenode) and 3 slave nodes (datanode and namenode on each one). All of the machines use Linux Mint 64-bit. When I run my MapReduce program, written in Java, I can only see that the master node is using extra CPU and RAM; the slave nodes are not doing a thing. I've checked the logs from all of the namenodes and there is nothing wrong with the namenodes on the slave nodes. The Resource Manager is running and all of the slave nodes can see it. I used this http://n0where.net/hadoop-2-2-multi-node-cluster-setup/ tutorial to configure my nodes. The datanodes are working in terms of distributed data storage, but I can't see any indication of distributed data processing. Do I have to configure the XML configuration files in some other way so that all of the machines will process data while I'm running my MapReduce job?

    Read the article

  • Design application to send messages by marking circle on the map where you want to send message

    - by jhamb
    This is a question an interviewer asked me: a map of the world is given, and for the regions where you want to send a message you simply mark a circle on that area, and the message is sent to all the people who fall inside it. Question visual link is: Design this application. The approach that I told him: first build all the person data (contacts, place information and so on); then, wherever you mark on the map, build a cluster of that country using Hadoop and fire the message to every person's contact that falls inside that cluster (a small sketch of the point-in-circle membership test is included below). So please help me understand this problem better, and if you have another good approach (both back-end and front-end), please tell me or discuss it here with me. Thanks in advance.
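    As a hedged illustration of the membership test mentioned above (deciding whether a contact's stored location falls inside the circle marked on the map), here is a minimal Perl sketch. The field names, coordinates, units and radius are assumptions made up for the example, not part of the original question.

        use strict;
        use warnings;
        use Math::Trig qw(deg2rad);

        # Great-circle (haversine) distance in kilometres between two lat/lon points.
        sub haversine_km {
            my ($lat1, $lon1, $lat2, $lon2) = @_;
            my $earth_radius_km = 6371;
            my $dlat = deg2rad($lat2 - $lat1);
            my $dlon = deg2rad($lon2 - $lon1);
            my $a = sin($dlat / 2) ** 2
                  + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * sin($dlon / 2) ** 2;
            return 2 * $earth_radius_km * atan2(sqrt($a), sqrt(1 - $a));
        }

        # True if the contact's location lies within the circle marked on the map.
        sub in_circle {
            my ($contact, $center_lat, $center_lon, $radius_km) = @_;
            return haversine_km($contact->{lat}, $contact->{lon},
                                $center_lat, $center_lon) <= $radius_km;
        }

        # Usage example: a hypothetical contact and a 700 km circle centred on Madrid.
        my $contact = { name => 'Alice', lat => 41.39, lon => 2.17 };
        print "send message to $contact->{name}\n"
            if in_circle($contact, 40.42, -3.70, 700);

    In a full design this check would be the filter step of the distributed job over the contact data, emitting only the contacts whose coordinates satisfy it.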

    Read the article

  • Do I need to manually create indexes for a DBIx::Class belongs_to relationship

    - by Dancrumb
    I'm using the DBIx::Class modules for an ORM approach to an application I have. I'm having some problems with my relationships. I have the following:

        package MySchema::Result::ClusterIP;
        use strict;
        use warnings;
        use base qw/DBIx::Class::Core/;
        our $VERSION = '1.0';

        __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/);
        __PACKAGE__->table('cluster_ip');
        __PACKAGE__->add_columns(
            # Columns here
        );
        __PACKAGE__->set_primary_key('objkey');
        __PACKAGE__->belongs_to('configuration' => 'MySchema::Result::Configuration', 'config_key');
        __PACKAGE__->belongs_to('cluster' => 'MySchema::Result::Cluster',
            { 'foreign.config_key' => 'self.config_key',
              'foreign.id'         => 'self.cluster_id' });

    As well as:

        package MySchema::Result::Cluster;
        use strict;
        use warnings;
        use base qw/DBIx::Class::Core/;
        our $VERSION = '1.0';

        __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/);
        __PACKAGE__->table('cluster');
        __PACKAGE__->add_columns(
            # Columns here
        );
        __PACKAGE__->set_primary_key('objkey');
        __PACKAGE__->belongs_to('configuration' => 'MySchema::Result::Configuration', 'config_key');
        __PACKAGE__->has_many('cluster_ip' => 'MySchema::Result::ClusterIP',
            { 'foreign.config_key' => 'self.config_key',
              'foreign.cluster_id' => 'self.id' });

    There are a couple of other modules, but I don't believe that they are relevant. When I attempt to deploy this schema, I get the following error:

        DBIx::Class::Schema::deploy(): DBI Exception: DBD::mysql::db do failed:
        Can't create table 'test.cluster_ip' (errno: 150) [for Statement
        "CREATE TABLE `cluster_ip` (
          `objkey` smallint(5) unsigned NOT NULL auto_increment,
          `config_key` smallint(5) unsigned NOT NULL,
          `cluster_id` char(16) NOT NULL,
          INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`),
          INDEX `cluster_ip_idx_config_key` (`config_key`),
          PRIMARY KEY (`objkey`),
          CONSTRAINT `cluster_ip_fk_config_key_cluster_id` FOREIGN KEY (`config_key`, `cluster_id`) REFERENCES `cluster` (`config_key`, `id`) ON DELETE CASCADE ON UPDATE CASCADE,
          CONSTRAINT `cluster_ip_fk_config_key` FOREIGN KEY (`config_key`) REFERENCES `configuration` (`config_key`) ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=InnoDB"] at test_deploy.pl line 18

    From what I can tell, MySQL is complaining about the FOREIGN KEY constraint, in particular, the REFERENCE to (config_key, id) in the cluster table. From my reading of the MySQL documentation, this seems like a reasonable complaint, especially in regards to the third bullet point on this doc page.

    Here's my question. Am I missing something in the DBIx::Class module? I realize that I could explicitly create the necessary index to match up with this foreign key constraint, but that seems to be repetitive work. Is there something I should be doing to make this occur implicitly?
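    One common way to handle this, shown here as a hedged sketch rather than as a definitive answer to the post: declare the composite index yourself in the Cluster result class via a sqlt_deploy_hook, so that deploy() emits an index the composite foreign key in cluster_ip can reference. The hook body and the index name below are illustrative assumptions, not taken from the original question.

        package MySchema::Result::Cluster;
        # ... existing column, key and relationship declarations as above ...

        # Called by DBIx::Class::Schema->deploy() while the DDL is being generated;
        # $sqlt_table is the SQL::Translator::Schema::Table object for 'cluster'.
        sub sqlt_deploy_hook {
            my ($self, $sqlt_table) = @_;

            # InnoDB requires the referenced columns to be the leftmost columns of
            # some index on the referenced table; without one, CREATE TABLE fails
            # with errno 150 as seen above.
            $sqlt_table->add_index(
                name   => 'cluster_idx_config_key_id',
                fields => [ 'config_key', 'id' ],
            );
        }

    An alternative with the same effect is to declare the column pair as a unique constraint in the result class, e.g. __PACKAGE__->add_unique_constraint(['config_key', 'id']);, which also gives the foreign key something to reference and arguably documents the intent better, since a belongs_to target should identify exactly one row.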

    Read the article
