Search Results

Search found 17847 results on 714 pages for 'virtual disk'.


  • Ubuntu 12.04 doesn't recognize my CPU correctly

    - by Nightshaxx
    My computer is running Ubuntu 12.04 (64-bit), and I have an AMD Athlon(tm) X4 760K Quad Core Processor rated at about 3.8 GHz (and a Radeon HD 7770 GPU). Yet when I type in cat /proc/cpuinfo, I get the following (processors 1-3 repeat the same values; only the processor field 0-3, core id 0-3, apicid 16-19 and initial apicid 0-3 change):

        processor       : 0
        vendor_id       : AuthenticAMD
        cpu family      : 21
        model           : 19
        model name      : AMD Athlon(tm) X4 760K Quad Core Processor
        stepping        : 1
        microcode       : 0x6001119
        cpu MHz         : 1800.000
        cache size      : 2048 KB
        physical id     : 0
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        apicid          : 16
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold bmi1
        bogomips        : 7599.97
        TLB size        : 1536 4K pages
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 48 bits physical, 48 bits virtual
        power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro

    The important part of all this being cpu MHz : 1800.000, which indicates that I have only 1.8 GHz of processing power, which is totally wrong. Is it something with drivers or Ubuntu? Also, will Windows recognize all of my processing power? Thanks! (NOTE: My CPU doesn't have integrated graphics.)
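    A side note on interpreting that output (general cpufreq behaviour, not something stated in the post): the cpu MHz field reports the frequency each core is running at right now, and the kernel's frequency scaling drops it while the machine is idle. A quick sanity check, assuming the standard sysfs cpufreq interface is present:

        # frequency the governor has currently selected, in kHz
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
        # maximum frequency the hardware can scale up to, in kHz
        cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
        # the governor making that decision (e.g. ondemand)
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

    If cpuinfo_max_freq reports a value around 3800000, the full 3.8 GHz is still available and the CPU is simply idling at 1.8 GHz.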


  • Converting an Oracle VM VirtualBox VM into an Oracle VM Server image

    - by wim.coekaerts
    As we are working on tighter, seamless moving of VMs between the two products, here are a few simple steps to convert an existing Oracle VM VirtualBox image over. Steps involved to make it easy/straightforward:

    (1) When creating a VM in VirtualBox, using Oracle Linux as an example, make sure that /etc/fstab only uses labels. Do not use hardcoded device names. Instead of an entry

        /dev/sda1 /u01 ext3 defaults 1 1

    use

        LABEL=foo /u01 ext3 defaults 1 1

    (for more info on labels: man e2label) or use a logical volume

        /dev/VolGroup00/LVfoo /u01 ext3 defaults 1 1

    Doing so will make it easier to have an OS boot up on a different hypervisor with potentially different device names. For instance, the VirtualBox VM might expose a scsi driver while in Oracle VM Server you might end up with an ide disk; this then changes /dev/sda to /dev/hda.

    (2) If you have a VM created that you want to convert, then shut down the VM in VirtualBox and convert the image files: go to the directory that contains your hard disk image files (.VirtualBox/HardDisks/* as an example) and for each of the virtual disks run the following command:

        VBoxManage clonehd virtualdiskfilename.vdi system.img --format raw

    where virtualdiskfilename.vdi is the original VBox VM file (this can also be a vmdk file) and system.img is the name of the virtual disk for Oracle VM. This can be any filename as well; I typically use system.img to specify the boot disk (as is common for Oracle VM template creation).

    (3) Create a vm.cfg. To run a VM converted from VirtualBox, you have to create a vm.cfg for Oracle VM Server that creates an HVM guest. The easiest is to use a simple hvm vm.cfg and change it for your VM. I have an example here:

        acpi = 1
        apic = 1
        builder = 'hvm'
        device_model = '/usr/lib/xen/bin/qemu-dm'
        disk = ['file:system.img,hda,w', 'file:oracle.img,hdb,w', ',hdc:cdrom,r',]
        kernel = '/usr/lib/xen/boot/hvmloader'
        memory = '1024'
        name = 'vmname'
        on_crash = 'restart'
        on_reboot = 'restart'
        pae = 1
        serial = 'pty'
        timer_mode = '0'
        usbdevice = 'tablet'
        vcpus = 1
        vif = ['bridge=xenbr0,type=ioemu']
        vif_other_config = []
        vnc = 1
        vncconsole = 1
        vnclisten = '0.0.0.0'
        vncpasswd = ''
        vncunused = 1

    If you take the above vm.cfg, all you need to do is:

    - modify disk = (add your virtual disks in there)
    - modify memory = (amount of memory your VM needs)
    - modify name = (enter a name for your VM here)
    - modify vif = (you might want to replace bridge=xenbr0 with the bridge you want to use)

    If you want more than 1 vcpu or other changes, of course you have to make those as well.

    (4) Copy this set of files onto your Oracle VM server or onto a webserver in a subdirectory and import the template through Oracle VM Manager. You can also just start the VM using xm create vm.cfg if you like.

    And that's it. As I said, we are working on automation around all this, but it is relatively trivial to convert VMs over as long as you take the basic issues into account, primarily the setup of the filesystems and the use of labels in /etc/fstab. There are other potential things to look at, such as network config. If you want to make that part clean, then prior to shutting down the VM change /etc/modprobe.conf and/or add the MAC address of the VM into the vm.cfg in the vif line. The good thing, at least with Linux, is that even though the virtual hardware changes, Linux will deal with it just fine (e1000 vs 8139 realtek, ide vs scsi, etc). Hope this helps.
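    If there are several virtual disks, the clonehd step from (2) is easy to script. A minimal sketch, assuming the default .VirtualBox/HardDisks location mentioned above (adjust the path for your setup):

        #!/bin/sh
        # Convert every VirtualBox .vdi in the HardDisks directory to a raw
        # image usable by Oracle VM Server; each foo.vdi becomes foo.img.
        # Rename the boot disk to system.img afterwards if you follow the
        # template convention described in step (2).
        cd ~/.VirtualBox/HardDisks
        for vdi in *.vdi; do
            VBoxManage clonehd "$vdi" "${vdi%.vdi}.img" --format raw
        done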


  • Best Design Pattern for Coupling User Interface Components and Data Structures

    - by szahn
    I have a Windows desktop application with a tree view. Due to the lack of a sound data-binding solution for a tree view, I've implemented my own layer of abstraction on it to bind nodes to my own data structure. The requirements are as follows: populate a tree view with nodes that resemble fields in a data structure; when a node is clicked, display the appropriate control to modify the value of that property in the instance of the data structure. The tree view is populated with instances of custom TreeNode classes that inherit from TreeNode. The responsibility of each custom TreeNode class is to (1) format the node text to represent the name and value of the associated field in my data structure, (2) return the control used to modify the property value, (3) get the value of the field in the control, and (4) set the field's value from the control. My custom TreeNode implementation has a property called "Control" which retrieves the proper custom control in the form of the base control. The control instance is stored in the custom node and instantiated upon first retrieval. So each custom node has an associated custom control which extends a base abstract control class. Example TreeNode implementation:

        //The Tree Node Base Class
        public abstract class TreeViewNodeBase : TreeNode
        {
            public abstract CustomControlBase Control { get; }

            public TreeViewNodeBase(ExtractionField field)
            {
                UpdateControl(field);
            }

            public virtual void UpdateControl(ExtractionField field)
            {
                Control.UpdateControl(field);
                UpdateCaption(FormatValueForCaption());
            }

            public virtual void SaveChanges(ExtractionField field)
            {
                Control.SaveChanges(field);
                UpdateCaption(FormatValueForCaption());
            }

            public virtual string FormatValueForCaption()
            {
                return Control.FormatValueForCaption();
            }

            public virtual void UpdateCaption(string newValue)
            {
                this.Text = Caption;
                this.LongText = newValue;
            }
        }

        //The tree node implementation class
        public class ExtractionTypeNode : TreeViewNodeBase
        {
            private CustomDropDownControl control;

            public override CustomControlBase Control
            {
                get
                {
                    if (control == null)
                    {
                        control = new CustomDropDownControl();
                        control.label1.Text = Caption;
                        control.comboBox1.Items.Clear();
                        control.comboBox1.Items.AddRange(
                            Enum.GetNames(
                                typeof(ExtractionField.ExtractionType)));
                    }
                    return control;
                }
            }

            public ExtractionTypeNode(ExtractionField field) : base(field) { }
        }

        //The custom control base class
        public abstract class CustomControlBase : UserControl
        {
            public abstract void UpdateControl(ExtractionField field);
            public abstract void SaveChanges(ExtractionField field);
            public abstract string FormatValueForCaption();
        }

        //The custom control generic implementation (view)
        public partial class CustomDropDownControl : CustomControlBase
        {
            public CustomDropDownControl()
            {
                InitializeComponent();
            }

            public override void UpdateControl(ExtractionField field)
            {
                //Nothing to do here
            }

            public override void SaveChanges(ExtractionField field)
            {
                //Nothing to do here
            }

            public override string FormatValueForCaption()
            {
                //Nothing to do here
                return string.Empty;
            }
        }

        //The custom control specific implementation
        public class FieldExtractionTypeControl : CustomDropDownControl
        {
            public override void UpdateControl(ExtractionField field)
            {
                comboBox1.SelectedIndex =
                    comboBox1.FindStringExact(field.Extraction.ToString());
            }

            public override void SaveChanges(ExtractionField field)
            {
                field.Extraction = (ExtractionField.ExtractionType)
                    Enum.Parse(typeof(ExtractionField.ExtractionType),
                               comboBox1.SelectedItem.ToString());
            }

            public override string FormatValueForCaption()
            {
                return string.Empty;
            }
        }

    The problem is that I have "generic" controls which inherit from CustomControlBase. These are just "views" with no logic. Then I have specific controls that inherit from the generic controls. I don't have any functions or business logic in the generic controls because the specific controls should govern how data is associated with the data structure. What is the best design pattern for this?
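    For contrast, one frequently suggested shape for this kind of coupling is Model-View-Presenter: keep both the nodes and the controls as dumb views and move the per-field load/save logic into a small presenter object. This is a hypothetical sketch only; the interface name and members are invented for illustration, not taken from the post:

        // Hypothetical per-field presenter. It owns the logic that currently
        // forces a control subclass per field (FieldExtractionTypeControl),
        // letting CustomDropDownControl remain a purely visual view.
        public interface IFieldPresenter
        {
            string Caption { get; }
            string FormatValueForCaption(ExtractionField field);
            void LoadControl(CustomControlBase view, ExtractionField field);
            void SaveControl(CustomControlBase view, ExtractionField field);
        }

    A node would then hold an IFieldPresenter instead of requiring a matching control subclass, so a new field type adds one presenter class but no new UI classes.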


  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 20-26, 2013

    - by OTN ArchBeat
    Here's this week's list of the Top 10 items shared on the OTN ArchBeat Facebook Page from October 27 - November 2, 2013.

    Visualizing and Process (Twitter) Events in Real Time with Oracle Coherence | Noah Arliss
    This OTN Virtual Developer Day session explores in detail how to create a dynamic HTML5 Web application that interacts with Oracle Coherence as it’s processing events in real time, using the Avatar project and Oracle Coherence’s Live Events feature. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013, 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now!

    HTML5 Application Development with Oracle WebLogic Server | Doug Clarke
    This free OTN Virtual Developer Day session covers the support for WebSockets, RESTful data services, and JSON infrastructure available in Oracle WebLogic Server. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013, 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now!

    Video: ADF BC and REST services | Frederic Desbiens
    Spend a few minutes with Oracle ADF principal product manager Frederic Desbiens and learn how to publish ADF Business Components as RESTful web services.

    One Client Two Clusters | David Felcey
    "Sometimes its desirable to have a client connect to multiple clusters, either because the data is dispersed or for instance the clusters are in different locations for high availability," says David Felcey. David shows you how in this post, which includes a simple example.

    Exceptions Handling and Notifications in ODI | Christophe Dupupet
    Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps.

    Securing WebSocket applications on Glassfish | Pavel Bucek
    WebSocket is a key capability standardized into Java EE 7. Many developers wonder how WebSockets can be secured. One very nice characteristic of WebSocket is that it in fact completely piggybacks on HTTP. In this post Pavel Bucek demonstrates how to secure WebSocket endpoints in GlassFish using TLS/SSL.

    Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira
    Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence."

    Non-programmatic Authentication Using Login Form in JSF (For WebCenter & ADF) | JayJay Zheng
    Oracle ACE JayJay Zheng shares an approach that "avoids the programmatic authentication and works great for having a custom login page developed in WebCenter Portal integrated with OAM authentication."

    Tech Article: SOA in Real Life: Mobile Solutions
    The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go. http://pub.vitrue.com/PUxT

    The ACE Director Thing | Dr. Frank Munz
    Frank Munz finally gets around to blogging about achieving Oracle ACE Director status and shares some interesting insight into what will change—and what won't—thanks to that new status. A good, short read for those interested in learning more about the Oracle ACE program.

    Thought for the Day
    "Even if you're on the right track, you'll get run over if you just sit there."
    — Will Rogers, American humorist (November 4, 1879 – August 15, 1935)
    Source: brainyquote.com


  • Eclipse Indigo very slow on Kubuntu 12.04

    - by herom
    Hello fellow Ubuntu users! I have a really big problem with Eclipse Indigo running on Kubuntu 12.04 (32-bit) on a Dell Vostro 3500 with an Intel(R) Core(TM) i5 CPU M480 @ 2.67GHz (as cat /proc/cpuinfo reports) and 4GB RAM. cat /proc/cpuinfo brings up the following (processors 1-3 repeat the same values; only the processor field 0-3, core id 0/2/0/2, apicid 0/4/1/5, initial apicid matching apicid, and bogomips 5319.85 vs 5319.88 change):

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 37
        model name      : Intel(R) Core(TM) i5 CPU M 480 @ 2.67GHz
        stepping        : 5
        microcode       : 0x2
        cpu MHz         : 1197.000
        cache size      : 3072 KB
        physical id     : 0
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        apicid          : 0
        initial apicid  : 0
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 11
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dts tpr_shadow vnmi flexpriority ept vpid
        bogomips        : 5319.85
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    java -version brings up the following:

        java version "1.7.0_04"
        Java(TM) SE Runtime Environment (build 1.7.0_04-b20)
        Java HotSpot(TM) Server VM (build 23.0-b21, mixed mode)

    It's the Oracle Java, not OpenJDK. I'm trying to develop an Android application for GoogleTV, and Eclipse is so slow that it can't follow my typing (extreme lagging!!); this makes work almost impossible! Here is my eclipse.ini file:

        -startup
        plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.1.100.v20110505
        -product
        org.eclipse.epp.package.java.product
        --launcher.defaultAction
        openFile
        -showsplash
        org.eclipse.platform
        --launcher.XXMaxPermSize
        512m
        --launcher.defaultAction
        openFile
        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -Declipse.p2.unsignedPolicy=allow
        -Xms256m
        -Xmx512m
        -Xss4m
        -XX:PermSize=128m
        -XX:MaxPermSize=384m
        -XX:CompileThreshold=5
        -XX:MaxGCPauseMillis=10
        -XX:MaxHeapFreeRatio=70
        -XX:+CMSIncrementalPacing
        -XX:+UnlockExperimentalVMOptions
        -XX:+UseG1GC
        -XX:+UseFastAccessorMethods
        -XX:ReservedCodeCacheSize=64m
        -Dcom.sun.management.jmxremote

    Has anybody faced the same problems? Can anybody help me with this? It's really urgent, as I'm sitting here at my company and am not able to do anything productive...
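    One thing that stands out in the eclipse.ini above (an observation about standard HotSpot flags, not something from the post): -XX:+CMSIncrementalPacing belongs to the CMS collector while -XX:+UseG1GC selects G1, and -XX:CompileThreshold=5 plus -XX:MaxGCPauseMillis=10 are unusually aggressive settings. A more conservative -vmargs section to experiment with might be:

        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -Declipse.p2.unsignedPolicy=allow
        -Xms256m
        -Xmx1024m
        -Xss4m
        -XX:PermSize=128m
        -XX:MaxPermSize=384m
        -XX:ReservedCodeCacheSize=64m
        -Dcom.sun.management.jmxremote

    The -Xmx increase from 512m to 1024m is a guess that assumes enough of the 4GB of RAM is free.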


  • Making the most of next week's SharePoint 2010 developer training

    - by Eric Nelson
    [You can still register if you are free on the afternoons of the 9th to 11th – UK time] We have 50+ registrations with more coming in – which is fantastic. Please read on to make the most of the training.

    Background: We have structured the training to make sure that you can still learn lots during the three days even if you do not have SharePoint 2010 installed. Additionally, the course is based around a subset of the Channel 9 training to allow you to easily dig deeper or look again at specific areas. Which means if you have zero time between now and next Wednesday, then you are still good to go. But if you can do some pre-work, you will likely get even more out of the three days.

    Step 1: Check out the topics and resources available on-demand. Take a lap around the SharePoint 2010 Training Course on Channel 9, and download the SharePoint Developer Training Kit.

    Step 2: Use a pre-configured virtual machine which you can download (best start today – it is large!). Consider using the VM we created if you don't have access to SharePoint 2010. You will need a 64-bit host OS and a bare minimum of 4GB of RAM; 8GB is recommended. Virtual PC cannot be used with this VM – Virtual PC only supports 32-bit guests. The 2010-7a Information Worker VM gives you everything you need to develop for SharePoint 2010. Watch the video on how to use this VM, then download the VM. Remember you only need to download the "parts" for the 2010-7a VM. There are three subtly different ways of using this VM:

    - Easiest is to follow the advice of the video and get yourself a host OS of Windows Server 2008 R2 with Hyper-V and simply use the VM.
    - Alternatively you can take the VHD and create a "Boot to VHD" if you have Windows 7 Ultimate or Enterprise Edition. This works really well – especially if you are already familiar with "Boot to VHD" (this post I did will help you get started). See the sketch after this list for the core commands.
    - Or you can take the VHD and use an alternative VM tool such as VirtualBox if you have a different host OS. NB: this tends to involve some work to get everything running fine. Check out parts 1 to 3 from Rolly, and if you go with VirtualBox use an IDE controller, not SATA. SATA will blue screen. I also converted the vhd to a vmdk; I used the FREE StarWind Converter to do this whilst I was fighting blue screens – not sure it's necessary, as VirtualBox does now work with VHDs.

    Step 3: Install SharePoint 2010 on a 64-bit Windows 7 or Vista host. I haven't tried this but it is now supported. Check out MSDN.

    Final notes: I am in the process of securing a number of hosted VMs for ISVs directly managed by my team. Your Architect Evangelist will have details once I have them! Else we can sort it out on the Wednesday. Regrettably I am unable to give folks 1:1 support on any issues around Boot to VHD, 3rd party VM products etc.

    Related Links: Check you are fully plugged into the work of my team – have you done these simple steps, including joining our new LinkedIn group?
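    If the "Boot to VHD" option appeals, the core commands are short enough to keep handy. This is a generic sketch of standard Windows 7 bcdedit usage rather than anything from this post; the VHD path is a placeholder and {guid} stands for the identifier bcdedit prints:

        rem Clone the current boot entry, then point the copy at the VHD.
        bcdedit /copy {current} /d "SharePoint 2010 VM (VHD boot)"
        rem Substitute the GUID printed by the /copy command for {guid}:
        bcdedit /set {guid} device vhd=[C:]\VHDs\2010-7a.vhd
        bcdedit /set {guid} osdevice vhd=[C:]\VHDs\2010-7a.vhd
        bcdedit /set {guid} detecthal on

    On the next boot the new entry appears in the boot menu and starts Windows from inside the VHD.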


  • Indexing data from multiple tables with Oracle Text

    - by Roger Ford
    It's well known that Oracle Text indexes perform best when all the data to be indexed is combined into a single index. The query

        select * from mytable where contains (title, 'dog') > 0 or contains (body, 'cat') > 0

    will tend to perform much worse than

        select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0

    For this reason, Oracle Text provides the MULTI_COLUMN_DATASTORE, which will combine data from multiple columns into a single index. Effectively, it constructs a "virtual document" at indexing time, which might look something like:

        <title>the big dog</title>
        <body>the ginger cat smiles</body>

    This virtual document can be indexed using either AUTO_SECTION_GROUP, or by explicitly defining sections for title and body, allowing the query as expressed above. Note that we've used a column called "text" - this might have been a dummy column added to the table simply to allow us to create an index on it - or we could have created the index on either of the "real" columns - title or body. It should be noted that MULTI_COLUMN_DATASTORE doesn't automatically handle updates to columns used by it - if you create the index on the column text, but specify that columns title and body are to be indexed, you will need to arrange triggers such that the text column is updated whenever title or body are altered.

    That works fine for single tables. But what if we actually want to combine data from multiple tables? In that case there are two approaches which work well:

    1. Create a real table which contains a summary of the information, and create the index on that using the MULTI_COLUMN_DATASTORE. This is simple and effective, but it does use a lot of disk space as the information to be indexed has to be duplicated.
    2. Create our own "virtual" documents using the USER_DATASTORE. The user datastore allows us to specify a PL/SQL procedure which will be used to fetch the data to be indexed, returned in a CLOB, or occasionally in a BLOB or VARCHAR2. This PL/SQL procedure is called once for each row in the table to be indexed, and is passed the ROWID value of the current row being indexed. The actual contents of the procedure are entirely up to the owner, but it is normal to fetch data from one or more columns from database tables.

    In both cases, we still need to take care of updates - making sure that we have all the triggers necessary to update the indexed column (and, in case 1, the summary table) whenever any of the data to be indexed gets changed. I've written full examples of both these techniques, as SQL scripts to be run in the SQL*Plus tool. You will need to run them as a user who has the CTXAPP role and the CREATE DIRECTORY privilege. Part of the data to be indexed is a Microsoft Word file called "1.doc". You should create this file in Word, preferably containing the single line of text "test document". This file can be saved anywhere, but the SQL scripts need to be changed so that the "create or replace directory" command refers to the right location. In the example, I've used C:\doc.

    multi_table_indexing_1.sql: creates a summary table containing all the data, and uses multi_column_datastore (Download link / View in browser)
    multi_table_indexing_2.sql: creates "virtual" documents using a procedure as a user_datastore (Download link / View in browser)
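    For readers who haven't built one of these before, the single-table MULTI_COLUMN_DATASTORE setup described above takes only a few calls. A minimal sketch, with the preference and index names invented for illustration:

        -- Combine title and body into one "virtual document" per row.
        begin
          ctx_ddl.create_preference('my_mcds', 'MULTI_COLUMN_DATASTORE');
          ctx_ddl.set_attribute('my_mcds', 'COLUMNS', 'title, body');
        end;
        /
        -- Index the dummy "text" column; AUTO_SECTION_GROUP turns the
        -- <title> and <body> tags into WITHIN-searchable sections.
        create index mytable_idx on mytable (text)
          indextype is ctxsys.context
          parameters ('datastore my_mcds section group ctxsys.auto_section_group');

    After that, the combined query from the opening paragraph (dog WITHIN title OR cat WITHIN body) runs against the single index.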


  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
    Eighteen months into my time as a PASS Director, I’m especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community.

    Virtual Chapters
    In the first six months of 2013 VCs have hosted more than 50 webinars, offering free technical education to over 6200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their “Summer Performance Palooza”, an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VC's web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment into the future of the VCs. A few new VCs are in the planning stages, including one focused on Security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013. Be sure to ask your VC leader for the code to save $200 on Summit registration.

    24 Hours of PASS
    The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now. And we will be using the GoToWebinar platform for 24HOP also.

    Business Analytics Conference
    April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world’s growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I’m very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place.

    Global SQL Community
    Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts.

    Budgets
    The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I’m satisfied with the decisions we made and think we are investing in the right activities and programs.

    Next Up
    The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24. If you are considering running for the Board, you can visit the PASS elections site to learn more about the election process. And I encourage anyone considering running to reach out to current and past Board members to learn about what the role entails. Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.


  • saslauthd + Postfix producing password verification and authentication errors

    - by Aram Papazian
    So I'm trying to set up Postfix with SASL (the Cyrus variety preferred; I was using Dovecot earlier, but I'm switching from Dovecot to Courier, so I want to use Cyrus instead of Dovecot), but I seem to be having issues. Here are the errors I'm receiving:

        ==> mail.log <==
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure

        ==> mail.info <==
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure

        ==> mail.warn <==
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed
        Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure

    I tried:

        $ testsaslauthd -u xxxx -p xxxx
        0: OK "Success."

    So I know that the user/password I'm using is correct. I'm thinking that most likely I have a setting wrong somewhere, but can't seem to find where. Here are my files. Here is my main.cf for Postfix:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        myorigin = /etc/mailname
        # This is already done in /etc/mailname
        #myhostname = crazyinsanoman.xxxxx.com
        smtpd_banner = $myhostname ESMTP $mail_name
        #biff = no
        # appending .domain is the MUA's job.
        #append_dot_mydomain = no
        readme_directory = /usr/share/doc/postfix
        # TLS parameters
        smtpd_tls_cert_file = /etc/postfix/smtpd.cert
        smtpd_tls_key_file = /etc/postfix/smtpd.key
        smtpd_use_tls = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # Relay smtp through another server or leave blank to do it yourself
        #relayhost = smtp.yourisp.com
        # Network details; Accept connections from anywhere, and only trust this machine
        mynetworks = 127.0.0.0/8
        inet_interfaces = all
        #mynetworks_style = host
        #As we will be using virtual domains, these need to be empty
        local_recipient_maps =
        mydestination =
        # how long if undelivered before sending "delayed mail" warning update to sender
        delay_warning_time = 4h
        # will it be a permanent error or temporary
        unknown_local_recipient_reject_code = 450
        # how long to keep message on queue before return as failed.
        # some have 3 days, I have 16 days as I am backup server for some people
        # whom go on holiday with their server switched off.
        maximal_queue_lifetime = 7d
        # max and min time in seconds between retries if connection failed
        minimal_backoff_time = 1000s
        maximal_backoff_time = 8000s
        # how long to wait when servers connect before receiving rest of data
        smtp_helo_timeout = 60s
        # how many address can be used in one message.
        # effective stopper to mass spammers, accidental copy in whole address list
        # but may restrict intentional mail shots.
        smtpd_recipient_limit = 16
        # how many error before back off.
        smtpd_soft_error_limit = 3
        # how many max errors before blocking it.
        smtpd_hard_error_limit = 12
        # Requirements for the HELO statement
        smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit
        # Requirements for the sender details
        smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit
        # Requirements for the connecting server
        smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org
        # Requirement for the recipient address
        smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit
        smtpd_data_restrictions = reject_unauth_pipelining
        # require proper helo at connections
        smtpd_helo_required = yes
        # waste spammers time before rejecting them
        smtpd_delay_reject = yes
        disable_vrfy_command = yes
        # not sure of the difference of the next two
        # but they are needed for local aliasing
        alias_maps = hash:/etc/postfix/aliases
        alias_database = hash:/etc/postfix/aliases
        # this specifies where the virtual mailbox folders will be located
        virtual_mailbox_base = /var/spool/mail/vmail
        # this is for the mailbox location for each user
        virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf
        # and this is for aliases
        virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf
        # and this is for domain lookups
        virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf
        # this is how to connect to the domains (all virtual, but the option is there)
        # not used yet
        # transport_maps = mysql:/etc/postfix/mysql_transport.cf
        # Setup the uid/gid of the owner of the mail files - static:5000 allows virtual ones
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000
        inet_protocols=all
        # Cyrus SASL Support
        smtpd_sasl_path = smtpd
        smtpd_sasl_local_domain = xxxxx.com
        #######################
        ## OLD CONFIGURATION ##
        #######################
        #myorigin = /etc/mailname
        #mydestination = crazyinsanoman.xxxxx.com, localhost, localhost.localdomain
        #mailbox_size_limit = 0
        #recipient_delimiter = +
        #html_directory = /usr/share/doc/postfix/html
        message_size_limit = 30720000
        #virtual_alias_domains =
        ##virtual_alias_maps = hash:/etc/postfix/virtual
        #virtual_mailbox_base = /home/vmail
        ##luser_relay = webmaster
        #smtpd_sasl_type = dovecot
        #smtpd_sasl_path = private/auth
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        #smtpd_sasl_authenticated_header = yes
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        #virtual_create_maildirsize = yes
        #virtual_maildir_extended = yes
        #proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
        #virtual_transport = dovecot
        #dovecot_destination_recipient_limit = 1

    Here is my master.cf:

        # Postfix master process configuration file. For details on the format
        # of the file, see the master(5) manual page (command: "man 5 master").
        #
        # Do not forget to execute "postfix reload" after editing this file.
        #
        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp       inet  n  -  -  -      -  smtpd
        submission inet  n  -  -  -      -  smtpd
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #smtps     inet  n  -  -  -      -  smtpd
        #  -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #628       inet  n  -  -  -      -  qmqpd
        pickup     fifo  n  -  -  60     1  pickup
        cleanup    unix  n  -  -  -      0  cleanup
        qmgr       fifo  n  -  n  300    1  qmgr
        #qmgr      fifo  n  -  -  300    1  oqmgr
        tlsmgr     unix  -  -  -  1000?  1  tlsmgr
        rewrite    unix  -  -  -  -      -  trivial-rewrite
        bounce     unix  -  -  -  -      0  bounce
        defer      unix  -  -  -  -      0  bounce
        trace      unix  -  -  -  -      0  bounce
        verify     unix  -  -  -  -      1  verify
        flush      unix  n  -  -  1000?  0  flush
        proxymap   unix  -  -  n  -      -  proxymap
        proxywrite unix  -  -  n  -      1  proxymap
        smtp       unix  -  -  -  -      -  smtp
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay      unix  -  -  -  -      -  smtp
          -o smtp_fallback_relay=
        #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq      unix  n  -  -  -      -  showq
        error      unix  -  -  -  -      -  error
        retry      unix  -  -  -  -      -  error
        discard    unix  -  -  -  -      -  discard
        local      unix  -  n  n  -      -  local
        virtual    unix  -  n  n  -      -  virtual
        lmtp       unix  -  -  -  -      -  lmtp
        anvil      unix  -  -  -  -      1  anvil
        scache     unix  -  -  -  -      1  scache
        #
        # ====================================================================
        # Interfaces to non-Postfix software. Be sure to examine the manual
        # pages of the non-Postfix software to find out what options it wants.
        #
        # Many of the following services use the Postfix pipe(8) delivery
        # agent. See the pipe(8) man page for information about ${recipient}
        # and other message envelope options.
        # ====================================================================
        #
        # maildrop. See the Postfix MAILDROP_README file for details.
        # Also specify in main.cf: maildrop_destination_recipient_limit=1
        #
        maildrop   unix  -  n  n  -      -  pipe
          flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
        #
        # ====================================================================
        #
        # Recent Cyrus versions can use the existing "lmtp" master.cf entry.
        #
        # Specify in cyrus.conf:
        #   lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
        #
        # Specify in main.cf one or more of the following:
        #   mailbox_transport = lmtp:inet:localhost
        #   virtual_transport = lmtp:inet:localhost
        #
        # ====================================================================
        #
        # Cyrus 2.1.5 (Amos Gouaux)
        # Also specify in main.cf: cyrus_destination_recipient_limit=1
        #
        cyrus      unix  -  n  n  -      -  pipe
          user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
        #
        # ====================================================================
        # Old example of delivery via Cyrus.
        #
        #old-cyrus unix  -  n  n  -      -  pipe
        #  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
        #
        # ====================================================================
        #
        # See the Postfix UUCP_README file for configuration details.
        #
        uucp       unix  -  n  n  -      -  pipe
          flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
        #
        # Other external delivery methods.
        #
        ifmail     unix  -  n  n  -      -  pipe
          flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
        bsmtp      unix  -  n  n  -      -  pipe
          flags=Fq.
          user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
        scalemail-backend unix - n n - 2 pipe
          flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
        mailman    unix  -  n  n  -      -  pipe
          flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
        #dovecot   unix  -  n  n  -      -  pipe
        #  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}

    Here is what I'm using for /etc/postfix/sasl/smtpd.conf:

        log_level: 7
        pwcheck_method: saslauthd
        pwcheck_method: auxprop
        mech_list: PLAIN LOGIN CRAM-MD5 DIGEST-MD5
        allow_plaintext: true
        auxprop_plugin: mysql
        sql_hostnames: 127.0.0.1
        sql_user: xxxxx
        sql_passwd: xxxxx
        sql_database: maildb
        sql_select: select crypt from users where id = '%u'

    As you can see, I'm trying to use MySQL as my authentication method. The password in 'users' is set through the ENCRYPT() function. I also followed the methods found in http://www.jimmy.co.at/weblog/?p=52 in order to redo /var/spool/postfix/var/run/saslauthd, as that seems to be a lot of people's problem, but that didn't help at all. Also, here is my /etc/default/saslauthd:

        START=yes
        DESC="SASL Authentication Daemon"
        NAME="saslauthd"
        # Which authentication mechanisms should saslauthd use? (default: pam)
        #
        # Available options in this Debian package:
        # getpwent  -- use the getpwent() library function
        # kerberos5 -- use Kerberos 5
        # pam       -- use PAM
        # rimap     -- use a remote IMAP server
        # shadow    -- use the local shadow password file
        # sasldb    -- use the local sasldb database file
        # ldap      -- use LDAP (configuration is in /etc/saslauthd.conf)
        #
        # Only one option may be used at a time. See the saslauthd man page
        # for more information.
        #
        # Example: MECHANISMS="pam"
        MECHANISMS="pam"
        MECH_OPTIONS=""
        THREADS=5
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

    I had heard that changing MECHANISMS to MECHANISMS="mysql" might potentially help, but obviously it didn't, as is shown by the options listed above and also by my trying it out anyway in case the documentation was outdated. So, I'm now at a loss... I have no idea where to go from here or what steps I need to do to get this working =/ Anyone have any ideas?

    EDIT: Here is the error that is coming from auth.log ...
    I don't know if this will help at all, but here you go:

        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql auxprop plugin using mysql engine
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
        Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
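    For anyone debugging a similar setup, it can help to reproduce the failure outside of any mail client by hand-crafting the AUTH PLAIN token. This is generic SMTP testing rather than anything from the post, and the credentials and hostname below are placeholders:

        # SASL PLAIN is the base64 of: <NUL>username<NUL>password
        printf '\0user@example.com\0secret' | base64
        # Open a STARTTLS session against the submission port and authenticate:
        openssl s_client -connect mail.example.com:587 -starttls smtp
        # then type:  EHLO client.example.com
        #             AUTH PLAIN <base64 string printed above>

    A "235 Authentication successful" reply would mean the Postfix-to-SASL wiring is fine and the problem is on the client side.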


  • How do I repair the corrupted files found by sfc /scannow? "Windows Resource Protection found corrupt files but was unable to fix some of them."

    - by galacticninja
    After running chkdsk C: /F /R and finding out that my hard disk has 24 KB in bad sectors (log is posted below), I decided to run Windows 7's System File Checker utility (sfc /scannow). SFC showed the following message after I ran it: "Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.Log windir\Logs\CBS\CBS.log." Since the CBS.log file is too large, I ran findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log >"%userprofile%\Desktop\sfcdetails.txt" (as per Microsoft's KB 928228 article) to get only the log text pertaining to the corrupt files (log is also posted below). How do I troubleshoot and repair the corrupted files mentioned by sfc /scannow? My OS is Windows 7, 64-bit.

    chkdsk log:

        Checking file system on C:
        The type of the file system is NTFS.
        A disk check has been scheduled.
        Windows will now check the disk.

        CHKDSK is verifying files (stage 1 of 5)...
        936192 file records processed.
        File verification completed.
        25238 large file records processed.
        0 bad file records processed.
        4 EA records processed.
        44 reparse records processed.
        CHKDSK is verifying indexes (stage 2 of 5)...
        1051640 index entries processed.
        Index verification completed.
        0 unindexed files scanned.
        0 unindexed files recovered.
        CHKDSK is verifying security descriptors (stage 3 of 5)...
        936192 file SDs/SIDs processed.
        Cleaning up 24 unused index entries from index $SII of file 0x9.
        Cleaning up 24 unused index entries from index $SDH of file 0x9.
        Cleaning up 24 unused security descriptors.
        Security descriptor verification completed.
        57725 data files processed.
        CHKDSK is verifying Usn Journal...
        36994248 USN bytes processed.
        Usn Journal verification completed.
        CHKDSK is verifying file data (stage 4 of 5)...
        936176 files processed.
        File data verification completed.
        CHKDSK is verifying free space (stage 5 of 5)...
        306238 free clusters processed.
        Free space verification is complete.
        Adding 1 bad clusters to the Bad Clusters File.
        Correcting errors in the Volume Bitmap.
        Windows has made corrections to the file system.

        488282111 KB total disk space.
        485595420 KB in 766458 files.
        401856 KB in 57726 indexes.
        24 KB in bad sectors.
        1059863 KB in use by the system.
        65536 KB occupied by the log file.
        1224948 KB available on disk.
        4096 bytes in each allocation unit.
        122070527 total allocation units on disk.
        306237 allocation units available on disk.

        Internal Info:
        00 49 0e 00 81 93 0c 00 34 01 17 00 00 00 00 00 .I......4.......
        6b 29 00 00 2c 00 00 00 00 00 00 00 00 00 00 00 k)..,...........
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................

    sfc /scannow log (through findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log >"%userprofile%\Desktop\sfcdetails.txt")

    Note: The full log is at http://pastebin.com/raw.php?i=gTEGZmWj . I've only quoted parts of the full log below (mostly from the last part), as the full log won't fit within the character limit for questions. I've added it to serve as a preview.

        ...
        2013-12-28 19:37:50, Info CSI00000542 [SR] Beginning Verify and Repair transaction
        2013-12-28 19:37:55, Info CSI00000544 [SR] Verify complete
        2013-12-28 19:37:56, Info CSI00000545 [SR] Verifying 95 (0x000000000000005f) components
        2013-12-28 19:37:56, Info CSI00000546 [SR] Beginning Verify and Repair transaction
        2013-12-28 19:38:03, Info CSI00000548 [SR] Verify complete
        2013-12-28 19:38:03, Info CSI00000549 [SR] Repairing 43 (0x000000000000002b) components
        2013-12-28 19:38:03, Info CSI0000054a [SR] Beginning Verify and Repair transaction
        ...
2013-12-28 19:38:15, Info CSI00000730 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:62{31}]"GroupPolicy-Admin-Gpedit-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI00000733 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:30{15}]"frs-core-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI00000736 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:26{13}]"gpmgmt-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI00000739 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:74{37}]"MediaServer-ASPAdmin-Migration-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI0000073c [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:36{18}]"Ldap-Client-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI0000073f [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:38{19}]"iSNS_Service-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI00000742 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:76{38}]"MediaServer-Multicast-Migration-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI00000745 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:78{39}]"Kerberos-Key-Distribution-Center-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI00000748 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:86{43}]"GroupPolicy-CSE-SoftwareInstallation-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI0000074b [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:28{14}]"ieframe-dl.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI0000074e [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:76{38}]"GroupPolicy-Admin-Gpedit-Snapin-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI00000751 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:32{16}]"IPSec-Svc-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI00000754 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:22{11}]"HTTP-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI00000757 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:56{28}]"MediaServer-Migration-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI0000075a [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:26{13}]"GPBase-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI0000075d [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:38{19}]"IasMigPlugin-DL.man"; source file in store is also corrupted
2013-12-28 19:38:15, Info CSI00000760 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:50{25}]"International-Core-DL.man"; source file in store is also corrupted
2013-12-28 19:38:16, Info CSI00000762 [SR] Cannot repair member file [l:24{12}]"wbemdisp.dll" of Microsoft-Windows-WMI-Scripting, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
2013-12-28 19:38:16, Info CSI00000763 [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery"
2013-12-28 19:38:16, Info CSI00000766 [SR] Could not reproject corrupted file [ml:58{29},l:56{28}]"\??\C:\Windows\SysWOW64\wbem"\[l:24{12}]"wbemdisp.dll"; source file in store is also corrupted
2013-12-28 19:38:16, Info CSI00000768 [SR] Cannot repair member file [l:56{28}]"Microsoft.MediaCenter.UI.dll" of Microsoft.MediaCenter.UI, Version = 6.1.7601.17514, pA = PROCESSOR_ARCHITECTURE_MSIL (8), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
2013-12-28 19:38:16, Info CSI00000769 [SR] This component was referenced by [l:176{88}]"Microsoft-Windows-MediaCenter-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.MediaCenter"
2013-12-28 19:38:16, Info CSI0000076c [SR] Could not reproject corrupted file [ml:520{260},l:40{20}]"\??\C:\Windows\ehome"\[l:56{28}]"Microsoft.MediaCenter.UI.dll"; source file in store is also corrupted
2013-12-28 19:38:16, Info CSI0000076e [SR] Cannot repair member file [l:24{12}]"ReAgentc.exe" of Microsoft-Windows-WinRE-RecoveryTools, Version = 6.1.7601.17514, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
2013-12-28 19:38:16, Info CSI0000076f [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery"
2013-12-28 19:38:16, Info CSI00000772 [SR] Could not reproject corrupted file [ml:48{24},l:46{23}]"\??\C:\Windows\SysWOW64"\[l:24{12}]"ReAgentc.exe"; source file in store is also corrupted
2013-12-28 19:38:16, Info CSI00000774 [SR] Cannot repair member file [l:82{41}]"System.Management.Automation.dll-Help.xml" of Microsoft-Windows-PowerShell-PreLoc.Resources, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture = [l:10{5}]"en-US", VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
2013-12-28 19:38:16, Info CSI00000775 [SR] This component was referenced by [l:266{133}]"Microsoft-Windows-Client-Features-Package~31bf3856ad364e35~amd64~en-US~6.1.7601.17514.Microsoft-Windows-Client-Features-Language-Pack"
2013-12-28 19:38:16, Info CSI00000778 [SR] Could not reproject corrupted file [ml:520{260},l:104{52}]"\??\C:\Windows\System32\WindowsPowerShell\v1.0\en-US"\[l:82{41}]"System.Management.Automation.dll-Help.xml"; source file in store is also corrupted
2013-12-28 19:38:16, Info CSI0000077a [SR] Cannot repair member file [l:18{9}]"hlink.dll" of Microsoft-Windows-HLink, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
2013-12-28 19:38:16, Info CSI0000077b [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery"
2013-12-28 19:38:16, Info CSI0000077e [SR] Could not reproject corrupted file [ml:48{24},l:46{23}]"\??\C:\Windows\SysWOW64"\[l:18{9}]"hlink.dll"; source file in store is also corrupted
2013-12-28 19:38:16, Info CSI00000780 [SR] Repair complete
2013-12-28 19:38:16, Info CSI00000781 [SR] Committing transaction
2013-12-28 19:38:19, Info CSI00000785 [SR] Verify and Repair Transaction completed. All files and registry keys listed in this transaction have been successfully repaired
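The [SR] entries above are scattered through a much larger CBS.log; a quick way to pull just these lines out for review is the filter command Microsoft documents for sfc (standard Windows tools, the output path is just an example):

    findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"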


  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on Stack Overflow, but it seems to be more server/MySQL setup related than coding. The queries below all execute instantly on our development server, whereas they can take up to 2 minutes 20 seconds on production. The query execution time seems to be affected by how ambiguous the LIKE strings are. If they closely match a country that has few matches it will take less time, and if you use something like 'ge' for Germany it will take longer to execute. But this doesn't always work out like that; at times it's quite erratic. "Sending data" appears to be the culprit, but why, and what does that mean? Also, memory on production looks to be quite low (free memory)?

    Production: Intel Quad Xeon E3-1220 3.1GHz, 4GB DDR3, 2x 1TB SATA in RAID1, network speed 100Mb, Ubuntu
    Development: Intel Core i3-2100, 2C/4T, 3.10GHz, 500 GB SATA (no RAID), 4GB DDR3

    UPDATE 2: mysqltuner output:

    [prod]
    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1
    [OK] Operating on 64-bit architecture
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in MyISAM tables: 103M (Tables: 180)
    [--] Data in InnoDB tables: 491M (Tables: 19)
    [!!] Total fragmented tables: 38
    -------- Security Recommendations -------------------------------------------
    [OK] All database users have passwords assigned
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B)
    [--] Reads / Writes: 98% / 2%
    [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
    [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
    [OK] Slow queries: 0% (12K/53M)
    [OK] Highest usage of available connections: 22% (34/151)
    [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M
    [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads)
    [OK] Query cache efficiency: 20.7% (7M cached / 36M selects)
    [!!] Query cache prunes per day: 3934
    [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts)
    [!!] Joins performed without indexes: 71068
    [OK] Temporary tables created on disk: 24% (3M on disk / 13M total)
    [OK] Thread cache hit rate: 99% (690 created / 14M connections)
    [!!] Table cache hit rate: 0% (64 open / 85M opened)
    [OK] Open file limit used: 12% (128/1K)
    [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks)
    [!!] InnoDB data size / buffer pool: 491.9M/8.0M
    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Enable the slow query log to troubleshoot bad queries
        Adjust your join queries to always utilize indexes
        Increase table_cache gradually to avoid file descriptor limits
    Variables to adjust:
        query_cache_size (> 16M)
        join_buffer_size (> 128.0K, or always use indexes with joins)
        table_cache (> 64)
        innodb_buffer_pool_size (>= 491M)

    [dev]
    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1
    [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in MyISAM tables: 185M (Tables: 632)
    [--] Data in InnoDB tables: 967M (Tables: 38)
    [!!] Total fragmented tables: 73
    -------- Security Recommendations -------------------------------------------
    [OK] All database users have passwords assigned
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M)
    [--] Reads / Writes: 99% / 1%
    [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
    [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
    [OK] Slow queries: 0% (0/5K)
    [OK] Highest usage of available connections: 1% (2/151)
    [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M
    [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads)
    [OK] Query cache efficiency: 44.5% (1K cached / 2K selects)
    [OK] Query cache prunes per day: 0
    [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts)
    [OK] Temporary tables created on disk: 24% (162 on disk / 666 total)
    [OK] Thread cache hit rate: 99% (2 created / 1K connections)
    [!!] Table cache hit rate: 1% (64 open / 4K opened)
    [OK] Open file limit used: 8% (88/1K)
    [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks)
    [!!] InnoDB data size / buffer pool: 967.7M/8.0M
    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Enable the slow query log to troubleshoot bad queries
        Increase table_cache gradually to avoid file descriptor limits
    Variables to adjust:
        table_cache (> 64)
        innodb_buffer_pool_size (>= 967M)

    UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling Apache requests that development gets very few of (only myself and one other person access it), could the 4GB of RAM be getting exhausted by using the single machine for both Apache and the MySQL server?

    Production:
    sudo hdparm -tT /dev/sda
    /dev/sda:
    Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec
    Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec
    sudo hdparm -tT /dev/sdb
    /dev/sdb:
    Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec
    Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec
    Server version (MySQL + Ubuntu versions): 5.1.61-0ubuntu0.10.04.1

    Development:
    sudo hdparm -tT /dev/sda
    /dev/sda:
    Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec
    Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec
    Server version (MySQL + Ubuntu versions): 5.1.62-0ubuntu0.11.10.1

    ORIGINAL DATA: This query is NOT the query in question but is related, so I'll post it.
    SELECT f.form_question_has_answer_id
    FROM form_question_has_answer f
    INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
    INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
    INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
    INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
    INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
    WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29')
      AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
      AND f.form_question_has_answer_form_id = '174'

    The explain plan for the above query, run on both dev and production, produces the same plan:

    | id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra |
    | 1  | SIMPLE      | p2    | const  | PRIMARY | PRIMARY | 4 | const | 1 | Using index |
    | 1  | SIMPLE      | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where |
    | 1  | SIMPLE      | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index |
    | 1  | SIMPLE      | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where |
    | 1  | SIMPLE      | f2    | ref    | form_project_id | form_project_id | 4 | const | 15 | Using where |
    | 1  | SIMPLE      | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where |

    This query takes 2 minutes ~20 seconds to execute.
    The query that is ACTUALLY being run on the server is this one:

    SELECT COUNT(*) AS num_results
    FROM (SELECT f.form_question_has_answer_id
          FROM form_question_has_answer f
          INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
          INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
          INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
          INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
          INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
          WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29')
            AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
            AND f.form_question_has_answer_form_id = '174'
          GROUP BY f.form_question_has_answer_id;) dctrn_count_query;

    With explain plans (again, the same on dev and production):

    | id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra |
    | 1  | PRIMARY     | NULL  | NULL   | NULL | NULL | NULL | NULL | NULL | Select tables optimized away |
    | 2  | DERIVED     | p2    | const  | PRIMARY | PRIMARY | 4 |  | 1 | Using index |
    | 2  | DERIVED     | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 |  | 797 | Using where |
    | 2  | DERIVED     | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where |
    | 2  | DERIVED     | f2    | ref    | form_project_id | form_project_id | 4 |  | 15 | Using where |
    | 2  | DERIVED     | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where |
    | 2  | DERIVED     | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index |

    On the production server the information I have is as follows.
    Upon execution:

    +-------------+
    | num_results |
    +-------------+
    | 3           |
    +-------------+
    1 row in set (2 min 14.28 sec)

    Show profile:

    | Status | Duration |
    | starting | 0.000016 |
    | checking query cache for query | 0.000057 |
    | Opening tables | 0.004388 |
    | System lock | 0.000003 |
    | Table lock | 0.000036 |
    | init | 0.000030 |
    | optimizing | 0.000016 |
    | statistics | 0.000111 |
    | preparing | 0.000022 |
    | executing | 0.000004 |
    | Sorting result | 0.000002 |
    | Sending data | 136.213836 |
    | end | 0.000007 |
    | query end | 0.000002 |
    | freeing items | 0.004273 |
    | storing result in query cache | 0.000010 |
    | logging slow query | 0.000001 |
    | logging slow query | 0.000002 |
    | cleaning up | 0.000002 |

    On development the results are as follows.

    +-------------+
    | num_results |
    +-------------+
    | 3           |
    +-------------+
    1 row in set (0.08 sec)

    Again the profile for this query:

    | Status | Duration |
    | starting | 0.000022 |
    | checking query cache for query | 0.000148 |
    | Opening tables | 0.000025 |
    | System lock | 0.000008 |
    | Table lock | 0.000101 |
    | optimizing | 0.000035 |
    | statistics | 0.001019 |
    | preparing | 0.000047 |
    | executing | 0.000008 |
    | Sorting result | 0.000005 |
    | Sending data | 0.086565 |
    | init | 0.000015 |
    | optimizing | 0.000006 |
    | executing | 0.000020 |
    | end | 0.000004 |
    | query end | 0.000004 |
    | freeing items | 0.000028 |
    | storing result in query cache | 0.000005 |
    | removing tmp table | 0.000008 |
    | closing tables | 0.000008 |
    | logging slow query | 0.000002 |
    | cleaning up | 0.000005 |

    If I remove the user and/or project inner joins, the query time is reduced to 30s.

    Last bit of information I have: the MySQL server and Apache are on the same box; there is only one box for production.

    Production output from top, before and after:

    top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78
    Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers
    Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached

    top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87
    Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie
    Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers
    Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached

    But this isn't a good representation of production's normal status, so here is a grab of it from today, outside of executing the queries:

    top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76
    Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie
    Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers
    Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached

    Development: this one doesn't change during or after.
    top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06
    Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers
    Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached
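    Given the tuner's numbers, a minimal my.cnf sketch addressing its recommendations might look like the following (the values are illustrative assumptions, not measured tunings; on MySQL 5.1, innodb_buffer_pool_size only takes effect after a server restart):

        [mysqld]
        # InnoDB data is ~492M on production but the pool is only 8M,
        # so almost every InnoDB read goes to disk.
        innodb_buffer_pool_size = 512M
        # Table cache hit rate is 0% (64 open / 85M opened).
        table_cache = 512
        # The query cache is pruning ~4K entries per day.
        query_cache_size = 32M
        # 71068 joins performed without indexes; raising this is a stopgap,
        # adding the missing indexes is the real fix.
        join_buffer_size = 256K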


  • DataContractSerializer and deserializing web service response types

    - by matra
    Hi, I am calling web services using a WCF-generated service reference on the client. I have saved XML responses received from the test service to disk (without the SOAP envelope and body tags). I would like to load them from disk and create objects from them. Let's take the following method from my web service:

        SomeMethodResponse SomeMethod(SomeMethodRequest req)

    I manually (through SOAP UI) saved the response to a file. Sample response:

        <SomeMethodResponse xmlns="http://myNamespace">
          <SomeMember1>value</SomeMember1>
        </SomeMethodResponse>

    Then I try to deserialize the object from the file using:

        DataContractSerializer dcs = new DataContractSerializer(typeof(SomeMethodResponse));

    This fails - the serializer complains with an error that it is expecting an element in namespace 'http://schemas.datacontract.org/2004/07', but found an element in 'http://myNamespace'. Question: why does the DataContractSerializer not use the namespace that is declared on the SomeMethodResponse type with XmlTypeAttribute(Namespace="http://myNamespace")? I can work around this by explicitly providing the namespace and the root element to the DataContractSerializer constructor. But then it fails with a message similar to: Error in line X position Y (last line of the XML document). 'EndElement' 'SomeMethodResponse' from namespace 'http://myNamespace' is not expected. Expecting element 'someNameField'. SomeName is an element in the XSD that the web service is using. It is also a property on the SomeMethodResponse type, backed by a private field called someNameField. It looks like DataContractSerializer is trying to deserialize the fields in addition to the properties. How can I deserialize XML that I have saved to disk and get back an object of the same type that SomeMethod returns? Thanks, Matra
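    For what it's worth, proxy types decorated with XmlTypeAttribute are XML-serializer-format contracts rather than data contracts, so a sketch along these lines (the type name is from the question, the file path is hypothetical) may round-trip the saved response where DataContractSerializer cannot:

        using System.IO;
        using System.Xml.Serialization;

        // XmlSerializer honours [XmlType(Namespace = "http://myNamespace")]
        // and serializes public properties, not private backing fields.
        var serializer = new XmlSerializer(typeof(SomeMethodResponse));
        using (var stream = File.OpenRead(@"C:\temp\SomeMethodResponse.xml")) // hypothetical path
        {
            var response = (SomeMethodResponse)serializer.Deserialize(stream);
        }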


  • IIS HTTP Error 403.1 - Forbidden: Execute access is denied

    - by coxymla
    I have an ASP.NET 1.1 application running on IIS 6 / Windows Server 2003. It's our application, but we're trying to replicate a customer's installation exactly, so the app folder has been copied entirely from their production server onto our test machine, and then we've created the Virtual Directory and Web Application for IIS manually. The problem I have is that when we access the app, we get the standard IIS security error message:

        The page cannot be displayed
        You have attempted to execute a CGI, ISAPI, or other executable program from a directory that does not allow programs to be executed.
        Please try the following:
        • Contact the Web site administrator if you believe this directory should allow execute access.
        HTTP Error 403.1 - Forbidden: Execute access is denied.
        Internet Information Services (IIS)

    Now this is pretty standard, except as far as I can see it's not anything so simple. I have checked:

    - IIS user has read access to the directory
    - IIS user and Network Service users have read/write access to the Temporary ASP.NET Files folder
    - Virtual directory is set to the correct version of ASP.NET
    - ASP.NET 1.1 Web Service Extension is allowed
    - Virtual directory has the correct mappings of file extensions and all verbs to the ASP.NET 1.1 DLL
    - Virtual directory properties allow Scripts and Executables to be run
    - Anonymous access is turned on and the username and password are correct

    What am I missing?
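    One item not on that checklist, offered here as a hedged suggestion rather than something from the question: re-registering the ASP.NET 1.1 script maps, since an application copied between machines can point at a stale or broken script map even when the extension list looks correct. The stock tool for that is:

        %windir%\Microsoft.NET\Framework\v1.1.4322\aspnet_regiis.exe -i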


  • Maintaining ISAPI Rewrite Path with the ASP.NET tilde (~)

    - by Adam
    My team is upgrading from ASP.NET 3.5 to ASP.NET 4.0. We are currently using Helicon ISAPI Rewrite to map http://localhost/<account-name>/default.aspx to http://localhost/<virtual-directory>/default.aspx?AccountName=<account-name> where <account-name> is a query string variable and <virtual-directory> is a virtual directory (naturally). Before the upgrade the tilde (~) resolved to http://localhost/<account-name>/... (which I want it to do) and after the upgrade the tilde resolves to http://localhost/<virtual-directory>/... which results in an error because the <account-name> query string is required. I'd like to avoid going down the road of replacing everything with relative paths because there are several features in our system that use the entire URL instead of just the relative path. For what it's worth I'm using IIS7 in Windows 7, Visual Studio 2010 with ASP.NET 4.0 and the 64 bit Helicon ISAPI Rewrite. If I switch back to the ASP.NET 3.5 version then it still works fine (leading me to believe nothing changed in IIS unless it's within the 4.0 app pool - when I switch back and forth between 3.5 and 4.0 I have to change the app pool in IIS). Any ideas? Thanks in advance!
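    ISAPI_Rewrite 3 uses Apache mod_rewrite-style syntax, so the mapping described above presumably looks something like the sketch below (the virtual directory name /myapp is a placeholder for the question's <virtual-directory>; the exact rule is an assumption, not taken from the question):

        RewriteEngine on
        # Map /<account-name>/default.aspx onto the real virtual directory,
        # carrying the account name as a query-string parameter.
        RewriteRule ^/([^/]+)/default\.aspx$ /myapp/default.aspx?AccountName=$1 [NC,L]

    If the tilde now resolves against the rewritten (internal) path under 4.0, comparing Request.RawUrl with Request.Url on both framework versions should show at which stage the URL is being swapped.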


  • Why am I getting a Heap Corruption Error?

    - by vaidya.atul
    I am new to C++. I am getting a HEAP CORRUPTION error. Any help will be highly appreciated. Below is my code:

        #include <string>
        #include <map>
        using namespace std;

        class CEntity
        {
        public:  // access specifier added: the derived class must reach these members
            CEntity(string section1, string section2);
            CEntity();
            virtual ~CEntity();
            // pure virtual function
            virtual CEntity* create() const = 0;
            // some member variables
        };

    I derive CLine from CEntity as below:

        class CLine : public CEntity
        {
        public:
            // Again some variables...
            // Constructor and destructor
            CLine(string section1, string section2);
            CLine();
            ~CLine();
            CLine* create() const;  // was declared 'Create'; the case must match the base's 'create' to override it
        };

        // CLine implementation
        CLine::CLine(string section1, string section2) : CEntity(section1, section2) {}
        CLine::CLine() {}  // the original "CLine::CLine();" is not a valid definition
        CLine* CLine::create() const { return new CLine(); }

    I have another class CReader which uses CLine objects and populates them in a multimap as below:

        class CReader
        {
        public:
            CReader();
            ~CReader();
            multimap<int, CEntity*> m_data_vs_entity;
        };

        // CReader implementation
        CReader::CReader()
        {
            m_data_vs_entity.clear();
        }

        CReader::~CReader()
        {
            multimap<int, CEntity*>::iterator iter;
            for (iter = m_data_vs_entity.begin(); iter != m_data_vs_entity.end(); ++iter)
            {
                CEntity* current_entity = iter->second;
                if (current_entity)
                    delete current_entity;
            }
            m_data_vs_entity.clear();
        }

    I am reading the data from a file and then populating the CLine class. The map gets populated in a function of the CReader class. Since CEntity has a virtual destructor, I hope the piece of code in CReader's destructor should work. In fact, it does work for small files, but I get a HEAP CORRUPTION error while working with bigger files. If there is something fundamentally wrong, then please help me find it, as I have been scratching my head for quite some time now. Thanks in advance and awaiting reply, Regards, Atul
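    One cause that fits "works for small files, fails for bigger ones" and is not visible in the snippet (so treat this as an assumption to verify): CReader owns raw CEntity* pointers but is copyable, and copying it anywhere - into a container, or by value into a function - makes two destructors delete the same pointers, which corrupts the heap. A C++98-style rule-of-three sketch that rules this out:

        #include <map>

        class CEntity;  // as defined in the question

        class CReader
        {
        public:
            CReader() {}
            ~CReader();  // deletes every stored CEntity* exactly once
        private:
            // Declared but never defined: any accidental copy now fails to
            // link instead of silently double-deleting the stored pointers.
            CReader(const CReader&);
            CReader& operator=(const CReader&);
            std::multimap<int, CEntity*> m_data_vs_entity;
        };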


  • Visual Studio 2010 setup project problem.

    - by Guru
    Hi there, I've made an application that uses .NET Framework 3.5 SP1 and SQL Server 2008 Express. The application is fine, and now I'm going to make a setup project for it. When I first built my setup it was fine, as none of the prerequisites were included in the setup. But I want my setup to install .NET 3.5 SP1 and SQL Server 2008 Express as well. So for this I've changed the option in the setup project's properties from "Download prerequisites from the following location" to "Download prerequisites from the same location as my application". In addition to that, I've also checked the options above it, like .NET 3.5 SP1 and SQL Server 2008 Express etc. After doing all this I built my project again. This time I'm getting 57 errors:

        Error 1 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\aspnet.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup
        Error 2 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\aspnet_64.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup
        Error 3 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\clr.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup
        Error 4 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\clr_64.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup

    As the question would become too large, I'm pasting only the first few errors, but there are 57 in total. Please help me. Thanks in advance, Guru


  • BindAttribute, Exclude nested properties for complex types

    - by David Board
    I have a 'Stream' model:

        public class Stream
        {
            public int ID { get; set; }

            [Required]
            [StringLength(50, ErrorMessage = "Stream name cannot be longer than 50 characters.")]
            public string Name { get; set; }

            [Required]
            [DataType(DataType.Url)]
            public string URL { get; set; }

            [Required]
            [Display(Name = "Service")]
            public int ServiceID { get; set; }

            public virtual Service Service { get; set; }
            public virtual ICollection<Event> Events { get; set; }
            public virtual ICollection<Monitor> Monitors { get; set; }
            public virtual ICollection<AlertRule> AlertRules { get; set; }
        }

    For the 'create' view for this model, I have made a view model to pass some additional information to the view:

        public class StreamCreateVM
        {
            public Stream Stream { get; set; }
            public SelectList ServicesList { get; set; }
            public int SelectedService { get; set; }
        }

    Here is my create post action:

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Create([Bind(Include="Stream, Stream.Name, Stream.ServiceID, SelectedService")] StreamCreateVM viewModel)
        {
            if (ModelState.IsValid)
            {
                db.Streams.Add(viewModel.Stream);
                db.SaveChanges();
                return RedirectToAction("Index", "Service", new { id = viewModel.Stream.ServiceID });
            }
            return View(viewModel);
        }

    Now, this all works, apart from the [Bind(Include="Stream, Stream.Name, Stream.ServiceID, SelectedService")] bit. I can't seem to Include or Exclude properties within a nested object.
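    One workaround to sketch here (hedged: Bind's Include list matches top-level property names on the bound parameter, so dotted paths like "Stream.Name" are not honoured) is to bind the nested object as its own parameter with a Prefix, applying Include directly to it:

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Create([Bind(Prefix = "Stream", Include = "Name, ServiceID")] Stream stream,
                                   int selectedService)
        {
            if (ModelState.IsValid)
            {
                db.Streams.Add(stream);
                db.SaveChanges();
                return RedirectToAction("Index", "Service", new { id = stream.ServiceID });
            }
            // Rebuild the view model (ServicesList etc.) before redisplaying.
            return View();
        }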


  • When I add a database table to a DBML file via LINQ to SQL, I get a slew of compiler errors.

    - by Zian Choy
    Whenever I add a certain table to a DBML file via LINQ to SQL, I get 102 errors in my VB.NET project. Some of the errors:

        Error 1 Attribute 'TableAttribute' cannot be applied multiple times. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 74 2 EMS Reality Check
        Error 2 'emptyChangingEventArgs' is already declared as 'Private Shared emptyChangingEventArgs As System.ComponentModel.PropertyChangingEventArgs' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 78 17 EMS Reality Check
        Error 3 '_GroupID' is already declared as 'Private _GroupID As Integer' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 80 10 EMS Reality Check
        Error 4 '_ID' is already declared as 'Private _ID As Integer' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 82 10 EMS Reality Check

    Any suggestions for getting the table to work with LINQ to SQL will be welcomed. The table's properties: Group ID, ID (Primary Key), Contact, Title, UseGroupAddress, InternationalFormat, Address1, Address2, City, State, ZipCode, Country, Phone, Fax, EMailAddress, Notes, DateAdded, AddedBy, DateChanged, ChangedBy, Active, ExternalReference, ChangeCounter, PhoneLabel, FaxLabel


  • Extend base type and automatically update audit information on Entity

    - by Nix
    I have an entity model that has audit information on every table (50+ tables): CreateDate, CreateUser, UpdateDate, UpdateUser. Currently we are programmatically updating the audit information. Ex:

        if (changed)
        {
            entity.UpdatedOn = DateTime.Now;
            entity.UpdatedBy = Environment.UserName;
            context.SaveChanges();
        }

    But I am looking for a more automated solution. During save changes, if an entity is created/updated I would like to automatically update these fields before sending them to the database for storage. Any suggestion on how I could do this? I would prefer not to do any reflection, so using a text template is not out of the question. A solution has been proposed to override SaveChanges and do it there, but in order to achieve this I would either have to use reflection (which I don't want to do) or derive a base class. Assuming I go down this route, how would I achieve this? For example:

        EXAMPLE_DB_TABLE
        CODE
        NAME
        --Audit columns
        CREATE_DATE
        CREATE_USER
        UPDATE_DATE
        UPDATE_USER

    And if I create a base class:

        public abstract class IUpdatable
        {
            // note: the original post had "{ set; }" only, which does not compile
            public virtual DateTime CreateDate { get; set; }
            public virtual string CreateUser { get; set; }
            public virtual DateTime UpdateDate { get; set; }
            public virtual string UpdateUser { get; set; }
        }

    The end goal is to be able to do something like...

        public override void SaveChanges()
        {
            // Go through the state manager and update audit information:
            // FOREACH changed entity in the state manager
            //   if (entity is IUpdatable) {
            //     if state is created... update create audit
            //     if state is updated... update update audit
            //   }
        }

    But I am not sure how I would go about generating the code that would extend the interface.
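    A sketch of that override, assuming an EF4 ObjectContext-based model and the base class above (the ObjectStateManager calls are the stock EF4 API; wiring IUpdatable onto each generated entity would still need to happen via partial classes or the T4 template):

        public override int SaveChanges(SaveOptions options)
        {
            var entries = ObjectStateManager.GetObjectStateEntries(
                EntityState.Added | EntityState.Modified);

            foreach (var entry in entries)
            {
                // Relationship entries have a null Entity and are skipped here.
                var auditable = entry.Entity as IUpdatable;
                if (auditable == null)
                    continue;

                if (entry.State == EntityState.Added)
                {
                    auditable.CreateDate = DateTime.Now;
                    auditable.CreateUser = Environment.UserName;
                }

                // Both inserts and updates stamp the "last touched" audit pair.
                auditable.UpdateDate = DateTime.Now;
                auditable.UpdateUser = Environment.UserName;
            }

            return base.SaveChanges(options);
        }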


  • IoC & Interfaces Best Practices

    - by n8wrl
    I'm experimenting with IoC on my way to TDD by fiddling with an existing project. In a nutshell, my question is this: what are the best practices around IoC when public and non-public methods are of interest? There are two classes:

        public abstract class ThisThingBase
        {
            public virtual void Method1() {}
            public virtual void Method2() {}

            public ThatThing GetThat()
            {
                return new ThatThing(this);
            }

            internal virtual void Method3() {}
            internal virtual void Method4() {}
        }

        public class ThatThing
        {
            public ThatThing(ThisThingBase thing)
            {
                m_thing = thing;
            }
            ...
        }

    ThatThing does some stuff using its ThisThingBase reference to call methods that are often overridden by descendants of ThisThingBase. Method1 and Method2 are public. Method3 and Method4 are internal and only used by ThatThings. I would like to test ThatThing without ThisThing and vice versa. Studying up on IoC, my first thought was that I should define an IThing interface, implement it in ThisThingBase and pass it to the ThatThing constructor. IThing would be the public interface clients could call, but it doesn't include Method3 or Method4, which ThatThing also needs. Should I define a second interface - IThingInternal maybe - for those two methods and pass BOTH interfaces to ThatThing?
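    A sketch of the two-interface split (the names are the question's own; having the internal contract extend the public one, so only a single reference is passed, is an assumption about intent rather than anything established in the question):

        public interface IThing
        {
            void Method1();
            void Method2();
        }

        // Internal contract extends the public one, so ThatThing needs only a
        // single reference while outside clients still see just IThing.
        internal interface IThingInternal : IThing
        {
            void Method3();
            void Method4();
        }

        public abstract class ThisThingBase : IThingInternal
        {
            public virtual void Method1() { }
            public virtual void Method2() { }

            // Explicit implementations forward to the internal virtuals so
            // derived classes keep overriding Method3/Method4 as before.
            void IThingInternal.Method3() { Method3(); }
            void IThingInternal.Method4() { Method4(); }

            internal virtual void Method3() { }
            internal virtual void Method4() { }
        }

        public class ThatThing
        {
            private readonly IThingInternal m_thing;

            // Internal because the parameter type is internal; ThatThing
            // instances come from ThisThingBase.GetThat() in this assembly.
            internal ThatThing(IThingInternal thing) { m_thing = thing; }
        }

    Tests can then hand ThatThing a fake IThingInternal, while external callers of the assembly only ever see IThing.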


  • Out of Core Implementation of a Quadtree

    - by Nima
    Hi, I am trying to build a quadtree data structure (or let's just say a tree) in secondary memory (hard disk). I have a C++ program to do so, and I use fopen to create the files. Also, I am using tesseral coding to store each cell in a file named with its corresponding code, to store it on the disk in one directory. The problem is that after creating about 1,100 files, fopen just returns NULL and stops creating new files. I can create further files manually in that directory, but using C++ it cannot create any further files. I know about the max inode limit on the ext3 filesystem, which is (from Wikipedia) 32,000, but mine is way less than that; also note that I can create files manually on the disk, just not through fopen. Also, I would really appreciate any idea regarding the best way to store a very dynamic quadtree on disk (I need the nodes to be in separate files, and the quadtree might have a depth of 50). Using nested directories is one idea, but I think it will slow down performance because of following the links on the filesystem to access the file. Thanks, Nima
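    Since the failure comes at roughly 1,100 files, it is worth checking errno before suspecting the filesystem: if the FILE* handles are never fclose()d, the per-process descriptor limit (commonly 1024 on Linux; see ulimit -n) runs out in exactly this range, while manual file creation from the shell still works. A minimal sketch of that check (function name and signature are illustrative):

        #include <cstdio>
        #include <cerrno>
        #include <cstring>

        // Writes one node and releases the descriptor immediately.
        bool write_node(const char* path, const void* buf, size_t len)
        {
            FILE* f = std::fopen(path, "wb");
            if (!f) {
                // EMFILE ("Too many open files") indicates a descriptor leak,
                // not an inode or disk-space problem.
                std::fprintf(stderr, "fopen(%s): %s\n", path, std::strerror(errno));
                return false;
            }
            std::fwrite(buf, 1, len, f);
            std::fclose(f);
            return true;
        }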


  • Problem using custom HttpHandler to process requests for both .aspx and non-extension pages in IIS7

    - by Noel
    I am trying to process both ".aspx" and non-extension page requests (i.e. both contact.aspx and /contact/) using a custom HttpHandler in IIS7. My handler works just fine in either one case or the other, but as soon as I try to process both cases, it only works for one. Please see the Handlers snippet from my web.config below. If I keep only the mapping to "*.aspx", then all .aspx requests are processed correctly, but obviously extensionless requests won't work:

        <add name="AllPages.ASPX" path="*.aspx" verb="*" type="Test.PageHandlerFactory, Test" preCondition="" />

    If I change the mapping to "*", then all extensionless requests are processed correctly, but ".aspx" requests that should still be handled by this handler stop working. Note that I added the StaticFiles entry in order to process files that are on disk, like images, css, js, etc.:

        <add name="WebResource" path="WebResource.axd" verb="GET" type="System.Web.Handlers.AssemblyResourceLoader" />
        <add name="StaticFiles" verb="GET,HEAD" path="*.*" type="System.Web.StaticFileHandler" resourceType="File" />
        <add name="AllPages" path="*" verb="*" type="Test.PageHandlerFactory, Test" preCondition="" />

    The crazy thing is that when I load an ".aspx" request (with the 2nd configuration shown), IIS7 gives a 404 not found error. The error also says that the request is processed by the StaticFiles handler. But I made sure to add resourceType="File" to the StaticFileHandler in order to avoid this. According to MS, this means the request is only for "physical files on disk". Am I misreading/interpreting the "on disk" part? My .aspx file isn't on disk; that's why I want to use the handler in the first place.
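    For what it's worth, a plausible combined configuration (handler names and the factory type are the question's own; the ordering logic - IIS7 walks the list top-down and uses the first match - is the assumption being tested here) keeps the specific *.aspx mapping above both catch-alls:

        <handlers>
          <add name="WebResource" path="WebResource.axd" verb="GET" type="System.Web.Handlers.AssemblyResourceLoader" />
          <!-- Specific first: .aspx requests are claimed here and never reach the static-file entry. -->
          <add name="AllPages.ASPX" path="*.aspx" verb="*" type="Test.PageHandlerFactory, Test" />
          <!-- Real files on disk (images, css, js). -->
          <add name="StaticFiles" path="*.*" verb="GET,HEAD" type="System.Web.StaticFileHandler" resourceType="File" />
          <!-- Everything else, including extensionless /contact/. -->
          <add name="AllPages" path="*" verb="*" type="Test.PageHandlerFactory, Test" />
        </handlers>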


  • Why is the Finalize method not allowed to be overridden?

    - by somaraj
    I am new to .NET and I am confused by the destructor mechanism in C#. Please clarify. In C#, destructors are converted to Finalize methods by the compiler. If we try to override Finalize directly (instead of using a destructor), we get an error:

        Error 2 Do not override object.Finalize. Instead, provide a destructor.

    But it seems that the Object class implementation in mscorlib.dll has Finalize defined as protected override void Finalize() {}, so why can't we override it? That's what virtual functions are for. Why is the design like that? Is it to be consistent with the C++ destructor concept? Also, when we go to the definition of the Object class, there is no mention of the Finalize method, so how does the mscorlib.dll definition show the Finalize function? Does it mean that the default destructor is converted to a Finalize method?

        public class Object
        {
            public Object();
            public virtual bool Equals(object obj);
            public static bool Equals(object objA, object objB);
            public virtual int GetHashCode();
            public Type GetType();
            protected object MemberwiseClone();
            public static bool ReferenceEquals(object objA, object objB);
            public virtual string ToString();
        }
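    A sketch of the relationship in code (this expansion is how the C# language specification describes destructors):

        class Resource
        {
            // C# destructor syntax. The compiler emits, in effect:
            //
            //   protected override void Finalize()
            //   {
            //       try { /* destructor body */ }
            //       finally { base.Finalize(); }
            //   }
            //
            // which is why overriding Finalize directly is rejected: the
            // language reserves it so the chained base call cannot be lost.
            ~Resource()
            {
                // clean up unmanaged state here
            }
        }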


  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things), but I am still unsure if I understand it correctly. The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above-mentioned MSDN article states:

        The pages in the data chain and the rows in them are ordered on the value of the clustered index key.

    And Using Clustered Indexes, for example, states:

        For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted.

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no? Or is it rather (as I suspect) that there are two scenarios, depending on how "full" the first data page is. A) If the page has enough free space to accommodate the record, it is placed into the existing data page and data might be (physically) reordered within that page. B) If the page does not have enough free space for the record, a new data page is created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-tree? This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page), but not to the pages residing on consecutive blocks on the physical hard drive. The data pages are then just linked together in the correct order. Or, formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read data pages sequentially (following the links), but these pages are not (necessarily) block-wise in sequence on disk (so the disk head has to move "randomly"). How close am I? :)
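    A small experiment makes the page-level behaviour visible instead of leaving it to reasoning (a sketch against a scratch database; the table and column names are hypothetical): fill a clustered table, insert a low key, and inspect the physical stats:

        -- Clustered on Id; wide fixed-size rows so pages fill up quickly.
        CREATE TABLE dbo.Demo (
            Id      INT NOT NULL PRIMARY KEY CLUSTERED,
            Payload CHAR(2000) NOT NULL DEFAULT ('x')
        );

        -- Keys 2..2047 first, then the "low" key afterwards.
        INSERT dbo.Demo (Id)
        SELECT number FROM master..spt_values WHERE type = 'P' AND number > 1;
        INSERT dbo.Demo (Id) VALUES (1);

        -- A split shows up as page_count/fragmentation changes at the leaf
        -- level, not as every row in the table moving.
        SELECT index_level, page_count, avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Demo'), NULL, NULL, 'DETAILED');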


  • C++ LNK2019 error with constructors and destructors in derived classes

    - by BLH
    I have two classes, one inherited from the other. When I compile, I get the following errors:

        1>Entity.obj : error LNK2019: unresolved external symbol "public: __thiscall Utility::Parsables::Base::Base(void)" (??0Base@Parsables@Utility@@QAE@XZ) referenced in function "public: __thiscall Utility::Parsables::Entity::Entity(void)" (??0Entity@Parsables@Utility@@QAE@XZ)
        1>Entity.obj : error LNK2019: unresolved external symbol "public: virtual __thiscall Utility::Parsables::Base::~Base(void)" (??1Base@Parsables@Utility@@UAE@XZ) referenced in function "public: virtual __thiscall Utility::Parsables::Entity::~Entity(void)" (??1Entity@Parsables@Utility@@UAE@XZ)
        1>D:\Programming\Projects\Caffeine\Debug\Caffeine.exe : fatal error LNK1120: 2 unresolved externals

    I really can't figure out what's going on... can anyone see what I'm doing wrong? I'm using Visual C++ Express 2008. Here are the files.

    "include/Utility/Parsables/Base.hpp"

        #ifndef CAFFEINE_UTILITY_PARSABLES_BASE_HPP
        #define CAFFEINE_UTILITY_PARSABLES_BASE_HPP

        namespace Utility
        {
          namespace Parsables
          {
            class Base
            {
            public:
              Base( void );
              virtual ~Base( void );
            };
          }
        }

        #endif //CAFFEINE_UTILITY_PARSABLES_BASE_HPP

    "src/Utility/Parsables/Base.cpp"

        #include "Utility/Parsables/Base.hpp"

        namespace Utility
        {
          namespace Parsables
          {
            Base::Base( void )
            {
            }

            Base::~Base( void )
            {
            }
          }
        }

    "include/Utility/Parsables/Entity.hpp"

        #ifndef CAFFEINE_UTILITY_PARSABLES_ENTITY_HPP
        #define CAFFEINE_UTILITY_PARSABLES_ENTITY_HPP

        #include "Utility/Parsables/Base.hpp"

        namespace Utility
        {
          namespace Parsables
          {
            class Entity : public Base
            {
            public:
              Entity( void );
              virtual ~Entity( void );
            };
          }
        }

        #endif //CAFFEINE_UTILITY_PARSABLES_ENTITY_HPP

    "src/Utility/Parsables/Entity.cpp"

        #include "Utility/Parsables/Entity.hpp"

        namespace Utility
        {
          namespace Parsables
          {
            Entity::Entity( void )
            {
            }

            Entity::~Entity( void )
            {
            }
          }
        }

