Search Results

Search found 2288 results on 92 pages for 'bugs bugs'.


  • BIDS Helper 1.6 Beta Release (now with SQL 2012 support!)

    - by Darren Gosbell
    The beta for BIDS Helper 1.6 was just released. We have not updated the version notification just yet, as we would like to get some feedback on people's experiences with the SQL 2012 version. So if you are using SQL 2012, go grab it and let us know how you go (you can post a comment on this blog post or on the BIDS Helper site itself). This is the first release that supports SQL 2012 and consequently also the first release that runs in Visual Studio 2010. A big thanks to Greg Galloway for doing the bulk of the work on this release. Please note that if you are doing an xcopy deploy, you will need to unblock the files you download or you will get a cryptic error message. This appears to be caused by a security update to either Visual Studio or the .Net framework; the xcopy deploy instructions have been updated to show you how to do this. Below are the notes from the release page.
    ======
    This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following Analysis Services Tabular features are new:
    - Smart Diff
    - Tabular Actions Editor
    - Tabular HideMemberIf
    - Tabular Pre-Build
    Fixes and updates:
    - The Unused Datasets feature for Reporting Services now accounts for new features in Reporting Services 2008 R2 (like Lookups) and new features in Reporting Services 2012
    - SSIS: emit an informational message when a variable has an expression defined and EvaluateAsExpression = False
    - SSAS: roles report points to wrong server
    - SSIS: Variable Copy / Move broken in v1.5
    - "Unused DataSets Report" not showing up in Context menu on VS2005 if Solution Folders used
    - SSAS Tabular: Create a UI for managing actions
    - SSAS Tabular: Smart Diff improvements for new schema and Tabular models
    - SSIS: Copy/Move Variable erroring due to custom Control Flow item icon
    - SSIS Performance Visualization: index out of range
    - Fixed bugs in AggManager when aggregation design IDs don't match names
    The exe downloads are a self-extracting installer; the zip downloads allow for an xcopy deploy. Make sure to note the updated xcopy deploy instructions for SQL Server 2012.

    Read the article

  • New ZFS Encryption features in Solaris 11.1

    - by darrenm
    Solaris 11.1 brings a few small but significant improvements to ZFS dataset encryption. There is a new read-only property 'keychangedate' that shows the date and time of the last wrapping key change (basically the last time 'zfs key -c' was run on the dataset); this is similar to the 'rekeydate' property that shows the last time we added a new data encryption key.
    $ zfs get creation,keychangedate,rekeydate rpool/export/home/bob
    NAME                   PROPERTY       VALUE                  SOURCE
    rpool/export/home/bob  creation       Mon Mar 21 11:05 2011  -
    rpool/export/home/bob  keychangedate  Fri Oct 26 11:50 2012  local
    rpool/export/home/bob  rekeydate      Tue Oct 30  9:53 2012  local
    The above example shows that we have both changed the wrapping key and added new data encryption keys since the filesystem was initially created. If we haven't changed a wrapping key then it will be the same as the creation date. It should be obvious, but for filesystems that were created prior to Solaris 11.1 we don't have this data, so it will be displayed as '-' instead.
    Another change that I made was to relax the restriction that the size of the wrapping key needed to match the size of the data encryption key (i.e. the size given in the encryption property). In Solaris 11 Express and Solaris 11, if you set encryption=aes-256-ccm we required that the wrapping key be 256 bits in length. This restriction was unnecessary and made it impossible to select encryption property values with key lengths 128 and 192 when the wrapping key was stored in the Oracle Key Manager. This is because currently the Oracle Key Manager stores AES 256 bit keys only. Now with Solaris 11.1 this restriction has been removed.
    There is still one case where the wrapping key size and data encryption key size will always match, and that is where the keysource property sets the format to be 'passphrase': since this is a key generated internally to libzfs, to preserve compatibility on upgrade from older releases the code will always generate a wrapping key (using PKCS#5 PBKDF2 as before) that matches the key length size of the encryption property.
    The pam_zfs_key module has been updated so that it allows you to specify encryption=off.
    There were also some bugs fixed, including not attempting to load keys for datasets that are delegated to zones, and some other fixes to error paths to ensure that we could support Zones on Shared Storage where all the datasets in the ZFS pool are encrypted, which I discussed in my previous blog entry.
    If there are features you would like to see for ZFS encryption please let me know (direct email or comments on this blog are fine, or if you have a support contract, have your support rep log an enhancement request).
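    As a concrete (and partly hypothetical) sketch of what the relaxed rule now allows, the dataset below uses 128-bit data encryption keys wrapped by a raw 256-bit key; the dataset name and key file path are made up for illustration:
    # generate a 256-bit raw key in a file
    pktool genkey keystore=file outkey=/media/stick/mykey keytype=aes keylen=256
    # data encryption keys are 128-bit, the wrapping key is 256-bit
    zfs create -o encryption=aes-128-ccm -o keysource=raw,file:///media/stick/mykey rpool/export/home/secure
    # change the wrapping key later and confirm keychangedate moved
    zfs key -c rpool/export/home/secure
    zfs get keychangedate rpool/export/home/secure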

    Read the article

  • How can I diagnose/debug "maximum number of clients reached" X errors?

    - by jmtd
    Hi, I'm hitting a problem whereby X prevents processes from creating windows, uttering something like the following into ~/.xsession-errors:
    cannot open display: :0.0
    Maximum number of clients reached
    Searching around, there are lots of examples of people facing this problem, and sometimes people identify which program they are running is using up all the client slots. See e.g. LP 70872 (Firefox), LP 263211 (gnome-screensaver). For what it's worth, I run gnome-terminal, thunderbird, chromium-browser, empathy, tomboy and virtualbox nearly all the time, on top of the normal stuff you get with the GNOME desktop, and occasionally some other bits and pieces. However my question is not "which of my programs is causing this problem" but rather: how can one go about diagnosing this problem? In the above (and other) bugs, forum reports, etc., a number of tools are suggested:
    - xlsclients - lists the client applications for the given display, but I don't think that corresponds to 'X clients'
    - xrestop - a top-style X resources tool, one row per X client. Lots of '' clients, not shown in xlsclients output
    - xwininfo -root -children - lists X window objects
    From what I can gather, the problem might not be too many clients at all, but rather resources kept around in the X server for clients which have long since detached. But it would also appear that you cannot (easily?) relate X resources back to their client. Can one effectively diagnose this issue once it has started to occur, or is a tedious divide-and-conquer approach for the apps I run the only approach open to me?
    Update Jan 2011: I think I have resolved this issue. For the benefit of anyone stumbling across this: nautilus and/or compiz or something in that chain of software was segfaulting due to a wallpaper I had. I had chosen an XML file as my wallpaper, which defined a rotating gallery of images. It was hand-made, but based on /usr/share/backgrounds/contest/background-1.xml or similar. After disabling the wallpaper I have not had a crash since. I'm not marking this as answered yet, since the actual specific problem was not my question; how to diagnose it was. Unfortunately this was mostly trial-and-error, which sucks.
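    A rough first census with those tools might look like this (a sketch; none of these counts maps exactly onto the server's internal notion of a client, whose default limit is commonly 256):
    # how many clients does xlsclients see?
    xlsclients | wc -l
    # roughly how many window objects exist (about one line per window)?
    xwininfo -root -children | wc -l
    # watch per-client resource usage (pixmaps etc.) interactively
    xrestop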

    Read the article

  • Why do old programming languages continue to be revised?

    - by SunAvatar
    This question is not, "Why do people still use old programming languages?" I understand that quite well. In fact the two programming languages I know best are C and Scheme, both of which date back to the 70s. Recently I was reading about the changes in C99 and C11 versus C89 (which seems to still be the most-used version of C in practice and the version I learned from K&R). Looking around, it seems like every programming language in heavy use gets a new specification at least once per decade or so. Even Fortran is still getting new revisions, despite the fact that most people using it are still using FORTRAN 77. Contrast this with the approach of, say, the typesetting system TeX. In 1989, with the release of TeX 3.0, Donald Knuth declared that TeX was feature-complete and future releases would contain only bug fixes. Even beyond this, he has stated that upon his death, "all remaining bugs will become features" and absolutely no further updates will be made. Others are free to fork TeX and have done so, but the resulting systems are renamed to indicate that they are different from the official TeX. This is not because Knuth thinks TeX is perfect, but because he understands the value of a stable, predictable system that will do the same thing in fifty years that it does now. Why do most programming language designers not follow the same principle? Of course, when a language is relatively new, it makes sense that it will go through a period of rapid change before settling down. And no one can really object to minor changes that don't do much more than codify existing pseudo-standards or correct unintended readings. But when a language still seems to need improvement after ten or twenty years, why not just fork it or start over, rather than try to change what is already in use? If some people really want to do object-oriented programming in Fortran, why not create "Objective Fortran" for that purpose, and leave Fortran itself alone? I suppose one could say that, regardless of future revisions, C89 is already a standard and nothing stops people from continuing to use it. This is sort of true, but connotations do have consequences. GCC will, in pedantic mode, warn about syntax that is either deprecated or has a subtly different meaning in C99, which means C89 programmers can't just totally ignore the new standard. So there must be some benefit in C99 that is sufficient to impose this overhead on everyone who uses the language. This is a real question, not an invitation to argue. Obviously I do have an opinion on this, but at the moment I'm just trying to understand why this isn't just how things are done already. I suppose the question is: What are the (real or perceived) advantages of updating a language standard, as opposed to creating a new language based on the old?
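    One classic illustration of that last point (a sketch; the file name is arbitrary): the same token sequence lexes as a division under C89 and as a line comment under C99, so the language mode silently changes the computed value.
    /* demo.c */
    int semantics(void)
    {
        int x = 10 //* halve */ 2
            + 1;
        return x;   /* 6 under C89 lexing, 11 under C99 */
    }
    gcc -std=c89 -pedantic -c demo.c   # '/' followed by a block comment: x = 10 / 2 + 1
    gcc -std=c99 -pedantic -c demo.c   # '//' starts a line comment: x = 10 + 1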

    Read the article

  • Cheese won't start

    - by Anthony Hohenheim
    I can't start Cheese Webcam Booth. It starts loading and there is a brief moment when the window shows up, but then it disappears, like it shuts itself down, and it's not in the system monitor. My webcam works perfectly in a Skype video call. I installed and ran Camorama and it gave me an error:
    Could not connect to video device (/dev/video0) Please check connection
    When I run lsusb I get this line for my webcam:
    Bus 002 Device 002: ID 04f2:b210 Chicony Electronics Co., Ltd
    And for my graphics card, running lspci:
    VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
    It's not a pressing matter, but it bugs my nerves: if it works in Skype, why do Cheese and other programs refuse to run? As I said, it's not a big deal, but any help would be appreciated. Running Cheese in a terminal:
    (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
    (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
    (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
    (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkHBox to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
    (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
    (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkHBox to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
    (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
    (cheese:11454): Gdk-WARNING **: The program 'cheese' received an X Window System error.
    This probably reflects a bug in the program.
    The error was 'BadDrawable (invalid Pixmap or Window parameter)'.
    (Details: serial 932 error_code 9 request_code 137 minor_code 9)
    (Note to programmers: normally, X errors are reported asynchronously; that is, you will receive the error a while after causing it. To debug your program, run it with the --sync command line option to change this behavior. You can then get a meaningful backtrace from your debugger if you break on the gdk_x_error() function.)
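    Following the advice in that note, the next step I would try (a sketch) is to rerun Cheese with X errors made synchronous and break where GDK reports them:
    gdb --args cheese --sync
    (gdb) break gdk_x_error
    (gdb) run
    (gdb) backtrace   # should show which call produced the BadDrawable error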

    Read the article

  • How far should an entity take care of its property values by itself?

    - by Kharlos Dominguez
    Let's consider the following example of a class, which is an entity that I'm using through Entity Framework:
    - InvoiceHeader
      - BilledAmount (property, decimal)
      - PaidAmount (property, decimal)
      - Balance (property, decimal)
    I'm trying to find the best approach to keep Balance updated, based on the values of the two other properties (BilledAmount and PaidAmount). I'm torn between two practices here:
    1. Updating the balance amount every time BilledAmount and PaidAmount are updated (through their setters)
    2. Having an UpdateBalance() method that the callers would run on the object when appropriate
    I am aware that I could just calculate the Balance in its getter. However, that isn't really possible because this is an entity field that needs to be saved back to the database, where it has an actual column and where the calculated amount should be persisted.
    My other worry about the automatic-update approach is that the calculated values might be a little bit different from what was originally saved to the database, due to rounding (an older version of the software used floats, but now decimals). So loading, let's say, 2000 entities from the database could change their status and make the ORM believe that they have changed, causing them to be persisted back to the database the next time the SaveChanges() method is called on the context. It would trigger a mass of updates that I am not really interested in, or could cause problems if the calculation methods changed (the entities fetched would lose their old values, to be replaced by freshly recalculated ones, simply by being loaded).
    Then, let's take the example even further. Each invoice has some related invoice details, which also have BilledAmount, PaidAmount and Balance (I'm simplifying my actual business case for the sake of the example, so let's assume the customer can pay each item of the invoice separately rather than as a whole). If we consider that the entity should take care of itself, any change to the child details should cause the invoice totals to change as well. In a fully automated approach, a simple implementation would be looping through each detail of the invoice to recalculate the header totals every time one of the properties changes. It would probably be fine for just one record, but if a lot of entities were fetched at once it could create significant overhead, as it would perform this process every time a new invoice detail record is fetched. Possibly worse, if the details are not already loaded, it could cause the ORM to lazy-load them just to recalculate the balances.
    So far I went with the UpdateBalance() method approach, mainly for the reasons I explained above, but I wonder if it was right. I'm noticing I have to call these methods quite often, at different places in my code, and it is a potential source of bugs. It also has a detrimental effect on data binding, because when the properties of the detail or header change, the other properties are left out of date and the method has no way to be called. What is the recommended approach in this case?

    Read the article

  • MySQL Enterprise Monitor 3.0.11 has been released

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.11 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud in about 1 week. This is a maintenance release that includes a few new features and fixes a number of bugs. You can find more information on the contents of this release in the change log.
    You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.
    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:
    - Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    - The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    - Trends, projections and forecasting: graphs and event handlers inform you in advance of impending file system capacity problems
    - Zero Configuration Query Analyzer: works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's
    More information on the contents of this release is available here:
    - What's new in MySQL Enterprise Monitor 3.0?
    - MySQL Enterprise Edition: Demos
    - MySQL Enterprise Monitor Frequently Asked Questions
    - MySQL Enterprise Monitor Change History
    More information on MySQL Enterprise and the Enterprise Monitor can be found here:
    - http://www.mysql.com/products/enterprise/
    - http://www.mysql.com/products/enterprise/monitor.html
    - http://www.mysql.com/products/enterprise/query.html
    - http://forums.mysql.com/list.php?142
    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think.
    Thanks and Happy Monitoring!
    - The MySQL Enterprise Tools Development Team

    Read the article

  • How do you take into account usability and user requirements for your application?

    - by voroninp
    Our team supports a BackOffice application: a mix of WinForms and WPF windows (about 80, including dialogs). Really a kind of Swiss Army knife. It is used by developers, tech writers, security developers and testers. Requirements for new features come quite often, and sometimes we play Wizard of Oz to decide which GUI our users like most. And it usually happens (I admit it may just be my subjective interpretation of reality) that one tiny detail giving the flavor of good usability to our app requires a lot of time. This time is spent 'fighting' with the GUI framework, making it act like we need. And it is very difficult to make estimates for this type of task (at least for me and most members of our team). Scrum poker is no help either. Management often considers this usability perfectionism to be a waste of time. On the other hand, the accumulated effect of features where each has some little usability flaw frustrates users. But the same users want frequent releases and instant bug fixes. Hence there is no way to get positive feedback: there is always somebody who is unhappy. I constantly feel as if we are competing with ourselves: more features means more bugs/tasks/architecture. We are trying to outrun the cart we are pushing. New technologies arrive, and some of them can potentially help to improve the design or decrease task implementation time, but these technologies require learning, prototyping and so on. Well, that was a story. And now the question: how do you balance between time pressure, product quality, and user and management satisfaction? When and how do you decide to leave a problem with a not perfect but to some extent acceptable solution, and how often do you make these decisions? How do you deal with your own satisfaction? What are your priorities?
    P.S. Please keep in mind, we are a BackOffice team; we have neither a dedicated technical writer nor a GUI designer. A tester joined us recently. We have much work to do and much freedom concerning 'how'. I like it because it fosters creativity, but I don't want to become a too-nerdy perfectionist.

    Read the article

  • Is it possible to get a patch included in the current release? If so, how?

    - by Oli
    So a while back I reported a bug in Compiz's Place Window plugin. It's a fairly major regression for people affected by it: mainly those using Gnome-Fallback, judging by the reports. A patch surfaced a short time later. I created a PPA for testing and everybody involved so far is reporting the issues are fixed. It even fixes another bug. I've done testing with a standard Unity desktop and can say (for my testing) no adverse effects were visible. I want to get this pushed to Ubuntu right now for two main reasons:
    1. I'm selfish. I don't want to need to update my PPA every time a new version of Compiz is pushed to 12.04.
    2. I don't want Ubuntu users seeing their windows flying around because of a silly little bug.
    I want this patch pushed to Ubuntu's version of Compiz as soon as possible, so we can mark these bugs fixed and move on with our lives. Whose leg do I have to hump to get this pulled into Ubuntu right now? I don't maintain this project and it's an upstream thing, but it's fairly integral to Ubuntu. I could go to Compiz, but I imagine that if they accept the patch, it'll be months (at least a release) before it's anywhere near Ubuntu. And when I do find the right person, how can I make the process as slick as possible for them? I want them to see my request, go "Yup, that all looks great, done" and that be it. I don't want seventeen rounds of emails addressing aspects of the patch. More importantly, I don't want to waste their time either. And what do I have to provide them? My packaging skills are... lamentable. This was my first attempt at patching a package for redistribution, so I've probably made every single packaging error known to man. Will they be happy with the original patch (so they can apply it themselves), or should I repackage things so the diff/changelog is a little cleaner (it took me a few goes and the versioning is all over the place)?
    Note: This question is about Compiz, but I'd prefer if answers could address other styles of package too, so we have an authoritative and comprehensive thread of how to get things fixed.
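    From what I have read so far, the usual sponsorship route seems to be roughly this (a sketch; the patch file name and .dsc names are illustrative), but I would love confirmation:
    apt-get source compiz
    cd compiz-*/
    patch -p1 < ~/place-window-fix.patch
    dch -i                       # changelog entry targeting the current devel release
    debuild -S -us -uc           # rebuild the source package
    cd ..
    debdiff compiz_old.dsc compiz_new.dsc > place-window-fix.debdiff   # fetched .dsc vs newly built .dsc
    # then attach the debdiff to the Launchpad bug and subscribe ubuntu-sponsors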

    Read the article

  • Why is everything crashing?

    - by Kopkins
    I've been using Ubuntu for a while now and I love it; I wouldn't think of using another OS unless I can't fix this issue. The install I'm on is only around a month and a half old. I'm running 12.04 64-bit on an 8,1 MBP. Up until around 2 weeks ago everything was running smoothly. Around then, applications started crashing and weird things started happening. At first I thought it was just certain applications. The first thing to start giving me trouble was compiz. Occasionally compiz stops decorating windows and loses many other functionalities. Running compiz --replace fixes this, but I don't feel like doing it usually once a day. The other thing with this is that after running compiz --replace, my conky window gets lost somewhere, and so I run killall conky && conky -c .conkyrc. But this isn't just a couple of applications; it seems like it is proliferating through my system. Last week fontforge started crashing while doing whatever task, so I ended up unable to finish what I was working on. I didn't find a fix. Today rhythmbox started crashing: whenever I try to play anything, Rhythmbox becomes unresponsive and needs to be forced to close. When I try to do certain things with the disk utility, it crashes. I get the "Ubuntu has experienced an internal error" message much more often than I would like. Frequently applications stop appearing in the launcher; Wine almost never does anymore. After not being active for a little while, thunderbird can only fetch my mail after restarting wireless with sudo rmmod b43 && sudo modprobe b43. Occasionally some of my startup apps don't start. What is my best option here? Could these be bugs? I don't want to submit a ton of vague bug reports. Reinstall? Switch OS? Thank you to anyone who responds. Kopkins
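    One thing I have started checking (a sketch) is whether the kernel is logging segfaults while all this happens, and whether apport has collected crash files:
    dmesg | grep -i segfault
    ls -lt /var/crash/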

    Read the article

  • Fans running very fast on MacBook Pro 8.1 with Ubuntu 12.04

    - by Tomasz Kacprzak
    I installed Ubuntu 12.04 on a Macbook Pro 8.1 and one of the first things I noticed was that the fans were starting to spin very fast every few minutes, for 10-30 sec, and then going back to normal. That was happening even without any processor load, when completely idle. The fans were usually spinning at 4000 RPM and made much noise. The computer was not getting hotter than usual. When running OSX Lion there was no noise at all, fans almost all the time at 2000 RPM.
    I spent some time on it and found out that Precise uses a daemon to control the temperature, called macfanctld. You can use /etc/macfanctld.conf to set the configuration. I found out that the high fan speed is not due to the temperature actually getting hot, but because there are two sensors which report wrong numbers (you can check that using the 'sensors' command):
    TW0P: +129.0°C
    TCTD: +256.0°C
    TCFC: +0.0°C
    TMBS: +0.0°C
    or by setting the macfanctld log level to 2:
    Speed: 4992, *AVG: 56.9C, TC0P: 50.2C, TG0P: 51.5C, Sensors: TB0T:34 TB1T:34 TB2T:33 TC0C:58 TC0D:56 TC0E:59 TC0F:60 TC0P:50 TC1C:58 TC2C:58 TC3C:58 TC4C:57 TCFC:0 TCGC:57 TCSA:53 TCTD:256 TG0D:52 TG0P:52 THSP:42 TM0S:64 TMBS:0 TP0P:54 TPCD:60 TW0P:129 Th1H:51 Th2H:48 Tm0P:40 Ts0P:32 Ts0S:43
    Moreover, TCTD was randomly jumping from temperatures of 0 to 256, so this may be the reason for the unjustified random fan speeds. macfanctld takes an average of the sensors, including the values above, so the actual AVG temp used to control the fans is wrong, usually biased up, hence the high RPM and noise. The workaround is to use an option in macfanctld.conf which allows you to ignore the malfunctioning sensors:
    exclude: 13 16 21 24
    After a reboot the reported temperatures are usually normal and the fans work at reasonable speeds. I tested the response of the fans to heavy processor load by asking MATLAB to invert a 10000x10000 matrix: the AVG temperature jumped to 63 deg and the fan to a max of 6200 RPM, and then it got back to normal temperature. So I think it is safe so far. There is an expired bug about the failing sensor readings: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/955538, which may be good to reopen.
    My question would be: does anyone know what the failing sensors do and if there is any danger in excluding them? Maybe there is some better solution to this problem?
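    For anyone wanting to apply the same workaround, the steps amount to this (a sketch; I am assuming the packaged init script is called macfanctld):
    # append the exclude line and restart the daemon
    echo "exclude: 13 16 21 24" | sudo tee -a /etc/macfanctld.conf
    sudo service macfanctld restart
    # verify the bogus readings no longer drive the average
    sensors | grep -E 'TW0P|TCTD'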

    Read the article

  • MySQL Enterprise Monitor 3.0.3 Is Now Available

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.3 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud with the November update in about 1 week. This is a maintenance release that fixes a number of bugs. You can find more information on the contents of this release in the change log.
    You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.
    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:
    - Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    - The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    - Trends, projections and forecasting: graphs and event handlers inform you in advance of impending file system capacity problems
    - Zero Configuration Query Analyzer: works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's
    More information on the contents of this release is available here:
    - What's new in MySQL Enterprise Monitor 3.0?
    - MySQL Enterprise Edition: Demos
    - MySQL Enterprise Monitor Frequently Asked Questions
    - MySQL Enterprise Monitor Change History
    More information on MySQL Enterprise and the Enterprise Monitor can be found here:
    - http://www.mysql.com/products/enterprise/
    - http://www.mysql.com/products/enterprise/monitor.html
    - http://www.mysql.com/products/enterprise/query.html
    - http://forums.mysql.com/list.php?142
    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think.
    Thanks and Happy Monitoring!
    - The MySQL Enterprise Tools Development Team

    Read the article

  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data-intensive iOS app that is not using Core Data nor does it support iCloud syncing (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time; then as I need a new key I increment the value by 1. This has all worked well for a few years, with the app running isolated on a single device. Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key. I need to avoid this possibility. I'm looking for ideas for solving this issue. I have a few requirements that the solution must meet:
    1) The key needs to remain a single integral data type. Converting all existing keys to a compound key or to a string or other type would affect the entire code base and likely result in more bugs than it's worth.
    2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection. The data should still resolve properly later when the data syncs through iCloud once a connection is available. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized.
    One idea I have been toying around with in my head is logically splitting the integer key into two parts. The high 4 or 5 bits could be used as some sort of device id while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable since I don't need to deal with millions of devices. I just need to deal with the few devices that would be shared by a given iCloud account. I'm open to suggestions. Thanks.
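    To make the high-bits idea concrete, here is the arithmetic (shell used purely as a 64-bit calculator; the 5-bit split and the device id value are hypothetical):
    device_id=3                           # 0..31, assigned once per device by some scheme
    counter=1357924680                    # the existing time-seeded, incrementing key
    key=$(( (device_id << 59) | (counter & ((1 << 59) - 1)) ))
    echo $key                             # still a single integral value
    echo $(( key >> 59 ))                 # recovers the device id: 3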

    Read the article

  • while(true) and loop-breaking - anti-pattern?

    - by KeithS
    Consider the following code:

    public void doSomething(int input) {
        while(true) {
            TransformInSomeWay(input);
            if(ProcessingComplete(input)) break;
            DoSomethingElseTo(input);
        }
    }

    Assume that this process involves a finite but input-dependent number of steps; the loop is designed to terminate on its own as a result of the algorithm, and is not designed to run indefinitely (until cancelled by an outside event). Because the test to see if the loop should end is in the middle of a logical set of steps, the while loop itself currently doesn't check anything meaningful; the check is instead performed at the "proper" place within the conceptual algorithm.
    I was told that this is bad code, because it is more bug-prone due to the ending condition not being checked by the loop structure. It's more difficult to figure out how you'd exit the loop, and could invite bugs as the breaking condition might be bypassed or omitted accidentally given future changes.
    Now, the code could be structured as follows:

    public void doSomething(int input) {
        TransformInSomeWay(input);
        while(!ProcessingComplete(input)) {
            DoSomethingElseTo(input);
            TransformInSomeWay(input);
        }
    }

    However, this duplicates a call to a method in code, violating DRY; if TransformInSomeWay were later replaced with some other method, both calls would have to be found and changed (and the fact that there are two may be less obvious in a more complex piece of code). You could also write it like:

    public void doSomething(int input) {
        var complete = false;
        while(!complete) {
            TransformInSomeWay(input);
            complete = ProcessingComplete(input);
            if(!complete) {
                DoSomethingElseTo(input);
            }
        }
    }

    ... but you now have a variable whose only purpose is to shift the condition-checking to the loop structure, and it also has to be checked multiple times to provide the same behavior as the original logic.
    For my part, I say that given the algorithm this code implements in the real world, the original code is the most readable. If you were going through it yourself, this is the way you'd think about it, and so it would be intuitive to people familiar with the algorithm.
    So, which is "better"? Is it better to give the responsibility of condition checking to the while loop by structuring the logic around the loop? Or is it better to structure the logic in a "natural" way as indicated by requirements or a conceptual description of the algorithm, even though that may mean bypassing the loop's built-in capabilities?

    Read the article

  • "Whole-team" C++ features?

    - by Blaisorblade
    In C++, features like exceptions impact your whole program: you can either disable them in your whole program, or you need to deal with them throughout your code. As a famous article on C++ Report puts it:
    Counter-intuitively, the hard part of coding exceptions is not the explicit throws and catches. The really hard part of using exceptions is to write all the intervening code in such a way that an arbitrary exception can propagate from its throw site to its handler, arriving safely and without damaging other parts of the program along the way.
    Since even new throws exceptions, every function needs to provide basic exception safety — unless it only calls functions which guarantee throwing no exception — unless you disable exceptions altogether in your whole project. Hence, exceptions are a "whole-program" or "whole-team" feature, since they must be understood by everybody in a team using them. But not all C++ features are like that, as far as I know. A possible example is that if I don't get templates, but I also do not use them, I will still be able to write correct C++ — or will I not? I can even call sort on an array of integers and enjoy its amazing speed advantage over C's qsort (because no function pointer is called), without risking bugs — or not? It seems templates are not "whole-team". Are there other C++ features which impact code not directly using them, and are hence "whole-team"? I am especially interested in features not present in C.
    Update: I'm especially looking for features where there's no language-enforced sign you need to be aware of them. The first answer I got mentioned const-correctness, which is also whole-team, hence everybody needs to learn about it; however, AFAICS it will impact you only if you call a function which is marked const, and the compiler will prevent you from calling it on non-const objects, so you get something to google for. With exceptions, you don't even get that; moreover, they're always used as soon as you use new, hence exceptions are more "insidious". Since I can't phrase this as objectively, though, I will appreciate any whole-team feature.
    Appendix: Why this question is objective (if you wonder). C++ is a complex language, so many projects or coding guides try to select "simple" C++ features, and many people try to include or exclude some according to mostly subjective criteria. Questions about that get rightfully closed regularly here on SO. Above, instead, I defined (as precisely as possible) what a "whole-team" language feature is, provided an example (exceptions) together with extensive supporting evidence in the literature about C++, and asked for whole-team features in C++ beyond exceptions. Whether you should use "whole-team" features, or whether that's a relevant concept, might be subjective — but that only means the importance of this question is subjective, like always.
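    The "whole-program" nature also shows up in the build itself: opting out of the example feature is a global compiler switch rather than a local annotation (a sketch; GCC/Clang flag):
    g++ -fno-exceptions -c module.cc   # any try/throw/catch remaining in this translation unit now fails to compile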

    Read the article

  • Is a factory pattern that prevents multiple instances of the same object (instances that are Equal) good design?

    - by dsollen
    I have a number of objects storing state. There are essentially two types of fields. The ones that uniquely define what the object is (what node, what edge etc), and the others that store state describing how these things are connected (this node is connected to these edges, this edge is part of these paths) etc. My model is updating the state variables using package methods, so all these objects act as immutable to anyone not in Model scope. All objects extend one base type. I've toyed with the idea of a Factory approach which accepts a Builder object and constructs the applicable object. However, if an instance of the object already exists (ie would return true if I created the object defined by the builder and passed it to the equal method for the existing instance), the factory returns the current object instead of creating a new instance. Because the Equal method would only compare what uniquely defines the type of object (this is node A, not node B) but won't check the dynamic state stuff (node A is currently connected to nodes C and E), this would be a way of ensuring anyone who wants my node A automatically knows its state connections. More importantly, it would prevent aliasing nightmares of someone trying to pass an instance of node A with different state than the node A in my model has. I've never heard of this pattern before, and it's a bit odd. I would have to do some overriding of serialization methods to make it work (ensure that when I read in a serialized object I add it to my factory list of known instances, and/or return an existing instance in its place), as well as using a weakHashMap as if it was a weakHashSet to know whether an instance exists without worrying about a quasi-memory leak occurring. I don't know if this is too confusing or prone to its own obscure bugs. One thing I know is that plugins interface with the lowest level hardware. The plugins have to be able to return state that is different than my memory; to tell my memory when its own state is inconsistent. I believe this is possible despite their fetching objects that exist in my memory; we allow building of objects without checking their consistency with the model until addToModel is called anyways; and the existing plugins design was written before all this extra state existed and worked fine without ever being aware of it. Should I just be using some other design to avoid this craziness? (I have another question to that effect that I'm posting.)

    Read the article

  • [AJAX Numeric Updown Control] Microsoft JScript runtime error: The number of fractional digits is out of range

    - by Jenson
    If you have been using the AJAX Control Toolkit a lot (I will skip the parts on where to download it and how to configure it in Visual Studio 2010), you might have encountered some bugs or limitations of the controls, or rather, some weird behaviours. I would call them weird behaviours, though. Recently I've been working on an AJAX numeric updown control, which I remember clearly was working fine without problems. In fact, I use two numeric updown controls this time. So I went on to configure them to be as simple as possible, and I will just use the default up and down buttons provided (so that I won't need to design my own). I have two textbox controls to display the values controlled by the updown controls: one for the month, and another for the year.

    <asp:TextBox ID="txtMonth" runat="server" CssClass="txtNumeric" ReadOnly="True" Width="150px" />
    <asp:TextBox ID="txtYear" runat="server" CssClass="txtNumeric" ReadOnly="True" Width="150px" />

    So I will now drop one numeric updown extender for each of the textboxes:

    <asp:NumericUpDownExtender ID="txtMonth_NumericUpDownExtender"
        runat="server" TargetControlID="txtMonth" Maximum="12" Minimum="1" Width="152">
    </asp:NumericUpDownExtender>

    <asp:NumericUpDownExtender ID="txtYear_NumericUpDownExtender"
        runat="server" TargetControlID="txtYear" Width="152">
    </asp:NumericUpDownExtender>

    You noticed that I configured the Maximum and Minimum values for the first numeric updown control, but I never did the same for the second one (for txtYear). That's because it won't work, well, at least for me, so I removed the Minimum="2000" and Maximum="2099" from there. Instead, I configure the initial value to be the current year and let the year flow up and down freely. If you want, you can write the code to restrict it. Here is the code I used on page load:

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
        If Not Page.IsPostBack Then
            If Trim(txtMonth.Text) = "" Then
                Me.txtMonth.Text = System.DateTime.Today.Month
            End If
            If Trim(txtYear.Text) = "" Then
                Me.txtYear.Text = System.DateTime.Today.Year
            End If
        End If
    End Sub

    Enjoy!

    Read the article

  • AWStats consumes too many resources; how can I disable it temporarily?

    - by trante
    For some days AWStats has been taking 10%-20% of my CPU, using 400-550 MB of RAM, and running for hours. Maybe my site's traffic became larger so processing takes more time than before, or some bug in the program causes this. Anyway, I want to disable AWStats temporarily; I may want to activate it again in the future. I found that answer, but it gives commands to remove AWStats, and I only want to disable it temporarily. My system is CentOS 6.3, Plesk 11.5.30 Update #19.
    I tried to disable the cron jobs. I ran:
    # killall awstats.pl
    I opened the /etc/cron.daily/awstats file and changed it to this:
    #!/bin/sh
    #/usr/share/awstats/awstats_updateall.pl now -awstatsprog=/var/www/cgi-bin/awstats/awstats.pl -configdir=/etc/awstats >/dev/null 2>&1
    exit 0
    After some time I still see that awstats is running. What more should I do to stop awstats from running again?
    After changing the /etc/cron.daily/awstats file, awstats doesn't start in the daytime, but every night at 03:15 awstats starts again. Because Plesk auto-updates run at that time, I turned off automatic updates in Plesk. But it seems like last night at 03:15 awstats started again anyway. Is there any way to stop awstats temporarily other than the solution below? That solution deletes the awstats configs permanently, and I don't know how to revert it in the future.
    Turn off all AWStats for Plesk 11+ domains:
    #!/bin/bash
    for i in /var/www/vhosts/*; do
        echo "Turning off and deleting Stats for"
        echo `basename $i`
        /usr/local/psa/admin/bin/webstatmng --unset-configs --stat-prog=awstats --domain-name=`basename $i`
        /usr/local/psa/admin/bin/webstatmng --clean --stat-prog=awstats --domain-name=`basename $i`
    done
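    One thing I am considering trying as a genuinely temporary disable, without deleting any AWStats configs (a sketch; as far as I know, run-parts only executes files that have the execute bit set):
    # cron.daily now skips the awstats script entirely
    chmod -x /etc/cron.daily/awstats
    # later, to re-enable it:
    chmod +x /etc/cron.daily/awstats
    If the 03:15 run comes from Plesk's own nightly statistics task rather than cron.daily, it would presumably need to be switched off on the Plesk side as well.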

    Read the article

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings to skip reverse hostname lookup on connecting and to enable the slow query log. I edited /etc/my.cnf making only these changes, then restarted mysqld with /etc/init.d/mysql restart All appeared to be well but when I connect to msyqld remotely or locally though it connects okay a slight problem is that mysqld crashes whenever you try to issue any kind of statement. The client looks like: Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 3 Server version: 5.1.31-1ubuntu2-log Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> show tables; ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... Connection id: 1 Current database: mydb ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61) ERROR: Can't connect to the server ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61) ERROR: Can't connect to the server ERROR 2006 (HY000): MySQL server has gone away Bus error The mysqld error log looks like: 101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:35:51 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=3 max_threads=600 threads_connected=3 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x18209220 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 0x7f8d791580d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f902a76a080] /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5] /lib/libc.so.6(abort+0x183) [0x7f90291fabc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f902a7623ba] /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18213c70 = thd->thread_id=3 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:35:51 mysqld_safe Number of processes running now: 0 101210 16:35:51 mysqld_safe mysqld restarted InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 101210 16:35:54 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 101210 16:35:56 InnoDB: Started; log sequence number 456 143528628 101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 16:35:56 [Note] Event Scheduler: Loaded 0 events 101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:36:11 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. 
        This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=600
        threads_connected=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K
        bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x18588720
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f4c8a73f080]
        /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5]
        /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f4c8a7373ba]
        /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd]

        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x18599950 =
        thd->thread_id=1
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.

        101210 16:36:11 mysqld_safe Number of processes running now: 0
        101210 16:36:11 mysqld_safe mysqld restarted

    The config is:

        [mysqld_safe]
        socket          = /var/run/mysqld/mysqld.sock
        nice            = 0

        [mysqld]
        innodb_file_per_table
        innodb_buffer_pool_size=10G
        innodb_log_buffer_size=4M
        innodb_flush_log_at_trx_commit=2
        innodb_thread_concurrency=8
        skip-slave-start
        server-id=3
        #
        # * IMPORTANT
        #   If you make changes to these settings and your system uses apparmor, you may
        #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /DB2/mysql
        tmpdir          = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #bind-address           = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer              = 16M
        max_allowed_packet      = 16M
        thread_stack            = 128K
        thread_cache_size       = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover          = BACKUP
        max_connections         = 600
        #table_cache            = 64
        #thread_concurrency     = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit       = 1M
        query_cache_size        = 32M
        #
        skip-federated
        slow-query-log
        skip-name-resolve

    Update: I followed the instructions at http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4. The logs now show a different error, but the behavior is still the same:

        101210 19:14:15 mysqld_safe mysqld restarted
        101210 19:14:19 InnoDB: Started; log sequence number 456 143528628
        InnoDB: !!! innodb_force_recovery is set to 4 !!!
        101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
        101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so
        replication may break when this MySQL server acts as a slave and has his hostname
        changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
        101210 19:14:19 [Note] Event Scheduler: Loaded 0 events
        101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.31-1ubuntu2-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
        101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend,
        InnoDB: space id 1602 did not exist in memory. Retrying an open.
        101210 19:14:32 InnoDB: error: space object of table mydb/access_request,
        InnoDB: space id 1318 did not exist in memory. Retrying an open.
        101210 19:14:32 InnoDB: error: space object of table mydb/activity,
        InnoDB: space id 1595 did not exist in memory. Retrying an open.
        101210 19:14:32 - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=600
        threads_connected=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K
        bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x1753c070
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f7cbc350080]
        /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516]
        /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f7cbc3483ba]
        /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd]

        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x1754d690 =
        thd->thread_id=1
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
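    For readers hitting the same assertion: since the backtrace shows the crash in ha_innobase::innobase_initialize_autoinc() during table open, one commonly suggested escape path is to dump everything while InnoDB runs in forced-recovery mode and rebuild the tablespace. The sketch below is just that, a sketch: the dump path is illustrative, the datadir comes from the config above, and if table opens still crash at level 4, levels 5 or 6 (which skip progressively more of recovery) may be needed just long enough to dump. Because this server is a slave, re-seeding it from the master afterwards is often the cleaner fix.

        # /etc/my.cnf — raise gradually; levels above 4 risk losing more data
        # [mysqld]
        # innodb_force_recovery = 4

        /etc/init.d/mysql restart

        # if the server stays up, dump everything while InnoDB is effectively read-only
        mysqldump --all-databases > /root/full_dump.sql

        # rebuild the tablespace from scratch
        /etc/init.d/mysql stop
        mv /DB2/mysql /DB2/mysql.damaged
        mysql_install_db --user=mysql --datadir=/DB2/mysql
        # remove innodb_force_recovery from my.cnf before restarting
        /etc/init.d/mysql start
        mysql < /root/full_dump.sql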

    Read the article

  • Best practice for removing a DC from a site that no longer connects via VPN in another city

    - by dasko
    Hi, I am looking for a recap of what I have done already to see if I missed anything. I had two cities connected by WAN using a persistent IPsec tunnel between gateways, with one domain controller (DC) in each city acting as a global catalog (GC) server. They were set up to replicate, and each was configured under Sites and Services with its own subnet. About six months ago one city was removed, and I was not able to gracefully demote the server there through dcpromo; it is no longer used and cannot be brought back, so the company went from two sites down to a single site. The problem is that I had a whole bunch of KCC errors and replication bugs in the event viewer. I wanted to clean up my Active Directory, so I used the ntdsutil metadata cleanup commands and removed the server from the specified site, following a procedure from the Petri website. I then removed the instances of the old DC and its site from Sites and Services. Then I cleaned up DNS by removing the host A records and the NS records for the server name from both the local forward lookup zone and _msdcs, and I also removed the reverse lookup zone for the subnet that no longer exists. Is there anything I missed? Thanks in advance for any help. gd
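    For comparison, a minimal sketch of the ntdsutil metadata cleanup session on a surviving DC looks like this (SURVIVINGDC is a placeholder, and the domain, site, and server numbers come from the respective list output):

        C:\> ntdsutil
        ntdsutil: metadata cleanup
        metadata cleanup: connections
        server connections: connect to server SURVIVINGDC
        server connections: quit
        metadata cleanup: select operation target
        select operation target: list domains
        select operation target: select domain 0
        select operation target: list sites
        select operation target: select site 0
        select operation target: list servers in site
        select operation target: select server 0
        select operation target: quit
        metadata cleanup: remove selected server
        metadata cleanup: quit
        ntdsutil: quit

    Two items commonly missed after a cleanup like the one described: seizing any FSMO roles the dead DC may have held (ntdsutil's roles menu), and deleting the old DC's GUID CNAME record under the _msdcs zone, along with any stale NTDS Settings object left in Sites and Services.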

    Read the article

  • mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect

    - by photon
    While trying to install MySQL 5.5 Community Edition on my Ubuntu 10.04 by compiling the source code, I ran into the following problem:

        $ fg % 1
        sudo ../bin/mysqld_safe --basedir=/usr/local/mysql_community_5.5/data --user=mysql --defaults-file=/etc/my.cnf
        [sudo] password for linnan:
        Sorry, try again.
        [sudo] password for linnan:
        121023 09:26:21 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect.
        Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
        121023 09:26:21 mysqld_safe Logging to '/var/log/mysql/error.log'.
        Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
        121023 09:26:22 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121023 09:26:23 mysqld_safe mysqld from pid file /var/lib/mysql/ubuntu.pid ended

    It seems the problem is related to the log configuration. I've noticed a bug fix related to this problem (http://bugs.mysql.com/bug.php?id=50083), but I still have no idea how to solve it. The relevant content in /etc/my.cnf:

        [mysqld]
        port            = 3306
        socket          = /tmp/mysql.sock
        skip-external-locking
        key_buffer_size = 384M
        max_allowed_packet = 1M
        table_open_cache = 512
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 32M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 8
        character-set-server=utf8

        [mysqld-safe]
        basedir=/usr/local/mysql_community_5.5
        datadir=/usr/local/mysql_community_5.5/data

    And /etc/mysql/conf.d/mysqld_safe_syslog.cnf contains:

        [mysqld_safe]
        syslog
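    A hedged sketch of one way to satisfy that error message, assuming file logging is wanted rather than syslog and reusing the paths from the question (the .err filename is illustrative): disable the packaged syslog override and give mysqld_safe a single explicit error log. Note also that mysqld_safe reads the [mysqld_safe] group, underscore included, so the [mysqld-safe] spelling above may be silently ignored.

        # the Ubuntu packaging ships a syslog override in conf.d; move it aside
        sudo mv /etc/mysql/conf.d/mysqld_safe_syslog.cnf \
                /etc/mysql/conf.d/mysqld_safe_syslog.cnf.disabled

        # then in /etc/my.cnf, use the underscore group name and one log target
        [mysqld_safe]
        basedir   = /usr/local/mysql_community_5.5
        datadir   = /usr/local/mysql_community_5.5/data
        log-error = /usr/local/mysql_community_5.5/log/mysql.err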

    Read the article

  • How do I install OpenStack on a single Ubuntu 12.04 node?

    - by Sam Edwards
    I'm having trouble installing OpenStack on a single Ubuntu 12.04 node, for several reasons:
    - The official Ubuntu website recommends Juju and MAAS. However, this is a single node, and MAAS requires "two or more nodes" according to the docs. I also have no experience with MAAS and Juju and would rather stick to technologies I'm familiar with so I can debug the problems that arise.
    - I have tried StackGeek, but this fails because the node has only a single Ethernet port. (The node does, however, have the second hard drive required for nova storage.)
    - I have tried DevStack, but I cannot log into the dashboard. The login form appears fine, but as soon as I submit the page, my browser begins loading indefinitely.
    - I have tried installing straight from packages, but I get an Internal Server Error in the dashboard upon trying to log in, with no helpful logs anywhere in sight to aid me in debugging the issue.
    Each of these attempts was on a fresh Ubuntu 12.04 LTS setup. I find it really strange that no matter what I try, I cannot get OpenStack installed. Is this even a stable/mature project? Why am I encountering so many bugs?
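    For the DevStack route, a minimal single-node run circa this release looked roughly like the transcript below; the passwords are placeholders, and the dashboard login is then admin with the ADMIN_PASSWORD value. If the login still hangs, /var/log/apache2/error.log on the node is the first place to look.

        $ sudo apt-get install -y git
        $ git clone https://github.com/openstack-dev/devstack.git
        $ cd devstack
        $ cat localrc        # create this file with these five lines
        ADMIN_PASSWORD=nomoresecrete
        MYSQL_PASSWORD=stackdb
        RABBIT_PASSWORD=stackqueue
        SERVICE_PASSWORD=$ADMIN_PASSWORD
        SERVICE_TOKEN=tokentoken
        $ ./stack.sh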

    Read the article

  • Lion Server Profile Manager: device enrollment doesn't work

    - by user964406
    I am in the process of setting up Lion Server's Profile Manager to manage iPads on our local school network; I don't need to manage them while they are outside the network, and I have successfully had this working on my personal network. The school network is behind a proxy which we have no control over. I can get the iPads to view the mydevices page and install a trust certificate, and I have managed to get an iPad to successfully install the remote management profile. After this, Profile Manager bugs out: it lists the active task 'new device (sending)' but is unable to complete it, and if I click on the device in Profile Manager and try any of the actions, they all fail to complete. I am using the auto-generated certificates, and this works if I take the server and iPad outside the school network. Shortly after device enrollment, the system log on the Lion server reports the following (the actual IP address is replaced with INTERNALIP):

        Jun  4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
        Jun  4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
        Jun  4 08:40:53 mini applepushserviced[778]: Got connection error Error Domain=NSPOSIXErrorDomain Code=1 "The operation couldn’t be completed. Operation not permitted" UserInfo=0x7fa483b1a340 {NSErrorFailingURLStringKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS, NSErrorFailingURLKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS}
        Jun  4 08:40:53 mini applepushserviced[778]: Failed to get client cert on attempt 2, will retry in 15 seconds

    Does anyone have any ideas on how to get past this stage? Thanks in advance.
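    The sandbox denials against the proxy on port 8080 are the telltale sign here: Profile Manager depends on the Apple Push Notification service, and APNs traffic generally cannot traverse an HTTP proxy; it needs direct outbound access to Apple's 17.0.0.0/8 network. A rough connectivity check from the server, sketched below using Apple's published APNs endpoints of that era (hostnames and ports are assumptions to verify against current Apple documentation):

        # server-side push gateway and feedback service
        nc -zv gateway.push.apple.com 2195
        nc -zv feedback.push.apple.com 2196

        # device-facing courier service used by iOS clients
        nc -zv 1-courier.push.apple.com 5223

        # the enrollment endpoint the log shows failing
        nc -zv albert.apple.com 443

    If any of these fail, the fix is a firewall or proxy exception for that traffic rather than anything inside Profile Manager itself.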

    Read the article
