Search Results

Search found 18981 results on 760 pages for 'plugin management'.


  • How to use jQuery Date Range Picker plugin in asp.net

    - by alaa9jo
    I came across this page: http://www.filamentgroup.com/lab/date_range_picker_using_jquery_ui_16_and_jquery_ui_css_framework/ and, let me tell you, this is one of the best and coolest date range pickers on the web, in my opinion; they did a great job extending the original jQuery UI DatePicker. Of course, I made enhancements to the original plugin (fixed a few bugs) and added a new option (Clear) to clear the textbox. In this article I will use that updated plugin and show you how to use it in ASP.NET. You will definitely like it.

    So, what do I need?

    1. jQuery library: you can use 1.3.2 or 1.4.2, which is the latest version so far; in my article I will use the latest version.
    2. jQuery UI library (1.8): as I mentioned earlier, the daterangepicker plugin is based on the original jQuery UI DatePicker, so that library should be included in your page.
    3. jQuery DateRangePicker plugin: you can go to the author's page or use the modified one (it's included in the attachment); in this article I will use the modified one.
    4. Visual Studio 2005 or later: very funny :D - in my article I will use VS 2008.

    Note: all CSS and JS files are included in the attachment, so don't worry.

    How to use it?

    First, you will have to include all of the CSS and JS files in your page like this:

        <script src="Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
        <script src="Scripts/jquery-ui-1.8.custom.min.js" type="text/javascript"></script>
        <script src="Scripts/daterangepicker.jQuery.js" type="text/javascript"></script>
        <link href="CSS/redmond/jquery-ui-1.8.custom.css" rel="stylesheet" type="text/css" />
        <link href="CSS/ui.daterangepicker.css" rel="stylesheet" type="text/css" />
        <style type="text/css">
            .ui-daterangepicker { font-size: 10px; }
        </style>

    Then add this HTML:

        <asp:TextBox ID="TextBox1" runat="server" Font-Size="10px"></asp:TextBox>
        <asp:Button ID="SubmitButton" runat="server" Text="Submit" OnClick="SubmitButton_Click" />
        <span>First Date:</span><asp:Label ID="FirstDate" runat="server"></asp:Label>
        <span>Second Date:</span><asp:Label ID="SecondDate" runat="server"></asp:Label>

    As you can see, it includes TextBox1, which we are going to attach the daterangepicker to, and two labels used later in the code-behind to show how to read the dates from the textbox and set them on the labels.

    Now we have to attach the daterangepicker to the textbox using jQuery (note: visit the author's website for more info on the daterangepicker's options and how to use them):

        <script type="text/javascript">
            $(function() {
                $("#<%= TextBox1.ClientID %>").attr("readonly", "readonly");
                $("#<%= TextBox1.ClientID %>").attr("unselectable", "on");
                $("#<%= TextBox1.ClientID %>").daterangepicker({
                    presetRanges: [],
                    arrows: true,
                    dateFormat: 'd M, yy',
                    clearValue: '',
                    datepickerOptions: { changeMonth: true, changeYear: true }
                });
            });
        </script>

    Finally, add this C# code:

        protected void SubmitButton_Click(object sender, EventArgs e)
        {
            if (TextBox1.Text.Trim().Length == 0)
            {
                return;
            }

            string selectedDate = TextBox1.Text;
            if (selectedDate.Contains("-"))
            {
                DateTime startDate;
                DateTime endDate;
                string[] splittedDates = selectedDate.Split("-".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
                if (splittedDates.Count() == 2 &&
                    DateTime.TryParse(splittedDates[0], out startDate) &&
                    DateTime.TryParse(splittedDates[1], out endDate))
                {
                    FirstDate.Text = startDate.ToShortDateString();
                    SecondDate.Text = endDate.ToShortDateString();
                }
                else
                {
                    // maybe the client has modified/altered the input, i.e. hacking tools
                }
            }
            else
            {
                DateTime selectedDateObj;
                if (DateTime.TryParse(selectedDate, out selectedDateObj))
                {
                    FirstDate.Text = selectedDateObj.ToShortDateString();
                    SecondDate.Text = string.Empty;
                }
                else
                {
                    // maybe the client has modified/altered the input, i.e. hacking tools
                }
            }
        }

    That is how you read the dates from the textbox. That's it!

    FAQ:

    1. Why did you add this code: <style type="text/css"> .ui-daterangepicker { font-size: 10px; } </style>?
    A: For two reasons: 1) to show the daterangepicker at a smaller size, because its original size is huge, and 2) to show you how to control its size.

    2. Can I change the theme?
    A: Yes, you can. You will notice that I'm using the Redmond theme, which you will find on the jQuery UI website; visit their website and download a different theme. You may also have to modify the daterangepicker's CSS; it's all yours.

    3. Why did you add a font size to the textbox?
    A: To make the design look better; try removing it and see for yourself.

    4. Can I register the script in the code-behind?
    A: Yes, you can.

    5. I see you have added these two lines; what do they do?

        $("#<%= TextBox1.ClientID %>").attr("readonly", "readonly");
        $("#<%= TextBox1.ClientID %>").attr("unselectable", "on");

    A: The first line makes the textbox non-editable by the user; the second blocks the blinking typing cursor from appearing if the user clicks on the textbox. You will notice that both lines need to be used together; you can't use just one of them, for logical reasons of course.

    Finally, I hope everyone liked the article. As always, your feedback is welcome, and if you have any suggestions or have made modifications that might be useful to anyone else, please post them on the author's website and post a reference to your post here.
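
    P.S. If you also need the selected range on the client side (for example, to sanity-check it before the postback), you can read the textbox value with jQuery and split it the same way the code-behind does. This is only a minimal sketch, not part of the attachment; the alert calls are placeholders for whatever client-side handling you actually need:

        <script type="text/javascript">
            $(function() {
                $("#<%= SubmitButton.ClientID %>").click(function() {
                    // The plugin writes the range into the textbox, e.g. "1 Jan, 2010 - 15 Jan, 2010"
                    var value = $.trim($("#<%= TextBox1.ClientID %>").val());
                    if (value === "") {
                        alert("Please pick a date or a date range first.");
                        return false; // cancel the postback
                    }
                    // Split on '-' exactly like the C# code above does
                    var parts = value.split("-");
                    if (parts.length === 2) {
                        alert("From: " + $.trim(parts[0]) + " To: " + $.trim(parts[1]));
                    }
                    return true; // let the postback continue
                });
            });
        </script>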

    Read the article

  • local wordpress installation, plugin installation and file permissions

    - by user1205935
    I have a local WordPress installation and got everything working, until I tried to install a new plugin. When I tried to activate the plugin, WordPress asked me for FTP connection information, which I understood to mean that it lacked write access to the plugins directory. Apache runs as www-data, so I ran sudo chown -R www-data: /var/www/wordpress to make the wordpress directory writable for Apache. But now I cannot edit the files as my own user anymore. If I change the file permissions back with chown -R user: /var/www/wordpress/wp-content/themes, the WordPress dashboard complains again that it doesn't have sufficient access. I have tried various "solutions" online, but none have worked so far. Do I really need to install something like proftpd and create an FTP user and password for my local server? Or can I circumvent the problem with some nifty file permission settings that allow both me and Apache to access and write the files?

    Read the article

  • Google Talk Plugin in GMail on MacBook 2,1

    - by jrc03c
    I'd like to use the chat section in GMail to make phone calls. I've downloaded and installed the Google Talk plugin, and it acts like it knows what it's doing. But when I try to make calls, the internal laptop mic doesn't work at all (i.e., no one on the other end can hear me). In the GMail chat settings, I've tried selecting "Default Device" for the microphone, as well as "Internal Audio Analog Stereo", but no matter which setting I try, none seems to work. This is only a problem in Ubuntu; it works just fine in OS X and Windows (which means that yes, my Google Voice account is properly configured). Here are my tech specs:

    - Ubuntu 10.10, kernel Linux 2.6.35-24-generic
    - GNOME 2.32.0
    - Google Chrome 8.0.552.237
    - Google Talk Plugin (google-talkplugin) 1.8.0.0-1
    - MacBook (2,1) with internal microphone

    Any help will be greatly appreciated! Thanks!

    Read the article

  • When is the best time to do self-learning in relation to software management?

    - by shankbond
    It all started from here. I have been following Software Estimation: Demystifying the Black Art (Best Practices (Microsoft)). The third chapter says the following about software management:

    1. You cannot give too much time to software developers; if you do, it is likely that the extra time will be filled by other tasks (in other words, the developers will eat that time :)) - Parkinson's Law.
    2. You also cannot squeeze time out of their schedule, because if you do, it is likely that they will produce a poor-quality product with a poor design, and it will hurt you in the long run: a panic situation and total chaos in the project, lots of rework, and so on.

    My question is related to the first point. If you don't give them enough time, how will the typical software engineer learn his/her skills? The market always comes up with new technologies, and you need to learn them. Even with existing, familiar technologies, there are always best practices and dos and don'ts.

    Read the article

  • Compiz command plugin won't register keyboard shortcuts

    - by David Moles
    Per this discussion I've enabled the Compiz commands plugin in order to try to bind some keyboard shortcuts to wmctrl actions. CCSM captures my keystrokes just fine, but nothing happens, no matter what keystroke I try or what command I bind it to (everything from my original intention of binding Super-1, Super-2, etc. to wmctrl -o 0,0, wmctrl -o 2560,0, etc., to binding Ctrl-Alt-Shift-L to gnome-terminal). Basic Compiz shortcuts for window switching and so on, even custom ones, seem to work fine, but the command plugin doesn't seem to be working at all. I also notice the following symptom: when I open the keyboard shortcut tab in CCSM, the keyboard shortcuts often appear blank at first, though if you click on the blank button, the correct value is still there. Also possibly related: I've noticed that gnome-terminal doesn't seem to notice the Super key, though other apps (e.g. CCSM, Emacs) register it fine. Anyway, it seems like something's eating my keystrokes. Any ideas?

    Read the article

  • [News] A TFS plugin for Eclipse

    Following the recent acquisition of Teamprise in November, Microsoft has announced the availability of a TFS plugin for Eclipse, named Visual Studio Team Explorer 2010: "The beta release contains what we consider to be the essential features necessary to claim that we're a client for TFS 2010. We've been trying to strike a balance between including 2010 features, and getting the product to market, so you won't see everything here yet (...)". Screenshots of this plugin, which appears to be free, are available on the blog in question.

    Read the article

  • Project Management Tool for developers and sysadmins: shared or separate?

    - by David
    Should a team of system administrators who are on a software development project share a project management tool with the developers, or use their own separate one? We use Trac, and I see the benefit in sharing, since inter-team tasks can be maintained in a single system where there may be cross-over or misfiled bugs (e.g. an apparent bug which turns out to be a server configuration issue, or a development cycle which needs a server to be configured before it can start). However, sharing could be difficult, since many system administration tasks don't coincide with a single development milestone, if at all. So should a system administration team use a separate PM tool or share the same one with the developers? If they should share, then how?

    Read the article

  • Ubuntu 12.04 version of Netbeans doesn't have the JavaFX plugin

    - by aliasbody
    I just want to know why NetBeans 7.0.1 from the official Ubuntu 12.04 repositories doesn't have all the plugins (in particular the JavaFX plugin), while the official NetBeans from the website (even the version with the fewest plugins) has it by default, and how we can change that. I love Ubuntu (even if it is extremely slow on my Asus 1215N; it is the only OS that is slow on it, but that's not very important), but I am trying to use it only with the software provided by the original repositories, without having to download anything manually or even use any PPA. Thanks in advance.

    Read the article

  • Create simple jQuery plugin

    - by ybbest
    In the last post, I showed you how to add a function to jQuery. In this post, I will show you how to create a plugin to achieve the same thing.

    1. You need to wrap your code in the following construct. This is because you should not use $ directly: $ is a global variable, and it could clash with some other library that also uses $. Basically, you pass the jQuery object into the function, so that $ is made available inside the function. (JavaScript uses functions to create scope, so you can make sure $ refers to jQuery inside the function.)

        (function($) {
            // Your code goes here.
        })(jQuery);

    2. Put your code into the construct above.

        (function($) {
            $.getParameterByName = function(name) {
                name = name.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]");
                var regexS = "[\\?&]" + name + "=([^&#]*)";
                var regex = new RegExp(regexS);
                var results = regex.exec(window.location.search);
                if (results == null)
                    return "";
                else
                    return decodeURIComponent(results[1].replace(/\+/g, " "));
            };
        })(jQuery);

    3. Now you can reference the code in your project and call the method from your JavaScript (see the usage sketch below).

    References:

    - Provides scope for variables: variables are scoped at the function level in JavaScript. This is different from what you might be used to in a language like C# or Java, where variables are scoped to the block. What this means is that if you declare a variable inside a loop or an if statement, it will be available to the entire function. If you ever find yourself needing to explicitly scope a variable inside a function, you can use an anonymous function to do this. You can actually create an anonymous function and then execute it straight away, and all the variables inside will be scoped to the anonymous function:

        (function() {
            var myProperty = "hello world";
            alert(myProperty);
        })();
        alert(typeof(myProperty)); // undefined

    - How does an anonymous function in JavaScript work?
    - Building Your First jQuery Plugin
    - A Plugin Development Pattern
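
    As a usage sketch for step 3: once the plugin file is referenced, $.getParameterByName can be called anywhere in your page scripts. The URL below is hypothetical, purely for illustration:

        // Assuming the current page URL is http://example.com/page.html?id=42&name=ybbest
        var id = $.getParameterByName("id");        // "42"
        var name = $.getParameterByName("name");    // "ybbest"
        var missing = $.getParameterByName("foo");  // "" (parameter not present)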

    Read the article

  • How can I make a case for "dependency management"?

    - by C. Ross
    I'm currently trying to make a case for adopting dependency management for builds (a la Maven, Ivy, NuGet) and creating an internal repository for shared modules, of which we have over a dozen enterprise-wide. What are the primary selling points of this build technique? The ones I have so far:

    - Eases the process of distributing and importing shared modules, especially version upgrades.
    - Requires the dependencies of shared modules to be precisely documented.
    - Removes shared modules from source control, speeding and simplifying checkouts/check-ins (when you have applications with 20+ libraries this is a real factor).
    - Allows more control or awareness of what third-party libs are used in your organization.

    Are there any selling points that I'm missing? Are there any studies or articles giving improvement metrics?

    Read the article

  • Compiz plugin (Grid) does not update in CCSM

    - by pileofrocks
    I upgraded to 13.10. Compiz itself has been updated properly to 0.9.10.2, but in CCSM one plugin (Grid) shows up as the old version.* I know it has been changed, and I can actually see the updated version when I log in as another user. This hints at some kind of problem with per-user settings. (* Actually I'd expect this to involve other if not all other plugins too, but I have simply not yet noticed others.) So far I have tried:

    - resetting Compiz settings to defaults (the GUI way): does not help
    - completely removing and reinstalling the compizconfig-settings-manager and compiz-plugins packages: does not help

    In 13.04 I had a patched/old version of the plugin, but I doubt this is about that, since everything is fine with the other user (that user account already existed in 13.04). Which configuration files should I try deleting?

    Read the article

  • How to use Sprockets Rails plugin on Heroku?

    - by Kevin
    Hi, I just deployed my Rails app to Heroku, but the Javascripts that were using the Sprockets plugin don't work. I understood that, because my Heroku app is read-only, Sprockets won't work. I've found this sprockets_on_heroku plugin that should do the job, but I don't really get how to use it:

    - I added config.gem sprockets in config/environment.rb
    - I added sprockets in my .gems file
    - I pushed these to Heroku and Sprockets was successfully installed
    - I locally ran script/plugin install git://github.com/jeffrydegrande/sprockets_on_heroku.git and the plugin was successfully installed

    Nothing changed on Heroku, so I tried to install the plugin on Heroku with heroku plugins:install git://github.com/jeffrydegrande/sprockets_on_heroku.git, which returned "sprockets_on_heroku installed", but then a heroku restart or a heroku plugins command would return this:

        ~/.heroku/plugins/sprockets_on_heroku/init.rb:1: uninitialized constant ActionController (NameError)
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:25:in `load'
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:25:in `load!'
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:22:in `each'
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:22:in `load!'
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/command.rb:14:in `run'
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/heroku:14
            from /opt/local/bin/heroku:19:in `load'
            from /opt/local/bin/heroku:19

    What should I do? Kevin

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs works on the changes to the database that support your business-critical applications, how do these updates wend their way from dev environments, possibly to QA, hopefully through pre-production and eventually to production, in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"

    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live – I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it – I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state – more specifically, 4Tb of critical data built up over the last 12 years of running your business – and if your quick hotfix happened to accidentally delete that 4Tb of data, then you're "Looking for a new role" pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.

    We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between, of course. Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers' behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers, and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes!

    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt at classifying our customers into some sort of model or "Customer Maturity Framework", as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful". We've taken this quote to heart – we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it's useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers, we know that users are far less advanced down the road of mature database change management than they are with application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here on the barriers people face getting a source control process implemented at their organisations.

    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model, we started from scratch and biased the levels of maturity towards what we actually see amongst our customers.

    But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity: Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.

    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually

    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in process

    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - The production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested in knowing how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome, and it would be great to hear where you find yourself on this journey!

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Package management fails in update-manager with gzip problems and compilation errors. U12.04LTS

    - by HarveyP
    Similar to, but not the same as, an earlier problem I had with the package management system (Package management system corrupted. Cannot install or remove packages. U12.04LTS). I followed all of L. D. James's suggestions in his answer, to no avail. This time, as well as the gzip error, I am also getting compilation errors. The difference may be due to a lack of compilation in my earlier problem, so it may be the same error. The packages concerned are enumerated in the output from update-manager below. Below that is the output from apt-get -f install; apt-get autoremove gives the same output. I tried the update without the SSL updates (9 to install) and got "Unhandled Error in aptdaemon" (output 3 below). Trying the packages one at a time, output 4 below is for firefox, first in the list; it falls over at libssl1.0.0 despite deselecting it from the update. I tried apt-get install --reinstall dpkg, which succeeded, then apt-get install --reinstall tar and apt-get install --reinstall gzip, both of which failed at libssl1.0.0, as ever (as suggested by Subv3rsion elsewhere in this forum). Now I cannot apt-get update with complete success, even after changing server and running apt-get clean (output 5 below).

    1) Output from update-manager:

        The following packages will be upgraded:
          firefox firefox-globalmenu firefox-locale-en libavcodec-extra-53
          libavformat53 libavutil-extra-51 libjson0 libpostproc52 libssl1.0.0
          libswscale2 openssl
        11 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.
        Need to get 0 B/46.5 MB of archives.
        After this operation, 1,416 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        debconf: Perl may be unconfigured (Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67.
        BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366.
        Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
        BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
        Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11.
        BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11.
        Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9.
        Compilation failed in require at (eval 1) line 3.
        BEGIN failed--compilation aborted at (eval 1) line 3.
        ) -- aborting
        (Reading database ... 160575 files and directories currently installed.)
        Preparing to replace libssl1.0.0 1.0.1-4ubuntu5.14 (using .../libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb) ...
        Unpacking replacement libssl1.0.0 ...
        dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: data error'
        dpkg-deb: error: subprocess <decompress> returned error exit status 2
        dpkg: error processing /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb (--unpack):
          subprocess dpkg-deb --fsys-tarfile returned error exit status 2
        No apport report written because MaxReports has already been reached
        Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67.
        BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366.
        Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
        BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
        Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11.
        BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11.
        Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9.
        Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8.
        BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8.
        Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8.
        BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8.
        Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7.
        BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7.
        Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10.
        Compilation failed in require at /usr/share/perl5/Debconf/Db.pm line 7.
        BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Db.pm line 7.
        Compilation failed in require at /usr/share/debconf/frontend line 6.
        BEGIN failed--compilation aborted at /usr/share/debconf/frontend line 6.
        dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
          /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    2) Output from apt-get -f install:

        harveyp@harveyp:~$ sudo apt-get -f install
        [sudo] password for harveyp:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 to upgrade, 0 to newly install, 0 to remove and 11 not to upgrade.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        E: Internal Error, No file name for libssl1.0.0

    3) Unhandled error from aptdaemon:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1045, in _simulate
            trans.unauthenticated = self.__simulate(trans)
          File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1160, in __simulate
            unauthenticated = self._get_unauthenticated()
          File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 347, in _get_unauthenticated
            for pkg in self._iterate_packages():
          File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1356, in _iterate_packages
            for enum, pkg in enumerate(self._cache):
          File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 216, in __iter__
            yield self[pkgname]
          File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 201, in __getitem__
            pkg = self._weakref[key] = Package(self, self._cache[key])
        KeyError: 'librqrcode-ruby-doc'

    4) Output from the update of firefox:

        installArchives() failed: Error in function:
        Setting up libssl1.0.0 (1.0.1-4ubuntu5.14) ...
        Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67.
        BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366.
        Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
        BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
        Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11.
        BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11.
        Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9.
        Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8.
        BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8.
        Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8.
        BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8.
        Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7.
        BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7.
        Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10.

    5) Output from apt-get update:

        ...snip...
        Hit http://ubuntu-archive.mirrors.free.org precise-security/multiverse Translation-en
        Hit http://ubuntu-archive.mirrors.free.org precise-security/restricted Translation-en
        Hit http://ubuntu-archive.mirrors.free.org precise-security/universe Translation-en
        Fetched 368 kB in 6s (59.5 kB/s)
        W: Failed to fetch gzip:/var/lib/apt/lists/partial/ubuntu-archive.mirrors.free.org_ubuntu_dists_precise_universe_source_Sources  Hash Sum mismatch
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • Schema objects not visible in SQL Server Management Studio 2008

    - by Germ
    I'm experiencing a weird problem with a SQL login. When I connect to the server in Microsoft SQL Server Management Studio (2008) using this account, I cannot see any of the tables, stored procedures, etc. that this account should have access to on a particular database. When I connect to the same server within Visual Studio (2008) with the same account, everything is there. When I connect with the same account on a virtual machine, everything is there. I've also had a co-worker connect to the server using the same login, and he's able to view everything as well. I use Microsoft SQL Server Management Studio all day, connecting to different servers and databases, and I've never experienced this problem. Does anyone have any suggestions on how I can diagnose this problem? I've checked to make sure I don't have any table filters, etc. There are several databases on this server, and I'm able to see the correct tables that this account has access to in the other databases just fine. Running this query lists the tables I'm expecting to see:

        SELECT * FROM INFORMATION_SCHEMA.TABLES

    Read the article

  • Application log aggregation, management and notifications...

    - by Matthew Savage
    I'm wondering what everyone is using for logging, log management and log aggregation on their systems. I am working in a company which uses .NET for all its applications, and all systems are Windows-based. Currently each application looks after its own logging and notifications of failures (e.g. if app A fails, it will send out its own 'call for help' to an admin). While this current practice works, it's a bit hacky and hard to manage. I've been trying to find some options for making this work better, and I've come up with the following:

    - log4net & Chainsaw (ah, if it works).
    - Logging via log4net or another framework into a central database, and rolling our own management tool.
    - Logging to the Windows event log and using MOM or System Center Operations Manager to aggregate and manage each of these servers and their apps.
    - A hand-rolled solution to suck all the log files into one point and work some magic across them.

    Essentially what we are after is something which can pull log entries all together and allow for some analytics to be run across them, plus use a kind of event-based system to, for example, send out a warning email when there have been 30+ warning-level logs for an application in the last x minutes. So is there anything I've missed, or something someone else can suggest?

    Read the article

  • Objective-C Getter Memory Management

    - by Marian André
    I'm fairly new to Objective-C and am not sure how to correctly deal with memory management in the following scenario: I have a Core Data entity with a to-many relationship for the key "children". In order to access the children as an array, sorted by the column "position", I wrote the model class this way:

        @interface AbstractItem : NSManagedObject
        {
            NSArray *arrangedChildren;
        }

        @property (nonatomic, retain) NSSet *children;
        @property (nonatomic, retain) NSNumber *position;
        @property (nonatomic, retain) NSArray *arrangedChildren;

        @end

        @implementation AbstractItem

        @dynamic children;
        @dynamic position;
        @synthesize arrangedChildren;

        - (NSArray *)arrangedChildren
        {
            NSArray *unarrangedChildren = [[self.children allObjects] retain];
            NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"position" ascending:YES];
            [arrangedChildren release];
            arrangedChildren = [unarrangedChildren sortedArrayUsingDescriptors:[NSArray arrayWithObject:sortDescriptor]];
            [sortDescriptor release];
            [unarrangedChildren release];
            return [arrangedChildren retain];
        }

        @end

    I'm not sure whether or not to retain unarrangedChildren and the returned arrangedChildren (first and last line of the arrangedChildren getter). Does the NSSet allObjects method already return a retained array? It's probably too late and I have a coffee overdose. I'd be really thankful if someone could point me in the right direction. I guess I'm missing vital parts of memory management knowledge, and I will definitely look into it thoroughly.

    Read the article

  • Content Management Systems for Adaptive Content [closed]

    - by andrewap
    Content management systems (CMS) allow us to easily maintain blogs, news sites, general websites, and so on. Many of them are designed to manage pages of content, and provide tools to organize and customize how that content is displayed on the web. However, as explained by Mark Boulton in his Adaptive Content Management article, and by Karen McGrane in her talk on Adapting Ourselves to Adaptive Content, we are increasingly delivering content not just to the web, but also to other platforms and channels. We need tools to manage pieces of content with meaningful metadata attached. Create once, publish everywhere. The main idea is to store content cleanly, without intertwining it with presentation markup specific to the web. Because each piece of content is compartmentalized semantically, it can easily adapt to fit different platforms and channels. Hence, it's called adaptive content.

    Let's look at a quick example to compare. Say I manage news articles and events:

    - To create a news article, I would tell the CMS the type of content I'm creating, and be asked to fill in a form with individual fields tailored to news articles (e.g. headline, subtitle, full text, short snippet, and images) - i.e. pieces of content.
    - With a traditional web publishing tool, I would probably have had to create a new page under News, and then type in and format the news article in a blank WYSIWYG text editor - i.e. pages of content.

    As you can see, the first design allows me to individually specify content in its smallest semantic unit. When I want to display or consume it, the system can easily provide the pieces I need. So here's my question: is there a CMS that is designed specifically with adaptive content in mind, and that is decoupled from the presentation layer?

    Note: This is not a discussion about the best CMS, or which CMS I should use. I am asking whether a very specific type of tool, a CMS designed for adaptive content, exists for developers to use.
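
    To make the contrast concrete, here is a minimal sketch of what a single piece of adaptive content might look like once it is stored as structured fields rather than as a formatted page. The field names simply mirror the news-article example above; the object-literal shape is an illustration, not the storage format of any particular CMS:

        // A hypothetical adaptive-content record: pure content plus metadata,
        // with no web-specific presentation markup mixed in.
        var newsArticle = {
            type: "news-article",
            headline: "City Opens New Bridge",
            subtitle: "Commuters get a second river crossing",
            fullText: "The mayor cut the ribbon on Tuesday ...",
            shortSnippet: "A second river crossing opens downtown.",
            images: [ { src: "bridge-large.jpg", alt: "The new bridge at dawn" } ]
        };

        // Each channel then picks the pieces it needs: a website might render
        // headline + fullText, while a mobile app shows headline + shortSnippet.
        console.log(newsArticle.headline + " - " + newsArticle.shortSnippet);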

    Read the article

  • Intel MKL memory management and exceptions

    - by Andrew
    Hello everyone, I am trying out Intel MKL, and it appears that they have their own memory management (C-style). They suggest using their MKL_malloc/MKL_free pairs for vectors and matrices, and I do not know what is a good way to handle it. One of the reasons for that is that memory alignment is recommended to be at least 16 bytes, and with these routines it is specified explicitly. I used to rely on auto_ptr and boost::smart_ptr a lot to forget about memory clean-ups. How can I write an exception-safe program with MKL memory management, or should I just use regular auto_ptrs and not bother? Thanks in advance.

    EDIT: this link may explain why I brought up the question: http://software.intel.com/sites/products/documentation/hpc/mkl/win/index.htm

    UPDATE: I used an idea from the answer below for an allocator. This is what I have now:

        template <typename T, size_t TALIGN = 16, size_t TBLOCK = 4>
        class aligned_allocator : public std::allocator<T>
        {
        public:
            pointer allocate(size_type n, const void *hint)
            {
                pointer p = NULL;
                size_t count = sizeof(T) * n;
                size_t count_left = count % TBLOCK;
                if (count_left != 0)
                    count += TBLOCK - count_left;
                if (!hint)
                    p = reinterpret_cast<pointer>(MKL_malloc(count, TALIGN));
                else
                    p = reinterpret_cast<pointer>(MKL_realloc((void *)hint, count, TALIGN));
                return p;
            }

            void deallocate(pointer p, size_type n)
            {
                MKL_free(p);
            }
        };

    If anybody has any suggestions, feel free to make it better.

    Read the article

  • Memory management of objects returned by methods (iOS / Objective-C)

    - by iOSNewb
    I am learning Objective-C and iOS programming through the terrific iTunes U course posted by Stanford (http://www.stanford.edu/class/cs193p/cgi-bin/drupal/). Assignment 2 is to create a calculator with variable buttons. The chain of commands (e.g. 3+x-y) is stored in an NSMutableArray as "anExpression", and then we sub in random values for x and y, based on an NSDictionary, to get a solution. This part of the assignment is tripping me up:

        The final two [methods] "convert" anExpression to/from a property list:

            + (id)propertyListForExpression:(id)anExpression;
            + (id)expressionForPropertyList:(id)propertyList;

        You'll remember from lecture that a property list is just any combination of NSArray, NSDictionary, NSString, NSNumber, etc., so why do we even need this method since anExpression is already a property list? (Since the expressions we build are NSMutableArrays that contain only NSString and NSNumber objects, they are, indeed, already property lists.) Well, because the caller of our API has no idea that anExpression is a property list. That's an internal implementation detail we have chosen not to expose to callers. Even so, you may think, the implementation of these two methods is easy because anExpression is already a property list so we can just return the argument right back, right? Well, yes and no. The memory management on this one is a bit tricky. We'll leave it up to you to figure out. Give it your best shot.

    Obviously, I am missing something with respect to memory management because I don't see why I can't just return the passed arguments right back. Thanks in advance for any answers!

    Read the article

  • Project management software, available options

    - by canni
    Hey, sorry for posting this here; I know this question better suits SuperUser, but I would like to know answers from a developer's point of view. I have been using Indefero for project management etc. for some time, but I found that Indefero's limitations are too big for my team. I'm searching for project-management software that best suits these needs:

    - Open source, but I can consider commercial apps
    - Git integration is mandatory; best if it can support multiple repos per project
    - Time-tracking; good if it can have a Gantt chart connected with issues etc.
    - Issue, milestone and task tracking
    - Good if it can be integrated with Gitosis, or has similar repository access control
    - It must have an option to set up on our own server
    - Markdown syntax support is mandatory (or an easy way to install a plugin for this etc.)
    - Issue tagging will be an advantage
    - It will be used by a developer team 99% of the time, but it has to have some simple interface that clients can use to fill in bug reports etc. per project

    It does not have to fill all these needs, but good if it can :) What options do you know of, and can recommend?

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:

    - shares, for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles);
    - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example); and
    - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU).

    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance, wastefully sizing every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if it didn't meet service level objectives. Here's an example of what happens today: "Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much; we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again." It's not the process I object to, it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with it to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost":

    Me: "It won't make any difference. There's plenty of spare CPU to be had, and your application is completely I/O bound."
    User: "Please do it anyway."
    Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words: don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute. We need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing the application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get the transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success in meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove the delays preventing service level attainment. I've done it by hand for a long time; I think that's what a computer should do for me.

    Read the article

  • My company's management wants to deduct from the salary of under performing employees. How can I convince them not to?

    - by Sparky
    My company's management wants to deduct from the salary of under-performing employees. I'm a member of the Core Strategy committee, and they want my opinion as well. I believe that the throughput of an employee depends on a lot of things, such as the particular work assigned to them, the other members of his/her team, and other factors. Such penalization would be demoralizing to people. How can I convince my management not to do this?

    Read the article

  • What's the best project management software for internal dev. 5 man shop

    - by P.Brian.Mackey
    I work for a large corporation, but we do small intranet web application development. Our project management tracking sucks: it's custom software built by a jr. intern. For what it's worth, our development style is akin to agile, but there's nothing set in stone... a very customer-oriented approach. I need project tracking that meets these criteria:

    - Intranet, internal products
    - Mostly maintenance, some new development
    - 5 developers, 12 products
    - 1 hands-off manager: he really just wants to know estimated man-hours and the due dates for dev, QA and release, along with a short description of each project
    - Free or super cheap

    Bonus: a simple, pretty UI. Think pretty charts.

    Hope I covered everything. Please ask for any clarification. If you read Dreaming in Code, the company uses some project tracking software that sounds pretty sweet. Note: we do have Team Foundation Server. I already tried pushing its use as PM tracking, but it's too complicated; I can't get people to sit and train. So this software has to be easy.

    Read the article
