Search Results

Search found 3469 results on 139 pages for 'broken harddrive'.

  • Ubuntu Sudo apt-get -f install

    - by Justin
    I was trying to install a program, and it said that my dependencies were unmet and that I should run sudo apt-get -f install. I have moved everything I didn't need in /etc/apt/sources.list.d/ into the trash. My sources.list is all Natty while I am running Oneiric, so maybe I need a new sources.list? Here is what I get: justin@justin-000:~$ sudo apt-get -f install [sudo] password for justin: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-image-3.0.0-13-generic Suggested packages: fdutils linux-doc-3.0.0 linux-source-3.0.0 linux-tools The following NEW packages will be installed: linux-image-3.0.0-13-generic 0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded. 2 not fully installed or removed. Need to get 0 B/36.5 MB of archives. After this operation, 117 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 270736 files and directories currently installed.) Unpacking linux-image-3.0.0-13-generic (from .../linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb) ... Done. dpkg: error processing /var/cache/apt/archives/linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb (--unpack): corrupted filesystem tarfile - corrupted package archive No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic run-parts: executing /etc/kernel/postrm.d/zz-extlinux 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic P: Checking for EXTLINUX directory... found. P: Writing config for /boot/vmlinuz-3.0.0-12-generic... P: Writing config for /boot/vmlinuz-2.6.38-11-generic... P: Installing debian theme... done. 
    run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) justin@justin-000:~$ sudo apt-get update justin@justin-000:~$ sudo apt-get update Ign http://dl.google.com stable InRelease Ign http://dl.google.com stable InRelease Get:1 http://dl.google.com stable Release.gpg [198 B] Ign http://us.archive.ubuntu.com oneiric InRelease Ign http://us.archive.ubuntu.com oneiric-security InRelease Ign http://us.archive.ubuntu.com oneiric-updates InRelease Get:2 http://dl.google.com stable Release.gpg [198 B] Get:3 http://dl.google.com stable Release [1,347 B] Get:4 http://dl.google.com stable Release [1,338 B] Hit http://us.archive.ubuntu.com oneiric Release.gpg Hit http://us.archive.ubuntu.com oneiric-security Release.gpg Get:5 http://dl.google.com stable/main i386 Packages [1,220 B] Hit http://us.archive.ubuntu.com oneiric-updates Release.gpg Ign http://dl.google.com stable/main TranslationIndex Get:6 http://dl.google.com stable/main i386 Packages [464 B] Ign http://ppa.launchpad.net oneiric InRelease Hit http://us.archive.ubuntu.com oneiric Release Ign http://ppa.launchpad.net oneiric InRelease Ign http://dl.google.com stable/main TranslationIndex Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://us.archive.ubuntu.com oneiric-security Release Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://us.archive.ubuntu.com oneiric-updates Release Hit http://us.archive.ubuntu.com oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric/restricted Sources Hit http://us.archive.ubuntu.com oneiric/universe Sources Hit http://us.archive.ubuntu.com oneiric/multiverse Sources Hit http://us.archive.ubuntu.com oneiric/main i386 Packages Hit http://ppa.launchpad.net oneiric Release Hit http://us.archive.ubuntu.com oneiric/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/main Sources Hit http://us.archive.ubuntu.com oneiric-security/restricted Sources Hit http://ppa.launchpad.net oneiric Release Hit http://us.archive.ubuntu.com oneiric-security/universe Sources Hit http://us.archive.ubuntu.com oneiric-security/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-security/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-security/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-security/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-security/multiverse i386 Packages Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/universe TranslationIndex Ign http://dl.google.com stable/main Translation-en_US Hit http://us.archive.ubuntu.com oneiric-updates/main Sources Hit http://us.archive.ubuntu.com oneiric-updates/restricted Sources Hit http://us.archive.ubuntu.com oneiric-updates/universe Sources Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-updates/main i386 Packages Ign http://dl.google.com stable/main Translation-en Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/restricted i386 Packages Ign http://dl.google.com stable/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Ign http://dl.google.com stable/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric/main Translation-en Hit http://us.archive.ubuntu.com oneiric/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-security/main Translation-en Hit http://us.archive.ubuntu.com oneiric-security/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-security/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-security/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Fetched 4,765 B in 2s (2,158 B/s) Reading package lists... Done justin@justin-000:~$
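    One possible recovery path is sketched below for reference only; it assumes two things the question implies but does not confirm: that the cached .deb is the corrupted piece, and that the stock Ubuntu entries in /etc/apt/sources.list still point at natty. Back up the file before editing it.

    # throw away the corrupted cached package so apt re-downloads it
    sudo rm /var/cache/apt/archives/linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb
    sudo apt-get clean
    # switch the stock Natty entries over to Oneiric (keep a backup and check the result)
    sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup
    sudo sed -i 's/natty/oneiric/g' /etc/apt/sources.list
    # refresh the package lists and retry the fix-up
    sudo apt-get update
    sudo apt-get -f install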

    Read the article

  • Top 4 Lame Tech Blogging Posts

    - by jkauffman
    From a consumption point of view, tech blogging is a great resource for one-off articles on niche subjects. If you spend any time reading tech blogs, you may find yourself running into several common, useless types of posts tech bloggers slip into. Some of these lame posts may just be natural due to common nerd psychology, and some others are probably due to lame, lemming-like laziness. I’m sure I’ll do my fair share of fitting the mold, but I quickly get bored when I happen upon posts that hit these patterns without any real purpose or personal touches. 1. The Content Regurgitation Posts This is a common pattern fueled by the starving pan-handlers in the web traffic economy. These are posts that are terse opinions or addendums to an existing post. I commonly see these involve huge block quotes from the linked article, which almost always make up over 50% of the post itself. I’ve accidentally gone to these posts when I’m knowingly only interested in the source material. Web links can degrade as well, so if the source link is broken, then, well, I’m pretty steamed. I see this occur with simple opinions on technologies, Stack Overflow solutions, or various tech news like posts from Microsoft. It’s not uncommon to go to the linked article and see the author announce that he “added a blog post” as a response or summary of the topic. This is just rude, but those who do it are probably aware of this. It’s a matter of winning that sweet, juicy web traffic. I doubt this leeching is fooling anybody these days. I would like to rally human dignity and urge people to avoid these types of posts, and just leave a comment on the source material. 2. The “Sorry I Haven’t Posted In A While” Posts This one is far too common. You’ll most likely see this quote somewhere in the body of the offending post: I have been really busy. If the poster is especially guilt-ridden, you’ll see a few volleys of excuses. Here are some common reasons I’ve seen, which I’ll list from least to most painfully awkward: Out of town; Vague allusions to personal health problems (these typically include phrases like “sick”, “treatment”, and “all better now!”); “Personal issues” (which I usually read as “divorce”); Graphic or specific personal health problems (maximum awkwardness potential is achieved if you see links to charity fund websites). I can’t help but try to over-analyze why this occurs. Personally, I see this as an amalgamation of three plain factors: Life happens; Us nerds are duty-driven, and driven to guilt at personal inefficiencies; Tech blogs can become personal journals. I don’t think we can do much about the first two, but on the third I think we could certainly contain our urges. I’m a pretty boring guy and, whether I like it or not, I have an unspoken duty to protect the world from hearing about my unremarkable existence. Nobody cares what kind of sandwich I’m eating. Similarly, if I disappear for a while, it’s unlikely that anybody who happens upon my blog would care why. Rest assured, if I stop posting for a while due to a vasectomy, you will be the first to know. 3. The “At A Conference”, or “Conference Review” Posts I don’t know if I’m like everyone else on this one, but I have never been successfully interested in these posts. It even sounds like a good idea: if I can’t make it to a particular conference (like the KCDC this year), wouldn’t I be interested in a concentrated summary of events? Apparently, no! Within this realm, I’ve never read a post by a blogger that held my interest. 
    What really baffles me is that, for whatever reason, I am genuinely engaged and interested when talking to someone in person regarding the same topic. I have noticed the same phenomenon when hearing about others’ vacations. If someone sends me an email about their vacation, I gloss over it and forget about it quickly. In contrast, if I’m speaking to that individual in person about their vacation, I’m actually interested. I’m unsure why the written medium eradicates the intrigue. I was raised by a roaming pack of friendly wild video games, so that may be a factor. 4. The “Top X Number of Y’s That Z” Posts I’ve seen this one crop up a lot more in the past few years. Here are some fabricated examples: 5 Easy Ways to Improve Your Code; Top 7 Good Habits Programmers Learn From Experience; The 8 Things to Consider When Giving Estimates; Top 4 Lame Tech Blogging Posts. These are attention-grabbing headlines, and I’d assume they rack up hits. In fact, I enjoy a good number of these. But I’ve been drawn to articles like this just to find an endless list of identically formatted posts on the blog’s archive sidebar. Oftentimes these posts have overlapping topics, too. These types of posts give the impression that the author has given thought to prioritizing and organizing the points as the result of a comprehensive consideration of a particular topic. Did the author really weigh all the possibilities when identifying the “Top 4 Lame Tech Blogging Patterns”? Unfortunately, probably not. What a tool. To reiterate, I still enjoy the format, but I feel it is abused. Nowadays, I’m pretty skeptical when approaching posts in this format. If these trends continue, my brain will filter these blog posts out just as effectively as it ignores the encroaching “do xxx with this one trick” advertisements. Conclusion To active blog readers, I hope my guide has saved you precious time by helping you identify lame blog posts at a glance. Save time and energy by skipping over the chaff of the internet! And if you author a blog, perhaps my insight will help you avoid the occasional urge to produce these needless filler posts.

    Read the article

  • Change Comes from Within

    - by John K. Hines
    I am in the midst of witnessing a variety of teams moving away from Scrum. Some of them are doing things like replacing Scrum terms with more commonly understood terminology. Mainly they have gone back to using industry standard terms and more traditional processes like the RAPID decision making process. For example: Scrum Master becomes Project Lead. Scrum Team becomes Project Team. Product Owner becomes Stakeholders. I'm actually quite sad to see this happening, but I understand that Scrum is a radical change for most organizations. Teams are slowly but surely moving away from Scrum to a process that non-software engineers can understand and follow. Some could never secure the education or personnel (like a Product Owner) to get the whole team engaged. And many people with decision-making authority do not see the value in Scrum besides task planning and tracking. You see, Scrum cannot be mandated. No one can force a team to be Agile, collaborate, continuously improve, and self-reflect. Agile adoptions must start from a position of mutual trust and willingness to change. And most software teams aren't like that. Here is my personal epiphany from over a year of attempting to promote Agile on a small development team: The desire to embrace Agile methodologies must come from each and every member of the team. If this desire does not exist - if the team is satisfied with its current process, if the team is not motivated to improve, or if the team is afraid of change - the actual demonstration of all the benefits prescribed by Agile and Scrum will take years. I've read some blog posts lately that criticise Scrum for demanding "Big Change Up Front." One's opinion of software methodologies boils down to one's perspective. If you see modern software development as successful, you will advocate for small, incremental changes to how it is done. If you see it as broken, you'll be much more motivated to take risks and try something different. So my question to you is this - is modern software development healthy or in need of dramatic improvement? I can tell you from personal experience that any project that requires exploration, planning, development, stabilisation, and deployment is hard. Trying to make that process better with only a slightly modified approach is a mistake. You will become completely dependent upon the skillset of your team (the only variable you can change). But the difficulty of planned work isn't one of skill. It isn't until you solve the fundamental challenges of communication, collaboration, quality, and efficiency that skill even comes into play. So I advocate for Big Change Up Front. And I advocate for it to happen often until those involved can say, from experience, that it is no longer needed. I hope every engineer has the opportunity to see the benefits of Agile and Scrum on a highly functional team. I'll close with more key learnings that can help with a Scrum adoption: Your leaders must understand Scrum. They must understand software development, its inherent difficulties, and how Scrum helps. If you attempt to adopt Scrum before the understanding is there, your leaders will apply traditional solutions to your problems - often creating more problems. Success should be measured by quality, not revenue. Namely, the value of software to an organization is the revenue it generates minus ongoing support costs. You should identify quality-based metrics that show the effect Agile techniques have on your software. Motivation is everything. 
    I finally understand why so many Agile advocates say that if you are not on a team using Agile, you should leave and find one. Scrum and especially Agile encompass many elegant solutions to a wide variety of problems. If you are working on a team that has not encountered these problems, the team may never see the value in the solutions. Having said all that, I'm not giving up on Agile or Scrum. I am convinced it is a better approach for software development. But reality is saying that its adoption is not straightforward and is highly subject to disruption. Unless, that is, everyone really, really wants it.

    Read the article

  • Help Protect Your Children with the CEOP Enhanced Internet Explorer 8

    - by Asian Angel
    Do you want to make Internet Explorer safer and more helpful for you and your family? Then join us as we look at the CEOP (Child Exploitation and Online Protection Centre) enhanced version of Internet Explorer 8. Setting CEOP Up We chose to install the whole CEOP pack in order to have access to the complete set of CEOP Tools. The install process is comprised of two parts: it will begin with CEOP-branded windows showing the components being installed… Note: The components can be downloaded separately for those who only want certain CEOP components added to their browser. Then it will move to the traditional Microsoft Internet Explorer 8 install windows. One thing that we did notice: here you are told that you will need to restart your computer, but other windows mention a log off/log on process. Just to make certain that everything goes smoothly, we recommend restarting your computer when the installation process is complete. In the EULA section you can see the versions of Windows that the CEOP Pack works with. Once you get past the traditional Microsoft install windows, you will be dropped back into the CEOP-branded windows. CEOP in Action After you have restarted your computer and opened Internet Explorer, you will notice that your homepage has been changed. When it comes to your children, that is not a bad thing in this instance. It will also give you an opportunity to look through the CEOP online resources. For the moment you may be wondering where everything is, but do not worry. First, you can find the two new search providers in the drop-down menu for your “Search Bar” and select a new default if desired. The second thing to look for is the new links that have been added to your “Favorites Menu”. These links can definitely be helpful for you and your family. The third part will require your “Favorites Bar” to be visible in order to see the “Click CEOP Button”. If you have not previously done so, you will need to turn on subscribing for “Web Slices”. Click on “Yes” to finish the subscription process. Clicking on the “CEOP Button” again will show all kinds of new links to help provide information for you and your children. Notice that the top part is broken down into “topic categories” while the bottom part is set up for “age brackets”…very nice for helping you focus on the information that you want and/or need. Looking for information and help on a particular topic? Clicking on the “Cyberbullying Link”, for example, will open the following webpage with information about cyberbullying and a link to get help with the problem. Need something that is focused on your child’s age group? Clicking on the “8-10? Link” as an example opened this page. Want information that is focused on you? The “Parent? Link” leads to this page. The “topic categories & age brackets” make the CEOP Button a very helpful and “family friendly” addition to Internet Explorer. Perhaps you (or your child) want to conduct a search for something that is affecting your child. As you type in a “search term” both of the search providers will provide helpful suggestions for dealing with the problem. We felt that these were very nice suggestions in both instances here… Conclusion We have been able to give you a good peek at what the CEOP Tools can do, but the best way to see how helpful it can be for you and your family is to try it for yourself. Your children’s safety and happiness are worth it. 
    Links Download the Internet Explorer CEOP Pack (link at bottom of webpage) Note: If you are interested in a singular component or only some use these links. Download the Click CEOP Button Download Search CEOP Download Internet Safety and Security Search

    Read the article

  • Connection Pooling is Busted

    - by MightyZot
    A few weeks ago we started getting complaints about performance in an application that has performed very well for many years. The application is an n-tier application that uses ADODB with the SQLOLEDB provider to talk to a SQL Server database. Our object model is written in such a way that each public method validates security before performing requested actions, so there is a significant number of queries executed to get information about file cabinets, retrieve images, create workflows, etc. (PaperWise is a document management and workflow system.) A common factor for these customers is that they have remote offices connected via MPLS networks. Naturally, the first thing we looked at was the query performance in SQL Profiler. All of the queries were executing within expected timeframes; most of them were so fast that the duration in SQL Profiler was zero. After getting nowhere with SQL Profiler, the situation was escalated to me. I decided to take a peek with Process Monitor. Procmon revealed some “gaps” in the TCP/IP traffic. There were notable delays between send and receive pairs. The send and receive pairs themselves were quite snappy, but quite often there was a notable delay between a receive and the next send. You might expect some delay because, presumably, the application is doing some thinking in-between the pairs. But comparing the procmon data at the remote locations with the procmon data for workstations on the local network showed that the remote workstations were significantly delayed. Procmon also showed a high number of disconnects. Wireshark traces showed that connections to the database were taking between 75ms and 150ms. Not only that, but connections to a file share containing images were taking 2 seconds! So, I asked about a trust. Sure enough, there was a trust between two domains and the file share was on the second domain. Joining a remote workstation to the domain hosting the share containing images alleviated the time delay in accessing the file share. Removing the trust had no effect on the connections to the database. Microsoft Network Monitor includes filters that parse TDS packets. TDS is the protocol that SQL Server uses to communicate. There is a certificate exchange and some SSL that occurs during authentication. All of this was evident in the network traffic. After staring at the network traffic for a while, and examining packets, I decided to call it a night. On the way home that night, something about the traffic kept nagging at me. Then it dawned on me…at the beginning of the dance of packets between the client and the server all was well. Connection pooling was working and I could see multiple queries getting executed on the same connection and ephemeral port. After a particular query, connecting to two different servers, I noticed that ADODB and SQLOLEDB started making repeated connections to the database on different ephemeral ports. SQL Server would execute a single query and respond on a port, then open a new port and execute the next query. Connection pooling appeared to be broken. The next morning I wrote a test to confirm my hypothesis. Turns out that the sequence causing the connection nastiness goes something like this: Make a connection to the database. Open a result set that returns enough records to require multiple roundtrips to the server. For each result, query for some other data in the database (this will open a new implicit connection.) 
    Close the inner result set and repeat for every item in the original result set. Close the original connection. Provided that the first result set returns enough data to require multiple roundtrips to the server, ADODB and SQLOLEDB will start making new connections to the database for each query executed in the loop. Originally, I thought this might be due to Microsoft’s denial of service (DoS) attack protection. After turning those features off to no avail, I eventually thought to switch my queries to client-side cursors instead of server-side cursors. Server-side cursors are the default, by the way. Voila! After switching to client-side cursors, the disconnects were gone and the above sequence yielded two connections as expected. While the real problem is the amount of time it takes to make connections over these MPLS networks (100ms on average), switching to client-side cursors made the problem go away. Believe it or not, this is actually documented by Microsoft, and rather difficult to find. (At least it was while we were trying to troubleshoot the problem!) So, if you’re noticing performance issues on slower networks, or networks with slower switching, take a look at the traffic in a tool like Microsoft Network Monitor. If you notice a high number of disconnects, and you’re using fire-hose or server-side cursors, then try switching to client-side cursors and you may see the problem go away. Most likely, Microsoft believes this to be appropriate behavior, because ADODB can’t guarantee that all of the data has been retrieved when you execute the inner queries. I’m not convinced, though, because the problem remains even after replacing all of the implicit connections with explicit connections and closing those connections in-between each of the inner queries. In that case, there doesn’t seem to be a reason why ADODB can’t use a single connection from the connection pool to make the additional queries, bringing the total number of connections to two. Instead, ADO appears to make an assumption about the state of the connection. I’ve reported the behavior to Microsoft and am waiting to hear from the appropriate team, so that I can demonstrate the problem. Maybe they can explain to us why this is appropriate behavior. :)
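    To make the fix concrete, here is a minimal sketch of the client-side cursor change. It is written in Python over COM (pywin32) rather than the application's original ADO code, and the server, database, table, and column names are made up for illustration; the one line that matters is setting CursorLocation to adUseClient before opening the outer recordset.

    import win32com.client

    adUseClient = 3  # ADO CursorLocationEnum: fetch the whole result set to the client up front

    conn = win32com.client.Dispatch("ADODB.Connection")
    conn.Open("Provider=SQLOLEDB;Data Source=MYSERVER;"          # hypothetical server name
              "Initial Catalog=PaperWise;Integrated Security=SSPI;")

    outer = win32com.client.Dispatch("ADODB.Recordset")
    outer.CursorLocation = adUseClient   # the fix: the default is server-side (adUseServer = 2)
    outer.Open("SELECT CabinetID FROM FileCabinets", conn)       # hypothetical table/column

    while not outer.EOF:
        # inner per-row query; with a client-side outer cursor these reuse the pooled connection
        inner = win32com.client.Dispatch("ADODB.Recordset")
        inner.Open("SELECT COUNT(*) FROM Documents WHERE CabinetID = %d"
                   % outer.Fields("CabinetID").Value, conn)
        inner.Close()
        outer.MoveNext()

    outer.Close()
    conn.Close()

    In the original VB/ADO code the equivalent change is simply rs.CursorLocation = adUseClient before rs.Open.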

    Read the article

  • "corrupted filesystem tarfile - corrupted package archive" error

    - by Justin
    I was trying to install a program and it said that my dependencies were unmet, and that I should run, sudo apt-get -f install. I have moved everything I didn't need in /etc/apt/sources.list.d/ into the trash. My source.list is all Natty while I am running Oneiric. So maybe I need a new source.list? But here are the things I have: justin@justin-000:~$ sudo apt-get -f install [sudo] password for justin: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-image-3.0.0-13-generic Suggested packages: fdutils linux-doc-3.0.0 linux-source-3.0.0 linux-tools The following NEW packages will be installed: linux-image-3.0.0-13-generic 0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded. 2 not fully installed or removed. Need to get 0 B/36.5 MB of archives. After this operation, 117 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 270736 files and directories currently installed.) Unpacking linux-image-3.0.0-13-generic (from .../linux-image-3.0.0-13- generic_3.0.0-13.22_i386.deb) ... Done. dpkg: error processing /var/cache/apt/archives/linux-image-3.0.0-13 generic_3.0.0-13.22_i386.deb (--unpack): corrupted filesystem tarfile - corrupted package archive No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic run-parts: executing /etc/kernel/postrm.d/zz-extlinux 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic P: Checking for EXTLINUX directory... found. P: Writing config for /boot/vmlinuz-3.0.0-12-generic... P: Writing config for /boot/vmlinuz-2.6.38-11-generic... P: Installing debian theme... done. 
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) justin@justin-000:~$ sudo apt-get update justin@justin-000:~$ sudo apt-get update Ign dl.google.com stable InRelease Ign dl.google.com stable InRelease Get:1dl.google.com stable Release.gpg [198 B] Ign us.archive.ubuntu.com oneiric InRelease Ign us.archive.ubuntu.com oneiric-security InRelease Ign http://us.archive.ubuntu.com oneiric-updates InRelease Get:2 dl.google.com stable Release.gpg [198 B] Get:3 dl.google.com stable Release [1,347 B] Get:4 dl.google.com stable Release [1,338 B] Hit us.archive.ubuntu.com oneiric Release.gpg Hit us.archive.ubuntu.com oneiric-security Release.gpg Get:5/dl.google.com stable/main i386 Packages [1,220 B] Hit tp://us.archive.ubuntu.com oneiric-updates Release.gpg Ign tp://dl.google.com stable/main TranslationIndex Get:6 tp://dl.google.com stable/main i386 Packages [464 B] Ign ttp://ppa.launchpad.net oneiric InRelease Hit ttp://us.archive.ubuntu.com oneiric Release Ign ttp://ppa.launchpad.net oneiric InRelease Ign ttp://dl.google.com stable/main TranslationIndex Hit ttp://ppa.launchpad.net oneiric Release.gpg Hit ttp://us.archive.ubuntu.com oneiric-security Release Hit ttp://ppa.launchpad.net oneiric Release.gpg Hit ttp://us.archive.ubuntu.com oneiric-updates Release Hit ttp://us.archive.ubuntu.com oneiric/main Sources Hit ttp://us.archive.ubuntu.com oneiric/restricted Sources Hit ttp://us.archive.ubuntu.com oneiric/universe Sources Hit ttp://us.archive.ubuntu.com oneiric/multiverse Sources Hit ttp://us.archive.ubuntu.com oneiric/main i386 Packages Hit ttp://ppa.launchpad.net oneiric Release Hit ttp://us.archive.ubuntu.com oneiric/restricted i386 Packages Hit ttp://us.archive.ubuntu.com oneiric/universe i386 Packages Hit ttp://us.archive.ubuntu.com oneiric/multiverse i386 Packages Hit ttp://us.archive.ubuntu.com oneiric/main TranslationIndex Hit ttp://us.archive.ubuntu.com oneiric/multiverse TranslationIndex Hit ://us.archive.ubuntu.com oneiric/restricted TranslationIndex Hit htp://us.archive.ubuntu.com oneiric/universe TranslationIndex Hit htp://us.archive.ubuntu.com oneiric-security/main Sources Hit ttp://us.archive.ubuntu.com oneiric-security/restricted Sources Hit ttp://ppa.launchpad.net oneiric Release Hit ttp://us.archive.ubuntu.com oneiric-security/universe Sources Hit tp://us.archive.ubuntu.com oneiric-security/multiverse Sources Hit htp://us.archive.ubuntu.com oneiric-security/main i386 Packages Hit tp://us.archive.ubuntu.com oneiric-security/restricted i386 Packages Hit ttp://us.archive.ubuntu.com oneiric-security/universe i386 Packages Hit ttp://us.archive.ubuntu.com oneiric-security/multiverse i386 Packages Hit htp://ppa.launchpad.net oneiric/main Sources Hit ttp://ppa.launchpad.net oneiric/main i386 Packages Ign htp://ppa.launchpad.net oneiric/main TranslationIndex Hit ttp://us.archive.ubuntu.com oneiric-security/main TranslationIndex Hit ttp://us.archive.ubuntu.com oneiric-security/multiverse TranslationIndex Hit htp://us.archive.ubuntu.com oneiric-security/restricted TranslationIndex Hit htp://us.archive.ubuntu.com oneiric-security/universe TranslationIndex Ign htp://dl.google.com stable/main Translation-en_US Hit htp://us.archive.ubuntu.com oneiric-updates/main Sources Hit htp://us.archive.ubuntu.com oneiric-updates/restricted Sources Hit tp://us.archive.ubuntu.com 
oneiric-updates/universe Sources Hit htp://us.archive.ubuntu.com oneiric-updates/multiverse Sources Hit htp://us.archive.ubuntu.com oneiric-updates/main i386 Packages Ign htp://dl.google.com stable/main Translation-en Hit ttp://ppa.launchpad.net oneiric/main Sources Hit htp://ppa.launchpad.net oneiric/main i386 Packages Hit ttp://us.archive.ubuntu.com oneiric-updates/restricted i386 Packages Ign htp://dl.google.com stable/main Translation-en_US Ign htp://ppa.launchpad.net oneiric/main TranslationIndex Hit hp://us.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit ttp://us.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit htp://us.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit htp://us.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit htp://us.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Ign htp://dl.google.com stable/main Translation-en Hit htp://us.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit ttp://us.archive.ubuntu.com oneiric/main Translation-en Hit ttp://us.archive.ubuntu.com oneiric/multiverse Translation-en Hit htp://us.archive.ubuntu.com oneiric/restricted Translation-en Hit htp://us.archive.ubuntu.com oneiric/universe Translation-en Hit htp://us.archive.ubuntu.com oneiric-security/main Translation-en Hit hp://us.archive.ubuntu.com oneiric-security/multiverse Translation-en Hit htp://us.archive.ubuntu.com oneiric-security/restricted Translation-en Hit htp://us.archive.ubuntu.com oneiric-security/universe Translation-en Hit htp://us.archive.ubuntu.com oneiric-updates/main Translation-en Hit htp://us.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit htp://us.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit htp://us.archive.ubuntu.com oneiric-updates/universe Translation-en Ign htp://ppa.launchpad.net oneiric/main Translation-en_US Ign htt://ppa.launchpad.net oneiric/main Translation-en Ign htp://ppa.launchpad.net oneiric/main Translation-en_US Ign htp://ppa.launchpad.net oneiric/main Translation-en Fetched 4,765 B in 2s (2,158 B/s) Reading package lists... Done justin@justin-000:~$

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool. 
    RESTORE DATABASE WidgetProduction_virtual FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak' WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf', MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf', NORECOVERY, STATS=1, REPLACE GO RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY   Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8Gb production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional used storage for the new database following the virtual restore.   The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
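    As a rough illustration of that nightly step, the release test can be as simple as a couple of sqlcmd calls run against the virtual database; the script names below are hypothetical placeholders, not part of the original post.

    sqlcmd -S localhost -d WidgetProduction_virtual -b -i upgrade_script.sql
    sqlcmd -S localhost -d WidgetProduction_virtual -b -i validation_tests.sql

    The -b switch makes sqlcmd return a non-zero exit code when a SQL error occurs, which is what lets the CI tool flag the build as broken.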

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes some sense to talk about the new parallel features in more detail. For basic understanding, make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here. But now I want to discuss the concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system. The goal of Auto DOP The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (ideal DOP) that still scales. In other words, if we were to increase the DOP above a certain point, we would see a tailing off of the performance curve and the resource cost / performance would become less optimal. Therefore the ideal DOP is the best resource/performance point for that statement. The goal of Queuing On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we don’t throttle down a DOP because other statements are running on the system, and that we stay within the physical limits of a system’s processing power. Instead of making statements go at a lower DOP we queue them to make sure they will get all the resources they want to run efficiently without trashing the system. The theory – and hopefully the practice – is that by giving a statement the optimal DOP the sum of all statements runs faster with queuing than without queuing. Increasing the Number of Potential Parallel Statements To determine how many statements we will consider running in parallel, a single parameter should be looked at. That parameter is called PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here… but do realize that anything serial (i.e. anything that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system where you have two groups of queries, serial short running and potentially parallel long running ones, you may want to worry only about the long running ones with this parallel statement threshold. As an example, let’s assume the short running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that). The long running stuff is in the realm of 1 – 5 minutes. It might be a good choice to set the threshold to somewhere north of 30 seconds. That way the short running queries all run serial as they do today (if it ain’t broken, don’t fix it) and the long running ones get evaluated for (higher degrees of) parallelism. This makes sense because the longer running ones are (at least in theory) more interesting to unleash a parallel processing model on and the benefits of running these in parallel are much more significant (again, that is mostly the case). Setting a Maximum DOP for a Statement Now that you know how to control how many of your statements are considered to run in parallel, let’s talk about the specific degree of any given statement that will be evaluated. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT. 
This parameter controls the degree on the entire cluster and by default it is CPU (meaning it equals Default DOP). For the sake of an example, let’s say our Default DOP is 32. Looking at our 5 minute queries from the previous paragraph, the limit to 32 means that none of the statements that are evaluated for Auto DOP ever runs at more than DOP of 32. Concurrently Running a High DOP A basic assumption about running high DOP statements at high concurrency is that you at some point in time (and this is true on any parallel processing platform!) will run into a resource limitation. And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle’s case), but that is not the point of this post… The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at that highest efficiency DOP. The PARALLEL_SERVER_TARGET parameter is the all important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in.  PARALLEL_SERVER_TARGET is set per instance (so needs to be set to the same value on all 8 nodes in a full rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP, which equates to 4* Default DOP (2 processes for a DOP, default value is 2 * Default DOP, hence a default of 4 * Default DOP). Let’s say we have PARALLEL_SERVER_TARGET set to 128. With our limit set to 32 (the default) we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any next statement will be queued. To run a system at high concurrency the PARALLEL_SERVER_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVER_TARGET and PARALLEL_DEGREE_LIMIT you can control easily how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead and look at these parameters and set these based on your requirements.
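    For reference, here is a hedged sketch of how the knobs discussed above might be set on an 11.2 instance, using the example numbers from this post rather than recommended values (in the initialization parameters the concurrency slider appears as PARALLEL_SERVERS_TARGET, and on a RAC or Database Machine each setting goes on every instance):

    -- illustration only: the values are the example figures from this post
    ALTER SYSTEM SET parallel_degree_policy = AUTO;        -- turn on Auto DOP and statement queuing
    ALTER SYSTEM SET parallel_min_time_threshold = 30;     -- statements estimated under 30s stay serial
    ALTER SYSTEM SET parallel_degree_limit = 32;           -- cap any single statement at DOP 32
    ALTER SYSTEM SET parallel_servers_target = 128;        -- start queuing once 128 PX server processes are in use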

    Read the article

  • Slide Creation Checklist

    - by Daniel Moth
    PowerPoint is a great tool for conference (large audience) presentations, which is the context for the advice below. The #1 thing to keep in mind when you create slides (at least for conference sessions) is that they are there to help you remember what you were going to say (the flow and key messages) and for the audience to get a visual reminder of the key points. Slides are not there for the audience to read what you are going to say anyway. If they were, what is the point of you being there? Slides are not holders for complete sentences (unless you are quoting) – use Microsoft Word for that purpose either as a physical handout or as a URL link that you share with the audience. When you dry run your presentation, if you find yourself reading the bullets on your slide, you have missed the point. You have a message to deliver that can be done regardless of your slides – remember that. The focus of your audience should be on you, not the screen. Based on that premise, I have created a checklist that I go over before I start a new deck and also once I think my slides are ready. Turn AutoFit OFF. I cannot stress this enough. For each slide, explicitly pick a slide layout. In my presentations, I only use one Title Slide, a Section Header per demo slide, and for the rest of my slides one of three layouts: Title and Content, Title Only, or Blank. Most people that are newbies to PowerPoint get whatever default layout the New Slide creates for them and then start deleting and adding placeholders to that. You can do better than that (and you'll be glad you did if you also follow item #11 below). Every slide must have an image. Remove all punctuation (e.g. periods, commas) other than exclamation points and question marks (! ?). Don't use color or other formatting (e.g. italics, bold) for text on the slide. Check your animations. Avoid animations that hide elements that were on the slide (instead use a new slide and transition). Ensure that animations that bring new elements in, bring them into white space instead of over other existing elements. A good test is to print the slide and see that it still makes sense even without the animation. Print the deck in black and white choosing the "6 slides per page" option. Can I still read each slide without losing any information? If the answer is "no", go back and fix the slides so the answer becomes "yes". Don't have more than 3 bullet levels/indents. In other words: you type some text on the slide, hit 'Enter', hit 'Tab', type some more text and repeat that sequence at most one final time. Ideally your outer bullets have only one level of sub-bullets (i.e. one level of indentation beneath them). Don't have more than 3-5 outer bullets per slide. Space them evenly horizontally, e.g. with blank lines in between. Don't wrap. For each bullet on all slides check: does the text for that bullet wrap to a second line? If it does, change the wording so it doesn't. Or create a terser bullet and make the original long text a sub-bullet of that one (thus decreasing the font size, but still being consistent) and have no wrapping. Use the same consistent fonts (i.e. Font Face, Font Size, etc.) throughout the deck for each level of bullet. In other words, don't deviate from the PowerPoint template you chose (or that was chosen for you). Go on each slide and hit 'Reset'. 'Reset' is a button on the 'Home' tab of the ribbon, or you can find the 'Reset Slide' menu when you right click on a slide on the left 'Slides' list. 
    If your slides can survive doing that without you "fixing" things after the Reset action, you are golden! For each slide ask yourself: if I had to replace this slide with a single sentence that conveys the key message, what would that sentence be? This exercise leads you to merge slides (where the key message is split) or split a slide into many, if there were too many key messages on the slide in the first place. It can also lead you to redesign a slide so the text on it really is just explanation or evidence for the key message you are trying to convey. Get the length right. Is the length of this deck suitable for the time you have been given to present? If not, cut content! It is far better to deliver less in a relaxed, polished, engaging, memorable way than to deliver more content in great haste. As a rule of thumb, multiply 2 minutes by the number of slides you have, add the time you need for each demo, and check if that adds up to more than the time you have been allotted. If it does, start cutting content – we've all been there and it has to be done. As always, rules and guidelines are there to be bent and even broken sometimes. Start with the above and on a slide-by-slide basis decide which rules you want to bend. That is smarter than throwing all the rules out from the start, right? Comments about this post welcome at the original blog.

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data and log file, .vmdf, and .vldf, that becomes the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool. 
RESTORE DATABASE WidgetProduction_virtual
FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH
    MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
    MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
    NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY
Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8 GB production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore.   The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
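Not part of the original post, but a quick way to sanity-check the result: once the virtual restore completes, the catalog view sys.master_files shows which physical files back the new database, so you can confirm that only the small .vmdf and .vldf files were created. The database name below is simply the WidgetProduction_virtual name from the example above.
-- Confirm which physical files back the virtually restored database
SELECT DB_NAME(mf.database_id) AS database_name,
       mf.type_desc,
       mf.name AS logical_name,
       mf.physical_name
FROM sys.master_files AS mf
WHERE mf.database_id = DB_ID(N'WidgetProduction_virtual');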

    Read the article

  • Kill a tree, save your website? Content strategy in action, part III

    - by Roger Hart
    A lot has been written about how driving content strategy from within an organisation is hard. And that's true. Red Gate is pretty receptive to new ideas, so although I've not had a total walk in the park, it's been a hike with charming scenery. But I'm one of the lucky ones. Lots of people are involved in content, and depending on your organisation some of those people might be the kind who'll gleefully call themselves "stakeholders". People holding a stake generally want to stick it through something's heart and bury it at a crossroads. Winning them over is not always easy. (Richard Ingram has made a nice visual summary of how this can feel - Content strategy Snakes & ladders - pdf ) So yes, a lot of content strategy advocates are having a hard time. And sure, we've got a nice opportunity to get together and have a hug and a cry, but in the interim we could use a hand. What to do? My preferred approach is, I'll confess, brutal. I'd like nothing so much as to take a scorched earth approach to our website. Burn it, salt the ground, and build the new one right: focusing on clearly delineated business and user content goals, and instrumented so we can tell if we're doing it right. I'm never getting buy-in for that, but a boy can dream. So how about just getting buy-in for some small, tenable improvements? Easier, but still non-trivial. I sat down for a chat with our marketing and design guys. It seemed like a good place to start, even if they weren't up for my "Ctrl-A + Delete"  solution. We talked through some of this stuff, and we pretty much agreed that our content is a bit more broken than we'd ideally like. But to get everybody on board, the problems needed visibility. Doing a visual content inventory Print out the internet. Make a Wall Of Content. Seriously. If you've already done a content inventory, you know your architecture, and you know the scale of the problem. But it's quite likely that very few other people do. So make it big and visual. I'm going to carbon hell, but it seems to be working. This morning, I printed out a tiny, tiny part of our website: the non-support content pertaining to SQL Compare I made big, visual, A3 blowups of each page, and covered a wall with them. A page per web page, spread over something like 6M x 2M, with metrics, right in front of people. Even if nobody reads it (and they are doing) the sheer scale is shocking. 53 pages, all told. Some are redundant, some outdated, some trivial, a few fantastic, and frighteningly many that are great ideas delivered not-quite-right. You have to stand quite far away to get it all in your field of vision. For a lot of today, a whole bunch of folks have been gawping in amazement, talking each other through it, peering at the details, and generally getting excited about content. Developers, sales guys, our CEO, the marketing folks - they're engaged. Will it last? I make no promises. But this sort of wave of interest is vital to getting a content strategy project kicked off. While the content strategist is a saucer-eyed orphan in the cupboard under the stairs, they're not getting a whole lot done. Of course, just printing the site won't necessarily cut it. You have to know your content, and be able to talk about it. Ideally, you'll also have page view and time-on-page metrics. One of the most powerful things you can do is, when people are staring at your wall of content, ask them what they think half of it is for. Pretty soon, you've made a case for content strategy. 
We're also going to get folks to mark it up - cover it with notes and post-its, let us know how they feel about our content. I'll be blogging about how that goes, but it's exciting. Different business functions have different needs from content, so the more exposure the content gets, and the more feedback, the more you know about those needs. Fingers crossed for awesome.

    Read the article

  • CodePlex Daily Summary for Tuesday, November 29, 2011

    CodePlex Daily Summary for Tuesday, November 29, 2011Popular ReleasesMFCMAPI: November 2011 Release: Build: 15.0.0.1029 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeWCF Community Site: WCF Web API Preview 6: Welcome to the sixth preview release of WCF Web API on Codeplex! Install WCF Web API Preview 6 using NuGet New Features/EnhancementsURL form encoding - Http request bodies sent as application/x-www-form-urlencoded can now deserialize into objects and participate in content negotiation. Custom OData entity keys - The ODataFormatter now supports custom conventions for determining which properties identify an entity key. [ServiceContract] is no longer required on the Web API class definiti...Devpad: 4.11: Whats new for Devpad 4.11: New Import Archive Improved Save Dialog Improved Replace Dialog Improved Go To Dialog Minor Bug Fix's, improvements and speed upsSubExtractor: Release 1021: Feature: Allow OcrMap.bin file to be placed in same directory as EXE Feature: Choose Subtitle dialog directory remembered while app is up Feature: Allow Save As when creating Subtitle files (initial directory is remembered) Feature: Allow resizing of all main window dialogs Fix: Issue Id #682 (Dynamically generating the expected baseline and top offsets for characters. This fixes the problem of manually entered characters not in character selector incorrectly being put on a different line or...Home Access Plus+: v7.7: v7.7.1128.2200Added: ShowTo Option on Resources Added: Basic Authentication Logon Method Added: Button on the Live Tracker page to clear logons in the database but not do a remote logoff Updated: jquery from v1.6.2 to v1.7.1 Fixed: Another attempt at fixing the SIMS import in the booking system Fixed: JSON Issues with the Setup Added: More DEBUG Events to the Event Log (note DEBUG not RELEASE) Added: Support for Week A Week B instead of Week 1 and Week 2 Updated: The My Files ...Diagnostics Tool for Microsoft Dynamics CRM 2011: CrmDiagTool2011 (5.1.27.82): NEW FEATURE: DevErrors flag is now accessible for remote server (need to be local admin to access web.config file through administrative share)CommonLibrary.NET: CommonLibrary.NET 0.9.8 - Alpha: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Samples in <root>\src\Lib\CommonLibrary.NET\Samples CommonLibrary.NET 0.9.8 AlphaNew Dynamic Scripting Language : workitem : 7493 Fixes 1622 6803Widget Suite for DotNetNuke: 01.04.00: The following features/enhancements are associated with this release: Bug: Removed the empty box/white space created by some widgets New Widget: FlexSlider New Widget: Google+ Button New Widget: Klout Badge Sample Widget Script FileFxCop Integrator for Visual Studio 2010: FxCop Integrator 2.0.0 RC: Replaced the MSBuild Tasks installer to fix the bug of the targets file. FxCop Integrator is not affected by this bug. (Nov 28 2011) New FeatureSupported calculating code metrics with Code Metrics PowerTool. (Work Item #6568: 6568). Provided MSBuild tasks. #7454: 7454 Supported to filter out auto-generated code from code analysis result. 
#7485: 7485 Supported exporting report of code analysis result. Supported multi-project analysis. Supported file level analysis. Added the featu...Terminals: Version 2 - Beta 4 Release: Beta 4 Refresh Build Dont forget to backup your config files BEFORE upgrading! As usual, please take time to use and abuse this release. We left logging in place, and this is a debug build so be sure to submit your logs on each bug reported, and please do report all bugs! Updated the About form to include the date and time of the build. Useful for CI builds to ensure we have the correct version "Favourites" and "History" save their expanded states after app restarts Code cleanup, secu...MiniTwitter: 1.76: MiniTwitter 1.76 ???? ?? ?????????? User Streams ???????????? User Streams ???????????、??????????????? REST ?????????? ?????????????????????????????? ??????????????????????????????Media Companion: MC 3.424b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movie Show Resolutions... Resolved issue when reverting multiselection of movies to "-none-" Added movie rename support for subtitle files '.srt' & '.sub' Finalised code for '-1' fix - radiobutton to choose either filename or title Fixed issue with Movie Batch Wizard Fanart - ...Advanced Windows Phone Enginering Tool: WPE Downloads: This version of WPE gives you basic updating, restoring, and, erasing for your Windows Phone device.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.37: Fix for issue #16936 - CSS strings containing ASP.NET replacement blocks that contain the same delimiter character(s) as the CSS string cause the CSS parser to barf. Need to use the -aspnet:1 flag to treat the ASP.NET blocks as single entities. Added support for CSS3 @keyframes syntax. Added NuGet package support to the DLL project. Expand the -line switch to also optionally specify the multiline/single line mode, and the spaces-per-tab for multiline mode. Tweak the -pretty switch to be more ...Anno 2070 Assistant: Beta v1.0 (STABLE): Anno 2070 Assistant Beta v1.0 Released! Features Included: Complete Building Layouts for Ecos, Tycoons & Techs Complete Production Chains for Ecos, Tycoons & Techs Completed Credits Screen Known Issues: Not all production chains and building layouts may be on the lists because they have not yet been discovered. However, data is still 99.9% complete. Currently the Supply & Demand, including Calculator screen are disabled until version 1.1.Oil Prices: Oil Prices V1.1: Oil Prices V1.1 Fix Bangchak price listTAXILISM: TAXILISM V1.0: TAXILISMExamine: v1.4 - Beta: A fairly mega release which borrows some behaviors from the currently under development v2.0 version, this means there are some breaking changes which are listed below, though I don't think these breaking changes will affect many. FeaturesUpgraded DLLs to .Net 4.0 runtime Azure support No more file queue, all asynchronous operations are handled by .Net 4.0's async Task scheduling system, this not only increases performance but better handles async operations. Running in async mode will...Minemapper: Minemapper v0.1.7: Including updated Minecraft Biome Extractor and mcmap to support the new Minecraft 1.0.0 release (new block types, etc).Visual Leak Detector for Visual C++ 2008/2010: v2.2.1: Enhancements: * strdup and _wcsdup functions support added. * Preliminary support for VS 11 added. 
Bugs Fixed: * Low performance after upgrading from VLD v2.1. * Memory leaks with static linking fixed (disabled calloc support). * Runtime error R6002 fixed because of wrong memory dump format. * version.h fixed in installer. * Some PVS studio warning fixed.New Projects.NET WindowsCe MVP Framework: .NET WindowsCe MVP Framework for .NET Compact Framework 2 or higherAndaman Alumini: AluminiSite for andaman studentsAudio Macro Recorder Plugin for Notepad++: kkkkCassandra .Net Provider: Cassandra .Net Provider is a project created to help .Net developers to start to work with Apache Cassandra. Under Cassandra .Net Provider project developers will find a client fully configurable, working with polled connection, where CQL commands can be executed.Code samples for deepcode.co.uk: Code samples to support articles posted on www.deepcode.co.ukContador de Caracteres: Contador de caracteres para Windows Phone 7Data Ductus Utilities: Simple utilities that other may find useful like mongo DataContract mapping etc.dIRC: dIRC is a cutting-edge .Net IRC client library written in C#.Distributed Blackboard: Distributed virtual drawing canvasFileSplit: FileSplit lets you split large files into smaller files for easier sharing and backup. After splitting the files, FileSplit also creates a .bat (batch) file that when clicked, automatically joins the split files into the original file. This is especially useful for those recipients who are not computer saavy.FlagSync: FlagSync is a synchronization and backup tool, which allows you to synchronize or backup your files with local folders (or an extern harddrive or flashdrive) and FTP-Servers, as well as synchronize your non-iPod MP3-Player with iTunes.Fodda: A small utility for downloading pictures and movies from Canon cameras to your computer. The transferred pictures and movies are placed in a directory structure based on the date of creation. E.g. My Pictures\2011\11\27_11_2011. It will probably work equally well with Cameras from other manufacturers.Google Authenticator TOTP C#: An implementation of Google's Authenticator in C# and WPF. It's a Time-based One-time Password (TOTP) described in RFC 6238. You could use it to implement two-factor authentication in your own .Net application.gppfree: gpp freeiOSApps: A collection of iOS appsLineSeriesDemo: LineSeriesDemolospacos23: klk NHash: This is a simple project that integrates with Explorer and Computes MD5 and SHA1.OpenGL Engine: Projekt pisany na projekt zespolowy.PolarConverter: Adjust the measured distance of HRM files created by Polar Heart Rate monitorsRASSUS - Sustav za dijeljenje fotografija: Sustav za dijeljenje fotografija - projekt studenata na fakultetu elektrotehnike i racunarstva.Space Monopoly: Space monopoly is the freeware game of buying and selling shares.StarCraft 2 ladder tiles for Windows Phone 7: StarCraft 2 ladder tiles for Windows Phone 7Validador de CPF/CNPJ: Validador Lógico de CPF ou CNPJ para Windows Phonever2web: Ver2web is an online Club management software developed for the course Webprogramming at Technikum Wien. It is programmed in C# using ASP.net MVC 3, Entity Framework 4.x and Unity 2.2. WPF Wizard Control: WPF user control (not a custom control) for generically hosting multiple-step wizards. Uses MVVM, you create the model and view for each step and optionally configure steps that guide the wizard along a different workflow.yankee-private-project: ??????????,????!

    Read the article

  • Using SQL Source Control with Fortress or Vault &ndash; Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).
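To make the "script per object" idea a little more concrete, here is a hypothetical, hand-written sketch (not generated by the tool; the table, column, and index names are invented) of the kind of single script that keeps a table together with its index, as described above:
-- Hypothetical per-object script: the table and its index kept in one file
CREATE TABLE dbo.Customer
(
    CustomerID   INT           NOT NULL IDENTITY(1,1),
    CustomerName NVARCHAR(100) NOT NULL,
    CreatedDate  DATETIME      NOT NULL CONSTRAINT DF_Customer_CreatedDate DEFAULT (GETDATE()),
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID)
);
GO
CREATE NONCLUSTERED INDEX IX_Customer_CustomerName ON dbo.Customer (CustomerName);
GO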

    Read the article

  • CodePlex Daily Summary for Wednesday, March 14, 2012

    CodePlex Daily Summary for Wednesday, March 14, 2012Popular ReleasesAcDown????? - Anime&Comic Downloader: AcDown????? v3.9.2: ?? ●AcDown??????????、??、??????,????1M,????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。??????AcPlay?????,??????、????????????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDo...C.B.R. : Comic Book Reader: CBR 0.6: 20 Issue trackers are closed and a lot of bugs too Localize view is now MVVM and delete is working. Added the unused flag (take care that it goes to true only when displaying screen elements) Backstage - new input/output format choice control for the conversion Backstage - Add display, behaviour and register file type options in the extended options dialog Explorer list view has been transformed to a custom control. New group header, colunms order and size are saved Single insta...Windows Azure Toolkit for Windows 8: Windows Azure Toolkit for Windows 8 Consumer Prv: Windows Azure Toolkit for Windows 8 Consumer Preview - Preview Release v 1.2.0 Please download this for Windows Azure Toolkit for Windows 8 functionality on Windows 8 Consumer Preview. The core features of the toolkit include: Automated Install – Scripted install of all dependencies including Visual Studio 2010 Express and the Windows Azure SDK on Windows 8 Consumer Preview. Project Templates – Windows 8 Metro Style app project templates in Dev 11 in both XAML/C# and HTML5/JS with a suppor...CODE Framework: 4.0.20312.0: This version includes significant improvements in the WPF system (and the WPF MVVM/MVC system). This includes new styles for Metro controls and layouts. Improved color handling. It also includes an improved theme/style swapping engine down to active (open) views. There also are various other enhancements and small fixes throughout the entire framework.Visual Studio ALM Quick Reference Guidance: v3 - For Visual Studio 11: RELEASE README Welcome to the BETA release of the Quick Reference Guide preview As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has not been through an independent technical review Documentation ...AvalonDock: AvalonDock 2.0.0345: Welcome to early alpha release of AvalonDock 2.0 I've completely rewritten AvalonDock in order to take full advantage of the MVVM pattern. New version also boost a lot of new features: 1) Deep separation between model and layout. 2) Full WPF binding support thanks to unified logical tree between main docking manager, auto-hide windows and floating windows. 3) Support for Aero semi-maximized windows feature. 4) Support for multiple panes in the same floating windows. 
For a short list of new f...Google Books Downloader for Windows: Google Books Downloader-1.9.0.0.: Google Books DownloaderWindows Azure PowerShell Cmdlets: Windows Azure PowerShell Cmdlets 2.2.2: Changes Added Start Menu Item for Easy Startup Added Link to Getting Started Document Added Ability to Persist Subscription Data to Disk Fixed Get-Deployment to not throw on empty slot Simplified numerous default values for cmdlets Breaking Changes: -SubscriptionName is now mandatory in Set-Subscription. -DefaultStorageAccountName and -DefaultStorageAccountKey parameters were removed from Set-Subscription. Instead, when adding multiple accounts to a subscription, each one needs to be added ...IronPython: 2.7.2.1: On behalf of the IronPython team, I'm happy to announce the final release IronPython 2.7.2. This release includes everything from IronPython 54498 and 62475 as well. Like all IronPython 2.7-series releases, .NET 4 is required to install it. Installing this release will replace any existing IronPython 2.7-series installation. Unlike previous releases, the assemblies for all supported platforms are included in the installer as well as the zip package, in the "Platforms" directory. IronPython 2...Kooboo CMS: Kooboo CMS 3.2.0.0: Breaking changes: When upgrade from previous versions, MUST reset the all the content type templates, otherwise the content manager might get a compile error. New features Integrate with Windows azure. See: http://wiki.kooboo.com/?wiki=Kooboo CMS on Azure Complete solution to deploy on load balance servers. See: http://wiki.kooboo.com/?wiki=Kooboo CMS load balance Update Jquery and Jquery ui to the lastest version(Jquery 1.71, Jquery UI 1.8.16). Tree style text content editing. See:h...Home Access Plus+: v7.10: Don't forget to add your location to the list: http://www.nbdev.co.uk/projects/hap/locations.aspx Changes: Added: CompressJS controls to the Help Desk & Booking System (reduces page size) Fixed: Debug/Release mode detection in CompressJS control Added: Older Browsers will use an iframe and the old uploadh.aspx page (works better than the current implementation on older browsers) Added: Permalinks for my files, you can give out links that redirect to the correct location when you log i...SubExtractor: Release 1026: Fix: multi-colored bluray subs will no longer result in black blob for OCR Fix: dvds with no language specified will not cause exception in name creation of subtitle files Fix: Root directory Dvds will use volume label as their directory nameExtensions for Reactive Extensions (Rxx): Rxx 1.3: Please read the latest release notes for details about what's new. Related Work Items Content SummaryRxx provides the following features. See the Documentation for details. Many IObservable<T> extension methods and IEnumerable<T> extension methods. Many wrappers that convert asynchronous Framework Class Library APIs into observables. Many useful types such as ListSubject<T>, DictionarySubject<T>, CommandSubject, ViewModel, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T>, Scala...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.47: Properly output escaped characters in CSS identifiers throw an EOF error when parsing a CSS selector that doesn't end in a declaration block chased down a stack-overflow issue with really large JS sources. Needed to flatten out the AST tree for adjacent expression statements that the application merges into a single expression statement, or that already contain large, comma-separated expressions in the original source. 
fix issue #17569: tie together the -debug switch with the DEBUG defi...Player Framework by Microsoft: Player Framework for Windows 8 Metro (Preview): Player Framework for HTML/JavaScript and XAML/C# Metro Style Applications. Additional DownloadsIIS Smooth Streaming Client SDK for Windows 8WPF Application Framework (WAF): WAF for .NET 4.5 (Experimental): Version: 2.5.0.440 (Experimental): This is an experimental release! It can be used to investigate the new .NET Framework 4.5 features. The ideas shown in this release might come in a future release (after 2.5) of the WPF Application Framework (WAF). More information can be found in this dicussion post. Requirements .NET Framework 4.5 (The package contains a solution file for Visual Studio 11) The unit test projects require Visual Studio 11 Professional Changelog All: Upgrade all proje...SSH.NET Library: 2012.3.9: There are still few outstanding issues I wanted to include in this release but since its been a while and there are few new features already I decided to create a new release now. New Features Add SOCKS4, SOCKS5 and HTTP Proxy support when connecting to remote server. For silverlight only IP address can be used for server address when using proxy. Add dynamic port forwarding support using ForwardedPortDynamic class. Add new ShellStream class to work with SSH Shell. Add supports for mu...Test Case Import Utilities for Visual Studio 2010 and Visual Studio 11 Beta: V1.2 RTM: This release (V1.2 RTM) includes: Support for connecting to Hosted Team Foundation Server Preview. Support for connecting to Team Foundation Server 11 Beta. Fix to issue with read-only attribute being set for LinksMapping-ReportFile which may have led to problems when saving the report file. Fix to issue with “related links” not being set properly in certain conditions. Fix to ensure that tool works fine when the Excel file contained rich text data. Note: Data is still imported in pl...DotNetNuke® Community Edition CMS: 06.01.04: Major Highlights Fixed issue with loading the splash page skin in the login, privacy and terms of use pages Fixed issue when searching for words with special characters in them Fixed redirection issue when the user does not have permissions to access a resource Fixed issue when clearing the cache using the ClearHostCache() function Fixed issue when displaying the site structure in the link to page feature Fixed issue when inline editing the title of modules Fixed issue with ...Mayhem: Mayhem Developer Preview: This is the developer preview of Mayhem. Enjoy! If you are looking for the Phone applications, please visit this discussionNew ProjectsAD Test Tool: After trying to figure out replication and other user-based issues with a local domain system we built this simple tool to test what the directory was currently reflecting for each user who accessed the tool.BuildUp: MSBuild scripts for use with CI servers.CDRtoSQL: CDRtoSQLCloudStorage4Net: A set of C# wrapper used to access cloud storage services. Such aliyun, grandcloud, baiduyun etc. The project is created to generalize the interfaces of difference storeage platform. We expect programmers could use all these cloud storage in a uniform way. D.E.M.O.N MySQL Tweaker: D.E.M.O.N Studio MySQL Tweaker, a fast and easy-to-use Mysql tools. Add this to your project, install , start, and stop mysql service with a little coding work. Love MySQL , Love MySQL TweakerEarth2012: Earth2012 shows the possible 31 events of 2012 according to the research at www.heliwave.com. 
For each event, the Earth map shows the danger latitude and all danger and semi-danger cities around it. The user can adjust the starting date of the events once we see a 4-day event.fjyc documents: this is a documentation share for fjyc projectFolderDrive: FolderDrive is a small tool that turns your folders into drives. Imagine having your Dropbox folder behaving as the (almost real) harddrive. It's another handy solution in the ongoing quest for quick access to your favorite folders. The project is developed in C# using WPF - you'll need .NET framework 4 to use it.GralicNew: dHarness: Harness is Web Browser Automation Framework. implemented as COM (Common Object Library) Harness will automate the testing of Web applications by automatic operation the Internet Explorer. you can easily use from your favorite development language.IIS2SQL: Powershell script for IIS7.5 that will send IIS logs to an MSSQL database. Separate table is used to keep track of when the last update occurred so only new events are inserted. Writes to the Application log status messages.Inside Sales Api Client: InsideSales.com api client. Current valid operations are Login and AddLead only. Developed in C#, .NET Framework 4.jqGrid Helper Library: This library is intended for use with the jqGrid jQuery plugin. It provides classes for handling grid settings, building filter and order by expressions, and will build a JQGrid object and then serialize it to either JSON or XML. longSudoku: A simple sudoku game! Use WinForm development! Imitate Metro UI!MantisWare Proxy Switcher: This is a small application that runs in your task bar and gives you the capability to add a proxy list and switch between them quickly, either by selecting the appropriate proxy or using the shortcut keys. It's developed in C#.net, MSSQL Compact.Map Wisata: Map , Dotspatial , Ribbon, Visual Basic.Netmapabc?? API WP SDK: mapabc?? API WP SDKNetWinsock: Provides a simple, event-driven Windows Forms Component capable of serving as a TCP client or server. Utilizes an asynchronous/multithreaded pattern and raises data reception and error events on the UI thread for easy processing. Reminiscent of the VB6 Winsock control.Orchard Zencoder: Orchard module that enables usage of zencoder for encoding media files. http://zencoder.comPleXKeys Predictive Typing: PleXKeys is a predictive typing app for Microsoft Windows. It allows you to type faster and more accurately. With adaptive typing and instant spelling suggestions. Oh and Macros too! Automatic Word Prediction As you type, PleXKeys will predict the next word you are going to type. Then, by simply pressing the ENTER key, PleXKeys will insert the word for you, saving wasted keystrokes. After you have used PleXKeys enough for it to adapt to your typing style, typing entire sentences can b...Samples for my book "Getting Started with the Internet of Things": All examples for my book "Getting Started with the Internet of Things", the Gsiot.PachubeClient library, and the Gsiot.Server library are included in this project.SharePoint Hive - Projects & Articles: TBDSharePoint webparts for the community (Enterprise ready and free): SharePoint webparts for the community (Enterprise ready and free)SimpleCritters: An experiment with neural networks. This project is chiefly academic in nature. This project is concerned with evolving simple critter behavior using simulated neurons.Snip-A-Dillie-O Web: My venture into MVC3 / EF Code First / SQL CE. 
Snip-A-Dillie-O is just one piece of the bigger picture as a simple code snippet management toolTestMum: C#, Asp.net test projectVisDiskUse: Visualize how much disk space is being used by folders in your hard drive.Windows Phone App Accelerators (WPAA): Windows Phone App Accelerator aims to make creating a Windows Phone application even easier by creating a Visual Studio Template (or Templates) that do the bulk of the common work for you. This is NOT an app factory.

    Read the article

  • Adventures in Scrum: Lesson 1 &ndash; The failed Sprint

    - by Martin Hinshelwood
I recently had a conversation with a product owner who wanted to have the Scrum team broken up into smaller units so that less time was wasted on the Scrum Ceremonies! Their complaint was around the need in Scrum to have the entire “Team” (7±2) involved in the sizing of the work during the “Sprint Planning Meeting”.  The standard flippant answer of all Scrum professionals, “Well that's not Scrum”, does not get you any brownie points in these situations. The response could be “Well we are not doing Scrum then” which in turn leads to “We are doing Scrum…But, we have split the scrum team into units of 2/3 so that they can concentrate on a specific area of work”. While this may work, it is not Scrum and should not be called so… It is just a form of Agile. Don’t get me wrong at this stage, there is nothing wrong with Agile, just don’t call it Scrum. The reason that the Product Owner wants to do this is that, in effect, through a number of miscommunications and failings in our implementation of Scrum, there was NO unit of potentially Shippable software at the end of the first sprint. It does not matter to them that most Scrum teams will fail the first Sprint, even those that are high performing teams. Remember, it is the product owner's money! We should NOT break up scrum teams into smaller units for the purpose of having fewer people tied up in the Scrum Ceremonies. The amount of backlog the Team selects is solely up to the Team… Only the Team can assess what it can accomplish over the upcoming Sprint. - Scrum Guide, Scrum.org The entire team must accept the work, and in order to understand what they can accept they must be free to size it as a team. This both encourages common understanding and increases visibility on why team members think a task is of a particular size. This has the benefit of increasing the knowledge of the entire team in the problem domain. A new Team often first realizes that it will either sink or swim as a Team, not individually, in this meeting. The Team realizes that it must rely on itself. As it realizes this, it starts to self-organize to take on the characteristics and behaviour of a real Team. - Scrum Guide, Scrum.org This paragraph goes to the why of having the whole team at the meeting: the goal of Scrum is to produce a unit of potentially shippable software at the end of every Sprint. In order to achieve this we need high performing teams, and this is what Scrum as a framework has been optimised to produce. I think that our Product Owner is understandably upset over losing two weeks' work and is losing sight of the end goal of Scrum in the failures of the moment. As the man spending the money, I completely understand his perspective, and I think that we should not have started Scrum on an internal project, but selected a customer that is open to the ideas and complications of Scrum. So, what should we have NOT done on our first Scrum project:
Should not have had 3 interns as the only on-site resource – This led to bad practices as the experienced guys were not there helping and correcting as they usually would.
Should not have had the only experienced guys offsite – With both of the experienced technical guys in completely different time zones it was difficult to get time for questions. Helping the guys on site was just plain impossible.
Should not have used a part-time ScrumMaster – Although the ScrumMaster attended all of the Ceremonies, because they are only in 2 full days of the week it makes it difficult for the team to raise impediments as they go.
Should not have used a proxy product owner – This was probably the worst decision that was made, mainly because the proxy product owner did not have the same vision as the product owner. While Scrum does not explicitly reject the idea of a Proxy Product Owner, I do not think it works very well in practice. The “single wringable neck” needs to contain both the Money and the Vision, as well as attend the required meetings. I will be bringing all of these things up at the Sprint Retrospective and we will learn from our mistakes and move on. Do, Inspect then Adapt…   Technorati Tags: Scrum,Sprint Planning,Sprint Retrospective,Scrum.org,Scrum Guide,Scrum Ceremonies,Scrummaster,Product Owner Need Help? Professional Scrum Developer Training SSW has six Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools.

    Read the article

  • Problems with opening CHM Help files from Network or Internet

    - by Rick Strahl
As a publisher of a Help Creation tool called Html Help Builder, I’ve seen a lot of problems with help files that won't properly display actual topic content and instead display an error message for topics. Here’s the scenario: You go ahead and happily build your fancy, schmanzy Help File for your application and deploy it to your customer. Or alternately you've created a help file and you let your customers download it off the Internet directly or in a zip file. The customer downloads the file, opens the zip file and copies the help file contained in the zip file to disk. She then opens the help file and finds the following unfortunate result: The help file comes up with all topics in the tree on the left, but a "Navigation to the WebPage was cancelled" or "Operation Aborted" error in the Help Viewer's content window whenever you try to open a topic. The CHM file obviously opened since the topic list is there, but the Help Viewer refuses to display the content. Looks like a broken help file, right? But it's not - it's merely a Windows security 'feature' that tries to be overly helpful in protecting you. The reason this happens is that files downloaded off the Internet - including ZIP files and CHM files contained in those zip files - are marked as coming from the Internet and so can potentially be malicious, so they do not get browsing rights on the local machine – they can’t access local Web content, which is exactly what help topics are. If you look at the URL of a help topic you see something like this:   mk:@MSITStore:C:\wwapps\wwIPStuff\wwipstuff.chm::/indexpage.htm which points at a special Microsoft Url Moniker that in turn points at the CHM file and a relative path within that HTML help file. Try pasting a URL like this into Internet Explorer and you'll see the help topic pop up in your browser (along with a warning most likely). Although the URL looks weird, this still equates to a call to the local computer zone, the same as if you had navigated to a local file in IE, which by default is not allowed.  Unfortunately, unlike Internet Explorer where you have the option of clicking a security toolbar, the CHM viewer simply refuses to load the page and you get an error page as shown above.
How to Fix This - Unblock the Help File
There's a workaround that lets you explicitly 'unblock' a CHM help file. To do this:
Open Windows Explorer
Find your CHM file
Right click and select Properties
Click the Unblock button on the General tab
Here's what the dialog looks like: Clicking the Unblock button basically tells Windows that you approve this Help File and allows topics to be viewed.
Is this insecure? Not unless you're running a really old version of Windows (XP pre-SP1). In recent versions of Windows, Internet Explorer pops up various security dialogs or fires script errors when potentially malicious operations are accessed (like loading ActiveX controls), so it's relatively safe to run local content in the CHM viewer. Since most help files don't contain script, or only load script that runs pure JavaScript to access web resources, this works fine without issues.
How to avoid this Problem
As an application developer there's a simple solution around this problem: Always install your Help Files with an Installer. The above security warning pops up because Windows can't validate the source of the CHM file. However, if the help file is installed as part of an installation, the installation and all files associated with that installation, including the help file, are trusted.
A fully installed Help File of an application works just fine because it is trusted by Windows.
Summary
It's annoying as all hell that this sort of obtrusive marking is necessary, but it's admittedly a necessary evil because of Microsoft's use of the insecure Internet Explorer engine that drives the CHM Html Engine's topic viewer. Because help files display local content and script is allowed to execute in CHM files, there's potential for malicious code hiding in CHM files, and the above precautions are supposed to avoid any issues. © Rick Strahl, West Wind Technologies, 2005-2012

    Read the article

  • High Jinks, Hi Jacks, Exceptional DBA Awards and PASS

    - by Rodney
The countdown to PASS has counted down.  The day after tomorrow I will board a plane, like many others, on my way for the 4th year in a row to the SQL PASS Summit.  The anticipation has been excruciating, but luckily I have this little thing called a day job as a DBA that has kept me busy and not thinking too much about the event. Well, that is not exactly true, since my beautiful wife works for PASS so we get to talk about SQL from the time we wake up until late in the evening. I would not have it any other way, and I feel very fortunate to be a part of this great event and to have been chosen as the Exceptional DBA Award judge, also for the 4th year in a row.  This year, I have again been tasked with presenting the award to the winner, Mr. Jeff Moden, and it will be a true honor to meet him in person, as I have read many of his articles on SSC and have attended his session at PASS previously.  The speech is all ready but one item remains, which will be a surprise to all who attend the party on Tuesday night in Seattle (see links below).  Let's face it, Exceptional DBAs everywhere work very hard protecting our data stores, tuning queries, mentoring, saving money, installing clusters, etc., and once in a while there is time to be exceptionally non-professional and have a bit of fun. One incident that happened this year that falls under the High Jinks category was when my network admin asked if I could Telnet into a SQL instance and see if I could make the connection through the firewall that he had just configured. I was able to establish a connection on port 1433, and it occurred to me that it would be very interesting if I could actually run T-SQL queries via a Telnet session much like you might do with an SMTP server. With that thought, I proceeded to demonstrate that this could be possible by convincing my senior DBA Shawn McGehee that I was able to do so. At first he did not believe me. It shook his world view.  It was inconceivable.  What I had done, behind the scenes, of course, was to copy SQLCMD.exe, rename it to Telnet.exe, and use it to connect and run a simple "Select * from sys.databases" on the SQL instance. I think if it had been anyone other than Shawn I could have extended this ruse indefinitely, but he caught on within 30 seconds. It was a fun thirty seconds though. On the High Jacks side of the house, which is really better described as SQL HACKS, I finally, after several years of struggling with how to connect to an untrusted domain (such as one in a DMZ) with a Windows account in SSMS, stumbled upon a solution that does away with the requirement to use SQL Authentication.  While "Runas" is a great command to use to run an application with a higher-privileged account, I had not previously been able to figure out how to connect to the remote domain with SSMS and "Runas". It never connected and caused a login failure every time for the remote Windows domain account. Then I ran across an option for "Runas": "/netonly".  This option postpones the login until a connection is made and only then passes the remote login you supply when you first launch SSMS with the "Runas" command.
So a typical shortcut would look like:
C:\Windows\System32\runas.exe /netonly /user:remotedomain.com\rodlandrum "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe"
You will want to make sure the passwords are synced between the two domains (your local domain and the remote domain), otherwise you may have account lockout issues, but I have found in weeks of testing that this is a stable solution. Now it is time to get ready to head for Seattle. If you see me (@SQLBeat) or my wife (@Karlakay22), please run up and high five me (wait..High Jinks.High Jacks.High Fives.Need to change the title) or give me a big bear hug if you are strong enough to lift me off the ground. And if you do actually do that, I will think you are awesome and will not embarrass you by crying out for help or complaining of a broken back or sciatic nerve damage. And now the links to others who have all of the details. First, for MVP Deep Dives 2, which, like John, I was lucky enough to participate in this year. http://www.simple-talk.com/community/blogs/johnm/archive/2011/09/29/103577.aspx And the details of the SSC party where the Exceptional DBA of 2011, Jeff Moden, will be awarded. http://www.simple-talk.com/community/blogs/rebecca_amos/archive/2011/10/05/103661.aspx   Cheers! Rodney

    Read the article

  • MCM Lab exam this week

    - by Rob Farley
    In two days I’ll’ve finished the MCM Lab exam, 88-971. If you do an internet search for 88-971, it’ll tell you the answer is –883. Obviously. It’ll also give you a link to the actual exam page, which is useful too, once you’ve finished being distracted by the calculator instead of going to the thing you’re actually looking for. (Do people actually search the internet for the results of mathematical questions? Really?) The list of Skills Measured for this exam is quite short, but can essentially be broken down into one word “Anything”. The Preparation Materials section is even better. Classroom Training – none available. Microsoft E-Learning – none available. Microsoft Press Books – none available. Practice Tests – none available. But there are links to Readiness Videos and a page which has no resources listed, but tells you a list of people who have already qualified. Three in Australia who have MCM SQL Server 2008 so far. The list doesn’t include some of the latest batch, such as Jason Strate or Tom LaRock. I’ve used SQL Server for almost 15 years. During that time I’ve been awarded SQL Server MVP seven times, but the MVP award doesn’t actually mean all that much when considering this particular certification. I know lots of MVPs who have tried this particular exam and failed – including Jason and Tom. Right now, I have no idea whether I’ll pass or not. People tell me I’ll pass no problem, but I honestly have no idea. There’s something about that “Anything” aspect that worries me. I keep looking at the list of things in the Readiness Videos, and think to myself “I’m comfortable with Resource Governor (or whatever) – that should be fine.” Except that then I feel like I maybe don’t know all the different things that can go wrong with Resource Governor (or whatever), and I wonder what kind of situations I’ll be faced with. And then I find myself looking through the stuff that’s explained in the videos, and wondering what kinds of things I should know that I don’t, and then I get amazingly bored and frustrated (after all, I tell people that these exams aren’t supposed to be studied for – you’ve been studying for the last 15 years, right?), and I figure “What’s the worst that can happen? A fail?” I’m told that the exam provides a list of scenarios (maybe 14 of them?) and you have 5.5 hours to complete them. When I say “complete”, I mean complete – you don’t get to leave them unfinished, that’ll get you ‘nil points’ for that scenario. Apparently no-one gets to complete all of them. Now, I’m a consultant. I get called on to fix the problems that people have on their SQL boxes. Sometimes this involves fixing corruption. Sometimes it’s figuring out some performance problem. Sometimes it’s as straight forward as getting past a full transaction log; sometimes it’s as tricky as recovering a database that has lost its metadata, without backups. Most situations aren’t a problem, but I also have the confidence of being able to do internet searches to verify my maths (in case I forget it’s –883). In the exam, I’ll have maybe twenty minutes per scenario (but if I need longer, I’ll have to take longer – no point in stopping half way if it takes more than twenty minutes, unless I don’t see an end coming up), so I’ll have time constraints too. And of course, I won’t have any of my usual tools. I can’t take scripts in, I can’t take staff members. Hopefully I can use the coffee machine that will be in the room. 
I figure it’s going to feel like one of those days when I’ve gone into a client site, and found that the problems are way worse than I expected, and that the site is down, with people standing over me needing me to get things right first time... ...so it should be fine, I’ve done that before. :) If I do fail, it won’t make me any less of a consultant. It won’t make me any less able to help all of my clients (including you if you get in touch – hehe), it’ll just mean that the particular problem might’ve taken me more than the twenty minutes that the exam gave me. @rob_farley PS: Apparently the done thing is to NOT advertise that you’re sitting the exam at a particular time, only that you’re expecting to take it at some point in the future. I think it’s akin to the idea of not telling people you’re pregnant for the first few months – it’s just in case the worst happens. Personally, I’m happy to tell you all that I’m going to take this exam the day after tomorrow (which is the 19th in the US, the 20th here). If I end up failing, you can all commiserate and tell me that I’m not actually as unqualified as I feel.

    Read the article

  • Unmet Dependencies with kdelibs5-data

    - by Jitesh
    I was trying to install Amarok 1.4 on Ubuntu 12.04 (Gnome-classic) by following these instructions. The problem started after giving these two commands:

        dpkg -i kdelibs5-data_4.6.2-0ubuntu4_all.deb
        dpkg -i kdelibs-data_3.5.10.dfsg.1-5ubuntu2_all.deb

    Immediately after these commands, Ubuntu Updater popped up and gave me an error that the package catalog is broken and needs to be repaired, and that nothing can be installed or removed until then. It also offered a suggestion to run apt-get install -f. I tried that, but again got the same error. I also tried apt-get clean followed by apt-get install -f. Again I got the following output:

        jitesh@jitesh-desktop:~$ sudo apt-get clean
        jitesh@jitesh-desktop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          kdelibs5-data
        The following packages will be upgraded:
          kdelibs5-data
        1 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
        2 not fully installed or removed.
        Need to get 2,832 kB of archives.
        After this operation, 2,998 kB disk space will be freed.
        Do you want to continue [Y/n]? y
        Get:1 http://in.archive.ubuntu.com/ubuntu/precise-updates/main kdelibs5-data all 4:4.8.4a-0ubuntu0.2 [2,832 kB]
        Fetched 2,832 kB in 32s (86.6 kB/s)
        dpkg: dependency problems prevent configuration of kdelibs5-data:
         libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
         kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
         katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
        dpkg: error processing kdelibs5-data (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of kdelibs-data:
         kdelibs-data depends on kdelibs5-data; however:
          Package kdelibs5-data is not configured yet.
        No apport report written because MaxReports is reached already
        dpkg: error processing kdelibs-data (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         kdelibs5-data
         kdelibs-data
        W: Duplicate sources.list entry http://archive.canonical.com/ubuntu/ precise/partner i386 Packages (/var/lib/apt/lists/archive.canonical.com_ubuntu_dists_precise_partner_binary-i386_Packages)
        W: You may want to run apt-get update to correct these problems
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    As I thought the error was related to configuring kdelibs, I tried to configure it using dpkg, but got the following errors:

        jitesh@jitesh-desktop:~$ sudo dpkg --configure -a
        dpkg: dependency problems prevent configuration of kdelibs5-data:
         libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
         kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
         katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
        dpkg: error processing kdelibs5-data (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of kdelibs-data:
         kdelibs-data depends on kdelibs5-data; however:
          Package kdelibs5-data is not configured yet.
        dpkg: error processing kdelibs-data (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         kdelibs5-data
         kdelibs-data
        jitesh@jitesh-desktop:~$

    Now I don't have any idea how to proceed. I am unable to install anything from the Software Centre or using the Terminal now. Some basic info: Core2Duo, dual booting Ubuntu 12.04 with Win7. Fresh install of Ubuntu 12.04 (not an upgrade). Incidentally, I had first upgraded from 10.04 and had successfully installed Amarok 1.4 following this same method. But due to other issues, I had to format and do a clean install of 12.04. Now when I try to install Amarok 1.4, I get these errors. I also have digiKam and k3b installed, if that can be of any help. I use digiKam a lot, so removing KDE is not feasible for me. Any help on this issue will be highly appreciated. Thanks in advance.

    Read the article

  • Source Control and SQL Development &ndash; Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work…

    The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer, and choosing Tasks, Generate Scripts (see figure 1 to the left).  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button.  Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button, because this is your first test that your change script is working and you didn’t somehow lose a portion of the change.

    As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code; then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program.

    So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database.

    Once you have your initial set of scripts loaded into source control, making changes, such as altering a stored procedure, becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because it means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.  And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, make the change, save it, and then go back to source control to check it in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years.

    Remember that everything that SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it?), and of course Management Studio is free with your SQL Server database engine software.  So, whether you spend the money on tools to make it easier, or not, you now have no excuse for not using source control with your SQL development.

    * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that is followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script were only an ALTER PROCEDURE, it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it in if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process, because what you deployed to PROD was not exactly the same as what was tested in TEST, so you have effectively released untested code into PROD.

    Read the article

  • Is this how dynamic language copes with dynamic requirement?

    - by Amumu
    The question is in the title. I want to have my thinking verified by experienced people. You can add more or disregard my opinion, but give me a reason. Here is an example requirement: Suppose you are required to implement a fighting game. Initially, the game only includes fighters, who can attack each other. Each fighter can punch, kick or block incoming attacks. Fighters can have various fighting styles: Karate, Judo, Kung Fu... That's it for the simple universe of the game. In an OO language like Java, it can be implemented something like this:

        abstract class Fighter {
            int hp, attack;
            abstract void punch(Fighter otherFighter);
            abstract void kick(Fighter otherFighter);
            abstract void block(Fighter otherFighter);
        }
        class KarateFighter extends Fighter { /* ...implementation... */ }
        class JudoFighter extends Fighter { /* ...implementation... */ }
        class KungFuFighter extends Fighter { /* ...implementation... */ }

    This is fine if the game stays like this forever. But somehow the game designers decide to change the theme of the game: instead of a simple fighting game, the game evolves into an RPG, in which characters can not only fight but perform other activities, i.e. a character can be a priest, an accountant, a scientist, etc... At this point, to make it more generic, we have to change the structure of our original design: Fighter no longer refers to a person; it refers to a profession. The specialized classes of Fighter (KarateFighter, JudoFighter, KungFuFighter) remain, but they now describe professions rather than people. Now we have to create a generic class named Person. However, to adapt to this change, I have to change the method signatures of the original operations:

        class Person {
            int hp, attack;
            List<Profession> skillSet;
        }
        abstract class Profession {}
        abstract class Fighter extends Profession {
            abstract void punch(Person otherFighter);
            abstract void kick(Person otherFighter);
            abstract void block(Person otherFighter);
        }
        class KarateFighter extends Fighter { /* ...implementation... */ }
        class JudoFighter extends Fighter { /* ...implementation... */ }
        class KungFuFighter extends Fighter { /* ...implementation... */ }
        class Accountant extends Profession {
            void calculateTax(Person p) { /* ...implementation... */ }
            void calculateTax(Company c) { /* ...implementation... */ }
        }
        // ... more professions ...

    Here are the problems: To adapt to the method changes, I have to fix the places where the changed methods are called (refactoring). Every time a new requirement is introduced, the current structural design has to be broken to adapt to the changes, which leads to the first problem. A rigid structure makes code reuse hard: a function can only accept the predefined types; it cannot accept future unknown types. A written function is bound to its current universe and has no way to accommodate the new types without modification or a rewrite from scratch. I see Java has a lot of deprecated methods. OO is an extreme case because it has inheritance to add to the complexity, but in general, for statically typed languages, types are very strict. In contrast, a dynamic language can handle the above case as follows:

        ;; fighter1 punches fighter2
        (defun perform-punch (fighter1 fighter2)
          ;; ...implementation...
          )
        ;; fighter1 kicks fighter2
        (defun perform-kick (fighter1 fighter2)
          ;; ...implementation...
          )
        ;; fighter1 blocks attacks from fighter2
        (defun perform-block (fighter1 fighter2)
          ;; ...implementation...
          )

    Here fighter1 and fighter2 can be anything, as long as they have the required data or methods for the calculation (duck typing). You don't have to change from the type Fighter to Person. In the case of Lisp, because Lisp is built around a single primary data structure, the list, it's even easier to adapt to changes. However, other dynamic languages can have similar behaviors as well. I work primarily with statically typed languages (mainly C and Java, though my Java work was a long time ago). I started learning Lisp and some other dynamic languages this year, and I can see how they help improve my productivity.
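    For comparison with the Lisp sketch above, here is a minimal Python version of the same idea. This is an illustration only, with hypothetical names (Fighter, Person, perform_punch) that are not taken from the question; it shows that a duck-typed function keeps working unchanged when Fighter is later generalised into Person:

        # Hypothetical illustration of the duck-typing argument: perform_punch()
        # accepts any object that carries hp and attack, so generalising Fighter
        # into Person requires no change to the function's signature.

        class Fighter:
            def __init__(self, hp, attack):
                self.hp = hp
                self.attack = attack

        class Person:
            # Later design: a person with a list of professions; still has hp and attack.
            def __init__(self, hp, attack, skill_set=None):
                self.hp = hp
                self.attack = attack
                self.skill_set = skill_set or []

        def perform_punch(attacker, defender):
            # Duck typing: no declared parameter types, just the attributes used here.
            defender.hp -= attacker.attack

        old_style = Fighter(100, 10)
        new_style = Person(100, 8, ["karate"])
        perform_punch(old_style, new_style)   # Fighter hits Person
        perform_punch(new_style, old_style)   # Person hits Fighter -- same function
        print(old_style.hp, new_style.hp)     # prints: 92 90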

    Read the article

  • How do I turn on wireless adapter on HP Envy dv6 7200 under Ubuntu (any version)?

    - by Dave B.
    I have a new HP Envy dv6 7200 with dual boot Windows 8 / Ubuntu 12.04. In Windows, the F12 key activates the "airplane mode" switch, which enables/disables both the on-board (mini PCIe) and USB wireless adapters. In Ubuntu, however, the wireless adapter is turned off by default and cannot be turned back on via the F12 key (or any other combination of F12 and Ctrl, Fn, Shift, etc.). Let me explain the "fixes" I've seen in various forums and what did or did not happen. These are listed in no particular order. (Spoiler alert: wireless is still broken.)

    Solution 1? Use HP's "Wireless Assistant" utility to permanently activate the wireless card in Windows, then boot into Ubuntu to happily find it working. Unfortunately, this utility works in Windows 7 but not Windows 8. On the other hand, hardware drivers from HP are only available for Windows 8 for this model. Catch 22 (I could not find a comparable utility for Windows 8).

    Solution 2? Use a USB wireless adapter to sidestep the on-board device. I purchased such a device from thinkpenguin.com to be sure that it would be Linux-friendly. However, the wireless switch enables/disables all wireless devices, including USB. So, there's my $50 donation to the nice folks at thinkpenguin.com, but still no solution.

    Solution 3? Following the Think Penguin folks' suggestion, modify the mini PCI express adapter following the instructions here: http://www.notebookforums.com/t/225429/broken-wireless-hardware-switch-fix Tempting, but I would then violate the terms of my warranty mere days after opening the box. This might be a good solution for an older machine that you want to get your geek on with, but not for a new box.

    Solution 4? rfkill unblock all. No effect whatsoever:

        ubuntu@ubuntu-hp-evny:~$ rfkill unblock all
        ubuntu@ubuntu-hp-evny:~$ rfkill list all
        0: hp-wifi: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

    Solution 5? Re-install drivers. Done and done. Ubuntu recognizes the device - perhaps even without re-installing the drivers? - but cannot turn it on. How do I know this? In the Network Manager drop-down menu, the wireless option is blacked out and a message reads something like: "wireless network is disabled by a hardware switch".

    Solution 6? Identify a physical switch on the laptop and flip it. There is no such switch on this machine. In fact, walking through Best Buy yesterday, I checked and not a single new laptop PC had a physical switch on it. All of the wireless switches are either the F2 or F12 key... I wonder if askubuntu will be plagued by this exact issue in the near future?

    Additional info - lspci:

        ubuntu@ubuntu-hp-evny:~$ lspci
        00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Ivy Bridge PCI Express Root Port (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.2 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 3 (rev c4)
        00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4)
        00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 RAID bus controller: Intel Corporation 82801 Mobile SATA Controller [RAID mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: NVIDIA Corporation Device 0de9 (rev a1)
        08:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 5229 (rev 01)
        0a:00.0 Network controller: Ralink corp. Device 539b
        0b:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07)

    Any suggestions would be much appreciated!

    Read the article

  • The Joy Of Hex

    - by Jim Giercyk
    While working on a mainframe integration project, it occurred to me that some basic computer concepts are slipping into obscurity. For example, just about anyone can tell you that a 64-bit processor is faster than a 32-bit processor. A grade school child could tell you that a computer “speaks” in ‘1’s and ‘0’s. Some people can even tell you that there are 8 bits in a byte. However, I have found that even the most seasoned developers often can’t explain the theory behind those statements. That is not a knock on programmers; in the age of IntelliSense, what reason do we have to work with data at the bit level? Many computer theory classes treat bit-level programming as a thing of the past, no longer necessary now that storage space is plentiful. The trouble with that mindset is that the world is full of legacy systems that run programs written in the 1970’s. Today our jobs require us to extract data from those systems, regardless of the format, and that often involves low-level programming. Because knowledge of these low-level concepts seems to be waning, I thought a review would be in order.

        CHARACTER: See Spot Run
        HEX:       53 65 65 20 53 70 6F 74 20 52 75 6E
        DECIMAL:   83 101 101 32 83 112 111 116 32 82 117 110
        BINARY:    01010011 01100101 01100101 00100000 01010011 01110000 01101111 01110100 00100000 01010010 01110101 01101110

    In this example, I have broken down the words “See Spot Run” to a level computers can understand – machine language.

    CHARACTER: The character level is what is rendered by the computer. A “Character Set” or “Code Page” contains 256 characters, both printable and unprintable. Each character represents 1 BYTE of data. For example, the character string “See Spot Run” is 12 bytes long, exclusive of the quotation marks. Remember, a SPACE is an unprintable character, but it still requires a byte. In the example I have used the default Windows character set, ASCII, which you can see here: http://www.asciitable.com/

    HEX: Hex is short for hexadecimal, or Base 16. Humans are comfortable thinking in base ten, perhaps because they have 10 fingers and 10 toes; fingers and toes are called digits, so it’s not much of a stretch. Hex is a compact way of writing the computer’s values, with each digit ranging from zero to fifteen, or 0 – F. Each digit position has 16 possible values, as opposed to 10 in base 10. Therefore, the number 10 in Hex is equal to the number 16 in Decimal.

    DECIMAL: The Decimal conversion is strictly for us humans to use for calculations and conversions. It is much easier for us humans to calculate that [30 – 10 = 20] in decimal than it is for us to calculate [1E – A = 14] in Hex. In the old days, an error in a program could be found by determining the displacement from the entry point of a module. Since those values were dumped from the computer’s head, they were in hex. A programmer needed to convert them to decimal, do the arithmetic, and convert back to hex. This gets into relative and absolute addressing, a topic for another day.

    BINARY: Binary, or machine code, is where any value can be expressed in 1s and 0s. It is really Base 2, because each digit position can hold only one of 2 characters, a 1 or a 0. In Binary, the number 10 is equal to the number 2 in decimal. Why only 1s and 0s? Very simply, computers are made up of lots and lots of transistors which at any given moment can be ON ( 1 ) or OFF ( 0 ). Each transistor is a bit, and the pattern in which the transistors fire (or do not fire) is what distinguishes one value from another in the computer’s head (or CPU). Consider 32-bit vs. 64-bit processing: a 64-bit processor has the capability to read 64 transistors at a time, while a 32-bit processor can only read half as many, so in theory the 64-bit processor should be much faster. There are many more factors involved in CPU performance, but that is the fundamental difference.

        DECIMAL   HEX   BINARY
        0         0     0000
        1         1     0001
        2         2     0010
        3         3     0011
        4         4     0100
        5         5     0101
        6         6     0110
        7         7     0111
        8         8     1000
        9         9     1001
        10        A     1010
        11        B     1011
        12        C     1100
        13        D     1101
        14        E     1110
        15        F     1111

    Remember that each character is a BYTE, there are 2 HEX characters in a byte (each half-byte is called a nibble) and 8 BITS in a byte. I hope you enjoyed reading about the theory of data processing. This is just a high-level explanation, and there is much more to be learned. It is safe to say that, no matter how advanced our programming languages and visual studios become, they are nothing more than a way to interpret bits and bytes. There is nothing like the joy of hex to get the mind racing.
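    As a companion to the table above, here is a short Python sketch (an editorial addition, not part of the original article) that reproduces the “See Spot Run” breakdown and the hex arithmetic mentioned earlier:

        # Reproduce the CHARACTER / HEX / DECIMAL / BINARY breakdown from the article:
        # one byte per character, shown in three notations.
        text = "See Spot Run"

        hex_form = " ".join(format(ord(c), "02X") for c in text)
        dec_form = " ".join(str(ord(c)) for c in text)
        bin_form = " ".join(format(ord(c), "08b") for c in text)

        print("CHARACTER:", text)
        print("HEX:      ", hex_form)  # 53 65 65 20 53 70 6F 74 20 52 75 6E
        print("DECIMAL:  ", dec_form)  # 83 101 101 32 83 112 111 116 32 82 117 110
        print("BINARY:   ", bin_form)

        # The arithmetic example: 30 - 10 = 20 in decimal is 1E - A = 14 in hex.
        assert int("1E", 16) - int("A", 16) == 20
        assert format(20, "X") == "14"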

    Read the article

  • Caveat utilitor - Can I run two versions of Microsoft Project side-by-side?

    - by Martin Hinshelwood
    A number of our customers have asked if there are any problems in installing and running multiple versions of Microsoft Project on a single client. Although this is a case of Caveat utilitor (let the user beware), as long as the user understands and accepts the issues that can occur, they can do this. Although Microsoft provide the ability to leave old versions of Office products (except Outlook) on your client when you are installing a new version of the product, they certainly do not endorse doing so.

    Figure: For Project you can choose to keep the old stuff

    That being the case, I would have preferred that they put a “(NOT RECOMMENDED)” after the options to impart that knowledge to the rest of us, but they did not. The default and recommended behaviour is for the newer version's installer to remove the older versions. Of course, this does not apply in reverse: there are no forward compatibility packs for Office. There are a number of negative behaviours (or bugs) that can occur in this configuration:

    There is only one MS Project

    In Windows, a file extension can only be associated with a single program. In this case, MPP files can be associated with only one version of winproj.exe. The executables are in different folders, so if a user double-clicks a Project file on the desktop, in file explorer, or in an Outlook email, Windows will launch the winproj.exe associated with MPP and then load the MPP file. There are problems associated with this situation, and in some cases workarounds. For example, the user double-clicks on a Project 2010 file; Project 2007 launches but is unable to open the file because it is a newer version. The workaround is for the user to launch Project 2010 from the Start menu and then open the file. If the file is attached to an email, they will need to drag the file to the desktop first.

    All your linked MS Project files need to be of the same version

    There are a number of problems that occur when people use Microsoft’s Object Linking and Embedding (OLE) technology. The three common uses of OLE are: inserted projects, where a Master project contains sub-projects and each sub-project resides in its own MPP file; shared resource pools, where multiple MPP files share a common resource pool kept in a single MPP file; and cross-project links, where a task or milestone in one MPP file has a predecessor/successor relationship with a task or milestone in a different MPP file. What I’ve seen happen before is that if you are running a version of Project that is not associated with the MPP extension and then try to activate an OLE link, Project tries to launch the other version of Project. Things start getting very confused, since different MPP files are being controlled by different versions of Project running at the same time. I haven’t tried this in a while so I can’t give you exact symptoms, but I suspect that if Project 2010 is involved the symptoms will be different than in a Project 2003/2007 scenario. I’ve noticed that Project 2010 gives different error messages for the exact same problem when it occurs in Project 2003 or 2007. -Anonymous

    The recommendation would be either not to use this feature if you have to have multiple versions of Project installed, or to use only a single version of Project. You may get unexpected negative behaviours if you are using shared resource pools, even when you are not running multiple versions, as I have found that they can get broken very easily. If you need these things, then it is probably best to use Project Server, as it was created to solve many of these specific issues. Note: I would not even allow multiple people to access a network copy of a Project file, because of the way Windows locks files in write mode. This can cause write-locks that get so bad that I’ve seen users’ files locked to the point where the only resolution is to reboot the server.

    Changing the default version to run for an extension

    So what if you want to change the default association from Project 2007 to Project 2010?

    Figure: “Control Panel | Folder Options | Change the file associated with a file extension”

    Windows normally only lists the last version installed for a particular extension. You can select a specific version by selecting the program you want to change and clicking “Change program… | Browse…” and then selecting the .exe you want to use on the file system.

    Figure: You will need to select the exact version of “winproj.exe” that you want to run

    Conclusion

    Although it is possible to run multiple versions of Project on one system, in the main it does not really make sense.

    Read the article

< Previous Page | 122 123 124 125 126 127 128 129 130 131 132 133  | Next Page >