Search Results

Search found 770 results on 31 pages for 'counting'.


  • Post MIX10 Decompression

    - by Dave Campbell
    With a big dose of reality, I walked into this place this morning and found out "yeah, I really do write .NET web apps and MS Access for a living" :( ... but it pays the bills and I've gotten *way* used to eating 3 times a day :)

    MIX10 was great, although the buzz didn't seem as big as MIX09, and I'm not sure why. It also seemed like a different crowd, and other folks I talked to agreed with that. Of course now I can outwardly admit that the "Windows Phone 7 Series" is programmed with Silverlight ... how cool is that? I've been biting my tongue about that info for over a month!

    I cloistered myself in Ballroom A for the week, not counting the Keynotes. That's where the phone sessions were located. I tried to collect the full set, but ended up bailing on the last one because it was ending at the time that MIX10 was ending, and I hadn't spent a whole lot of time in 'The Commons'. I met a bunch of folks I've blogged about, or exchanged email with, and that's always fun. I renewed associations with folks I only see once or twice a year ... the list is way too long, and I don't want to mention some and leave off others.

    I did have an opportunity to meet Charles Petzold... wow, that was interesting... I got into Windows development through Charles' Programming Windows 3.1 book 'back in the day' ... couldn't find anyone at Honeywell who wanted to join my journey, so it was just me and 'Chuck' :) ... read every word of that book more than once... all marked up, tags sticking out of it. And now he's writing a WP7 book ... gotta get it: Free ebook: Programming Windows Phone 7 Series (DRAFT Preview)

    I went through my Big List-o-Blogs™ last night and it took over 2 hours because of all the new content since MIX10. I've got 90 posts tagged as of 9PM on 3/21. If everybody stopped right now, it would take me 9 days to push what I have now, so you'll have to be patient!

    I had another event on Thursday that was *extremely* tiring, so I ended up staying over another night. I drove back into the strip on Friday morning to try to find a non-cheesy souvenir for my wife, and didn't find much. Then I went to Blueberry Hill restaurant for 3 eggs, 3 strips of bacon, and 3 awesome potato pancakes. Check them out if you have time! And then hit the road. In case anyone is wondering, the 2-1/2 hour drive I took across Hoover Dam on Sunday afternoon only took 30 minutes on Friday afternoon... that was a more normal trip!

    I thoroughly enjoyed the time I spent with everyone. Thanks to John Papa and his crew for the great Insider's party on Monday night... the Blues Brothers were a fun surprise and they did a good job! And the swag was great... thanks to all the contributors for a fun evening at their expense!

    All I can say is stay tuned, go to live.visitmix.com/videos and watch everything, get the phone tools, start working... everything's different and everything's fun... jump in, it's all Silverlight!

    Stay in the 'Light!

    Technorati Tags: Silverlight    Silverlight 4    Windows Phone    MIX10

    Read the article

  • SQLAuthority News – Android Efficiency Tips and Tricks – Personal Technology Tip

    - by pinaldave
    I use my phone for lots of things. I use it mainly to replace my tablet – I can e-mail, take and edit photos, and do almost everything I can do on a laptop with this phone. And I am sure that there are many of you out there just like me. I personally have a Galaxy S3, which uses the Android operating system, and I have decided to feature it as the third installment of my Technology Tips and Tricks series.

    1) Shortcut to your favorite contacts on home screen
    Access your most-called contacts easily from your home screen by holding your finger on any empty spot on the home screen. A menu will pop up that allows you to choose Shortcuts, and Contact. You can scroll through your contact list and then just tap on the name of the person you want to be able to dial with a single click.

    2) Keep track of your data usage
    Yes, we all should keep a close eye on our data usage, because it is very easy to go over our limits and then end up with a giant bill at the end of the month. Never get surprised when you open that mobile phone envelope again. Go to Settings, then Data Usage, and you can find a quick rundown of your usage, how much data each app uses, and you can even set alarms to let you know when you are nearing the limits. Better yet, you can set the phone to stop using data when it reaches a certain limit.

    3) Bring back good grammar
    We often hear proclamations about the downfall of written language, and how texting abbreviations, misspellings, and lack of punctuation are the root of all evil. Well, we can show all those doomsayers that all is not lost by bringing punctuation back to texting. Usually we leave it off when we text because it takes too long to get to the screen with all the punctuation options. But now you can hold down the period (or "full stop") button and a list of all the commonly-used punctuation marks will pop right up.

    4) Apps, Apps, Apps and Apps
    And finally, I cannot end an article about smartphones without including a list of my favorite apps. Here is a list of my Top 10 applications on my Android (not counting social media apps):
    Advanced Task Killer – keeps my phone snappy by closing unnecessary apps
    WhatsApp – my favorite alternative to SMS texting
    Flipboard – my 'timepass' moments
    Skype – keeps me close to friends and family
    Google Maps – I am never lost because of this one thing
    Amazon Kindle – books are my best friends
    Dropbox – my data, always safe
    Pluralsight Player – learning never stops for me
    Samsung Kies Air – connecting phone to computer
    Chrome – replacing the default browser

    I have not included any social media applications in the above list, but you can be sure that I am linked to Twitter, Facebook, Google+, LinkedIn, and YouTube.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Android, Personal Technology

    Read the article

  • Silverlight Cream for December 16, 2010 -- #1011

    - by Dave Campbell
    In this Issue: John Papa, Tim Heuer, Jeff Blankenburg(-2-, -3-), Jesse Liberty, Jay Kimble, Wei-Meng Lee, Paul Sheriff, Mike Snow(-2-, -3-), Samuel Jack, James Ashley, and Peter Kuhn.

    Above the Fold:
    Silverlight: "Animation Texture Creator" Peter Kuhn
    WP7: "Windows Phone from Scratch #13 — Custom Behaviors Part II: ActionTrigger" Jesse Liberty

    Shoutouts:
    Awesome blog post by Jesse Liberty about writing in general: Ten Requirements For Tutorials, Videos, Demos and White Papers That Don't Suck

    From SilverlightCream.com:

    1000 Silverlight Cream Posts and Counting!
    John Papa has Silverlight TV number 55 up and it's an interview he did with me the day before the Firestarter in December... thanks John... great job in making me not look stooopid :)

    Silverlight service release today - 4.0.51204
    Tim Heuer announced a service release of Silverlight ... check out his blog for the updates; near the bottom is a link to the developer runtime.

    What I Learned In WP7 – Issue #3
    Jeff Blankenburg has been pushing out tips ... number 3 consisted of 3 good pieces of info for WP7 devs, including more info about fonts and a good site for free audio files.

    What I Learned In WP7 – Issue #4
    In number 4, Jeff Blankenburg talks about where to get some nice free WP7 icons, and a link to a cool article on getting all sorts of device info.

    What I Learned In WP7 – Issue #5
    Number 5 finds Jeff Blankenburg giving up the XAP for a CodeMash session data app... or wait for it to appear in the Marketplace next week.

    Windows Phone from Scratch #13 — Custom Behaviors Part II: ActionTrigger
    Wow... Jesse Liberty is up to number 13 in his Windows Phone from Scratch series... this time it's part 2 of his Custom Behaviors post, ActionTriggers specifically.

    Solving the Storage Problem in WP7 (for CF Developers)
    Jay Kimble has released his WP7 dropbox client to the wild ... this is cool for loading files at run-time... opens up some ideas for me at least.

    Building Location Service Apps in Windows Phone 7
    Wei-Meng Lee has a big informative post on location services in WP7... getting a Bing Maps API key, getting the data, navigating and manipulating the map, adding pushpins... good stuff.

    Using Xml Files on Windows Phone
    Paul Sheriff is discussing XML files as a database for your WP7 apps via LINQ to XML. Sample code included.

    ABC–Win7 App
    Mike Snow has been busy with Tips of the Day ... he published a children's app for tracing their ABC's and discusses some of the code bits involved.

    Win7 Mobile Application Bar – AG_E_PARSER_BAD_PROPERTY_VALUE
    Mike Snow's next post is about the infamous AG_E_PARSER_BAD_PROPERTY_VALUE error (or worse) in WP7 ... how he got it, and how he fixed it... could save you some hair...

    Forward Navigation on the Windows Phone
    Mike Snow's latest post is about forward navigation on the WP7 ... oh wait... there isn't any... check out the post.

    Day 2 of my "3 days to Build a Windows Phone 7 Game" challenge
    Samuel Jack details about 9 hours in day 2 of his quest to build an XNA app for WP7 from a cold start.

    Windows Phone 7 Side Loading
    James Ashley has a really complete write-up on side-loading apps onto your WP7 device. Don't get excited... this isn't a hack... these are instructions for side-loading using the Microsoft-approved method, which means a registered device.

    Animation Texture Creator
    Remember Peter Kuhn's post the other day about an Animation Texture Creator? ... well today he has some added tweaks and the source code! ... thanks Peter!

    Stay in the 'Light!
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • HP and Microsoft: That's What Friends Are For?

    - by andrewbrust
    Today, HP pre-announced the second coming out for its recently acquired Palm webOS mobile operating system. I happen to think webOS is quite good, and when the Palm Pre first came out, I thought it a worthwhile phone. I was worried though that the platform would never attract the developer mindshare it needed to be competitive, and that turned out to be the case. But then HP acquired Palm and announced it would be revamping the webOS offering, not only on phones, but also on tablets. It later announced that it would also use webOS as an embedded solution on HP printers.

    The timing of this came shortly after HP had announced it would be producing a "Slate" product running Windows 7. After the Palm deal, HP became vague about whether the Windows-powered slate would actually come out. They did, in fact, bring the Slate 500 to market, but by some accounts, they only built 5000 units.

    Another recent awkward moment between HP and Microsoft: HP withdrew itself from the Windows Home Server ecosystem. That one hurt, as they were the dominant OEM there. But Microsoft's decision to kill Drive Extender had driven away many parties, not just HP.

    On Wednesday, HP came out with their TouchPad, and new phone models. Not a nice thing for Windows Phone 7, but other OEMs are taking a wait-and-see attitude there too, I suppose. There was one more zinger though, and it was bigger: HP announced they'd be porting webOS to PCs.

    No Windows Phone 7? OK. No Windows Home Server? Whatcha gonna do? But no Windows 7 either? From HP? What comes after that, no ink and toner?

    Some people think Microsoft's been around too long to be relevant. But HP started out making oscilloscopes! The notion that HP is too cool for Windows school is a bit far-fetched. This is the company that bought EDS. This is the company that bought Compaq. And Compaq was the company that bought Digital Equipment Corporation. Somehow, I don't think the VT 220 outclasses Windows PCs.

    What could possibly be going on? My sense is that HP wants to put webOS on PCs that also have Windows, and that people will buy because they have Windows. And for every one of those sold, HP gets to count, technically speaking, another webOS unit in the install base. webOS is really nice, as I said. But being good isn't good enough when you are trying to get market share. Number of units shipped matters. The question is whether counting PCs with webOS installed, but dormant, is helpful to HP's cause. Seems like a funny way to account for market share, and a strange way to treat a big partner in Redmond.

    Read the article

  • F# and the rose-tinted reflection

    - by CliveT
    We're already seeing increasing use of many cores on client desktops. It is a change that has been long predicted, and it is not just a change in architecture, but in our notions of efficiency in a program. No longer can we focus on the asymptotic complexity of an algorithm by counting the steps that a single-core processor would take to execute it. Instead we'll soon be more concerned about the scalability of the algorithm, and how well we can increase the performance as we increase the number of cores. This may even lead us to throw away our most efficient algorithms, and switch to less efficient algorithms that scale better. We might even be willing to waste cycles in order to speculatively execute at the algorithm rather than the hardware level.

    State is the big headache in this parallel world. At the hardware level, main memory doesn't necessarily contain the definitive value corresponding to a particular address. An update to a location might still be held in a CPU's local cache and it might be some time before the value gets propagated. To get the latest value - and the notion of "latest" takes a lot of defining in this world of rapidly mutating state - the CPUs may well need to communicate to decide who has the definitive value of a particular address in order to avoid lost updates. At the user program level, this means programmers will need to lock objects before modifying them, or attempt to avoid the overhead of locking by understanding the memory models at a very deep level.

    I think it's this need to avoid statefulness that has led to the recent resurgence of interest in functional languages. In the 1980s, functional languages started getting traction when research was carried out into how programs in such languages could be auto-parallelised. Sadly, the impracticality of some of the languages, the overheads of communication during this parallel execution, and rapid improvements in compiler technology on stock hardware meant that the functional languages fell by the wayside. The one thing that these languages were good at was getting rid of implicit state, and this single idea seems like a solution to the problems we are going to face in the coming years.

    Whether these languages will catch on is hard to predict. The mindset for writing a program in a functional language is really very different from the way that object-oriented problem decomposition happens - one has to focus on the verbs instead of the nouns, which takes some getting used to. There are a number of hybrid functional/object languages that have been becoming more popular in recent times. These half-way houses make it easy to use functional ideas for some parts of the program while still allowing access to the underlying object-focused platform without a great deal of impedance mismatch. One example is F# running on the CLR which, in Visual Studio 2010, has become a first-class member of the pack. Inside Visual Studio 2010, the tooling for F# has improved to the point where it is easy to set breakpoints and watch values change while debugging at the source level. In my opinion, it is this tooling support that will enable the widespread adoption of functional languages - without it, people will put off any transition into the functional world for as long as they possibly can, and these languages will remain hard to learn.

    One tool that doesn't currently support F# is Reflector. The idea of decompiling IL to a functional language is daunting, but F# is potentially so important that I couldn't dismiss the idea. As I'm currently developing Reflector 6.5, I thought it wise to take four days just to see how far I could get in doing so, even if it achieved little more than to be clearer on how much was possible, and how long it might take. You can read what happened here, and about the insights it gave us on ways to improve the tool.
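    To make the statefulness point concrete, here is a minimal C++ sketch (my own illustration, not code from the post): the shared counter needs a lock as soon as two cores touch it, while the pure function can be run over disjoint chunks by any number of cores with no coordination at all.

    #include <mutex>
    #include <numeric>
    #include <vector>

    // Shared mutable state: every writer must take the lock,
    // so cores spend time coordinating instead of computing.
    std::mutex m;
    long total = 0;
    void addToTotal(long x) {
        std::lock_guard<std::mutex> guard(m);
        total += x;
    }

    // Pure alternative: no hidden state, so independent chunks
    // can be summed on separate cores and combined at the end.
    long sumChunk(const std::vector<long>& chunk) {
        return std::accumulate(chunk.begin(), chunk.end(), 0L);
    }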

    Read the article

  • Sorry about the wait.

    - by Ratman21
    For the last two days I have been trying to remove “Iolo System Mechanic Professional” (with anti-virus and firewall) from 3 of the 5 PCs we have (three laptops and two desktops), as it was going to expire on the 13th, so I could replace it with a free anti-virus (AVG) and just use the Windows firewall. I have been using the same setup on one of my desktops (XP Pro) for 8 months and on one of the laptops (Vista) for 5 months.

    The problem was that System Mechanic did not want to go. It was still showing up even after using the uninstall option on the desktop (my main PC - it has the largest hard drive of all the PCs, but it's the oldest and runs XP Home) and using CCleaner to try to remove it. After I went ahead and tried installing AVG and ran it, I found that the TCP/IP module was missing. So no internet. I had to restore the PC back to the 1st to get the module back and then install AVG (after making sure the Windows firewall was back on - I didn't check that on the first try). I got the PC back to normal very late last night.

    Only one of the two laptops was easy, and even there some parts of System Mechanic remain, but AVG and the firewall are working. I may try to hunt down the remaining parts of System Mechanic on that laptop and delete them. That is what I finally had to do on the other laptop (also XP Home), as it would not uninstall even after I restored the PC back to the 4th. So delete, delete, delete and CCleaner (one dl file would not delete, though). I just finished installing AVG and am now running a scan on that laptop. So all of this took two days (well, three counting today). I started late Friday night and am just finishing up now. I only started this switch-over after I had finished my job search for the day on Friday.

    As for blogging on Tuesday, Wednesday and Thursday: I was busy, and by the end of the day I was too tired to blog. That, and I was still hung up on that 2nd dare of The Love Dare. So I cleaned the house while she was out of the house. I mean, I cleaned - not just vacuumed. I cleaned the kitchen counter tops and the sinks, did the dishes, and did some of the laundry over two of those days.

    As to the third day of the Love Dare, which is “Love is not selfish” and the dare “Whatever you put your time, energy, and money into will become more important to you. It’s hard to care for something you are not investing in. Along with restraining from negative comments, buy your spouse something that says, I was thinking of you today.” Being on a very limited income, a lot of the normal guy-buying-for-girls is out (for one thing, the comment “why did you waste our money on flowers”, etc., etc., would come up - not from me, though). So that one is on hold till money issues are not a problem (no, that does not mean never).

    The 4th day is “Love is thoughtful” and the dare “Contact your spouse sometime during the business of the day. Have no agenda other than asking how he or she is doing and if there is anything you could do for them.” I did this dare while I was still working with the census last week and trying to do the dares. Well, I start my CCNA classes Monday the 15th, and I move on to the next Love Dare day, “Love is not rude”.

    Read the article

  • Bad Spot to Be In: Playing Catch-up with Mobile Advertising

    - by Mike Stiles
    You probably noticed, there's a mass migration going on from online desktop/laptop usage to smartphone/tablet usage. It's an indicator of how we live our lives in the modern world: always on the go, with no intention of being disconnected while out there. Consequently, paid as it relates to mobile advertising is taking the social spotlight. eMarketer estimated that in 2013, US adults would spend about 2 hours, 21 minutes a day on mobile, not counting talking time. More people in the world own smartphones than own toothbrushes (bad news I suppose if you're marketing toothpaste). They're using those mobile devices to access social networks, consuming at least 17% of their mobile time on them.

    Frankly, you don't need a deep dive into mobile usage stats to know what's going on. Just look around you in any store, venue or coffee shop. It's really obvious... our mobile devices are now where we "are," so that's where marketers can increasingly reach us. And it's a smart place for them to do just that. Mobile devices can be viewed more and more as shopping facilitators. Usually when someone is on mobile, they are not in passive research mode. They are likely standing near a store or in front of a product, using their mobile to seek reassurance that buying that product is the right move. They are the hottest of hot prospects. Consider that 4 out of 5 consumers use smartphones to shop, 52% of Americans use mobile devices in-store for research, 70% of mobile searches lead to online action inside of an hour, and people that find you on mobile convert at almost 3x the rate of those that find you on desktop or laptop.

    But what are marketers doing? Enter statistics from Mary Meeker's latest State of the Internet report. Common sense says you buy advertising where people are spending their eyeball time, right? But while mobile is 20% of media use and rising, the ad spend there is 4%. Conversely, while print usage is at 5% and falling, ad spend there is 19%. We all love nostalgia, but come on.

    There are reasons marketing dollar migration to mobile has not matched user migration, including the availability of mobile ad products and the ability to measure user response to mobile ads. But interesting things are happening now. First came Facebook's mobile ad, which let app developers pay to get potential downloads. Then their mobile ad network was announced at F8, allowing marketers to target users across non-Facebook apps while leveraging the wealth of diverse data Facebook has on those users - a big deal, since Nielsen has pointed out mobile apps make up 89% of the media time spent on mobile. Twitter has a similar play in motion with their MoPub acquisition. And now mobile deeplinks have arrived, which can take users straight to sub-pages of mobile apps for a faster, more direct shopper/researcher user experience. The sooner the gratification, the smoother and faster the conversion.

    To be clear, growth in mobile ad spending is well underway. After posting $13.1 billion in 2013, Gartner expects global mobile ad spending to reach $18 billion this year, then go to $41.9 billion by 2017. Cheap smartphones and data plans are spreading worldwide, further fueling the shift to mobile. Mobile usage in India alone should grow 400% by 2018. And, of course, there's the famous statistic that mobile should overtake desktop Internet usage this year.

    How can we as marketers mess up this opportunity? Two ways. We could position ourselves in perpetual "catch-up" mode and keep spending ad dollars where the public used to be. And we could annoy mobile users with horrid old-school marketing practices. Two-thirds of users told Forrester they think interruptive in-app ads are more annoying than TV ads. Make sure your brand's social marketing technology platform is delivering a crystal clear picture of your social connections so the mobile touch point is highly relevant, mobile optimized, and delivering real value and satisfying experiences. Otherwise, all we've done is find a new way to be unwanted.

    @mikestiles @oraclesocial
    Photo: Kate Mallatratt, freeimages.com

    Read the article

  • What Counts For a DBA: Imagination

    - by drsql
    "Imagination…One little spark, of inspiration… is at the heart, of all creation." – From the song "One Little Spark", by the Sherman Brothers I have a confession to make. Despite my great enthusiasm for databases and programming, it occurs to me that every database system I've ever worked on has been, in terms of its inputs and outputs, downright dull. Most have been glorified e-spreadsheets, many replacing manual systems built on actual spreadsheets. I've created a lot of database-driven software whose main job was to "count stuff"; phone calls, web visitors, payments, donations, pieces of equipment and so on. Sometimes, instead of counting stuff, the database recorded values from other stuff, such as data from sensors or networking devices. Yee hah! So how do we, as DBAs, maintain high standards and high spirits when we realize that so much of our work would fail to raise the pulse of even the most easily excitable soul? The answer lies in our imagination. To understand what I mean by this, consider a role that, in terms of its output, offers an extreme counterpoint to that of the DBA: the Disney Imagineer. Their job is to design Disney's Theme Parks, of which I'm a huge fan. To me this has always seemed like a fascinating and exciting job. What must an Imagineer do, every day, to inspire the feats of creativity that are so clearly evident in those spectacular rides and shows? Here, if ever there was one, is a role where "dull moments" must be rare indeed, surely? I wanted to find out, and so parted with a considerable sum of money for my wife and I to have lunch with one; I reasoned that if I found one small way to apply their secrets to my own career, it would be money well spent. Early in the conversation with our Imagineer (Cindy Cote), the job did indeed sound magical. However, as talk turned to management meetings, budget-wrangling and insane deadlines, I came to the strange realization that, in fact, her job was a lot more like mine than I would ever have guessed. Much like databases, all those spectacular Disney rides bring with them a vast array of complex plumbing, lighting, safety features, and all manner of other "boring bits", kept well out of sight of the end user, but vital for creating the desired experience; and, of course, it is these "boring bits" that take up much of the Imagineer's time. Naturally, there is still a vital part of their job that is spent testing out new ideas, putting themselves in the place of a park visitor, from a 9-year-old boy to a 90-year-old grandmother, and trying to imagine what experiences they'd like to have. It is these small, but vital, sparks of imagination and creativity that have the biggest impact. The real feat of a successful Imagineer is clearly to never to lose sight of this fact, in among all the rote tasks. It is the same for a DBA. Not matter how seemingly dull is the task at hand, try to put yourself in the shoes of the end user, and imagine how your input will affect the experience he or she will have with the database you're building, and how that may affect the world beyond the bits stored in your database. Then, despite the inevitable rush to be "done", find time to go the extra mile and hone the design so that it delivers something as close to that imagined experience as you can get. OK, our output still can't and won't reach the same spectacular heights as the "Journey into The Imagination" ride at EPCOT Theme Park in Orlando, where I first heard "One Little Spark". 
However, our imaginative sparks and efforts can, and will, make a difference to the user who now feels slightly more at home with a database application, or to the manager holding a report presented with enough clarity to drive an interesting decision or two. They are small victories, but worth having, and appreciated, or at least that's how I imagine it.

    Read the article

  • Sparse virtual machine disk image resizing weirdness?

    - by Matt H
    I have a partitioned virtual machine disk image created by vmware. What I want to do is resize that by 10GB. The file size is showing as 64424509440, or 60GB. So I ran this:

    dd if=/dev/zero of=./win7.img seek=146800640 count=0

    It ran without errors and I can verify the new size is in fact 75161927680 bytes, or 70GB. This is where it gets a little odd. I started the guest domain in xen, which is a Windows 7 enterprise machine. What I was expecting to see in diskmgmt.msc is 2 partitions: 1 system partition at the start of around 100MB and a near 60GB partition (which is the C drive), followed by around 10GB of free space. Actually what I saw was a 70GB partition!?! That confused me... so I decided to run the Check Disk, which, when you set it on the C drive, asks you to reboot so it'll run on boot. So I did that, and during the boot it ran the checks. It got all the way through stage 3 and didn't show any errors at all. I looked at the partitions in disk manager and now the C drive has shrunk back to 60GB and there is no free space. What gives?

    Ok, I thought I'd try mounting it under Dom0 and examining it with fdisk. This is what I get when mounted:

    sudo xl block-attach 0 tap:aio:/home/xen/vms/otoy_v1202-xen.img xvda w
    sudo fdisk -l /dev/xvda

    Disk /dev/xvda: 64.4 GB, 64424509440 bytes
    255 heads, 63 sectors/track, 7832 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x582dfc96

        Device Boot      Start         End      Blocks   Id  System
    /dev/xvda1   *           1          13      102400    7  HPFS/NTFS
    Partition 1 does not end on cylinder boundary.
    /dev/xvda2              13        7833    62810112    7  HPFS/NTFS

    Note the cylinder boundary comment. When I run sudo cfdisk /dev/xvda I get:

    FATAL ERROR: Bad primary partition 1: Partition ends in the final partial cylinder
    Press any key to exit cfdisk

    So I guess this is a bigger problem than first thought. How can I fix this?

    EDIT: Oops, the cylinder boundary thing is not a problem at all since disks have used LBA etc. So that threw me for a moment... still the problem exists... Now this output looks a little different.

    sudo sfdisk -uS -l /dev/xvda

    Disk /dev/xvda: 7832 cylinders, 255 heads, 63 sectors/track
    Units = sectors of 512 bytes, counting from 0

       Device Boot    Start        End   #sectors  Id  System
    /dev/xvda1   *     2048     206847     204800   7  HPFS/NTFS
    /dev/xvda2       206848  125827071  125620224   7  HPFS/NTFS
    /dev/xvda3            0          -          0   0  Empty
    /dev/xvda4            0          -          0   0  Empty

    BTW: I do have a backup of the image so if you help me mess it up that's ok.

    EDIT:

    sudo parted /dev/xvda print free

    Model: Xen Virtual Block Device (xvd)
    Disk /dev/xvda: 64.4GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start   End     Size    Type     File system  Flags
            32.3kB  1049kB  1016kB           Free Space
     1      1049kB  106MB   105MB   primary  ntfs         boot
     2      106MB   64.4GB  64.3GB  primary  ntfs
            64.4GB  64.4GB  1049kB           Free Space

    Cool. Linux is showing free space is 10GB which is what I expect. The problem is windows isn't seeing this?
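    An aside on the dd trick above: with count=0 and the default 512-byte block size, seek=146800640 just sets the file length to 146800640 × 512 = 75161927680 bytes without writing any data, leaving a sparse hole. The same effect in C++, as a minimal sketch (an illustration only, not part of the original question; assumes a filesystem that supports sparse files):

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // Grow the image to 70 GiB worth of 512-byte sectors without
        // writing data; no new blocks are allocated until something
        // writes into the hole.
        const off_t newSize = 146800640LL * 512;  // 75161927680 bytes
        int fd = open("win7.img", O_WRONLY);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, newSize) != 0) { perror("ftruncate"); return 1; }
        close(fd);
        return 0;
    }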

    Read the article

  • Extrapolation breaks collision detection

    - by user22241
    Before applying extrapolation to my sprite's movement, my collision worked perfectly. However, after applying extrapolation to my sprite's movement (to smooth things out), the collision no longer works. The collision routine breaks, and I am assuming this is because it is acting upon the new coordinate that has been produced by the extrapolation routine (which is situated in my render call).

    How do I correct this behaviour? I've tried putting an extra collision check just after extrapolation - this does seem to clear up a lot of the problems, but I've ruled this out because putting logic into my rendering is out of the question. I've also tried making a copy of the sprite's X position, extrapolating that and drawing using that rather than the original, thus leaving the original intact for the logic to pick up on - this seems a better option, but it still produces some weird effects when colliding with walls. I'm pretty sure this also isn't the correct way to deal with this. I've found a couple of similar questions on here but the answers haven't helped me.

    This is my extrapolation code:

    public void onDrawFrame(GL10 gl) {
        // Set/re-set loop back to 0 to start counting again
        loops = 0;
        while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
            SceneManager.getInstance().getCurrentScene().updateLogic();
            nextGameTick += skipTicks;
            timeCorrection += (1000d / ticksPerSecond) % 1;
            nextGameTick += timeCorrection;
            timeCorrection %= 1;
            loops++;
            tics++;
        }
        extrapolation = (float)(System.currentTimeMillis() + skipTicks - nextGameTick) / (float)skipTicks;
        render(extrapolation);
    }

    Applying extrapolation:

    render(float extrapolation) {
        // This example shows extrapolation for the X axis only. The Y position (spriteScreenY) is assumed to be valid.
        extrapolatedPosX = spriteGridX + (SpriteXVelocity * dt) * extrapolation;
        spriteScreenPosX = extrapolatedPosX * screenWidth;
        drawSprite(spriteScreenX, spriteScreenY);
    }

    Edit: As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with... this has its own problems. Firstly, regardless of the copying, when the sprite is moving it's super-smooth, but when it stops it wobbles slightly left/right - as it's still extrapolating its position based on the time. Is this normal behavior, and can we 'turn it off' when the sprite stops? I've tried having flags for left/right and only extrapolating if either of these is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help.

    If the user is pressing, say, the right button and the sprite is moving right, when it hits a wall, if the user continues to hold the right button down, the sprite will keep animating to the right while being stopped by the wall (therefore not actually moving). However, because the right flag is still set, and also because the collision routine is constantly moving the sprite out of the wall, it still appears to the code (not the player) that the sprite is moving, and therefore extrapolation continues. So what the player sees is the sprite 'static' (yes, it's animating, but it's not actually moving across the screen), and every now and then it shakes violently as the extrapolation attempts to do its thing... Hope this helps.
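    An aside on the "render-only copy" idea mentioned above: the usual shape of that pattern is to extrapolate into a throwaway value used only for drawing, and to make sure the velocity fed into it is the post-collision velocity, so a sprite pinned against a wall extrapolates by zero. A minimal sketch (in C++ for brevity; the names are illustrative, not from the poster's project):

    // Keep logic state untouched; extrapolate only a render copy.
    // 'velocityX' must be the *effective* velocity after collision
    // response, so a blocked sprite extrapolates with velocity 0.
    struct Sprite {
        float posX;       // authoritative position, updated by game logic
        float velocityX;  // zeroed by collision response when blocked
    };

    float renderPosX(const Sprite& s, float dt, float extrapolation) {
        // Throwaway value for drawing only; logic still reads s.posX.
        return s.posX + s.velocityX * dt * extrapolation;
    }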

    Read the article

  • Applying for internship

    - by Margus
    At the moment I'm thinking about applying for an internship at Eesti Energia. I seem to be eligible, but before contacting them I need to learn how to compile an informative and complete CV and cover letter. I do not consider myself shallow-minded, but I'm also not sure how to convincingly justify my interest or explain how the internship will help me in my future career.

    Course of life:
    Tallinna Tehnikagümnaasium 2003 - 2006
    Tallinna Tehnikaülikool 2006 - 2009
    Military service at Signal Battalion
    Tallinna Tehnikaülikool 2010 - ...

    I started my academic career as a Computer and Systems Engineer, but as I excelled in programming classes, I changed my major to Software Engineering and took my specialty in web applications and logic. Nowadays I mainly use Java, Mathematica and C# to solve problems. Twice I have taken part in the ACM International Collegiate Programming Contest, where my team won the nationals and did pretty well in Europe. Also, as a notable part of my academic career, my team wrote the Kalah game AI that won the AI tournament in the university's main programming class.

    My hobbies are mind games and occasional problem solving. A few years ago I also competed in the International Checkers EM (which requires being in the top 3 in the nationals) as part of the cadet and junior age groups - I did not come close to winning, but I finished above about half of the players each time. In high school and gymnasium I took part in, and later was the captain of, a team that passed the regionals and made it to the top 3 of the nationals (and later won) in (blitz) Russian checkers. That was impressive because it was a team effort, as we only had (depending on the year) 2-3 strong players.

    Although I started programming exactly 9.5 years ago, I have no work experience. Well, actually that's not true: after I completed my army duty, I was hired for a year (days still counting) to be a part of a communications (emergency) infrastructure action group, where I'm the team's IT specialist (it's more complicated than that). So I consider myself to be aware of: rough conditions, teamwork, high stress tolerance, being on time, and what responsibility means. As negative things I can mention: I do not have a driver's licence, and although only Estonian and English are noted as requirements, Russian is most likely required as well and I barely understand some of it.

    Reasons why I want to apply there are:
    I need to do at least a 4-6 week traineeship and it's in the right field
    I meet the requirements and the tasks seem easy enough
    The company is well known and has a fairly good reputation
    Family and friends think that it would be an acceptable place to work
    A myriad of options to do my final thesis about open up
    The workplace is located in the same city I live in at the moment

    At the moment, I would have a hard time explaining why I would prefer it, or where I see myself in 10 years if I was offered a job there.

    Question: I have some idea how a Curriculum Vitæ should look, or I can google for a template, but I'm not sure how to write an informative one. The last one I did looked like: picture + contact information + education. I only vaguely remember that the cover letter should be custom-tailored for each place you apply, containing ...

    Read the article

  • Caesar Cipher Program In C++ [migrated]

    - by xaliap81
    I am trying to write a Caesar cipher program in C++. This is my code template:

    int chooseKEY (){
        //choose key shift from 1-26
    }

    void encrypt (char * w, char *e, int key) {
        //Encryption function, *w is the text in the beginning, *e is the encrypted text
        //Encryption is done only on letters, not on numbers and punctuation
        // *e = *w + key
    }

    void decrypt (char * e, char *w, int key) {
        //Decryption function, *e is the text in the beginning, *w is the decrypted text
        //Decryption is done only on letters, not numbers and punctuation
        // *w = *e - key
    }

    void Caesar (char * inputFile, char * outputFile, int key, int mode) {
        //Read the inputfile which contains some data. If mode == 1 then the data is
        //encrypted, else if mode == 0 the data is decrypted and is written to
        //the output file
    }

    void main() {
        //call the Caesar function
    }

    The program has four functions. The chooseKey function has to return an int as a shift key from 1-26. The encrypt function has three parameters: *w is the text in the beginning, *e is the encrypted text, and the key is from the chooseKey function. For encryption: only letters have to be encrypted, not numbers or punctuation, and the letters wrap around cyclically. The decrypt function has three parameters: *e is the encrypted text, *w is the beginning text, and the key. The Caesar function has four parameters: the input file which contains the beginning text, the output file which contains the encrypted text, the key, and the mode (if mode == 1, encryption; if mode == 0, decryption).

    This is my sample code:

    #include <iostream>
    #include <fstream>

    using namespace std;

    int chooseKey() {
        int key_number;
        cout << "Give a number from 1-26: ";
        cin >> key_number;
        while(key_number<1 || key_number>26) {
            cout << "Your number have to be from 1-26.Retry: ";
            cin >> key_number;
        }
        return key_number;
    }

    void encryption(char *w, char *e, int key){
        char *ptemp = w;
        while(*ptemp){
            if(isalpha(*ptemp)){
                if(*ptemp>='a'&&*ptemp<='z') {
                    *ptemp-='a';
                    *ptemp+=key;
                    *ptemp%=26;
                    *ptemp+='A';
                }
            }
            ptemp++;
        }
        w=e;
    }

    void decryption (char *e, char *w, int key){
        char *ptemp = e;
        while(*ptemp){
            if(isalpha(*ptemp)) {
                if(*ptemp>='A'&&*ptemp<='Z') {
                    *ptemp-='A';
                    *ptemp+=26-key;
                    *ptemp%=26;
                    *ptemp+='a';
                }
            }
            ptemp++;
        }
        e=w;
    }

    void Caesar (char *inputFile, char *outputFile, int key, int mode) {
        ifstream input;
        ofstream output;
        char buf, buf1;
        input.open(inputFile);
        output.open(outputFile);
        buf=input.get();
        while(!input.eof()) {
            if(mode == 1){
                encryption(&buf, &buf1, key);
            }else{
                decryption(&buf1, &buf, key);
            }
            output << buf;
            buf=input.get();
        }
        input.close();
        output.close();
    }

    int main(){
        int key, mode;
        key = chooseKey();
        cout << "1 or 0: ";
        cin >> mode;
        Caesar("test.txt","coded.txt",key,mode);
        system("pause");
    }

    When I try to run the code it crashes (Debug Assertion Failed). I think the problem is somewhere inside the Caesar function - in how the encrypt and decrypt functions are called. But I don't know what exactly it is. Any ideas?
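    An aside on the cipher arithmetic itself: the cyclic, letters-only shift the template describes can be written case-preservingly as below. This is a minimal sketch of the technique, not a fix for the crash above (it assumes key is in the 1-26 range the template requires):

    #include <cctype>

    // Shift a single character by 'key' positions, wrapping cyclically,
    // preserving case and leaving non-letters untouched.
    char caesarShift(char c, int key) {
        if (islower(static_cast<unsigned char>(c)))
            return static_cast<char>('a' + (c - 'a' + key) % 26);
        if (isupper(static_cast<unsigned char>(c)))
            return static_cast<char>('A' + (c - 'A' + key) % 26);
        return c;  // digits and punctuation pass through unchanged
    }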

    Read the article

  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx

    While cloud computing makes sense for most organizations and countless projects, I have seen customers significantly struggle with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather it is an account of personal experiences in the field, some of which may or may not apply to your organization.

    Cloud First, Burst?
    In the rush to cloud adoption, some companies have made the decision to redesign their core system with a cloud-first approach. However, a cloud-first approach means that the system may not work on-premises anymore after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), or additional investments for PaaS applications, or both.

    What's the Problem?
    Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I heard many companies claim "it's cheaper", or "it allows us to scale", without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as "we have a 500% increase in traffic twice a year, so we need to burst in the cloud to avoid doubling our network and server capacity". Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption.

    Performance or Scalability?
    I stopped counting the number of times I heard "the cloud doesn't scale; our database runs faster on a laptop". While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users but time out with 1,000 users, in which case the system wouldn't scale. However, another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system would scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.

    Uptime?
    Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven't done so. If you are expecting 99.99% uptime annually, you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, or services may experience failover conditions weekly (or more often) based on current overall conditions of the cloud service provider, most of which are outside of your control. As a result, for PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and gracefully retry in order to be successful in the cloud and provide service continuity to end users (see the sketch at the end of this post).

    About Herve Roggero
    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a PluralSight author, and runs the Azure Florida Association.
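    A footnote on the "assume failure and gracefully retry" point above: the usual pattern is retry with exponential backoff. A minimal C++ sketch, purely illustrative - the operation, attempt count and delays are assumptions, not anything the post prescribes:

    #include <chrono>
    #include <functional>
    #include <thread>

    // Retry a transient-failure-prone operation with exponential backoff.
    // Returns true on success, false once all attempts are exhausted.
    bool retryWithBackoff(const std::function<bool()>& op,
                          int maxAttempts = 5,
                          std::chrono::milliseconds initialDelay = std::chrono::milliseconds(100)) {
        auto delay = initialDelay;
        for (int attempt = 1; attempt <= maxAttempts; ++attempt) {
            if (op()) return true;          // operation succeeded
            if (attempt == maxAttempts) break;
            std::this_thread::sleep_for(delay);
            delay *= 2;                     // back off: 100ms, 200ms, 400ms, ...
        }
        return false;                       // caller decides how to degrade
    }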

    Read the article

  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it is run. I am trying to track down the source of the error, but have not had any luck. Running the same commands manually does not produce an error.

    The error changes depending on what was done in the commit that is being pushed to the central repository. For instance, if 'git rm <file>' was committed and pushed to the central repo, the error message will be "remote: hooks/post-receive: line 16: Removed: command not found", and if 'git add <file>' was committed and pushed to the central repo, the error message will be "remote: hooks/post-receive: line 16: Merge: command not found". In either case the 'git pull' run on the staging server works correctly despite the error message.

    Here is the post-receive script:

    #!/bin/bash
    #
    # This script is triggered by a push to the local git repository. It will
    # ssh into a remote server and perform a git pull.
    #
    # The SSH_USER must be able to log into the remote server with a
    # passphrase-less SSH key *AND* be able to do a git pull without a passphrase.
    #
    # The command to actually perform the pull request on the remote server comes
    # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered
    # by the ssh login.

    SSH_USER="remoteuser"
    REMOTE_HOST="staging.server.com"

    `ssh $SSH_USER@$REMOTE_HOST` # This is line 16

    echo "Done!"

    The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is:

    command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key)

    This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo:

    ben@tamarack:~/thejibe/testing/web$ git rm ./testing
    rm 'testing'
    ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file"
    [master bb96e13] Remove testing file
     1 files changed, 0 insertions(+), 5 deletions(-)
     delete mode 100644 testing
    ben@tamarack:~/thejibe/testing/web$ git push
    Counting objects: 3, done.
    Delta compression using up to 2 threads.
    Compressing objects: 100% (2/2), done.
    Writing objects: 100% (2/2), 221 bytes, done.
    Total 2 (delta 1), reused 0 (delta 0)
    remote: From [email protected]:testing
    remote:    aa72ad9..bb96e13  master -> origin/master
    remote: hooks/post-receive: line 16: Removed: command not found   # The error msg
    remote: Done!
    To [email protected]:testing
       aa72ad9..bb96e13  master -> master
    ben@tamarack:~/thejibe/testing/web$

    As you can see, the post-receive script gets to the echo "Done!" line, and when I look on the staging server the git pull has been successfully run, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.

    Read the article

  • How to correctly partition usb flash drive and which filesystem to choose considering wear leveling?

    - by random1
    Two problems. First one: how to partition the flash drive? I shouldn't need to do this, but I'm no longer sure if my partition is properly aligned, since I was forced to delete and create a new partition table after gparted complained when I tried to format the drive from FAT to ext4. The naive answer would be to say "just use the defaults and everything is going to be alright". However, if you read the following links you'll know things are not that simple: https://lwn.net/Articles/428584/ and http://linux-howto-guide.blogspot.com/2009/10/increase-usb-flash-drive-write-speed.html

    Then there is also the issue of cylinders, heads and sectors. Currently I get this:

    $ sfdisk -l -uM /dev/sdd

    Disk /dev/sdd: 30147 cylinders, 64 heads, 32 sectors/track
    Warning: The partition table looks like it was made
      for C/H/S=*/255/63 (instead of 30147/64/32).
    For this listing I'll assume that geometry.
    Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

       Device Boot Start   End    MiB    #blocks   Id  System
    /dev/sdd1         1  30146  30146  30869504   83  Linux

    $ fdisk -l /dev/sdd

    Disk /dev/sdd: 31.6 GB, 31611420672 bytes
    255 heads, 63 sectors/track, 3843 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00010c28

    So from my current understanding I should align partitions at 4 MiB (currently it's at 1 MiB). But I still don't know how to set the heads and sectors properly for my device.

    Second problem: file system. From the benchmarks I saw, ext4 provides the best performance; however, there is the issue of wear leveling. How can I know that my Transcend JetFlash 700's microcontroller provides wear leveling? Or will I just be killing my drive faster? I've seen a lot of posts on the web saying "don't worry, the newer drives already take care of that". But I've never seen a single piece of evidence backing that up, and at some point people start mixing SSD and USB flash drive technology. The safe option would be to go for ext2; however, a series of tests that I performed showed horrible performance!!!

    These values are from a real scenario and not some synthetic test:

    42 files: 3,429,415,284 bytes copied to flash drive
    original fat32: 15.1 MiB/s
    ext4 after new partition table: 10.2 MiB/s
    ext2 after new partition table: 1.9 MiB/s

    Please read the links that I posted above before answering. I would also be interested in answers backed up with some references, because a lot is said and re-said but then it lacks facts. Thank you for the help.

    Read the article

  • Bash-Scripting - Munin Plugin doesn't work

    - by FTV Admin
    I have written a Munin plugin to count the HTTP status codes of lighttpd. The script:

    #!/bin/bash

    ######################################
    # Munin-Script: Lighttpd-Statuscodes #
    ######################################

    ## Config
    # path to lighttpd access.log
    LIGHTTPD_ACCESS_LOG_PATH="/var/log/lighttpd/access.log"
    # rows to parse in logfile (higher values increase the time to run the plugin;
    # if the value is too low you may get bad counting)
    LOG_ROWS="200000"

    # munin
    case $1 in
        autoconf)
            # check config
            AVAILABLE=`ls $LIGHTTPD_ACCESS_LOG_PATH`
            if [ "$AVAILABLE" = "$LIGHTTPD_ACCESS_LOG_PATH" ]; then
                echo "yes"
            else
                echo "No: "$AVAILABLE
                echo "Please check your config!"
            fi
            exit 0;;
        config)
            # graph config
            cat <<'EOM'
    graph_title Lighhtpd Statuscodes
    graph_vlabel http-statuscodes / min
    graph_category lighttpd
    1xx.label 1xx
    2xx.label 2xx
    3xx.label 3xx
    4xx.label 4xx
    5xx.label 5xx
    EOM
            exit 0;;
    esac

    ## calculate
    AVAILABLE=`ls $LIGHTTPD_ACCESS_LOG_PATH`
    if [ "$AVAILABLE" = "$LIGHTTPD_ACCESS_LOG_PATH" ]; then
        TIME_NOW=`date`
        CODE_1xx="0"
        CODE_2xx="0"
        CODE_3xx="0"
        CODE_4xx="0"
        CODE_5xx="0"
        for i in 1 2 3 4 5; do
            TIME5=`date +%d/%b/%Y:%k:%M --date "$TIME_NOW -"$i"min"`
            CODE_1xx=$(( $CODE_1xx + `tail -n $LOG_ROWS $LIGHTTPD_ACCESS_LOG_PATH | grep "$TIME5" | grep 'HTTP/1.1" 1' | grep -c " "` ))
            CODE_2xx=$(( $CODE_2xx + `tail -n $LOG_ROWS $LIGHTTPD_ACCESS_LOG_PATH | grep "$TIME5" | grep 'HTTP/1.1" 2' | grep -c " "` ))
            CODE_3xx=$(( $CODE_3xx + `tail -n $LOG_ROWS $LIGHTTPD_ACCESS_LOG_PATH | grep "$TIME5" | grep 'HTTP/1.1" 3' | grep -c " "` ))
            CODE_4xx=$(( $CODE_4xx + `tail -n $LOG_ROWS $LIGHTTPD_ACCESS_LOG_PATH | grep "$TIME5" | grep 'HTTP/1.1" 4' | grep -c " "` ))
            CODE_5xx=$(( $CODE_5xx + `tail -n $LOG_ROWS $LIGHTTPD_ACCESS_LOG_PATH | grep "$TIME5" | grep 'HTTP/1.1" 5' | grep -c " "` ))
        done
        CODE_1xx=$(( $CODE_1xx / 5 ))
        CODE_2xx=$(( $CODE_2xx / 5 ))
        CODE_3xx=$(( $CODE_3xx / 5 ))
        CODE_4xx=$(( $CODE_4xx / 5 ))
        CODE_5xx=$(( $CODE_5xx / 5 ))
        echo "1xx.value "$CODE_1xx
        echo "2xx.value "$CODE_2xx
        echo "3xx.value "$CODE_3xx
        echo "4xx.value "$CODE_4xx
        echo "5xx.value "$CODE_5xx
    else
        echo "1xx.value U"
        echo "2xx.value U"
        echo "3xx.value U"
        echo "4xx.value U"
        echo "5xx.value U"
    fi

    If I run the script on the local machine it runs perfectly:

    root@server1 /etc/munin/plugins # ll
    lrwxrwxrwx 1 root root 45 2011-12-19 15:23 lighttpd_statuscodes -> /usr/share/munin/plugins/lighttpd_statuscodes*
    root@server1 /etc/munin/plugins # ./lighttpd_statuscodes autoconf
    yes
    root@server1 /etc/munin/plugins # ./lighttpd_statuscodes config
    graph_title Lighhtpd Statuscodes
    graph_vlabel http-statuscodes / min
    graph_category lighttpd
    1xx.label 1xx
    2xx.label 2xx
    3xx.label 3xx
    4xx.label 4xx
    5xx.label 5xx
    root@server1 /etc/munin/plugins # ./lighttpd_statuscodes
    1xx.value 0
    2xx.value 5834
    3xx.value 1892
    4xx.value 0
    5xx.value 0

    But Munin shows no graph: http://s1.directupload.net/images/111219/3psgq3vb.jpg

    I have tested the plugin from the Munin server via telnet:

    root@munin-server /etc/munin/plugins/ # telnet 123.123.123.123 4949
    Trying 123.123.123.123...
    Connected to 123.123.123.123.
    Escape character is '^]'.
    # munin node at server1.cluster1
    fetch lighttpd_statuscodes
    1xx.value U
    2xx.value U
    3xx.value U
    4xx.value U
    5xx.value U
    .
    Connection closed by foreign host.

    As you can see in the script, "value U" is only printed when the script can't read lighttpd's access.log. But why can't the script read it when run via Munin, when everything is OK when run on the local machine? Is there a bug in my bash script? I have no idea. Thanks for helping!

    Read the article

  • Understanding packet flows over RVI

    - by choco-loo
    I'm trying to get a full grasp of firewall filters and how to apply them on a Juniper EX4200 switch - to be able to block ports, police traffic and shape traffic. The network architecture is as follows:

        internet >-< vlan4000 >-< vlan43

    - vlan4000 is a public "routed" block (where all the IPs are routed to and where the internet gateway is)
    - vlan43 is a VLAN with public IPs and devices (servers) attached

    There are static routes and RVIs on the EX4200 to send all traffic via vlan4000's gateway to reach the internet. I've set up filters on both input and output of the respective RVIs and VLANs - with simple counters - to measure traffic flow between a server inside vlan43 and a server on the internet. Using a combination of iperf for UDP and TCP tests and fping for ICMP tests, I observed the following:

        icmp                    vlan43>internet    internet>vlan43
        unit4000-counter-in            0                  0
        unit4000-counter-out           0                  0
        unit43-counter-in            100                100
        unit43-counter-out             0                  0
        vlan4000-counter-in            6                  4
        vlan4000-counter-out         107                104
        vlan43-counter-in            101                100
        vlan43-counter-out           100                100

        tcp                     vlan43>internet    internet>vlan43
        unit4000-counter-in            0                  0
        unit4000-counter-out           0                  0
        unit43-counter-in          73535              38480
        unit43-counter-out             0                  0
        vlan4000-counter-in            7                  8
        vlan4000-counter-out       73543              38489
        vlan43-counter-in          73535              38481
        vlan43-counter-out         38938              75880

        udp                     vlan43>internet    internet>vlan43
        unit4000-counter-in            0                  0
        unit4000-counter-out           0                  0
        unit43-counter-in          81410                  1
        unit43-counter-out             0                  0
        vlan4000-counter-in           18                  7
        vlan4000-counter-out       81429                  8
        vlan43-counter-in          81411                  1
        vlan43-counter-out             1              85472

    My key goals are to set up a few filters and policers, as there will be many more VLANs that all need protecting from each other and from the internet (a starting-point config sketch follows the question below):

    - globally limit/police all outbound traffic to the internet
    - block inbound ports to vlan43 (e.g. 22)
    - limit outbound traffic from vlan43 (to the internet)
    - limit outbound traffic from vlan43 (to other VLANs)
    - limit outbound traffic from vlan4000 (to the internet from all VLANs)
    - route traffic from VLANs via specific routing instances (FBF)

    The question: what I want to understand is why there is never any activity on the unit4000 or vlan4000 inbound and outbound counters. Is this because there isn't a device on this VLAN, and the traffic is only traversing it? And with regard to the TCP test - why are there twice as many packets on unit43-counter-in, vlan4000-counter-out and vlan43-counter-in? Is this counting both the inbound and outbound traffic?
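
    As a starting point for the first two of those goals, a minimal Junos sketch might look like the following; the filter and policer names, bandwidth figures, and the vlan.43 unit number are illustrative assumptions, not a tested EX4200 configuration. One filter polices traffic arriving from the servers (applied as input on the vlan.43 RVI), and one drops SSH toward the servers (applied as output):

        firewall {
            policer police-10m {
                if-exceeding {
                    bandwidth-limit 10m;
                    burst-size-limit 625k;
                }
                then discard;
            }
            family inet {
                filter to-vlan43 {
                    /* traffic routed toward the servers in vlan43 */
                    term block-ssh {
                        from {
                            protocol tcp;
                            destination-port 22;
                        }
                        then discard;
                    }
                    term accept-rest {
                        then accept;
                    }
                }
                filter from-vlan43 {
                    /* traffic arriving from the servers; a policer with no
                       terminating action implicitly accepts conforming traffic */
                    term police-all {
                        then policer police-10m;
                    }
                }
            }
        }
        interfaces {
            vlan {
                unit 43 {
                    family inet {
                        filter {
                            input from-vlan43;
                            output to-vlan43;
                        }
                    }
                }
            }
        }

    Remember that on an RVI, "input" matches packets received from hosts in that VLAN and "output" matches packets routed toward them; keeping that direction convention in mind helps when reading the counter tables above.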

    Read the article

  • Opening Adobe Reader results in an infinite explorer.exe process creation loop

    - by irrational John
    First, apologies if the answer to this is only a Google away. I tried, honest I did, but I wasn't able to find anything about this problem posted elsewhere.

    I'm using Adobe Reader v9.3.2 in Windows 7 Home Premium 64-bit. If you want more system details, just request them. What happens is that when I attempt to open a PDF by clicking "Open" on it, then (1) Adobe Reader never opens and (2) the explorer.exe program is (apparently) recursively launched. I base this on opening the Task Manager and seeing a long list of explorer.exe processes under the "Processes" tab; usually there is only one. When I recreate this problem, the list of explorer.exe processes is at least a page or two long (too many to bother counting).

    I "correct" this problem by logging off and then logging back on, which kills all the explorer.exe tasks. Unfortunately, I don't know another way to terminate them all.

    Now here's the curious part. This only happens when I attempt to "Open" a PDF file. If instead I use the context menu (right mouse click on the PDF) and select "Open with" and "Adobe Reader 9.3", then Adobe Reader opens the file with no problem. It seems that there is something wrong with the setting for the default open action for PDF files. However, I have been unable to fix this by changing the Windows settings. Here is what I have tried:

    - When I open Control Panel > All Control Panel Items > Default Programs > Set Associations, I do not find an entry for the file type .pdf. There are only entries for .pdfxml and .pdx.
    - When I use "Open with" on a PDF file and select "Choose default program", the check box for "Always use the selected program to open this kind of file" is disabled (greyed out).
    - I have uninstalled and reinstalled Adobe Reader, but the problem persists.

    While obviously no lives are at stake here, this problem is annoying the frickin' heck out of me. If I forget and recreate this bug, then I have to stop everything I'm doing to clear it. Any suggestions on how I might go about fixing this?
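
    Given that the Set Associations panel shows no .pdf entry at all, one thing worth checking from an elevated Command Prompt is the low-level assoc/ftype mapping. The ProgID and install path below are typical for Reader 9 on 64-bit Windows 7, but treat them as assumptions and verify them on the affected machine:

        rem show which ProgID .pdf maps to, and what command that ProgID runs
        assoc .pdf
        ftype AcroExch.Document

        rem re-point the association at Adobe Reader if either answer looks wrong
        assoc .pdf=AcroExch.Document
        ftype AcroExch.Document="C:\Program Files (x86)\Adobe\Reader 9.0\Reader\AcroRd32.exe" "%1"

    If .pdf maps to nothing, or to a ProgID whose open command is missing or circular, that could explain both the absent Set Associations entry and the shell spawning processes instead of launching Reader.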

    Read the article

  • How to set up a Git repo on a server that needs a working dir (non-bare)

    - by OrangeTux
    I want to configure a Git repo for a website. Multiple users will have a clone of the repo on their local machine, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir/non-bare repository. The idea is that the working dir of the repository will be the root folder for the website, so at the end of each day all changes are visible directly. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client tries to push some files:

        git push origin master
        [email protected]'s password:
        Counting objects: 3, done.
        Writing objects: 100% (3/3), 227 bytes, done.
        Total 3 (delta 0), reused 0 (delta 0)
        remote: error: refusing to update checked out branch: refs/heads/master
        remote: error: By default, updating the current branch in a non-bare repository
        remote: error: is denied, because it will make the index and work tree inconsistent
        remote: error: with what you pushed, and will require 'git reset --hard' to match
        remote: error: the work tree to HEAD.
        remote: error:
        remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
        remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
        remote: error: its current branch; however, this is not recommended unless you
        remote: error: arranged to update its work tree to match what you pushed in some
        remote: error: other way.
        remote: error:
        remote: error: To squelch this message and still keep the default behaviour, set
        remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
        To ssh://[email protected]/home/orangetux/www/
         ! [remote rejected] master -> master (branch is currently checked out)
        error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/'

    So I'm wondering: is this the right way to set up a Git repo for a website? If so, how do I do it? If not, what is a better way to set up a Git repo for the development of a website?

    EDIT: "you can't push to a non-bare repository" - OK, clear. But what is the way to solve my problem? Create a bare repository on the server and keep a clone of it on the same server in the htdocs folder? That looks a bit clumsy to me: to see the result of a commit I'd have to clone the repository each time.
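
    For what it's worth, the usual pattern for this exact situation is a bare repository plus a post-receive hook that checks the pushed branch out into the web root, so nothing has to be cloned twice. A minimal sketch, reusing the paths from the question (the site.git location is an assumption):

        # on the server, next to the web root
        git init --bare /home/orangetux/site.git

        # /home/orangetux/site.git/hooks/post-receive  (make this file executable)
        #!/bin/sh
        # export the freshly pushed master branch into the web root
        GIT_WORK_TREE=/home/orangetux/www git checkout -f master

        # on each client, point origin at the bare repo and push as usual
        git remote set-url origin ssh://[email protected]/home/orangetux/site.git
        git push origin master

    With this layout the push always succeeds (the bare repo has no work tree to get out of sync), and the hook updates the live site immediately, which gives the "all changes visible at the end of the day" behaviour without re-cloning anything.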

    Read the article

  • C# - Truncate HTML safely for an article summary

    - by WickedW
    Hi all, does anyone have a C# variation of this? It's so I can take some HTML and display it, without breaking, as a summary lead-in to an article: http://stackoverflow.com/questions/1193500/php-truncate-html-ignoring-tags - save me from reinventing the wheel! Thank you very much.

    ---------- edit ------------------

    Sorry, new here, and you're right, I should have phrased the question better; here's a bit more info. I wish to take an HTML string and truncate it to a set number of words (or even char length) so I can then show the start of it as a summary (which then leads to the main article). I wish to preserve the HTML so I can show the links etc. in the preview. The main issue I have to solve is the fact that we may well end up with unclosed HTML tags if we truncate in the middle of one or more tags! The idea I have for a solution is to:

    a) truncate the HTML to N words (words better, but char length OK) first (being sure not to stop in the middle of a tag or to truncate a required attribute)
    b) work through the opened HTML tags in this truncated string (maybe stick them on a stack as I go?)
    c) then work through the closing tags and ensure they match the ones on the stack as I pop them off
    d) if any open tags are left on the stack after this, write them to the end of the truncated string and the HTML should be good to go!

    -- edit 12112009

    Here is what I have bumbled together so far, as a unit-test file in VS2008; this 'may' help someone in future. My hack attempts based on Jan's code are at the top - a char version plus a word version (DISCLAIMER: this is dirty rough code on my part!). I assume 'well-formed' HTML in all cases (but not necessarily a full document with a root node, as per the XML version). Abel's XML version is at the bottom, but I've not yet got round to getting the tests to fully run on it (plus I need to understand the code); I will update when I get a chance to refine it. (I was having trouble posting the code - is there no upload facility on Stack?) Thanks for all the comments :)

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;
        using System.Xml;
        using System.Xml.XPath;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        namespace PINET40TestProject {
            [TestClass]
            public class UtilityUnitTest {
                public static string TruncateHTMLSafeishChar(string text, int charCount) {
                    bool inTag = false;
                    int cntr = 0;
                    int cntrContent = 0;
                    // loop through the html, counting only viewable content
                    foreach (Char c in text) {
                        if (cntrContent == charCount) break;
                        cntr++;
                        if (c == '<') { inTag = true; continue; }
                        if (c == '>') { inTag = false; continue; }
                        if (!inTag) cntrContent++;
                    }
                    string substr = text.Substring(0, cntr);
                    // search for non-closed tags
                    MatchCollection openedTags = new Regex("<[^/](.|\n)*?>").Matches(substr);
                    MatchCollection closedTags = new Regex("<[/](.|\n)*?>").Matches(substr);
                    // create stacks (to be honest, this seemed like a good idea, then I got
                    // lost along the way, so the logic is probably hanging by a thread!!)
                    Stack<string> opentagsStack = new Stack<string>();
                    Stack<string> closedtagsStack = new Stack<string>();
                    foreach (Match tag in openedTags) {
                        string openedtag = tag.Value.Substring(1, tag.Value.Length - 2);
                        // strip any attributes; sure we can use regex for this!
                        if (openedtag.IndexOf(" ") >= 0) {
                            openedtag = openedtag.Substring(0, openedtag.IndexOf(" "));
                        }
                        // ignore brs as self-closed
                        if (openedtag.Trim() != "br") {
                            opentagsStack.Push(openedtag);
                        }
                    }
                    foreach (Match tag in closedTags) {
                        string closedtag = tag.Value.Substring(2, tag.Value.Length - 3);
                        closedtagsStack.Push(closedtag);
                    }
                    if (closedtagsStack.Count < opentagsStack.Count) {
                        while (opentagsStack.Count > 0) {
                            string tagstr = opentagsStack.Pop();
                            if (closedtagsStack.Count == 0 || tagstr != closedtagsStack.Peek()) {
                                substr += "</" + tagstr + ">";
                            } else {
                                closedtagsStack.Pop();
                            }
                        }
                    }
                    return substr;
                }

                public static string TruncateHTMLSafeishWord(string text, int wordCount) {
                    bool inTag = false;
                    int cntr = 0;
                    int cntrWords = 0;
                    Char lastc = ' ';
                    // loop through the html, counting only viewable content
                    foreach (Char c in text) {
                        if (cntrWords == wordCount) break;
                        cntr++;
                        if (c == '<') { inTag = true; continue; }
                        if (c == '>') { inTag = false; continue; }
                        if (!inTag) {
                            // do not count double spaces; a space not in a tag ends a word
                            if (c == 32 && lastc != 32) cntrWords++;
                            lastc = c; // remember the previous viewable char (missing in my first cut)
                        }
                    }
                    string substr = text.Substring(0, cntr) + " ...";
                    // then exactly the same non-closed-tag matching as the char version
                    MatchCollection openedTags = new Regex("<[^/](.|\n)*?>").Matches(substr);
                    MatchCollection closedTags = new Regex("<[/](.|\n)*?>").Matches(substr);
                    Stack<string> opentagsStack = new Stack<string>();
                    Stack<string> closedtagsStack = new Stack<string>();
                    foreach (Match tag in openedTags) {
                        string openedtag = tag.Value.Substring(1, tag.Value.Length - 2);
                        if (openedtag.IndexOf(" ") >= 0) {
                            openedtag = openedtag.Substring(0, openedtag.IndexOf(" "));
                        }
                        if (openedtag.Trim() != "br") {
                            opentagsStack.Push(openedtag);
                        }
                    }
                    foreach (Match tag in closedTags) {
                        string closedtag = tag.Value.Substring(2, tag.Value.Length - 3);
                        closedtagsStack.Push(closedtag);
                    }
                    if (closedtagsStack.Count < opentagsStack.Count) {
                        while (opentagsStack.Count > 0) {
                            string tagstr = opentagsStack.Pop();
                            if (closedtagsStack.Count == 0 || tagstr != closedtagsStack.Peek()) {
                                substr += "</" + tagstr + ">";
                            } else {
                                closedtagsStack.Pop();
                            }
                        }
                    }
                    return substr;
                }

                public static string TruncateHTMLSafeishCharXML(string text, int charCount) {
                    // your data probably comes from somewhere, or as params to a method
                    XmlDocument xml = new XmlDocument();
                    xml.LoadXml(text);
                    // create a navigator; this is our primary tool
                    XPathNavigator navigator = xml.CreateNavigator();
                    XPathNavigator breakPoint = null;
                    // find the text node we need:
                    while (navigator.MoveToFollowing(XPathNodeType.Text)) {
                        string lastText = navigator.Value.Substring(0, Math.Min(charCount, navigator.Value.Length));
                        charCount -= navigator.Value.Length;
                        if (charCount <= 0) {
                            // truncate the last text; here goes your "search word boundary" code
                            navigator.SetValue(lastText);
                            breakPoint = navigator.Clone();
                            break;
                        }
                    }
                    // first remove text nodes, because Microsoft unfortunately merges them without asking
                    while (navigator.MoveToFollowing(XPathNodeType.Text)) {
                        if (navigator.ComparePosition(breakPoint) == XmlNodeOrder.After) {
                            navigator.DeleteSelf(); // moves to parent
                        }
                    }
                    // then remove the rest
                    navigator.MoveTo(breakPoint);
                    while (navigator.MoveToFollowing(XPathNodeType.Element)) {
                        if (navigator.ComparePosition(breakPoint) == XmlNodeOrder.After) {
                            navigator.DeleteSelf(); // moves to parent
                        }
                    }
                    // then remove *all* empty nodes to clean up (not necessary):
                    // TODO: add empty elements like <br />, <img /> as exclusions
                    navigator.MoveToRoot();
                    while (navigator.MoveToFollowing(XPathNodeType.Element)) {
                        while (!navigator.HasChildren && (navigator.Value ?? "").Trim() == "") {
                            navigator.DeleteSelf(); // moves to parent
                        }
                    }
                    navigator.MoveToRoot();
                    return navigator.InnerXml;
                }

                [TestMethod]
                public void TestTruncateHTMLSafeish() {
                    // case where we just make it to the start of an href (so effectively an empty link);
                    // 'simple' nested non-attributed tags
                    Assert.AreEqual(@"<h1>1234</h1><b><i>56789</i>012</b>",
                        TruncateHTMLSafeishChar(@"<h1>1234</h1><b><i>56789</i>012345</b>", 12));
                    // in the middle of an <a>!
                    Assert.AreEqual(@"<h1>1234</h1><a href=""testurl""><b>567</b></a>",
                        TruncateHTMLSafeishChar(@"<h1>1234</h1><a href=""testurl""><b>5678</b></a><i><strong>some italic nested in string</strong></i>", 7));
                    // more
                    Assert.AreEqual(@"<div><b><i><strong>1</strong></i></b></div>",
                        TruncateHTMLSafeishChar(@"<div><b><i><strong>12</strong></i></b></div>", 1));
                    // br
                    Assert.AreEqual(@"<h1>1 3 5</h1><br />6",
                        TruncateHTMLSafeishChar(@"<h1>1 3 5</h1><br />678<br />", 6));
                }

                [TestMethod]
                public void TestTruncateHTMLSafeishWord() {
                    // zero case
                    Assert.AreEqual(@" ...", TruncateHTMLSafeishWord(@"", 5));
                    // 'simple' nested non-attributed tags
                    Assert.AreEqual(@"<h1>one two <br /></h1><b><i>three ...</i></b>",
                        TruncateHTMLSafeishWord(@"<h1>one two <br /></h1><b><i>three </i>four</b>", 3),
                        "we have added ' ...' to end of summary");
                    // in the middle of an <a>!
                    Assert.AreEqual(@"<h1>one two three </h1><a href=""testurl""><b class=""mrclass"">four ...</b></a>",
                        TruncateHTMLSafeishWord(@"<h1>one two three </h1><a href=""testurl""><b class=""mrclass"">four five </b></a><i><strong>some italic nested in string</strong></i>", 4));
                    // start of h1
                    Assert.AreEqual(@"<h1>one two three ...</h1>",
                        TruncateHTMLSafeishWord(@"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i>", 3));
                    // more than the words available
                    Assert.AreEqual(@"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i> ...",
                        TruncateHTMLSafeishWord(@"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i>", 99));
                }

                [TestMethod]
                public void TestTruncateHTMLSafeishWordXML() {
                    // zero case
                    Assert.AreEqual(@" ...", TruncateHTMLSafeishWord(@"", 5));
                    // 'simple' nested non-attributed tags
                    string output = TruncateHTMLSafeishCharXML(@"<body><h1>one two </h1><b><i>three </i>four</b></body>", 13);
                    Assert.AreEqual(@"<body>\r\n <h1>one two </h1>\r\n <b>\r\n <i>three</i>\r\n </b>\r\n</body>",
                        output, "XML version, no ' ...' yet, and adds '\r\n + spaces?' to format the document");
                    // in the middle of an <a>!
                    Assert.AreEqual(@"<h1>one two three </h1><a href=""testurl""><b class=""mrclass"">four ...</b></a>",
                        TruncateHTMLSafeishCharXML(@"<body><h1>one two three </h1><a href=""testurl""><b class=""mrclass"">four five </b></a><i><strong>some italic nested in string</strong></i></body>", 4));
                    // start of h1
                    Assert.AreEqual(@"<h1>one two three ...</h1>",
                        TruncateHTMLSafeishCharXML(@"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i>", 3));
                    // more than the words available
                    Assert.AreEqual(@"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i> ...",
                        TruncateHTMLSafeishCharXML(@"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i>", 99));
                }
            }
        }
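
    For contrast, a parser-based approach avoids the tag-balancing bookkeeping entirely, because it only ever removes whole nodes or shortens text nodes, so the markup can never become unbalanced. A minimal sketch using the HtmlAgilityPack library (an assumption - it is a third-party NuGet package, not part of the code above):

        using HtmlAgilityPack;

        public static class HtmlTruncator {
            public static string TruncateHtml(string html, int charCount) {
                var doc = new HtmlDocument();
                doc.LoadHtml(html);
                int remaining = charCount;
                Truncate(doc.DocumentNode, ref remaining);
                return doc.DocumentNode.OuterHtml;
            }

            private static void Truncate(HtmlNode node, ref int remaining) {
                for (int i = 0; i < node.ChildNodes.Count; i++) {
                    if (remaining <= 0) {
                        // drop every sibling after the cut-off point
                        while (node.ChildNodes.Count > i)
                            node.RemoveChild(node.ChildNodes[i]);
                        return;
                    }
                    HtmlNode child = node.ChildNodes[i];
                    if (child.NodeType == HtmlNodeType.Text) {
                        var text = (HtmlTextNode)child;
                        if (text.Text.Length > remaining)
                            text.Text = text.Text.Substring(0, remaining); // shorten, never unbalance
                        remaining -= text.Text.Length;
                    } else {
                        Truncate(child, ref remaining); // recurse into elements
                    }
                }
            }
        }

    The same walk could count words instead of chars, and a " ..." suffix could be appended to the last surviving text node.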

    Read the article

  • Best WordPress Video Themes for a Video Blog

    - by Matt
    WordPress has made blogging so easy and fun, and there are plenty of video blog themes to pick from. However, quality is rare, so we at JustSkins have gathered a list of high-quality, tested and tried video themes. We set out to find WordPress themes for vloggers, knowing all along that there are very few - yet some of them are brilliant premium WordPress themes. More on that later; let's look at some themes you can install on your vlog right now.

    On Demand 2.0 - A fully featured premium video WordPress theme from Press75. It includes a theme options panel for personal customization and content management, post thumbnails, a drop-down navigation menu, custom widgets and lots more. Demo | Price: $75 | DOWNLOAD

    VideoZoom - An outstanding premium WordPress video theme from WPZoom featuring standard video integration; additionally, it lets you play video from all the popular video websites. VideoZoom also includes a featured video slider on the homepage, multiple post layout options, a theme options panel, WordPress 3.0 menus, backgrounds, etc. Demo | Price - Single: $69, Developer: $149 | DOWNLOAD

    Vidley - Press75's easy-to-use premium WordPress video theme. This theme is full of great features and can be a perfect choice if you intend to build a portal someday; it is scalable enough to take the shape of a news portal or portfolio. The theme is widget-ready, can place featured content and a featured category section on the homepage, and its drop-down menus are nifty! Demo | Price: $75 | DOWNLOAD

    Live - A premium video WordPress theme designed for streaming video and live event broadcasting. You can embed live video broadcasts from third-party services like Ustream, and it features a prominent timer counting down to the next broadcast, rotating bumper images, Facebook and Twitter integration for viewer interaction, a theme admin options panel and more, which makes this theme one of its kind. Demo | Price: $99, Support License: $149 | DOWNLOAD

    Groovy Video - WooThemes is a pioneer in making beautiful WordPress themes, and this one is built with the video blogger in mind. Groovy is a very colourful premium video blog WordPress theme. Creating video posts is quick and easy with just a copy/paste of the video's embed code. The theme enables automatic video resizing, offers plenty of widgets, and also allows you to pick the colour of your choice. Price: Single Use $70, Developer $150 | DOWNLOAD

    Video Flick - Another exciting video blogging theme by Press75. Video Flick is compatible with any video service that provides embed code, or, if you want to host your own videos, with FLV (Flash Video) and QuickTime formats. This theme allows you to keep a standard blog and/or have video posts, and you can pick a light or dark color option. Demo | Price: $75 | DOWNLOAD

    Woo Tube - An excellent premium video WordPress theme from WooThemes. The WooTube theme is a very easy video blog platform, as it comes with automatic video resizing, a completely widgetised sidebar and 7 different colour schemes to choose from. The theme can be used as a normal blog or a gallery. A very wise choice! Price: Single Use $70, Developer $150 | DOWNLOAD

    eVid Theme - One of the nicest WordPress themes designed specifically for video bloggers. It is simple to integrate videos from video hosts such as YouTube, Vimeo, Veoh, MetaCafe, etc. Demo | Price: $19 | DOWNLOAD

    Tubular - A premium video WordPress theme from StudioPress which can also be used as a simple website or blog. The theme is also available in a light color version. Demo | Price: $59.95 | DOWNLOAD

    Video Elements 2.0 - Another beautiful premium video WordPress theme from Press75. Video Elements 2.0 has been redesigned to include the features you need to easily run and maintain a video blog on WordPress. Demo | Price: $75 | DOWNLOAD

    TV Elements 3.0 - This theme includes a featured video carousel on the homepage which can display any number of videos, a featured category section which displays up to 12 channels, automatic thumbnails and lots more. Demo | Price: $75 | DOWNLOAD

    Wave - A beautiful premium video WordPress theme: flexible and super cool looking, with a very earthy feel to the design. The theme has a featured video area and latest listings on the homepage. All in all a simple design with no fancy features. Demo | Price: $35 | DOWNLOAD

    Read the article

  • A Quick Primer on SharePoint Customization

    - by PeterBrunone
    This one goes out to all the people who have been asked to change the way a SharePoint site looks. Management wants to know how long it will take, and you can whip that out by tomorrow, right? If you don't have time to prepare a treatise on what's involved, or if you just want to lend some extra weight to your case by quoting a blogger who was an MVP for seven years, then dive right in; this post is for you.

    There are three main components of SharePoint visual customization:

    1) Theme - A theme encompasses all the standardized text formatting and coloring (borders, fonts, etc.), including the background images of various sections. All told, there could be around 50 images involved and a few hundred CSS (style) classes. Installing a theme once it's been created is no great feat. Given the number of pieces, of course, creating a new theme could take anywhere from a day to a week... once decisions have been made about the desired appearance.

    2) Master Page - A master page provides the framework for page layout. This includes all the top and side menus, where content shows up, et cetera. Master pages have been around for a long time in ASP.NET (Microsoft's web development platform), and they do require some .NET programming knowledge. Beyond that, in SharePoint, there are a few dozen controls which the system expects to find on a given page. They're not all used at once, but if they're not there when they're needed, chaos ensues. Estimating a custom master page is difficult, as it depends on the level of customization. I've been on projects where I was brought in simply to fix some problems and add a few finishing touches, and it took 2-3 weeks. Master page customization requires a large amount of testing time to make sure that the HTML, JavaScript, CSS, and control placement all work well together.

    3) Individual page layout - Each page (ideally) uses a master page for its template, but within the content areas defined by the master page, web parts can be added, removed, and configured from within the browser. The wireframe that Brent provided could most likely be completed simply by manipulating the content on the home page in this fashion, and we had allowed about a day of effort for the task. If needed, further functionality can be provided by an experienced ASP.NET developer; custom forms are a common example. This is of course a bit more in-depth than simple content manipulation and could take several days per page (or more; there's really no way to quantify this without a set of requirements).

    That's basically it. To recap: fonts and coloring are done with themes, which can take anywhere from a day to a week to create (not counting creative time); the required technical skills are HTML, CSS, and image manipulation. Templated layout is done with master pages, which generally require a developer familiar with both ASP.NET and SharePoint in particular; they can have far-reaching consequences depending on the complexity of the changes, and could add weeks or months to a project. Page layout can be as simple as content manipulation in the web browser, taking a few hours per page, or it can involve more detail, like custom forms, and can require programming expertise and significantly more development time.

    Read the article

  • Spolskism or Twitterism: A Doctor writes...

    - by Phil Factor
    "I never realized I had a problem. I just 'twittered' because it was a social thing to do. All my mates were doing it. It made me feel good to have 'followers'; it bolstered my self-esteem. Of course, you don't think of the long-term effects on your work and on the way you think. There's no denying that it impairs your judgment…" Yes, this story is typical. Hundreds of people are waking up to the long term effects of twittering, and seeking help. Dave, who wishes to remain anonymous, told our reporter… "I started using Twitter at work. Just a few minutes now and then, throughout the day. A lot of my colleagues were doing it and I thought 'Well, that's cool; it must be part of what I should be doing at work'. Soon, I was avidly reading every twitter that came my way, and counting the minutes between my own twitters. I tried to kid myself that it was all about professional development and getting other people to help you with work-related problems, but in truth I had become addicted to the buzz of the social network. The worse thing was that it made me seem busy even when I was really just frittering my time away. Inevitably, I started to get behind with my real work." Experts have identified the syndrome and given it a name: 'Twitterism', sometimes referred to as 'Spolskism', after the person who first drew attention to the pernicious damage to well-being that the practice caused, and who had the courage to take the pledge of rejecting it. According to one expert… "The occasional Twitter does little harm to the participant, and can be an adaptive way of dealing with stress. Unfortunately, it rarely stops there. The addictive qualities of the practice have put a strain on the caring professions who are faced with a flood of people making that first bold step to seeking help". Dave is one of those now seeking help for his addiction… "I had lost touch with reality. Even though I twittered my work colleagues constantly, I found I actually spoke to them less and less. Even when out socializing, I would frequently disengage from the conversation, in order to twitter. I stopped blogging. I stopped responding to emails; the only way to reach me was through the world of Twitter. Unfortunately, my denial about the harm that twittering was doing to me, my friends, and my work-colleagues was so strong that I truly couldn't see that I had a problem." Like other addictions, the help and support of others who are 'taking the cure' is important. There is a common bond between those who have 'been through hell and back' and are once more able to experience the joys of actually conversing and socializing, rather than the false comfort of solitary 'twittering'. Complete abstinence is essential to the cure. Most of those who risk even an occasional twitter face a headlong slide back into 'binge' twittering. Tom, another twitterer who has managed to kick the habit explains… "My twittering addiction now seems more like a bad dream. You get to work, and switch on the PC. You say to yourself, just open up the browser, just for a minute, just to see what people are saying on Twitter. The next thing you know, half the day has gone by. The worst thing is that when you're addicted, you get good at covering up the habit; I spent so much time looking at the screen and typing on the keyboard, people just assumed I was working hard.I know that I must never forget what it was like then, and what it's like now that I've kicked the habit. I now have more time for productive work and a real social life." 
Like many addictions, Spolskism has its most detrimental effects on family, friends and workmates, rather than the addict. So often nowadays, we hear the sad stories of Twitter-Widows; tales of long lonely evenings spent whilst their partners are engrossed in their twittering into their 'mobiles' or indulging in their solitary spolskistic habits in privacy, under cover of 'having to do work at home'. Workmates suffer too, when the addicts even take their laptops or mobiles into meetings in order to 'twitter' with their fellow obsessives, even stooping to complain to their followers how boring the meeting is. No; The best advice is to leave twittering to the birds. You know it makes sense.

    Read the article

  • Ranking - an Introduction

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    Ranking is quite common on the internet. Readers are asked to rank their latest reading by clicking on one of 5 (sometimes 10) stars. The number of stars is then converted to a number, and the average number of stars as selected by all the readers is proudly (or shamefully) displayed for future readers. SharePoint 2007 lacked this feature altogether. SharePoint 2010 allows users to rank items in a list or documents in a library (the two are actually the same, because a library is actually a list). But in SP2010 the computation of the average is done later, on a timer, rather than on the spot as it should be. I suspect that the reason for this shortcoming is that they did not involve a mathematician! Let me explain.

    Ranking is kept in a related list. When a user rates a document, an item is added to the rank list with the item id, the user name, and his number of stars. The fact that a user has already ranked an item prevents him from ranking it again. This prevents the creator of the item from asking his mother to rank it a 5 and do it 753 times, thus stuffing the ballot. Some systems allow a user to change his rating, which is done by updating the rank-list item. Now, when the timer kicks off, the list is scanned, and for each item the rank-list entries containing its id are summed up and divided by the number of votes, yielding the new average. This is obviously very time consuming and very server intensive.

    In the 18th century, an early actuary named James Dodson used what the great Augustus De Morgan (of De Morgan's law) later named commutation tables. The labor involved in computing a life insurance premium was staggering and also very error prone: clerks with pencil and paper would multiply and add mountains of numbers to do the task. The more steps, the greater the probability of error and the more expensive the process. Commutation tables created a "summary" of many steps and reduced the work 100-fold. Had Microsoft taken a lesson from the history of computation, they would have developed a much faster way of rating - one that can be done in real time, is 100 times faster, and is less CPU intensive.

    How do we do this? We use a form of commutation: we always keep the number of votes and the total of stars, and one simple division gives us the average. So we write an event receiver (see the sketch below). When a vote is added, we just add the stars to the total stars, add 1 to the number of votes, and recompute the average. When a vote is updated, we reduce the total by the old vote, increase it by the new vote, and leave the number of votes the same; again we do the division to get the new average. When a vote is deleted (highly unlikely, and maybe even prohibited), we reduce the total by that vote and reduce the number of votes by 1. Gone are the days of scanning lists, counting items, and tallying votes, and we have no need for a timer process to run it all.

    This is the first of a few treatises on ranking. Even though I discussed the math and the history thereof, here I am only going to solve the presentation issue. I wanted to create the CSS and JScript needed to display the stars, create the various effects like hovering and clicking (onmouseover, onmouseout, onclick, etc.), and create a general solution for any number of stars. When I had it all done, I created the ranking game so that I could test it. The game is interesting in and of itself, so here it is (or go to the games page and select "rank the stars"). BTW, when you play it, look at the source code and see how it was all done. Next: how the 5 stars are displayed in the New and Update forms. When the whole set of articles is done, you'll be able to create the complete solution. That's all folks!
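
    To make the running-total idea concrete, here is a minimal sketch of such an event receiver against the SharePoint 2010 server object model. The list and field names (Ratings, Articles, Stars, ItemId, TotalStars, VoteCount, Average) are illustrative assumptions, not the names used by SharePoint's built-in rating feature:

        using System;
        using Microsoft.SharePoint;

        namespace RankingDemo
        {
            // attached to the hypothetical "Ratings" list; each new vote updates
            // the running totals on the rated item, so the average costs one division
            public class RatingReceiver : SPItemEventReceiver
            {
                public override void ItemAdded(SPItemEventProperties properties)
                {
                    SPListItem vote = properties.ListItem;
                    int stars = Convert.ToInt32(vote["Stars"]);

                    // look up the rated item in its own list
                    SPList articles = properties.Web.Lists["Articles"];
                    SPListItem rated = articles.GetItemById(Convert.ToInt32(vote["ItemId"]));

                    int total = Convert.ToInt32(rated["TotalStars"]) + stars;
                    int count = Convert.ToInt32(rated["VoteCount"]) + 1;

                    rated["TotalStars"] = total;
                    rated["VoteCount"] = count;
                    rated["Average"] = (double)total / count;  // no list scan, no timer job
                    rated.SystemUpdate(false);                 // don't create a new version
                }
            }
        }

    ItemUpdated and ItemDeleting overrides would adjust the total by the vote's delta in the same constant-time way, mirroring the update and delete cases described above.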

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31  | Next Page >