Search Results

Search found 5554 results on 223 pages for 'cool cs'.


  • Sparse virtual machine disk image resizing weirdness?

    - by Matt H
    I have a partitioned virtual machine disk image created by VMware. What I want to do is resize it by 10GB. The file size shows as 64424509440 bytes, or 60GB. So I ran this:

        dd if=/dev/zero of=./win7.img seek=146800640 count=0

    It ran without errors and I can verify the new size is in fact 75161927680 bytes, or 70GB. This is where it gets a little odd. I started the guest domain in Xen, which is a Windows 7 Enterprise machine. What I was expecting to see in diskmgmt.msc was 2 partitions: a system partition of around 100MB at the start and a nearly 60GB partition (which is the C drive), followed by around 10GB of free space. What I actually saw was a 70GB partition!?! That confused me... so I decided to run Check Disk, which, when you set it on the C drive, asks you to reboot so it will run on boot. So I did that, and during the boot it ran the checks. It got all the way through stage 3 and didn't show any errors at all. I looked at the partitions in Disk Management again, and now the C drive has shrunk back to 60GB and there is no free space. What gives?

    OK, I thought I'd try mounting it under Dom0 and examining it with fdisk. This is what I get when it is mounted:

        sudo xl block-attach 0 tap:aio:/home/xen/vms/otoy_v1202-xen.img xvda w
        sudo fdisk -l /dev/xvda

        Disk /dev/xvda: 64.4 GB, 64424509440 bytes
        255 heads, 63 sectors/track, 7832 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x582dfc96

            Device Boot      Start         End      Blocks   Id  System
        /dev/xvda1   *           1          13      102400    7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/xvda2              13        7833    62810112    7  HPFS/NTFS

    Note the cylinder boundary comment. When I run sudo cfdisk /dev/xvda I get:

        FATAL ERROR: Bad primary partition 1: Partition ends in the final partial cylinder
        Press any key to exit cfdisk

    So I guess this is a bigger problem than first thought. How can I fix this?

    EDIT: Oops, the cylinder boundary thing is not a problem at all, since disks have used LBA etc. So that threw me for a moment... still, the problem exists. Now this output looks a little different:

        sudo sfdisk -uS -l /dev/xvda

        Disk /dev/xvda: 7832 cylinders, 255 heads, 63 sectors/track
        Units = sectors of 512 bytes, counting from 0

            Device Boot    Start        End   #sectors  Id  System
        /dev/xvda1   *      2048     206847     204800   7  HPFS/NTFS
        /dev/xvda2        206848  125827071  125620224   7  HPFS/NTFS
        /dev/xvda3             0          -          0   0  Empty
        /dev/xvda4             0          -          0   0  Empty

    BTW: I do have a backup of the image, so if you help me mess it up that's OK.

    EDIT:

        sudo parted /dev/xvda print free

        Model: Xen Virtual Block Device (xvd)
        Disk /dev/xvda: 64.4GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type     File system  Flags
                32.3kB  1049kB  1016kB           Free Space
         1      1049kB  106MB   105MB   primary  ntfs         boot
         2      106MB   64.4GB  64.3GB  primary  ntfs
                64.4GB  64.4GB  1049kB           Free Space

    Cool. Linux is showing the free space is 10GB, which is what I expect. The problem is Windows isn't seeing this?
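    As a side note on the arithmetic in that dd command: with dd's default block size of 512 bytes, seek=146800640 places the new end of the sparse file at 146800640 * 512 = 75161927680 bytes, i.e. 70GB. The short Python sketch below (mine, not from the question; the file name just mirrors the one above) computes the seek value for an arbitrary target size and shows the simpler truncate alternative:

        # Illustrative sketch: compute the dd seek value needed to grow a sparse
        # disk image to a target size, assuming dd's default 512-byte block size.
        import os

        BLOCK_SIZE = 512                 # dd's default when no bs/obs is given
        image_path = "./win7.img"        # file name taken from the question
        target_bytes = 70 * 1024**3      # 70GB = 75161927680 bytes

        seek_blocks = target_bytes // BLOCK_SIZE
        print(f"dd if=/dev/zero of={image_path} seek={seek_blocks} count=0")
        # prints seek=146800640, matching the command in the question

        # An equivalent way to grow the file sparsely without dd:
        # os.truncate(image_path, target_bytes)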

    Read the article

  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you may find some very useful settings for Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can grow to multiple GB in size in a matter of weeks. The traditional 'fix' was that you had to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or, you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset, and have it not collect data at all. Well, that fixed the problem, didn't it? Of course, you now had no data to go look at. Hmmmm....

    All of this is no longer a concern. Check out the new Settings tab under Analytics... Now, I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average those 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value. This allows me to effectively shrink my older datasets by a factor of 1/3600!!! Very cool. I can now allow my datasets to go on forever, and really never have to worry about them filling up my OS drives.

    That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O ops per second" that is about 6.32M on disk (you need not worry so much about the "In Core" value, as that is in RAM and fluctuates all the time; once you stop viewing a particular metric, you will see that shrink over time, just relax). When one clicks on the trash can icon to the right of the dataset, it used to delete the whole thing, and you would have to re-create it from scratch to get the data collecting again. Now, however, it gives you this prompt: As you can see, this allows you to once again shrink the dataset by averaging the second data into minutes or hours. Here is my new dataset size after I do this. So it shrank from 6.32MB down to 2.87MB, but I can still see my metrics going back to the time I began the dataset.

    Now, you do understand that once you do this, as you look back in time to the minute or hour data metrics, you are going to see much larger time values, right? You will need to decide what size of granularity you can live with, and for how long. Check this out. Here is my Disk: Percent utilized from 5-21-2012 2:42 pm to 4:22 pm: After I went through the delete process to change everything older than 1 week to "Minutes", the same date and time looks like this: Just understand what this will do and how you want to use it. Right now, I'm thinking of keeping the last 6 weeks of data as "Seconds", then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look. Steve
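    To make the space savings concrete, here is a small illustrative Python sketch (my own toy model, not how the ZFSSA stores data internally) showing what averaging per-second samples into minute and hour values does to the number of stored points:

        # Toy model of the retention settings described above: average consecutive
        # per-second samples into per-minute values, then per-minute into per-hour.
        from statistics import mean

        def downsample(samples, bucket_size):
            """Average consecutive groups of `bucket_size` samples into one value."""
            return [mean(samples[i:i + bucket_size])
                    for i in range(0, len(samples), bucket_size)]

        seconds = [1.0] * (14 * 24 * 3600)   # 2 weeks of per-second samples: 1,209,600 points
        minutes = downsample(seconds, 60)    # 20,160 points
        hours = downsample(minutes, 60)      # 336 points

        print(len(seconds), len(minutes), len(hours))
        # 1209600 20160 336 -- 60x smaller per step, 3600x from seconds to hours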

    Read the article

  • What distinguishes a computer scientist/software engineer from regular people who learn a programming language and APIs?

    - by Amumu
    In university, we learn and reinvent the wheel a lot in order to truly learn the programming concepts. For example, we may learn assembly language to understand what happens inside the box and how the system operates when we execute our code. This helps us understand higher-level concepts more deeply. For example, memory management as in C is just an abstraction over manually managed memory contents and addresses. The problem is that when we go to work, productivity is usually what is required. I could write my own containers, or a string class, or date/time handling (using POSIX with C system calls) to do the job, but it would take much longer than using the existing STL or Boost libraries, which abstract all of those things and are very easy to use. This leads to an issue: a regular person who learns only one programming language and its related APIs doesn't need to go through all the low-level, under-the-hood stuff. These people may eventually compete with mainstream graduates in computer science or software engineering and call themselves programmers. At first, I didn't think it was valid to call them programmers. I used to think a real programmer needs to understand the computer deeply (though not at the electronics level). But then I changed my mind. After all, they get the job done and satisfy all the test criteria (logic, performance, security...), and in a business environment, who cares whether you're an expert who understands how the computer works? You may even fall behind the "amateurs" if you spend too much time learning how things work inside. So it is perfectly valid for those people to call themselves programmers. This confuses me. So, after all, should programming be considered a universal skill? Do the programming language and concepts matter, or do the problems we solve matter? For example, consider C/C++ vs. Java and other high-level languages: one of the main arguments for C/C++ is performance, as well as access to low-level facilities. Another main reason (in my opinion) is that coding in C/C++ seems complex, so people feel good about it (not trolling anyone, just my observation and my experience as well; try googling "C hacker syndrome"). Java, on the other hand, was made to simplify programming tasks and help developers concentrate on solving their problems. Following Java's rationale, if programming languages keep evolving, one day everyone may be able to map their logic directly onto natural language. Everyone will be able to program. On that day, maybe the real programmers will be mathematicians, who can express the most complex logic (including business logic and academic logic) without worrying about installing and configuring compilers and IDEs. What is our job as computer scientists/software engineers? To solve computer-specific problems, or to solve problems in general? For example, take a look at this exam: http://cm.baylor.edu/ICPCWiki/attach/Problem%20Resources/2010WorldFinalProblemSet.pdf . It requires only basic knowledge of the programming language, but focuses on problem solving with the language. In sum, what distinguishes a computer scientist/software engineer from regular people who learn a programming language and APIs? A mathematician can be considered a programmer if he is good enough at using a programming language to implement his formulas. Can we programmers do the reverse? Probably not, for most of us, since we specialize in computers, not math. An electronics engineer who learns how to use C to program his devices can also be considered a programmer.

    If programming languages keep being simplified, may the software engineers who implement business logic and create software one day become obsolete? (Not computer scientists, though, since many CS topics are scientific, and the science won't change, but the technology will.)

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for October 2012. OAM/OVD JVM Tuning | @FusionSecExpert Vinay from the Oracle Fusion Middleware Architecture Group (known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager. SOA Galore: New Books for Technical Eyes Only Shake up up your technical skills with this trio of new technical books from community members covering SOA and BPM. Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Meade's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic – when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post. Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." -- Irving Wladawsky-Berger Eventually, 90% of tech budgets will be outside IT departments | ZDNet Another interesting post from ZDNet blogger Joe McKendrick about changing roles in IT. ADF Mobile - Login Functionality | Andrejus Baranovskis "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis. Podcast: Are You Future Proof? - Part 2 In Part 2, practicing architects and Oracle ACE Directors Ron Batra (AT&T), Basheer Khan (Innowave Technology), and Ronald van Luttikhuizen (Vennster) discuss re-tooling one’s skill set to reflect changes in enterprise IT, including the knowledge to steer stakeholders around the hype to what's truly valuable. ADF Mobile Custom Javascript — iFrame Injection | John Brunswick The ADF Mobile Framework provides a range of out of the box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer. Oracle Solaris 11.1 update focuses on database integration, cloud | Mark Fontecchio TechTarget editor Mark Fontecchio reports on the recent Oracle Solaris 11.1 release, with comments from IDC's Al Gillen. Architects Matter: Making sense of the people who make sense of enterprise IT Why do architects matter? Oracle Enterprise Architect Eric Stephens suggests that you ask yourself this question the next time you take the elevator to the Oracle offices on the 45th floor of the Willis Tower in Chicago, Illinois (or any other skyscraper, for that matter). If you had to take the stairs to get to those offices, who would you blame? "You get the picture," he says. "Architecture is essential for any necessarily complex structure, be it a building or an enterprise." 
(Read the article) Thought for the Day "I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas." — Frederick P. Brooks Source: SoftwareQuotes.com

    Read the article

  • Scenarios for Throwing Exceptions

    - by Joe Mayo
    I recently came across a situation where someone had an opinion that differed from mine about when an exception should be thrown. This particular case was an issue opened on LINQ to Twitter for an Exception on EndSession. The premise of the issue was that the poster didn't feel an exception should be raised, regardless of authentication status. At first, this sounded like a valid point. However, I went back to review my code and decided not to make any changes. Here's my rationale:

    1. The exception doesn't occur if the user is authenticated when EndAccountSession is called.
    2. The exception does occur if the user is not authenticated when EndAccountSession is called.
    3. The exception represents the fact that EndAccountSession is not able to fulfill its intended purpose - to end the session. If a session never existed, then it would not be possible to perform the requested action. Therefore, an exception is appropriate.

    To help illustrate how to handle this situation, I've modified the following code in Program.cs in the LinqToTwitterDemo project:

        static void EndSession(ITwitterAuthorizer auth)
        {
            using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))
            {
                try
                {
                    //Log
                    twitterCtx.Log = Console.Out;

                    var status = twitterCtx.EndAccountSession();

                    Console.WriteLine("Request: {0}, Error: {1}"
                        , status.Request
                        , status.Error);
                }
                catch (TwitterQueryException tqe)
                {
                    var webEx = tqe.InnerException as WebException;
                    if (webEx != null)
                    {
                        var webResp = webEx.Response as HttpWebResponse;
                        if (webResp != null && webResp.StatusCode == HttpStatusCode.Unauthorized)
                            Console.WriteLine("Twitter didn't recognize you as having been logged in. Therefore, your request to end session is illogical.\n");
                    }

                    var status = tqe.Response;
                    Console.WriteLine("Request: {0}, Error: {1}"
                        , status.Request
                        , status.Error);
                }
            }
        }

    As expected, LINQ to Twitter wraps the exception in a TwitterQueryException, with the original exception as the InnerException. The TwitterQueryException serves a very useful purpose through its Response property. Notice in the example above that the response has Request and Error properties. These properties correspond to the information that Twitter returns as part of its response payload. This is often useful while debugging to help you understand why Twitter was unable to perform the requested action. Other times, it's cryptic, but that's another story. At least you have some way of knowing in your code how to anticipate and handle these situations, along with having extra information to debug with.

    To sum things up, there are two points to make: when and why an exception should be raised, and when to wrap and re-throw an exception in a custom exception type. I felt it was necessary to allow the exception to be raised because the called method was unable to perform the task it was designed for. I also felt that it is inappropriate for a general library to do anything with exceptions, because that could potentially hide a problem from the caller. A related point is that it should be the exclusive decision of the application that uses the library what to do with an exception. Another aspect of this situation is that I wrapped the exception in a custom exception and re-threw. This is a tough call because I don't want to hide any stack trace information.
However, the need to make the exception more meaningful by including vital information returned from Twitter swayed me in the direction to design an interface that was as helpful as possible to library consumers.  As shown in the code above, you can dig into the exception and pull out a lot of good information, such as the fact that the underlying HTTP response was a 401 Unauthorized.  In all, trade-offs are seldom perfect for all cases, but combining the fact that the method was unable to perform its intended function, this is a library, and the extra information can be more helpful, it seemed to be the better design. @JoeMayo
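    The same wrap-and-re-throw trade-off exists in most languages. As a quick illustration (my own sketch, not LINQ to Twitter code; the names QueryException and end_account_session are hypothetical), here is the idea in Python, where chaining with "raise ... from ..." keeps the original stack trace visible while adding the richer context:

        # Sketch of the wrap-and-re-throw pattern discussed above. All names here
        # are hypothetical; this is not the LINQ to Twitter implementation.
        class QueryException(Exception):
            """Carries the service's error payload alongside the low-level failure."""
            def __init__(self, message, request=None, error=None):
                super().__init__(message)
                self.request = request   # what was asked of the service
                self.error = error       # what the service reported went wrong

        def end_account_session(http_call):
            try:
                return http_call()
            except IOError as low_level:
                # Re-throw with context; "from" preserves the original traceback.
                raise QueryException("Could not end session",
                                     request="POST account/end_session",
                                     error=str(low_level)) from low_level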

    Read the article

  • Social HCM: Is Your Team Listening?

    - by Mike Stiles
    Does integrating Social HCM into your enterprise make sense? Consider Sam and Christina. Sam is a new hire at a big company. Three weeks into the job, a question has come up on how to properly file an expense report to get reimbursed. It was covered in the onboarding session, but shockingly enough, Sam didn't memorize or write down every word of the session. The answer is probably in a handout, in a stack of handouts 2 inches thick. It also might be on the employee web site…somewhere. Christina is a new hire at a different big company. She has the same question. She logs into her company's social network, goes to the "new hires" group, asks her question and gets an answer in seconds. Christina says, "Cool!" Sam says, "Grrrr." It's safe to say the qualified talent your company wants is accustomed to using social platforms to communicate and get quick answers. As such, Christina is comfortable at her new company, whereas Sam is wondering what he's gotten himself into. Companies that cling to talent communication and management systems that don't speak to talent's needs or expectations put themselves at risk. Right from the recruiting stage, prospects can determine if a company has embraced the communications tools of the 21st century. If they don't see it, alarm bells go off. With great talent more in demand than ever, enterprises should reconsider making "this is the way we do it, you adapt to us" their mantra. Other blogs have clearly outlined that apart from meeting top recruits' expectations, Social HCM benefits the organization itself in terms of efficiency, talent performance & measurement.

    Recruiting: Jobvite shows 64% of companies hired using social. 89% of job seekers are using social in their search. Social can give employers access to relevant communities of prospects and advance the brand. Nucleus Research found general hiring software can provide over 1,000% ROI by reducing churn and improving screening. Social talent acquisition should perform at least as well.

    Learning & Development: Employees, learning from the company or from peers, can be kept on top of the latest needed skillsets and engage in self-paced training so as to advance within the company.

    Performance Management: Just as gamers are egged on by levels and achievements, talent can reach for workplace kudos, be they shout-outs from peers & managers or formally established milestones. Plus, employee reviews become consistent and fair as managers have access to the cumulative feedback social offers.

    Workflow and Collaboration: With workforces dispersing in terms of physical location, social provides a platform that helps eliminate the drawbacks that distance would have brought just 10 years ago. Finding and connecting with just the right colleague to get the most relevant info at any given time has never been more possible…or expected.

    While yes, marketing has taken the social lead inside the enterprise, HCM (with the word "human" right there in its name) is the obvious locale for the next big integration of social in business. The technology is there. At Oracle, Fusion HCM apps are deeply embedded with Social HCM…just one example of systems taking social across the enterprise. Christina's company is communicating with her in ways she's used to. Sam's company may as well be trying to talk to him using signal flags. @mikestiles Photo via stock.xchng

    Read the article

  • Development Quirk From ASP.NET Dynamic Compilation

    - by jkauffman
    The Problem

    I got a compilation error in my ASP.NET MVC3 project that tested my sanity today. (As always, names are changed to protect the innocent.)

        The type or namespace name 'FishViewModel' does not exist in the namespace 'Company.Product.Application.Models' (are you missing an assembly reference?)

    Sure looks easy! There must be something in the project referring to a FishViewModel.

    The Confusing Part

    The first thing I noticed was that the error was occurring in a folder clearly not in my project, and in files that I definitely had not created:

        %SystemRoot%\Microsoft.NET\Framework\(versionNumber)\Temporary ASP.NET Files\App_Web_mezpfjae.1.cs

    I also ascertained these facts, each of which made me more confused than the last:

        - Rebuild and Clean had no effect.
        - No controllers in the project ever returned a ViewResult using FishViewModel.
        - No views in the project defined that they use FishViewModel.
        - Searching across all files included in the project for "FishViewModel" provided no results.
        - The build server did not report a problem.

    The Solution

    The problem stemmed from a file that was not included in the project but still present on the file system. (By the way, if you don't know this trick already, there is a toolbar button in the Solution Explorer window, "Show All Files", which allows you to see all files in the file system.) In my situation, I was working on the mission-critical Fish view before abandoning the feature. Instead of deleting the file, I excluded it from the project. However, this was a bad move. It caused the build failure, and in order to fix the error, this file had to be deleted. By the way, this file was not in source control, so the build server did not have it. This explains why my build server did not report a problem for me.

    The Explanation

    So, what's going on? This file isn't even a part of the project, so why is it failing the build? This is a behavior of ASP.NET Dynamic Compilation. This is the same process that occurs when deploying a webpage: ASP.NET compiles the web application's code. When this occurs on a production server, it has to do so without the .csproj file (which isn't usually deployed, if you've taken your time to do a deployment cleanly). This process has merely the file system available to identify what to compile. So, back in the world of developing the webpage in Visual Studio on my developer box, I run into the situation because the same process is occurring there. This is true even though I have more files on my machine than will actually get deployed. I can't help but think that this error could be attributed back to the real culprit file (Fish.cshtml, rather than the temporary files) with some work, but at least the error had enough information in it to narrow it down.

    The Conclusion

    I had previously been accustomed to the idea that for C# projects, the .csproj file always "defines" the build behavior. This investigation has taught me that I'll need to shift my thinking a bit to remember that the file system has the final say when it comes to web applications, even on the developer's machine!
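    A quick way to catch this class of problem is to diff what is on disk against what the project file actually includes. Here is a small illustrative Python sketch (my own, with a hypothetical project path) that lists source files present in the folder tree but not referenced by the .csproj:

        # Illustrative helper: list source files that exist on disk but are not
        # referenced by the .csproj -- exactly the kind of orphan that ASP.NET
        # dynamic compilation will still pick up. The project path is hypothetical.
        import pathlib
        import xml.etree.ElementTree as ET

        project_file = pathlib.Path("WebApp/WebApp.csproj")   # hypothetical path
        root_dir = project_file.parent

        included = {
            elem.attrib["Include"].replace("\\", "/")
            for elem in ET.parse(project_file).iter()
            if "Include" in elem.attrib
        }

        on_disk = {
            str(p.relative_to(root_dir)).replace("\\", "/")
            for pattern in ("*.cs", "*.cshtml")
            for p in root_dir.rglob(pattern)
        }

        for orphan in sorted(on_disk - included):
            print("On disk but not in the project:", orphan)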

    Read the article

  • Using Solaris zfs + iscsi targets with Oracle VM

    - by wim.coekaerts
    I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available, but I did have a Solaris box available that I use for Oracle VDI. I set up a few iSCSI targets on this Solaris server and exported them to my 2 Oracle VM servers. Here's how I did it:

    (1) On the Solaris side:

        # zpool list
        NAME    SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
        rpool   544G  129G   415G  23%  ONLINE  -

    I just have a simple zpool, called rpool, on this box. It has plenty of space available for my needs, so I will use rpool and create 5 volumes of 50GB each:

        zfs create -V 50G rpool/ovm1
        zfs create -V 50G rpool/ovm2
        zfs create -V 50G rpool/ovm3
        zfs create -V 50G rpool/ovm4
        zfs create -V 50G rpool/ovm5

    I want to use these volumes for iSCSI, so I have to enable them as shared iSCSI devices:

        zfs set shareiscsi=on rpool/ovm1
        zfs set shareiscsi=on rpool/ovm2
        zfs set shareiscsi=on rpool/ovm3
        zfs set shareiscsi=on rpool/ovm4
        zfs set shareiscsi=on rpool/ovm5

    The command iscsitadm list target should list these devices, so make sure they show up:

        # iscsitadm list target
        Target: rpool/ovm1
            iSCSI Name: iqn.1986-03.com.sun:02:896c766c-0943-4da5-d47e-9575b5a0be36
            Connections: 2
        Target: rpool/ovm2
            iSCSI Name: iqn.1986-03.com.sun:02:a3116b46-73e0-e8c2-e80c-9a4f71aff069
            Connections: 2
        Target: rpool/ovm3
            iSCSI Name: iqn.1986-03.com.sun:02:a838c400-2730-c0d6-f2c2-ee186a0261c1
            Connections: 2
        Target: rpool/ovm4
            iSCSI Name: iqn.1986-03.com.sun:02:2e046afb-d66d-4f3f-c5de-8115e0ddd931
            Connections: 2
        Target: rpool/ovm5
            iSCSI Name: iqn.1986-03.com.sun:02:66109fbe-81ac-ef05-f85e-ab8c1f34cb43
            Connections: 2

    At this point I want to make sure that I have some access control on these devices. To make it easier, I will create an alias for my 2 servers and use the alias for the ACL. Get the iqn from the 2 Oracle VM servers (wcoekaer-srv1, wcoekaer-srv2) by looking at the content of /etc/iscsi/initiatorname.iscsi on each server:

        InitiatorName=iqn.1986-03.com.sun:01:2a7526f0ffff

    On the Solaris side, create the aliases:

        iscsitadm create initiator -n iqn.1986-03.com.sun:01:2a7526f0ffff wcoekaer-srv1
        iscsitadm create initiator -n iqn.1986-03.com.sun:01:e31b08110f1 wcoekaer-srv5

    Add the ACL to the targets:

        iscsitadm modify target -l wcoekaer-srv1 rpool/ovm1
        iscsitadm modify target -l wcoekaer-srv1 rpool/ovm2
        iscsitadm modify target -l wcoekaer-srv1 rpool/ovm3
        iscsitadm modify target -l wcoekaer-srv1 rpool/ovm4
        iscsitadm modify target -l wcoekaer-srv1 rpool/ovm5
        iscsitadm modify target -l wcoekaer-srv5 rpool/ovm1
        iscsitadm modify target -l wcoekaer-srv5 rpool/ovm2
        iscsitadm modify target -l wcoekaer-srv5 rpool/ovm3
        iscsitadm modify target -l wcoekaer-srv5 rpool/ovm4
        iscsitadm modify target -l wcoekaer-srv5 rpool/ovm5

    (2) On the Oracle VM side, on each server just do 2 simple things:

        # iscsiadm -m discovery -t sendtargets -p ca-vdi1
        # service iscsi restart

    where ca-vdi1 is my Solaris server name. When I do cat /proc/partitions on my servers, I will see the devices show up:

        # cat /proc/partitions
        major minor  #blocks  name
           8     0  160836480 sda
           8     1     104391 sda1
           8     2    3148740 sda2
           8     3    1052257 sda3
         253     0    6377804 dm-0
         253     1    6377804 dm-1
         253     2    6377804 dm-2
           8    16   52428800 sdb
           8    32   52428800 sdc
           8    48   52428800 sdd
           8    80   52428800 sdf
           8    64   52428800 sde

    These 5 new devices sd[b..f] are shared storage for Oracle VM and can be used to pass through to the VMs as phy: devices, or you can put OCFS2 on them and use them as shared filesystem storage for dom0 repositories.
I am setting up an 11gR2 rac template (the cool stuff Saar did) so I am using my devices to create a 2 node RAC cluster with phy: devices.
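    Since the per-volume steps are identical, they are easy to script. Below is a small illustrative Python sketch (mine, not from the post) that simply replays the same zfs and iscsitadm commands shown above for each volume and initiator alias; treat it as a convenience wrapper, not a supported tool:

        # Sketch: automate the repetitive per-volume steps from the walkthrough
        # above. Assumes the same pool, volume names and initiator aliases, and
        # should be run as root on the Solaris box.
        import subprocess

        POOL = "rpool"
        VOLUMES = [f"{POOL}/ovm{i}" for i in range(1, 6)]
        ALIASES = ["wcoekaer-srv1", "wcoekaer-srv5"]

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        for vol in VOLUMES:
            run(["zfs", "create", "-V", "50G", vol])
            run(["zfs", "set", "shareiscsi=on", vol])
            for alias in ALIASES:
                run(["iscsitadm", "modify", "target", "-l", alias, vol])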

    Read the article

  • Let Me Show You Something: Instagram, Vine and Snapchat for Brands

    - by Mike Stiles
    While brands are well aware of how much more impactful images are than text-only posts on social channels, today you’re additionally being presented with platform after additional platform for hosting, doctoring and sharing photos and videos.  Can you play in every sandbox? And if you do, can you be brilliant on all of them? As has usually been the case, so far brands are sticking their toes into new platforms while not actually committing to them, or strategizing for them, or resourcing them. TrackMaven found of the 123 F500 companies using Instagram, only 22% of them are active on it. Likewise, research from Simply Measured found brands are indeed jumping in, with the number establishing a presence on Instagram up 55% over the past year. Users want them there…brand engagement has exploded 350%, and over 1/3 of the top brands have at least 10,000 followers. BUT…the top 10 brands are generating 33% of all posts, reaping 83% of all engagement. Things are also growing on Twitter’s Vine, the 6-second looping video app that hit 40 million users in August. The 7th Chamber says 5 tweets a second contain a Vine link. Other studies say branded Vines are 4 times more likely to be shared and seen than rank-and-file branded videos. Why? Users know that even if a video is pure junk, they won’t get robbed of too much of their valuable time. Vine is always upgrading so you can make sure your videos are worth viewers’ time. You can now edit videos, and save & work on several projects concurrently. What you can’t do is upload a finely crafted video into Vine, but you can do that with Instagram. The key to success? Same as with all other content; make it of value. Deliver a laugh or a lesson or both. How-to, behind the scenes peeks, contests, demos, all make sense in the short video format. Or follow Nash Grier’s example, which is to just have fun with and connect to your viewers, earning their trust that your next Vine will be as good as the last. Nash is only 15, has over 1.4 million followers, and adds about 100,000 a week. He broke out when one of his videos was re-Vined by some other kid with 300,000 followers. Make good stuff, get it in front of influencers, and your brand Vines could break out as well. Then there’s Snapchat, the “this photo will self destruct” platform. How can that be of use to brands besides offering coupons that really expire? The jury is out. But with an audience of over 100 million and a valuation of $800 million, media-with-a-time-limit is compelling. Now there’s “Snapchat Stories” that can last 24 hours and be shared to the public at large. You might be able to capitalize on how much more focus gets put on content when there’s a time limit on its availability. The underlying truth to all of this is, these are all tools. Very cool, feature rich tools, but tools. You can give the exact same art kit to 5 different people and you’d get back 5 very different works, ranging from worthless garbage to masterpiece. Brands are being called upon to be still and moving image artists. That’s what your customers are used to seeing, from a variety of sources. Commit to communicating with them accordingly. @mikestiles Photo: stock.xchng

    Read the article

  • Enabling Google Webmaster Tools With Your GWB Blog

    - by ToStringTheory
    I’ll be honest and save you some time, if you don’t have your own domain for your GWB blog, this won’t help, you may just want to move on…  I don’t want to waste your time……… Still here?  Good.  How great are Google’s website tools?  I don’t just mean Analytics which rocks, but also their Webmaster Tools (https://www.google.com/webmasters/tools/) which gives you a glimpse into the queries that provide you your website traffic, search engine behavior on your site, and important keywords, just to name a few.   Pictured Above: Cool statistics. Problem Thanks to svickn over at wtfnext.com (another GeeksWithBlogs blog), we already have the knowledge on how to setup Google Analytics (wtfnext.com - How to: Set up Google Analytics on your GeeksWithBlogs blog).  However, one of the questions raised in the post, and even semi-answered in the questions, was how to setup Google Webmaster Tools with your blog as well. At first glance, it seems like it can’t be done.  Google graciously gives you several different options on how to authorize that you own a site.  The authentication options are: 1. (Recommended) – Upload an HTML file to your server 2. Add a meta tag to your site’s home page 3. Use your Google Analytics account 4. Add a DNS record to your domain’s configuration Since you don’t have access to the base path, you can’t do #1.  Same goes for #2 since you can’t edit the master/index page.  As for #3, they REQUIRE the Analytics code to be in the <head> section of your page, so even though we can use the workaround of hosting it in the news section, it won’t allow it since it isn’t in the correct place. Solution Last I checked, I didn’t see the DNS record option for Webmaster Tools.  Maybe this was recently added, or maybe I don’t remember it since I was always able to use some other method to authorize it.  In this case though, this is the option that we need.  My registrar wasn’t in their list, but they provide detailed enough instructions for the ‘Other’ option: Simply create a TXT record with your domain hoster (mine is DynDns), fill in the tag information, and then click verify.  My entry was able to be resolved immediately, but since you are working with DNS, it may take longer.  If after 24 hours you still aren’t able to verify, you can use a site such as mxtoolbox.com, and in the searchbox type “txt: {domain-name-here}”, to see if your TXT record was entered successfully. It is pretty simple to setup the TXT entry in DynDns, but if you have questions/comments, feel free to post them. Conclusion With this simple workaround (not really a workaround, but feature since they offer it..), you are now able to see loads of information regarding your standings in the world of the Google Search Engine.  No critical issues?  Did I do something wrong?! As an aside, you can do the same thing with the Bing Webmaster Tools by adding a CNAME record to bing.verify.com…  Instructions can be found on the ‘Add Site’ popup when adding your site. If you don’t have your own domain, but continued, to read to this point – thank you!
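    If you want to confirm the TXT record yourself before hitting the Verify button, you can also query DNS locally. Here is a tiny illustrative Python sketch (my own; the domain is a placeholder) that shells out to nslookup and looks for Google's verification token, which begins with "google-site-verification=":

        # Check whether the Google site-verification TXT record is visible yet.
        # The domain below is a placeholder -- use your blog's custom domain.
        import subprocess

        domain = "example.com"
        result = subprocess.run(
            ["nslookup", "-type=TXT", domain],
            capture_output=True, text=True, check=False)

        print(result.stdout)
        if "google-site-verification" in result.stdout:
            print("TXT record is visible; try verifying again.")
        else:
            print("TXT record not visible yet; DNS may still be propagating.")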

    Read the article

  • Oracle Spatial and Graph – A year in review

    - by Mandy Ho
    What a great year for Oracle Spatial! Or shall I now say, Oracle Spatial and Graph, with our official name change this summer. We had so many exciting events and updates this year, and this blog will review and link to some of the events you may have missed. We kicked off 2012 with our webinar: Situational Analysis at OnStar with Oracle Spatial and Graph. We collaborated with OnStar's Emergency Strategy and Outreach expert, Jeff Joyner, on how OnStar uses Google Earth Visualization, NAVTEQ data and Oracle Database to deliver fast, accurate emergency services to its customers. In the next webinar in our 2012 series, Oracle partner TARGUSinfo showcased how to build a robust, scalable and secure customer relationship management system – with built-in mapping and spatial analysis, and deployed in the cloud. This is a very cool system using all Oracle technologies, including Oracle Database and Fusion Middleware MapViewer. Attendees learned how to gather market insight, score prospects and customers, and perform location analysis. The replay is available here. Our final webinar of the year focused on using Oracle Business Intelligence tools, along with Oracle Spatial and Graph, to perform location-aware predictive analysis. Watch the webcast here: In June, we joined up with the Location Intelligence conference in Washington, DC, and had a very successful 2012 Oracle Spatial User Conference. Customers and partners from the US, as well as from EMEA and Asia, flew in to share experiences and ideas, and get technical updates from Oracle experts. Users were excited to hear about spatial-Exadata performance, and advances in MapViewer and BI. Peter Doolan of Oracle Public Sector kicked off the event with a great keynote, and US Census, NOAA, and Ordnance Survey Great Britain were just a few of the presenters. Presentation archive here. We recognized some of the most exceptional partners and customers for their contributions to advancing mainstream solutions using geospatial technologies. Planning for 2013's conference has already started. Please contribute your papers for consideration here: http://www.locationintelligence.net/ We also launched a new Oracle PartnerNetwork Spatial Specialization – to enable partners to get validated in the marketplace for their expertise in taking solutions to market. Individuals can also get individual certifications. Learn more here. Oracle OpenWorld was not to disappoint, with news regarding our next Oracle Spatial and Graph release, as well as the announcement of our new Oracle Spatial and Graph SIG board! Join the SIG today. One more exciting event as we look to 2013: spatial and location technologies have a dedicated track at the January BIWA SIG Summit, on January 9-10 in Redwood Shores, CA.
View the agenda and register here: www.biwasummit.org. We thank you for all your support during the year of 2012 and look towards an even more exciting 2013! Wishing you and your family a prosperous New Year and Happy Holidays!

    Read the article

  • Nokia Lumia 920 Windows Phone 8 Announcement

    - by Tim Murphy
    Today Nokia and Microsoft had an event to officially introduce the Lumia 920. Below is a rundown of some of the things I found interesting. As a person who likes photography, there was a lot to drool over. The main feature that caught my attention was PureView with its optical stabilization. This alone should improve the majority of your pictures. Add to that the SmartShoot object remover, which uses multiple images to remove unwanted people or objects that move through your picture, and you never have to accept reality again. For the most part, the lenses concept introduced in Windows Phone 8 just makes the camera easier to use and more useful. Of course that is Microsoft's selling point. One lens that caught my attention was the Bing lens. I have to say it is about time that we can take pictures and use them to search for answers using Bing.

    There were a couple of features shown that involved augmented reality. One was similar to the yapf application that is already in the market, which overlays restaurants and other destinations over live camera views. The other was using the navigation directions with a live view. Then you get down to some of the physical features of the Lumia 920. The one that got the most stage time is that it has a great 2000mAh battery which can be charged wirelessly. They also pointed out the improved glare reduction of the 4.5 in. curved glass screen. This hardware improvement is enhanced further with software that detects glare conditions and adjusts the display attributes to enhance viewing ease. Adding to the wireless cool factor of the Lumia 920 are its general NFC capabilities. This was demonstrated with NFC docking stations as well as JBL speakers and headphones. There was one more hardware feature that I applauded: the super sensitive touch screen did away with one of my pet peeves with capacitive touch screens. You will never have to remove your gloves to operate your phone again. The mittens that they did the demo with looked more like boxing gloves.

    I was disappointed when Joe Belfiore said that they were only going to show a couple of new features of Windows Phone 8 and that we would hear more at future events. One of the things he did show is the ability to customize which buttons you prefer as defaults in IE10. For example, you could have the folders button where the refresh button normally is. He also showed that at long last you can natively take screenshots on your phone. Hopefully he will be back quickly to give us the rest of the features. The most disappointing part of the event was that we never found out when they would be released or how much they would cost. Let's hope this comes soon. Even with these couple of items still left on my wish list, I can't wait to get my hands on a Lumia 920. del.icio.us Tags: Windows Phone,Windows Phone 8,Nokia,Lumia,Lumia 920,Microsoft

    Read the article

  • Apps UX Launches Blueprints for Mobile User Experiences

    - by mvaughan
    By Misha Vaughan, Oracle Applications User Experience. At Oracle OpenWorld 2012 this year, the Oracle Applications User Experience (Apps UX) team announced the release of Mobile User Experience Functional Design Patterns. These patterns are designed to work directly with Oracle's Fusion Middleware, specifically ADF Mobile. The Oracle Application Development Framework for mobile users enables developers to build one application that can be deployed to multiple mobile device platforms. These same mobile design patterns provide the guidance for Oracle teams to develop Fusion Mobile Expenses. Application developers can use Oracle's mobile design patterns to design iPhone, Android, or browser-based smartphone applications. We are sharing our mobile design patterns and their baked-in, scientifically proven usability to enable Oracle customers and partners to build mobile applications quickly.

    A different way of thinking and designing. Lynn Rampoldi-Hnilo, Senior Manager of Mobile User Experiences for Apps UX, says mobile design has to be compelling. "It needs to be optimized for the device, and be visually rich and simple," she said. "What is really key is that you are designing for a user's most personal device, the device that they will have with them at all times of the day." Katy Massucco, director of the overall design patterns site, said: "You need to start with a simplified task flow. Everything should be a natural interaction. The action should be relevant and leveraging the device. It should be seamless." She suggests that developers identify the essential tasks that a user would want to do while mobile. "They need to understand the user and the context," she added. [A sample inline action design pattern]

    What people are saying. Reactions to the release of the design patterns have been positive. Debra Lilley, Oracle ACE Director and Fusion User Experience Advocate (FXA), has already demoed Fusion Mobile Expenses widely. Fellow Oracle ACE Director Ronald van Luttikhuizen called it a "cool demo by @debralilley of the new mobile expenses app." FXA member Floyd Teter says he is already cooking up some plans for using mobile design patterns. We hope to see those ideas at Collaborate or ODTUG in 2013. For another perspective on why user experience is such an important focus for mobile applications, check out this video by John King, Director, and Monty Latiolais, President, both from ODTUG, the Oracle Development Tools User Group. In a separate interview by e-mail, Latiolais wrote: "I enjoy the fact we can take something that, in the past, has been largely subjective, and now apply to it a scientifically proven look and feel. Trusting Oracle's UX Design Patterns, the presentation really can become one less thing to worry about. As someone with limited ADF experience, that is extremely beneficial." King, who was also interviewed by e-mail, wrote: "User Experience is about making the task at hand as easy and error-free as possible. Oracle's UX labs worked hard to make the User Experience in the new Fusion Applications as good as possible; ADF makes adding tested, consistent user experiences a declarative exercise by leveraging that work. As we move applications onto mobile platforms, user experience is the driving factor. Customers are 'spoiled' by a bevy of fantastic applications, and ours cannot disappoint them. Creating applications that enable users to quickly and effectively accomplish whatever task is at hand takes thought and practice.
Developers must become ’power users’ and then create applications that they and their users will love.”

    Read the article

  • It happens only at Devoxx ...

    - by arungupta
    After attending several Java conferences world wide, this was my very first time at Devoxx. Here are some items I found that happens only at Devoxx ... Pioneers of theater-style seating - This not only provides comfortable seating for each attendee but the screens are very clearly visible to everybody in the room. Intellectual level of attendees is very high - Read more explanation on the Java EE 6 lab blog. In short, a lab, 1/3 of the content delivered at Devoxx 2011, could not be completed at other developer days in more than 1/3 the time. Snack box for lunches - Even though this suits well to the healthy lifestyle of multiple-snacks-during-a-day style but leaves attendees hungry sooner in the day. The longer breaks before the next snack in the evening does not help at all. Fortunately, Azure cupcakes and Android ice creams turned out to be handy. I finally carried my own apple :-) Wrist band instead of lanyard - The good part about this is that once tied to your hand then you are less likely to forget in your room. But OTOH you are a pretty much a branded conference attendee all through out the city. It was cost effective as it costed 20c as opposed to 1 euro for the lanyard. Live streaming from theater #8 (the biggest room) on parleys.com All talks recorded and released on parleys.com over next year. This allows attendees to not to miss any session and watch replay at their own leisure. Stephan promised to start sharing the sessions by mid December this year. No need to pre-register for a session - This is true for most of the conferences but bigger rooms (+ overflow room for key sessions) provide sufficient space for all those who want to attend the session. And of course all sessions are available on parleys.com anyway! Community votes on whiteboard - Devoxx attendees gets a chance to vote on topics ranging from their favorite non-Java language, operating system, or love from Oracle. Captured pictures at the end of Day 2 are shown below. Movie on the last but one night - This year it was The Adventures of Tintin and was lots of fun. Fries with mayo - This is a typical Belgian thing. Guys going in ladies room to avoid the long queues ... wow! Tweet wall everywhere and I mean literally everywhere, in rooms, hallways, front desk, and other places. The tweet picking algorithm was not very clear as I never saw my tweet appear on the wall ;-) You can also watch it at wall.devoxx.com. Cozy speaker dinner with great food and wine List of parallel and upcoming sessions displayed on the screen - This makes the information more explicit with the attendees. REST API with multiple mobile clients - This API is also used by some other conferences as well. And there always is iphone.devoxx.com. Steering committee members were recognized multiple times. The committee members were clearly identifiable wearing red hoodies. The wireless SSID was intuitive "Devoxx" but hidden to avoid some crap from Microsoft Windows. All of 9000 addresses were used up most of the times with each attendee having multiple devices. A 1 GB fibre optic cable was stretched to Metropolis to support the required network bandwidth. Stephan is already planning to upgrade the equipment and have a better infrastructure next year. Free water, soda, juice in a cooler Kinect connected to TV screens so that attendees can use their hands to browse through the list of sesssions. #devoxxblog, #devoxxwomen, #devoxxfrance, #devoxxgreat, #devoxxsuggestions And Devoxx attendees are called Devoxxians ... how cool is that ? 
:-) What other things do you think happen only at Devoxx ? And now the pictures from the community whiteboard: And a more complete album (including bigger pics of community votes) is available below:

    Read the article

  • Any tips on getting hired as a software project manager straight out of college?

    - by MHarrison
    I graduated with a BS in compsci last September, and I've been trying (unsuccessfully) to find a job as a project manager ever since. I fell in love with software engineering (the formal practice behind it all, not just coding) in school, and I've dedicated the last 3-4 years of my life to learning everything I can about project management and gaining experience. I've managed several projects (with teams around 12 people) while in school, and I worked with my university's software engineering research lab. My résumé is also decent - I worked as a programmer before I went to school (I'm 27 now), and I did Google Summer of Code for 3 summers. I also have general "people management" experience via working as the photo editor for my university's newspaper for 2 years. My first problem with the job hunt is not getting enough interviews. I use careers.stackoverflow.com, which is awesome because I usually get contacted by non-HR people who know what they're talking about, but there's just not enough companies using it for me to get interviews on a regular basis. I've also tried sites like monster.com, and in a fit of desperation, I sent out no less than 60 applications to project management positions. I've gotten 3 automated rejection letters and that's it. At least careers.stackoverflow gets me a phone interview with 8/10 places I apply to. But the main (and extremely frustrating) problem is the matter of experience. I've successfully managed projects from start to finish (in my software engineering classes we had real customers come in with a real software need and we built it for them), but I've never had to deal with budgets and money (I know this is why HR people immediately turn me away). Most of these positions require 5+ years PM experience, and I've seen absurd things like 12+ years required. Interviews are also maddening. I've had so many places who absolutely loved me and I made it to the final round of interviews, and I left thinking things went extremely well and they'd consider me. However, when I check in with them a week later, they tell me "We really liked you and your qualifications are excellent, but we're hoping to find someone with more experience." The bad interviews I can understand - like the PM position that would have had me managing developers both locally and overseas - I had 3 interviews with them and the ENTIRE interview process was them asking me CS brainteasers and having me waste time on things like writing quicksort on paper or writing binary search trees. Even when I tried steering the discussion towards more relevant PM stuff, they gave me some vague generic replies and went back to the "We want to be Google/MS" crap. But when I have a GOOD interview, they say my "qualifications are excellent" but they want "more experience"...that makes me want to tear my hair out. What else can I DO? While I'm aiming for technically-involved PM positions (not just crunching budget numbers), I really don't want a straight development job because I like creating software from the very high-level vs. spending a lot of time debugging memory leaks. In fact, I can't even GET development positions that I'm qualified for because I make the mistake of telling them that my future career goals are as PM (which usually results in them saying something like "Well we already have PMs and this position isn't really set up to get you there." 
- which I take to mean "No, that's my job, stay away.") My apologies on the long rant, but I'm seriously hellbent on getting hired as a PM since it's both my career goal and the passion that keeps me awake at night. Any suggestions on what the heck else I can do? I'm currently writing a blog where I talk about my philosophies about software engineering, and I'm writing up specs for an iOS app which I will design, code, and show employers, but this takes an awful lot of time that I don't have.

    Read the article

  • Viewport / Camera Calculation in 2D Game

    - by Dave
    We have a 2D game with some sprites and tiles and some kind of camera/viewport that "moves" around the scene. So far so good, if it weren't for the special behaviour we want for our camera/viewport translation. Normally you could stick the camera to your player figure and center it, resulting in a very cheap, undergraduate translation equation like:

        vec_translation -/+= speed   // depending on which keys are pressed, WASD as default

    Buuuuuuuuuut, we want our player figure to be able to actually reach the bounds when the viewport/camera has reached a maximum translation. We came up with the following solution (only keys A and D are shown here; the rest is just an adaptation of the calculation, or maybe YOUR super-cool and elegant solution :) ):

        if(keys[A]) {
            playerX -= speed;
            if(playerScreenX <= width / 2 && tx < 0) {
                playerScreenX = width / 2;
                tx += speed;
            } else if(playerScreenX <= width / 2 && (tx) >= 0) {
                playerScreenX -= speed;
                tx = 0;
                if(playerScreenX < 0) playerScreenX = 0;
            } else if(playerScreenX >= width / 2 && (tx) < 0) {
                playerScreenX -= speed;
            }
        }

        if(keys[D]) {
            playerX += speed;
            if(playerScreenX >= width / 2 && (-tx + width) < sceneWidth) {
                playerScreenX = width / 2;
                tx -= speed;
            }
            if(playerScreenX >= width / 2 && (-tx + width) >= sceneWidth) {
                playerScreenX += speed;
                tx = -(sceneWidth - width);
                if(playerScreenX >= width - player.width) playerScreenX = width - player.width;
            }
            if(playerScreenX <= width / 2 && (-tx + width) < sceneWidth) {
                playerScreenX += speed;
            }
        }

    I think the code is rather self-explanatory: keys is a flag container for currently active keys, playerX/-Y is the position of the player relative to the world origin, tx/ty are the translation components vital to background / NPC / item offset calculation, playerScreenX/-Y is the actual position of the player figure (sprite) on screen, and width/height are the dimensions of the camera/viewport. This all looks quite nice and works well, but there is a very small and nasty calculation error which sums up to a visible effect. Let's consider the following piece of code:

        if(playerScreenX <= width / 2 && tx < 0) {
            playerScreenX = width / 2;
            tx += speed;
        }

    It can be translated into plain English as: if the x position of your player figure on screen is less than or equal to half of your display/camera/viewport size AND there is enough space left to the LEFT of your viewport/camera, then set the player's x position on screen to half the width and increase the translation (because we subtract the translation from something we want to move). Easy, right?! Doing this creates a small delta between playerX and playerScreenX. After so much talking, my question appears now here at the bottom of this document: how do I stick the calculation of my player-on-screen position to the actual position of the player AND have a viewport that is not always centered around the player figure?

    Here is a small test case in Processing: http://pastebin.com/bFaTauaa

    Thank you for reading this far, and thank you in advance for (probably) answering my question.
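    For comparison, one common way to avoid that accumulating delta is to derive the camera translation from the player's world position every frame and clamp it to the scene bounds, then compute the on-screen position from the translation. A minimal sketch of that idea (my own, in Python rather than Processing, with names chosen to mirror the question):

        # Sketch of a clamped-camera approach: the camera is recomputed from
        # playerX each frame instead of being integrated separately, so no
        # delta between playerX and playerScreenX can accumulate.
        def clamp(value, lo, hi):
            return max(lo, min(hi, value))

        def update_camera(playerX, width, sceneWidth):
            cameraX = playerX - width / 2                      # ideal: keep player centered
            cameraX = clamp(cameraX, 0, sceneWidth - width)    # ...but stay inside the scene
            tx = -cameraX                                      # translation applied to the scene
            playerScreenX = playerX + tx                       # on-screen position follows directly
            return tx, playerScreenX

        # Example: player near the left edge of a 2000 px scene, 800 px viewport.
        print(update_camera(playerX=100, width=800, sceneWidth=2000))
        # -> (0, 100): the camera stops at the bound and the player walks to the edge.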

    Read the article

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
I'm designing a networked, turn-based, 3D-6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to frame the question. What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which would give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the sides would let the ship spin around that plane easily, while bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving, and it will only move fast in that direction). I plan to balance the whole game around this feature.

The game would revolve around two phases: orders and combat. During the orders phase, you command the different ships. When all players finish the orders phase, the combat phase begins and the ship orders get resolved in real time for some time; then the action pauses and there's a new orders phase. The problem comes when I think about player input. To move a ship, you need to turn different propellers on or off if you want to steer, travel forward, brake, rotate in place... These propellers don't have to work at full power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring. The player doesn't want to fiddle with motors or anything; you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation, and then the AI would calculate the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to stay the same throughout the entire turn calculation (after the orders have been given), so it would be cool if the ships reacted as they move, adjusting the power of the propellers to their needs dynamically, but that may be too hard to implement and it's not really needed for the game to work. In both cases, how would that AI decide which propellers to activate for the best (or at least not the worst) trajectory? I thought about some approaches:

Learning AI: The ship types would learn about their movement by trial and error, adjusting their behaviour with more use, and finally becoming "smart". I don't want to get involved THAT far in AI coding, and I think it can be frustrating for the player (even if you can let it learn without playing).

Pre-calculated timestep movement: Upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.

Pre-calculated trajectories: The same as above, but instead of each delta-time the whole trajectory is precomputed and then fitted as closely as possible. Requires a fixed propeller configuration for the whole combat phase and is still memory intensive, ugly and bad.

Continuous brute forcing: The AI continuously checks ALL possible propeller configurations throughout the entire combat phase, precalculates a few time steps and decides which is the best one based on that. Con: what's good now might not be that good later, and it's too CPU intensive, ugly, and bad too.

Single brute forcing: Same as above, but only brute forcing at the beginning of the simulation, so it needs a constant propeller configuration throughout the entire combat phase.

Continuous angle check: This is not a full movement method, but maybe a way to discard "stupid" propeller configurations. Given the current propeller's normal vector and the final one, you can approximate the power needed for the propeller based on the angle. You must do this continuously throughout the whole combat phase. I figured this one out recently, so I haven't put much thought into it. A priori, it has the "what's good now might not be that good later" drawback too, and it doesn't care about the other propellers, which may act together to make a better propelling configuration.

I'm really stuck here. Any ideas?
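One more option worth sketching is a per-step control-allocation heuristic: every simulation step, score each propeller by how well its unit contribution (force, plus torque about the centre of mass) lines up with the force and torque the order asks for, and power it accordingly. Re-running it each step also sidesteps the "what's good now might not be good later" problem, at the cost of being greedy rather than optimal. A rough C# sketch, where the weights, the clamping and the use of System.Numerics are assumptions rather than a worked-out allocator:

using System;
using System.Numerics;

class Propeller
{
    public Vector3 Position;   // offset from the ship's centre of mass
    public Vector3 Direction;  // unit thrust direction in ship space
    public float MaxThrust;
}

static class ThrustAllocator
{
    // Returns a power level in [0, 1] for each propeller, given the net force
    // and torque the ship wants this step (both expressed in ship space).
    public static float[] Allocate(Propeller[] props,
                                   Vector3 wantedForce, Vector3 wantedTorque,
                                   float forceWeight = 1f, float torqueWeight = 1f)
    {
        var power = new float[props.Length];
        for (int i = 0; i < props.Length; i++)
        {
            Vector3 f = props[i].Direction * props[i].MaxThrust;   // force at full power
            Vector3 t = Vector3.Cross(props[i].Position, f);       // torque at full power

            // How much does this propeller's effect point in the desired direction?
            float score = forceWeight * Vector3.Dot(f, wantedForce)
                        + torqueWeight * Vector3.Dot(t, wantedTorque);

            // Propellers that would push or spin the wrong way stay off.
            power[i] = Math.Max(0f, Math.Min(1f, score));
        }
        return power;
    }
}

This is deliberately crude: because every propeller is scored independently it can over- or under-shoot when propellers fight each other, so in practice you would normalize the scores or follow it with a least-squares (pseudo-inverse) allocation step. But it shows the shape of an approach that needs no learning and no pre-computed tables.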

    Read the article

  • Converting .docx to pdf (or .doc to pdf, or .doc to odt, etc.) with libreoffice on a webserver on the fly using php

    - by robertphyatt
    Ok, so I needed to convert .docx files to .pdf files on the fly, but none of the free php libraries that were available let me do it on my server (a webservice was not good enough). Basically either I needed to pay for a library (and have it maybe suck) or just deal with the free ones that didn't convert the formatting well enough. Not good enough! I found that LibreOffice (OpenOffice's successor) allows command line conversion using the LibreOffice conversion engine (which DID preserve the formatting like I wanted and generally worked great). I loaded the latest version of Ubuntu (http://www.ubuntu.com/download/ubuntu/download) onto my Virtual Box (https://www.virtualbox.org/wiki/Downloads) on my computer and found that I was able to easily convert files using the commandline like this: libreoffice --headless -convert-to pdf fileToConvert.docx -outdir output/path/for/pdf I thought: sweet...but I don't have admin rights on my host's web server. I tried to use a "portable" version of LibreOffice that I obtained from http://portablelinuxapps.org/ but I was unable to get it to work on my host's webserver, because my host's webserver didn't have all the dependencies (Dependency Hell! http://en.wikipedia.org/wiki/Dependency_hell) I was at a loss of how to make it work, until I ran across a cool project made by a Ph.D. student (Philip J. Guo) at Stanford called CDE: http://www.stanford.edu/~pgbovine/cde.html I will let you look at his explanations of how it works (I followed what he did in http://www.youtube.com/watch?feature=player_embedded&v=6XdwHo1BWwY, starting at about 32:00 as well as the directions on his site), but in short, it allows one to avoid dependency hell by copying all the files used when you run certain commands, recreating the linux environment where the command worked. I was able to use this to run LibreOffice without having to resort to someone's portable version of it, and it worked just like it did when I did it on Ubuntu with the command above, with a tweak: I needed to run the wrapper of LibreOffice the CDE generated. So, below is my PHP code that calls it. In this code snippet, the filename to be copied is passed in as $_POST["filename"]. I copy the file to the same spot where I originally converted the file, convert it, copy it back and then delete all the files (so that it doesn't start growing exponentially). I did it this way because I wasn't able to make it work otherwise on the webserver. If there is a linux + webserver ninja out there that can figure out how to make it work without doing this, I would be interested to know what you did. Please post a comment or something if you did that. <?php //first copy the file to the magic place where we can convert it to a pdf on the fly copy($time.$_POST["filename"], "../LibreOffice/cde-package/cde-root/home/robert/Desktop/".$_POST["filename"]); //change to that directory chdir('../LibreOffice/cde-package/cde-root/home/robert'); //the magic command that does the conversion $myCommand = "./libreoffice.cde --headless -convert-to pdf Desktop/".$_POST["filename"]." -outdir Desktop/"; exec ($myCommand); //copy the file back copy("Desktop/".str_replace(".docx", ".pdf", $_POST["filename"]), "../../../../../documents/".str_replace(".docx", ".pdf", $_POST["filename"])); //delete all the files out of the magic place where we can convert it to a pdf on the fly $files1 = scandir('Desktop'); //my files that I generated all happened to start with a number. 
$pattern = '/^[0-9]/'; foreach ($files1 as $value) { preg_match($pattern, $value, $matches); if(count($matches) > 0) { unlink("Desktop/".$value); } } //changing the header to the location of the file makes it work well on Android devices header( 'Location: '.str_replace(".docx", ".pdf", $_POST["filename"]) ); ?> And here is the tar.gz file I generated with CDE. To duplicate what I did exactly, put the tar.gz file in a folder somewhere. I will call that folder the "root". Make a new folder called "documents" in the "root" folder. Unpack the tar.gz and run the PHP script above from the "documents" folder. Success! I made a truly portable version of LibreOffice that can convert files on the fly on a webserver using 100% free, open source software!

    Read the article

  • Would this be a good web application architecture?

    - by Gustav Bertram
My problem: Our MVC-based framework does not allow us to cache only part of our output. Ideally we want to cache static and semi-static bits, and run dynamic bits. In addition, we need to consider data caching that reacts to database changes.

My idea: The concept I came up with was to represent a page as a tree of XML fragment objects. (I say XML, but I mean XHTML.) Some of the fragments are dynamic and can pull their data directly from models or other sources, but most of the fragments are static scaffolding. If a subtree of fragments is completely static, then I imagine it could unfold into pure XML that would then be cached as the text representation of its parent element. This process would ideally continue until we are left with a root element that contains all of the static XML and has a couple of dynamic XML fragments that are resolved and attached to the relevant nodes of the XML tree just before the page is displayed. In addition to separating content into dynamic and static fragments, some fragments could be dynamic and cached. A simple expiry time which propagates up through the XML fragment tree would indicate that a specific fragment should periodically be refreshed. A newspaper section or front page does not need to be updated each second; minutes or sometimes even longer is sufficient. Other fragments would be dynamic and uncached. Typically too many articles are viewed for them to be cached - the cache would overflow. Some individual articles may be cached if they are extremely popular.

Functional notes: The folding mechanism could be smart enough to judge when it would be more profitable to fold a dynamic cached fragment and propagate the expiry date to the parent fragment, or to keep it separate and simply attach it to the XML tree when resolving the page. If some dynamic cached fragments are associated with database objects through mechanisms like a globally unique content id, then changes to the database could trigger changes to the output cache. If fragments store the identifiers of their parent fragments, then they could trigger a refolding process that would then include the updated data. A set of pure XML with an ordered array of fragment objects (each storing the identifying information of the node to which it should be attached) can be resolved in a fairly simple way by walking the XML tree and merging in the data from the fragments. Because it is not necessary to parse and construct the entire tree in memory before attaching nodes, processing should be fairly fast. The identifier of each fragment would be a combination of relevant identity data and the type of fragment object. Cached parent fragments would contain references to these identifiers, in order to either pull them from the fragment cache or run their code. The controller's responsibility is reduced to making changes to the database and telling the root XML fragment object to render itself.

The question: My question has two parts: Is this a good design? Are there any obvious flaws I'm missing? Has somebody else thought of this before? References? Is there an existing alternative that I should consider? A cool templating engine maybe?
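To make the discussion concrete, here is a minimal sketch of the fragment idea in C#: static fragments fold their children into plain markup, cached fragments re-run their producer only after an expiry, and fully dynamic fragments run on every request. All class and member names are assumptions for illustration, not an existing framework API:

using System;
using System.Collections.Generic;
using System.Text;

abstract class Fragment
{
    public List<Fragment> Children = new List<Fragment>();
    public abstract string Render();
}

// Pure scaffolding: its markup (and folded children) could be cached as text.
class StaticFragment : Fragment
{
    private readonly string markup;
    public StaticFragment(string markup) { this.markup = markup; }
    public override string Render()
    {
        var sb = new StringBuilder(markup);
        foreach (var child in Children) sb.Append(child.Render());
        return sb.ToString();
    }
}

// Dynamic but cacheable: refreshed only when its expiry time has passed.
class CachedFragment : Fragment
{
    private readonly Func<string> produce;
    private readonly TimeSpan ttl;
    private string cached;
    private DateTime expiresAt = DateTime.MinValue;

    public CachedFragment(Func<string> produce, TimeSpan ttl)
    {
        this.produce = produce;
        this.ttl = ttl;
    }

    public override string Render()
    {
        if (DateTime.UtcNow >= expiresAt)
        {
            cached = produce();                 // pull fresh data from the model
            expiresAt = DateTime.UtcNow + ttl;  // expiry can propagate to the parent
        }
        return cached;
    }
}

// Dynamic and uncached: resolved on every request.
class DynamicFragment : Fragment
{
    private readonly Func<string> produce;
    public DynamicFragment(Func<string> produce) { this.produce = produce; }
    public override string Render() { return produce(); }
}

The folding step described above would then amount to replacing a subtree of StaticFragment nodes with a single StaticFragment holding their concatenated markup, and attaching the remaining cached and dynamic fragments to the right nodes just before the page is sent.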

    Read the article

  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
I’ve been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK, check out my “Busy Developer’s Guide to the Kinect SDK Beta”. Let’s get started: KinectContrib is a set of VS2010 templates that will help you get started building a Kinect project very quickly. Once you have it installed you will have the option to select the following templates: KinectDepth, KinectSkeleton and KinectVideo. Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed. Kinect templates after installing the template pack. The reference to Microsoft.Research.Kinect is added automatically. Here is a sample of the code for the MainWindow.xaml in the “Video” template: <Window x:Class="KinectVideoApplication1.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="480" Width="640"> <Grid> <Image Name="videoImage"/> </Grid> </Window> and MainWindow.xaml.cs: using System; using System.Windows; using System.Windows.Media; using System.Windows.Media.Imaging; using Microsoft.Research.Kinect.Nui; namespace KinectVideoApplication1 { public partial class MainWindow : Window { //Instantiate the Kinect runtime. Required to initialize the device. //IMPORTANT NOTE: You can pass the device ID here, in case more than one Kinect device is connected. Runtime runtime = new Runtime(); public MainWindow() { InitializeComponent(); //Runtime initialization is handled when the window is opened. When the window //is closed, the runtime MUST be uninitialized. this.Loaded += new RoutedEventHandler(MainWindow_Loaded); this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded); //Handle the content obtained from the video camera, once received. runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady); } void MainWindow_Unloaded(object sender, RoutedEventArgs e) { runtime.Uninitialize(); } void MainWindow_Loaded(object sender, RoutedEventArgs e) { //Since only a color video stream is needed, RuntimeOptions.UseColor is used. runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor); //You can adjust the resolution here. runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color); } void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e) { PlanarImage image = e.ImageFrame.Image; BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96, PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel); videoImage.Source = source; } } } You will find this template pack very handy, especially if you are new to Kinect development. Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package, simply add a reference to the .dll using either the WPF or WinForms version. You will then have access to several extension methods that help with things like saving an image (see the sketch below for an example). For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/. Kinductor – This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those wondering how to structure their first Kinect application.
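Returning to the Coding4Fun Kinect Toolkit mentioned above, here is a minimal sketch of the kind of shortcut it gives you inside the video template's frame handler. The ToBitmapSource and Save extension methods and their exact signatures are assumptions based on the toolkit's WPF assembly; check the CodePlex page for the current API:

using Microsoft.Research.Kinect.Nui;
using Coding4Fun.Kinect.Wpf;   // assumed namespace for the WPF extension methods

void runtime_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    // One call instead of the manual BitmapSource.Create shown in the template.
    var source = e.ImageFrame.ToBitmapSource();
    videoImage.Source = source;

    // Persist the current camera frame to disk (signature assumed).
    source.Save("frame.jpg", ImageFormat.Jpeg);
}

This would replace the runtime_VideoFrameReady body from the KinectVideo template above; videoImage is the Image control declared in its XAML.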
Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.  Subscribe to my feed

    Read the article

  • How can a developer realize the full value of his work [closed]

    - by Jubbat
I, honestly, don't want to work as a developer in a company anymore after all I have seen. I want to continue developing software, yes, but not in the way I see it all around me. And I'm in London, a city that gathers lots of great developers from around the world, so it shouldn't be a problem of location. So, what are my concerns? First of all, best case scenario: you are paying the manager's salary out of yours. You are consistently underpaid, making up for the average manager's negative net return plus his whole salary. Typical scenario: I am a reasonably good developer with common sense who cares about readable code and attention to basic principles. I have found, way too often, overconfident and arrogant developers with a severe lack of common sense. Personally, I don't want to follow TDD or Agile practices like all the cool kids nowadays. I would read about them, form my own opinion and take what I feel is useful, but not follow them sheepishly. I want to work with people who understand that you have to design good interfaces, that you absolutely have to document your code, and that readability is at the top of your priorities. Also people who don't have a cargo-cult mentality. For instance, the same person who asked me about design patterns in a job interview later told me that something like a List of Map of Vector of Map of Set (in Java) is very readable. Why would someone ask me about design patterns if they can't even grasp encapsulation? These kinds of things are the norm. I've seen many examples. I've seen worse than that too, from very well-paid senior devs, by the way. Every second that you spend working with people with such a lack of common sense and clear thinking, you are effectively losing money by being terribly inefficient with your time. Yet, with all these inefficiencies, the average developer earns a high salary. So I tried working on my own, although I don't like the idea. I prefer a healthy exchange of opinions and ideas and division of tasks. I did a bit of online freelancing for a while, but I think working in a sweatshop might be more enjoyable. Also, I studied computer engineering, yet as a freelancer your client will presume you don't have any formal education, because there is no way to prove it. Again, you are undervalued. You could try building a product, yes. But, of course, luck is a big factor. I wonder if there is a way to work on something you can do well, software development, be valued for the quality of your work and be paid accordingly, where you and only you get fairly paid for the value you generate. I know that what I have written seems somewhat unlikely, but I strongly feel this way. Hopefully someone will understand me and has already figured this out. I don't think I'm alone in this kind of feeling.

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to properly interact with the background art, I wrote a program to convert the depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264. The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Because of the bandwidth involved, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the back end. I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet but will ultimately need them anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This new requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and we'll also need to play two streams simultaneously in lockstep, plus, in some cases, audio on one of them (for the cinematics). I'm also wondering if it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or if it's better to just do it on the CPU and separately load the resulting depth values into a locked texture. The conversion unfortunately does involve branching logic per pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup, but the table would be 32MB. Programming is second nature to me, but I'm not that experienced with pixel shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and which off-the-shelf video systems out there would best suit this use case.
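For reference, one of the naive non-branching mappings mentioned above is easy to state in code: split the 16-bit depth so the high byte rides in luma and the low byte in a chroma channel. It needs no per-pixel branching and no table, but because the low byte is fragile under chroma subsampling and quantization it gives the inferior results the paper is designed to avoid. A small CPU-side sketch in C# (names and the choice of channel are assumptions):

static void DepthToYCbCr(ushort depth, out byte y, out byte cb, out byte cr)
{
    y  = (byte)(depth >> 8);    // coarse depth: survives compression reasonably well
    cb = (byte)(depth & 0xFF);  // fine depth: easily corrupted by chroma compression
    cr = 128;                   // unused channel kept neutral
}

static ushort YCbCrToDepth(byte y, byte cb, byte cr)
{
    return (ushort)((y << 8) | cb);  // reassemble the 16-bit depth
}

The scheme in the paper replaces this straight byte split with periodic encodings of the fine bits spread across the chroma channels, which is where the per-pixel branching in the decode step comes from.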

    Read the article

  • Cleaner ClientID's with ASP.NET 4.0

    - by amaniar
A common complaint we have had when using ASP.NET Web Forms is the inability to control the ID attributes being rendered in the HTML markup when using server controls. Our interface engineers want to be able to predict the IDs of controls, thereby having more control over their client-side code for selecting/manipulating elements by ID or targeting them with CSS. While playing with the just-released VS2010 and .NET 4.0 I discovered some really cool improvements. One of them is the ability to now have full control over the ID being rendered for server controls. ASP.NET 4.0 controls have a new ClientIDMode property which gives the developer complete control over the IDs being rendered, making it easy to write JavaScript and CSS against the rendered HTML. By default the ClientIDMode is set to Predictable, which results in clean and predictable IDs by concatenating the IDs of the parent and child controls. So the following markup: <asp:Content ID="ParentContainer" ContentPlaceHolderID="MainContentPlaceHolder" runat="server"> <asp:Label runat="server" ID="MyLabel">My Label</asp:Label> </asp:Content> will render: <span id="ParentContainer_MyLabel">My Label</span> instead of something like this (current): <span id="ctl00_ParentContainer_MyLabel">My Label</span> Other modes include AutoID (renders IDs the way .NET 3.5 currently does), Static (renders the ID exactly as specified in the code) and Inherit (defers the mode to the parent control). So now I can write my jQuery selector as: $("#ParentContainer_MyLabel").text("My new Text"); instead of: $('#<%=this.MyLabel.ClientID%>').text("My new Text"); Scott Mitchell has a great article about this new feature: http://bit.ly/ailEJ2 I am excited about this and some other improvements. Many thanks to the ASP.NET team for listening!
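If you need an exact, hand-picked id for a specific control, the mode can also be switched to Static from code-behind. A quick sketch (the page and control names are assumptions):

using System;
using System.Web.UI;

public partial class MyPage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // Renders id="MyLabel" exactly as declared, regardless of naming containers.
        MyLabel.ClientIDMode = ClientIDMode.Static;
    }
}

The usual caveat applies: with Static you are responsible for keeping the ids unique on the page, since a user control dropped onto the page twice will happily render duplicate ids.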

    Read the article

  • BEHIND THE SCENES AT A FLASH-MOB...

    - by OliviaOC
Today, we interviewed Aarti, who recently organised a flash-mob for Oracle Campus, which you can see on our facebook page.

Hi Aarti, perhaps you could give us a quick introduction of yourself, and what you do at Oracle? I've been with Campus Recruitment for just over a year, and with Oracle for three years. I was keen to get into the campus role after having watched other colleagues working in campus, and when the opportunity arrived I jumped at it. The journey has been fantastic thus far. I'm responsible for the GBU hiring at Oracle.

Why did you record the flash-mob video - what were your goals? Flash-mobs are one thing that took off really big in India after the first one in Mumbai. It's the hot thing in the student community at the moment and a better way to reach out and connect with students. I think it is also a good way to demonstrate our openness and culture at Oracle - to demonstrate that we are very flexible and that we have a cool culture. I knew the video could be shared on our social media pages and reach a wider student community.

What was the preparation and rehearsal for the video like? When I decided to do the video, I had to decide who I would like to do the flash-mob. The new campus hires to Oracle would be ideal for this. We were 2 teams at 2 different locations, and each team took 2-3 songs and choreographed them themselves. Every day at 5pm each team would meet up, and every other weekend the whole group met. Practicing went on for about a month like this.

How was the video received by participants and by students on the University campus? The event was well received. We did it during the lunch break at the University so that there was a large presence of students around while the flash-mob took place. We set up about an hour beforehand to get everything ready. The break-bell sounded and the students came out; that's when the flash-mob started. The students were pleasantly surprised that a company was doing this. They also recognised some of the participants involved as former graduates.

Since the flash-mob and the video you recorded of it, have you had much of a response? We have, especially in the past two weeks. We went back to the college to make some hires. The flash-mob was still fresh in their minds and they knew well who Oracle was as a result.

Would you like to repeat this kind of creative initiative again with the recruitment team? Yes, absolutely! I'm over the moon with the flash-mob. My mind is working overtime now with ideas about the next things to do!

    Read the article

  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive?  Me too and Windows Explorer hates it.  Very often I find myself needing to organize these files into subfolders so that I can go after files without locking up Windows Explorer and my answer used to be to write a program in something like C# to do the job.  These programs will typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp.  The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration because it appears the standard .Net calls make use of the same method of enumerating the directories that Windows Explorer chokes on when dealing with a large number of entries in a particular directory, so a simple task was accomplished with a lot of code. Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died.  So... I was getting ready to re-create it and thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet met a requirement to make me do it.  The resulting script was amazingly succinct and even building the flexibility for parameterization and adding line breaks for readability was only about 25 lines long.  Here's the code with discussion following: param(     [Parameter(         Mandatory = $false,         Position = 0,         HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"     )]     [String] $folderRoot="\\fileServer\pathToFolderWithLotsOfFiles\",       [Parameter(         Mandatory = $false,         Position = 1     )]     [int] $days = 1 ) dir $folderRoot|?{(!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days }|%{     [string]$year=$([string]$_.lastwritetime.year)     [string]$month=$_.lastwritetime.month     [string]$day=$_.lastwritetime.day     $dir=$folderRoot+$year+"\"+$month+"\"+$day     if(!(test-path $dir)){         new-item -type container $dir     }     Write-output $_     move-item $_.fullname $dir } The script starts by declaring two parameters.  The first parameter holds the path to the folder that I am going to be sorting into subdirectories.  The path separator is intended to be included in this argument because I didn't want to mess with determining whether this was local or UNC and picking the right separator in code, but this could be easily improved upon using Path.Combine since PowerShell has access to the full framework libraries.  The second parameter holds a minimum age in days for files to be removed from the root folder.  The script then pipes the dir command through a query to include only files (by excluding containers) and of those, only entries that meet the age requirement based on the last modified datestamp.  For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files need further divided, you could make this more granular) and the folder is created if it does not yet exist.  Finally, the file is moved into the directory. One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call.  
In previous code that I have written to do this kind of thing, I had to test the entire tree leading down to the subfolder I wanted, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs. Overall, I have to say I'm really pleased with what has been done to make PowerShell powerful and useful.

    Read the article

< Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >