Search Results

Search found 13727 results on 550 pages for 'target platform'.


  • Why Is Vertical Monitor Resolution So Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary, or is there something more at work?

    Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable.) Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360?

    The Answer

    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists, but a history of screen design in the process:

    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field where psychooptics meets marketing.

    First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there’s 480p, and for the computer crowd there is 360p for 2×ISDN/multiuser landlines, 720p for DSL and 1080p for higher-speed internet. For a while there were some codecs other than H.264, but these are slowly being phased out, with H.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for it.

    Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they’d rather use their bandwidth for other stuff. This was researched quite a long time ago and is the big reason why we went from 720×576 (415 kpix) to 1280×720 (922 kpix), and then again from 1280×720 to 1920×1080 (2 Mpix). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7 Mpix, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.
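    To put rough numbers on those ~2x steps (simple pixel arithmetic, not figures from the original answer):

    720×576 = 414,720 pixels
    1280×720 = 921,600 pixels (about 2.2x the previous step)
    1920×1080 = 2,073,600 pixels (about 2.25x)
    2560×1440 = 3,686,400 pixels (about 1.8x)
    3840×2160 (4K UHD) = 8,294,400 pixels (about 2.25x)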
    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen and control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry-standard line drivers became drivers with 360 line outputs (one per subpixel). If you were to tear down your 1920×1080 screen, I would put money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’.

    Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from, when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

    However, again, this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

    But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum number of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not the number of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9, and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up, and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here.
    There’s more, of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Delivering the Integrated Portal Experience!

    - by Michael Snow
    Guest post by Richard Maldonado, Principal Product Manager, Oracle WebCenter Portal

    Organizations are still struggling to standardize on a user interaction platform which can meet the needs of all their target audiences.  This has not only resulted in inefficient and inconsistent experiences for their users, but it also creates inefficiencies (productivity and costs) for the departments that manage the applications and information systems.  Portals have historically been the unifying platform that provides IT with a common interface which can securely surface the most relevant interactions for a given user and/or group of users.  However, organizations have found that the available technologies either have not provided the flexibility necessary to address all of their use cases, or they rely too much on IT resources to manage, maintain, and evolve.

    Empowering the Business Groups

    The core issue that IT departments face with delivering portal experiences is having enough resources to respond to and address the influx of requirements which come in from the business.  Commonly, when a business group wants a new portal site established for their group, they submit a request to the IT department, which then assigns an administrator and/or developer to build it.  Unfortunately, this approach is not scalable; it can be a time-consuming activity which requires significant interaction between the business owner and the IT resource.  A modern user interaction platform should empower the business groups by providing them tools which they can use to build and manage the portal experiences without the need for IT's involvement.  And because business groups rarely have technical resources (developers) on staff, the tools must be easy enough that virtually any business user could use them.  In addition, the tools must be powerful enough to allow them to build the experience that they need: creating a whole new portal, adding and managing pages and page hierarchy, managing user/group access, adding or modifying components within a page, etc.  This balance between ease of use and flexibility is key to the successful adoption of tools which will ultimately reduce the burden on IT, respond to the needs of the business, and deliver high-value experiences for the users.

    Ready or Not, Here They Come: Smartphones and Tablets

    Recently, several studies have highlighted that smartphone and tablet-style devices have overtaken PCs in both sales and usage.  This shift is further driving organizations to reevaluate how they're delivering data, information, and applications to their users.  Users are expecting to get the same level of access and interaction, but in ways which are optimized for the capabilities of the device that they are using.

    Expect More

    With the ever-growing number of new IT projects and flat or shrinking budgets, organizations are looking for comprehensive solutions which can deliver integrated web experiences that are tailored for the users and optimized for mobile devices.  Piecing together a number of point solutions is no longer an option.  A modern portal technology should not only address the traditional needs of integrating and surfacing back-end applications/information, but should also enable the business through easy-to-use tools and accelerate the delivery of mobile-optimized experiences.
    WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, featuring Qualcomm & Keste

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out.

    At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously, every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red Gate’s and puts the indexes and keys into the same file as the overall table definition.

    Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use.

    First, the basics

    Both tool sets integrate with a wide variety of source control systems, including the most popular: Subversion, Git, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS), then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio project type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS), then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, they are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly.

    For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you.

    Unified Check-In

    If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to seriously consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing solution, and you can then do a complete, or unified, single check-in of all changes, whether they are application or database changes.
    This is simply not possible with SQL Source Control, because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build triggered by the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build triggered by the second check-in, which contains the “other half” of your changes, should pass, so the amount of time that the build was broken may be very, very short; but if that is very, very important to you, then SQL Source Control just won’t work, and you’ll have to use SSDT.

    Refactoring and Migrations

    If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (an Early Access version is now available) you have the ability to create your own custom migration script to take the place of the commands that the tool would have generated, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked but you simply know a better way, then you can take control and do things your way instead of theirs.

    You Decide

    In the environment I work in, our automated builds are not triggered off of check-ins but off of the clock (currently once per night), so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.
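    To make the data-loss scenario concrete, here is a minimal sketch of the kind of hand-written migration script being described, using a column rename that a schema-comparison engine would typically script as a DROP and an ADD; the table and column names are hypothetical:

    -- A comparison tool sees [CustomerName] disappear and [FullName] appear,
    -- so it would generate DROP COLUMN / ADD COLUMN and discard the data.
    -- A custom migration script preserves it instead:
    ALTER TABLE dbo.Customer ADD FullName NVARCHAR(200) NULL;
    GO
    UPDATE dbo.Customer SET FullName = CustomerName;
    GO
    ALTER TABLE dbo.Customer DROP COLUMN CustomerName;
    GO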

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
    One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform.  Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana, as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform.  In this and some subsequent posts I'm going to look at it from a different perspective: that of the enterprise administrator deploying an OpenStack cloud.  But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production, we'll share how we built it and what we've learned so far.

    In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes.  But as a developer, it can still be difficult to find the systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated.  We've added virtual resources over the years as well, in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones).  Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them.  Sounds like pretty much every traditional IT shop, right?  Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing.  As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to provide more (and more efficient) development and test resources for the organization, as well as a test environment for Solaris OpenStack.

    At this point, let's acknowledge one fact: deploying OpenStack is hard.  It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files.  The web UI, Horizon, doesn't often do a good job of providing detailed errors.  Even the command-line clients are not as transparent as you'd like; at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps.  I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started, so I at least came to this job with some appreciation for what I was taking on.  The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment.  I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment.  For some situations, it may in fact be all you ever need.  If so, you don't need to read the rest of this series of posts!

    In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide.
    We need to support both SPARC and x86 VMs, and we have hundreds of developers, so we want to be able to scale to support thousands of VMs, though we're going to build to that scale over time, not immediately.  We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development, so that we can work out any upgrade issues before release.  One thing we don't have is a requirement for extremely high availability, at least at this point.  We surely don't want a lot of downtime, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones.  Thus I didn't need to spend effort on trying to get high availability everywhere.

    The diagram below shows our initial deployment design.  We're using six systems, most of which are x86 because we had more of those immediately available.  All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there).  A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks.

    One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler, as well as the Horizon console.  We're curious how this will perform, and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.

    I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack.  The other node with lots of disks provides the Cinder block storage service; we also have a ZFS Storage Appliance that will help back Cinder in the near future, I just haven't had time to get it configured in yet.

    There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests.  We don't have any need for firewalling in this deployment, so we're not doing so.  We presently have only two tenants defined: one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris.  Each tenant has one VxLAN defined initially, but we can of course add more.  Right now we have just a single /24 network for the floating IPs; once demand gets up to where we need more, we'll add them.

    Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2.  We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base, it's less work to manage fewer nodes until then.

    My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes.  After that we'll get into the specifics of configuring and running OpenStack itself.

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far, far away... Okay, actually it's not that bad, but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past, but they never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendees' survey on the preferred option. All the pre-work was nicely executed.

    At first, I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like that children could come too, and I was sold on this community gathering. According to other long-term members of the LUGM, it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great!

    LUGM - user group meeting on 15.06.2013 in L'Escalier

    Quick overview of Linux & the LUGM

    With a little bit of delay, the LUGM meeting officially started with a quick overview of and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity around the island some years ago, but unfortunately it had been quiet in recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems, but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office.

    Following the introduction of the LUGM, Ajay joined the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities, especially considering that I've been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune.

    OpenELEC Mediacenter

    Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes.

    I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR), as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave behind in Germany years ago. It was a great piece of hardware and software: a self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, a WiFi card, a Pioneer slot-in DVD drive, and fully remote-controlled via infra-red thanks to Debian, VDR and LIRC. With EPG, scheduled recordings and general multimedia centre functionality, it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future.
    LUGM - our next generation of Linux users (15.06.2013)

    Project Evil Genius (PEG)

    Don't be scared of the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius; it's because of the time of day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSuSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experience he had thanks to other Linux users from all over the world. Over the next couple of days Ish promised to put his script on GitHub... Looking forward to that. Check out Ish's personal blog over at hacklog.in. Highly recommended reading. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu).

    'After-party' at the beach of L'Escalier

    Exploring the beach of L'Escalier after the meeting

    Puh, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we all head down to the beach of L'Escalier and get some impressions of the nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields, and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic.

    Résumé & outlook

    It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well-chosen, with enough space for everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and preparation of lunch. I'm kind of sure that this was an exceptional meeting of the LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully, soon.

    All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • CodePlex Daily Summary for Monday, August 18, 2014

    Popular Releases

    Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.

    CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.

    GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml, GW2PAO.EventSettings.xml, GW2PAO.WvWSettings.xml, GW2PAO.ZoneCompletionSettings.xml. New Features: Added new "No...

    WallSwitch: WallSwitch 1.2.5: Version 1.2.5 changes: Added support for sequential order in collage mode. Added option to display multiple images per switch in collage mode. Fixed bug where border width wasn't being loaded properly and was reverting to default values. Fixed bug where sequential order was repeating images on multiple monitors. Decreased likelihood of random images being repeated.

    OpenCppCoverage: OpenCppCoverage 0.9.1: Add Jenkins support. Command line arguments can be placed inside a config file. If you do not have Visual Studio C++ 2013 you need to download the redistributable packages: http://www.microsoft.com/en-us/download/details.aspx?id=40784

    Easy Backup Windows Service: Release 2.0 with CU: Fix log error when "To" directory does not exist in file system. Force run program as administrator by default. Add 'everyday' schedule element. Update solution to VS 2013.

    Easy Backup Application: Release 2.0 with CU: Fix log error when "To" directory does not exist in file system. Fix app location initialization. Force run program as administrator by default. Update solution to VS 2013.

    TEBookConverter: 1.5: Added: Turkish and French translations. Added: A few interface changes. Removed: Skin

    Dynamulet: Dynamulet v0.1: DynamoDB Transaction Server v0.1

    Console parallel nunit tests runner: ConsoleUnitTestsRunner 1.03: bugfixing

    Fluentx: Fluentx v1.5.3: Added a few more extension methods.

    fastJSON: v2.1.2: 2.1.2 - bug fix circular references

    JPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728. JPush REST API Version: v3. JPush Documentation Reference. .NET framework: v4.0 or above. Sample class: JPushClientV3. 2014 August 15th.

    SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.

    Google .Net API: Drive.Sample: Instructions for the Google .NET Client API – Drive.Sample. Browse the source at http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.Sample, or the main file Program.cs at http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samples. 1. Checkout Instructions. Prerequisites: Install Visual Studio, and Mercurial (http://mercurial.selenic.com/). ...

    FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1

    DNN CMS Platform: 07.03.02: Major Highlights: Fixed backwards compatibility issue with 3rd party control panels. Fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari. Fixed issue where users were able to create pages with the same name. Fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade. Fixed issue that stopped some skins from being upgraded to newer versions. Fixed issue that randomly showed an unexpected error during us...

    WordMat: WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version: Graph is not supported (Gnuplot, GeoGebra and Excel work); units are not supported yet (coming up). The Mac version is not yet as tested as the Windows version.

    MFCMAPI: August 2014 Release: Build: 15.0.0.1042. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010/2013 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system.

    EWSEditor: EwsEditor 1.10 Release: Export and import of items as a full-fidelity stream works - without proxy classes - using raw EWS POSTs. Turned off word wrap for the EWS request field in EWS POST windows. Several windows with scrolling text boxes were limiting content to 32k; removed this restriction. Split server timezone info off to a separate menu item from the timezone info window, so that the timezone info window can be used without logging into a mailbox. Lots of updates to the TimeZone window. UserAgen...

    New Projects

    ballmon: ballmon

    Exchange Database Recovery With and Without Log Files is Possible: This segment gives an overview of Exchange Server transaction log files. It describes how users can recover their database with & without log files.

    Fabs.Net: A project I made for my own ego satisfaction and development.

    JacoChat: JacoChat is a simple chatting interface that uses my personal webserver as a "wall" for people to chat on.

    ManagedWin32: ManagedWin32 is a library that exposes the Win32 API to .NET applications.

    Open XML Extensions: The project provides additions to the Open XML SDK and related projects (e.g., PowerTools for Open XML), starting with MemoryStreams for Open XML documents.

    orntic: Project for insurance company

    TBOX: The Treasure Box Library: TBOX is a multi-platform C library for Unix, Windows, Mac, iOS, Android, etc. It includes asio, stream, container, algorithm, xml and other library modules.

    WeatherTS: TypeScript weather application.

    Read the article

  • SpringMvc java.lang.NullPointerException When Posting Form To Server [closed]

    - by dev_darin
    I have a form with a user name field on it. When I tab out of the field, I use a RESTful web service that makes a call to a handler method in the controller. The method makes a call to a DAO class that checks the database for whether the user name exists. This works fine. However, when the form is posted to the server, I call the same exact function I would call in the handler method, yet I get a java.lang.NullPointerException when it accesses the class that makes the call to the DAO object. So it does not even access the DAO object the second time. I have exception handlers around the calls in all my classes that make calls. Any ideas as to what's happening here and why I would get the java.lang.NullPointerException the second time the function is called? Does this have anything to do with Spring instantiating DAO classes as singletons or something to that effect? What can be done to resolve this?

    This is what happens the first time the method is called using the web service (this is supposed to happen):

    13011 [http-8084-2] INFO com.crimetrack.jdbc.JdbcOfficersDAO - Inside jdbcOfficersDAO
    13031 [http-8084-2] DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL query
    13034 [http-8084-2] DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL statement [SELECT userName FROM crimetrack.tblofficers WHERE userName = ?]
    13071 [http-8084-2] DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
    13496 [http-8084-2] DEBUG org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [adminz], value class [java.lang.String], SQL type unknown
    13534 [http-8084-2] DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
    13537 [http-8084-2] INFO com.crimetrack.jdbc.JdbcOfficersDAO - No username was found in exception
    13537 [http-8084-2] INFO com.crimetrack.service.ValidateUserNameManager - UserName :adminz does NOT exist

    The second time, when the form is posted and a validation method handles the form and calls the same method the web service would call:

    17199 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - UserName is not null so going to check if its valid for :adminz
    17199 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - User Name in try.....catch block is adminz
    17199 [http-8084-2] INFO com.crimetrack.service.ValidateUserNameManager - Inside Do UserNameExist about to validate with username : adminz
    17199 [http-8084-2] INFO com.crimetrack.service.ValidateUserNameManager - UserName :adminz EXCEPTION OCCURED java.lang.NullPointerException

    ValidateUserNameManager.java

    public class ValidateUserNameManager implements ValidateUserNameIFace {

        private OfficersDAO officerDao;
        private final Logger logger = Logger.getLogger(getClass());

        public boolean DoesUserNameExist(String userName) throws Exception {
            logger.info("Inside Do UserNameExist about to validate with username : " + userName);
            try {
                if (officerDao.OfficerExist(userName) == true) {
                    logger.info("UserName :" + userName + " does exist");
                    return true;
                } else {
                    logger.info("UserName :" + userName + " does NOT exist");
                    return false;
                }
            } catch (Exception e) {
                logger.info("UserName :" + userName + " EXCEPTION OCCURED " + e.toString());
                return false;
            }
        }

        /**
         * @return the officerDao
         */
        public OfficersDAO getOfficerDao() {
            return officerDao;
        }

        /**
         * @param officerDao the officerDao to set
         */
        public void setOfficerDao(OfficersDAO officerDao) {
            this.officerDao = officerDao;
        }
    }

    JdbcOfficersDAO.java

    public boolean OfficerExist(String userName) {
        String dbUserName;
        try {
            logger.info("Inside jdbcOfficersDAO");
            String sql = "SELECT userName FROM crimetrack.tblofficers WHERE userName = ?";
            try {
                dbUserName = (String) getJdbcTemplate().queryForObject(sql, new Object[]{userName}, String.class);
                logger.info("Just Returned from database");
            } catch (Exception e) {
                logger.info("No username was found in exception");
                return false;
            }
            if (dbUserName == null) {
                logger.info("Database did not find any matching records");
            }
            logger.info("after JdbcTemplate");
            if (dbUserName.equals(userName)) {
                logger.info("User Name Exists");
                return true;
            } else {
                logger.info("User Name Does NOT Exists");
                return false;
            }
        } catch (Exception e) {
            logger.info("Exception Message in JdbcOfficersDAO is " + e.getMessage());
            return false;
        }
    }

    OfficerRegistrationValidation.java

    public class OfficerRegistrationValidation implements Validator {

        private final Logger logger = Logger.getLogger(getClass());
        private ValidateUserNameManager validateUserNameManager;

        public boolean supports(Class<?> clazz) {
            return Officers.class.equals(clazz);
        }

        public void validate(Object target, Errors errors) {
            Officers officer = (Officers) target;

            if (officer.getUserName() == null) {
                errors.rejectValue("userName", "userName.required");
            } else {
                String userName = officer.getUserName();
                logger.info("UserName is not null so going to check if its valid for :" + userName);
                try {
                    logger.info("User Name in try.....catch block is " + userName);
                    if (validateUserNameManager.DoesUserNameExist(userName) == true) {
                        errors.rejectValue("userName", "userName.exist");
                    }
                } catch (Exception e) {
                    logger.info("Error Occured When validating UserName");
                    errors.rejectValue("userName", "userName.error");
                }
            }

            if (officer.getPassword() == null) { errors.rejectValue("password", "password.required"); }
            if (officer.getPassword2() == null) { errors.rejectValue("password2", "password2.required"); }
            if (officer.getfName() == null) { errors.rejectValue("fName", "fName.required"); }
            if (officer.getlName() == null) { errors.rejectValue("lName", "lName.required"); }
            if (officer.getoName() == null) { errors.rejectValue("oName", "oName.required"); }
            if (officer.getEmailAdd() == null) { errors.rejectValue("emailAdd", "emailAdd.required"); }
            if (officer.getDob() == null) { errors.rejectValue("dob", "dob.required"); }
            if (officer.getGenderId().equals("A")) { errors.rejectValue("genderId", "genderId.required"); }
            if (officer.getDivisionNo() == 1) { errors.rejectValue("divisionNo", "divisionNo.required"); }
            if (officer.getPositionId() == 1) { errors.rejectValue("positionId", "positionId.required"); }
            if (officer.getStartDate() == null) { errors.rejectValue("startDate", "startDate.required"); }
            if (officer.getEndDate() == null) { errors.rejectValue("endDate", "endDate.required"); }

            logger.info("The Gender ID is " + officer.getGenderId().toString());

            if (officer.getPhoneNo() == null) { errors.rejectValue("phoneNo", "phoneNo.required"); }
        }

        /**
         * @return the validateUserNameManager
         */
        public ValidateUserNameManager getValidateUserNameManager() {
            return validateUserNameManager;
        }

        /**
         * @param validateUserNameManager the validateUserNameManager to set
         */
        public void setValidateUserNameManager(ValidateUserNameManager validateUserNameManager) {
            this.validateUserNameManager = validateUserNameManager;
        }
    }

    Update: error log using logger.error("Message", e):

    39024 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - UserName is not null so going to check if its valid for :adminz
    39025 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - User Name in try.....catch block is adminz
    39025 [http-8084-2] ERROR com.crimetrack.service.OfficerRegistrationValidation - Message java.lang.NullPointerException
        at com.crimetrack.service.OfficerRegistrationValidation.validate(OfficerRegistrationValidation.java:47)
        at org.springframework.validation.DataBinder.validate(DataBinder.java:725)
        at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doBind(HandlerMethodInvoker.java:815)
        at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolveHandlerArguments(HandlerMethodInvoker.java:367)
        at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:171)
        at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:436)
        at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:424)
        at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:923)
        at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
        at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
        at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:789)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Unknown Source)
    39025 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - Error Occured When validating UserName

    Read the article

  • Restoring databases to a set drive and directory

    - by okeofs
    Restoring databases to a set drive and directory

    Introduction

    Often people say that necessity is the mother of invention. In this case I was faced with the dilemma of having to restore several databases, with multiple ‘ndf’ files, and having to restore them with different physical file names, drives and directories on servers other than the servers from which they originated. As most of us would do, I went to Google to see if I could find some code to achieve this task and found some interesting snippets on Pinal Dave’s website. Naturally, I had to take it further than the code snippet, HOWEVER it was a great place to start.

    Creating a temp table to hold database file details

    First off, I created a temp table which would hold the details of the individual data files within the database. Although there is a plethora of fields (within the temp table below), I utilize LogicalName only within this example. The temporary table structure may be seen below:

    create table #tmp (
      LogicalName nvarchar(128),
      PhysicalName nvarchar(260),
      Type char(1),
      FileGroupName nvarchar(128),
      Size numeric(20,0),
      MaxSize numeric(20,0),
      Fileid tinyint,
      CreateLSN numeric(25,0),
      DropLSN numeric(25,0),
      UniqueID uniqueidentifier,
      ReadOnlyLSN numeric(25,0),
      ReadWriteLSN numeric(25,0),
      BackupSizeInBytes bigint,
      SourceBlocSize int,
      FileGroupId int,
      LogGroupGUID uniqueidentifier,
      DifferentialBaseLSN numeric(25,0),
      DifferentialBaseGUID uniqueidentifier,
      IsReadOnly bit,
      IsPresent bit,
      TDEThumbPrint varchar(50)
    )

    We now declare and populate a variable (@path), setting the variable to the path to our SOURCE database backup.

    declare @path varchar(50)
    set @path = 'P:\DATA\MYDATABASE.bak'

    From this point, we insert the file details of our database into the temp table. Note that we do so by utilizing a restore statement, HOWEVER doing so in ‘filelistonly’ mode.

    insert #tmp
    EXEC ('restore filelistonly from disk = ''' + @path + '''')

    At this point, I depart from what I gleaned from Pinal Dave. I now instantiate a few more local variables. The use of each variable will be evident within the cursor (which follows):

    Declare @RestoreString as Varchar(max)
    Declare @NRestoreString as NVarchar(max)
    Declare @LogicalName as varchar(75)
    Declare @counter as int
    Declare @rows as int
    set @counter = 1
    select @rows = COUNT(*) from #tmp -- Count the number of records in the temp table

    Declaring and populating the cursor

    At this point I do realize that many people are cringing about the use of a cursor. Being an Oracle professional as well, I have learnt that there is a time and place for cursors. I would remind the reader that the data that will be read into the cursor is from a local temp table and, as such, any locking of the records (within the temp table) is not really an issue.

    DECLARE MY_CURSOR Cursor FOR
    Select LogicalName From #tmp

    Parsing the logical names from within the cursor

    A small caveat that works in our favour is that the first logical name (of our database) is the logical name of the primary data file (.mdf). Other files, except for the very last logical name, belong to secondary data files. The last logical name is that of our database log file.

    I now open my cursor and populate the variable @RestoreString:

    Open My_Cursor
    set @RestoreString = 'RESTORE DATABASE [MYDATABASE] FROM DISK = N''P:\DATA\MYDATABASE.bak''' + ' with '

    We now fetch the first record from the temp table.

    Fetch NEXT FROM MY_Cursor INTO @LogicalName

    While there are STILL records left within the cursor, we dynamically build our restore string. Note that we are using concatenation to create ‘one big restore executable string’. Note also that the target physical file name is hardwired, as is the target directory.

    While (@@FETCH_STATUS <> -1)
    BEGIN
    IF (@@FETCH_STATUS <> -2) -- As long as there are no rows missing
    select @RestoreString = case
      when @counter = 1 then -- This is the mdf file
        @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.mdf' + '''' + ', '
      -- OK, if it passes through here we are dealing with an .ndf file.
      -- Note that counter must be greater than 1 and less than the number of rows.
      when @counter > 1 and @counter < @rows then -- These are the ndf file(s)
        @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ndf' + '''' + ', '
      -- OK, if it passes through here we are dealing with the log file
      when @LogicalName like '%log%' then
        @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ldf' + ''''
    end
    -- Increment the counter
    set @counter = @counter + 1
    FETCH NEXT FROM MY_CURSOR INTO @LogicalName
    END

    At this point we have populated the varchar(max) variable @RestoreString with a concatenation of all the necessary file names. What we now need to do is run the sp_executesql stored procedure to effect the restore. First, we must place our ‘concatenated string’ into an nvarchar-based variable. Obviously this will only work as long as the length of @RestoreString is less than varchar(max) / 2.

    set @NRestoreString = @RestoreString
    EXEC sp_executesql @NRestoreString

    Upon completion of this step, the database should be restored to the server. I now close and deallocate the cursor, and to be clean, I would also drop my temp table.

    CLOSE MY_CURSOR
    DEALLOCATE MY_CURSOR
    GO

    Conclusion

    Restoration of databases on different servers, with different physical names and on different drives, is a fact of life. Through the use of a few variables and a simple cursor, we may achieve an efficient and effective way to accomplish this task.
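    For illustration, if the backup held the hypothetical logical names MyDatabase_Data (primary), MyDatabase_Data2 (secondary) and MyDatabase_Log, the finished @RestoreString would resolve to something like:

    RESTORE DATABASE [MYDATABASE] FROM DISK = N'P:\DATA\MYDATABASE.bak' with
      move N'MyDatabase_Data' TO N'X:\DATA1\MyDatabase_Data.mdf',
      move N'MyDatabase_Data2' TO N'X:\DATA1\MyDatabase_Data2.ndf',
      move N'MyDatabase_Log' TO N'X:\DATA1\MyDatabase_Log.ldf'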

    Read the article

  • Annotation Processor for Superclass Sensitive Actions

    - by Geertjan
    Someone creating superclass-sensitive actions should need to specify only the following things:

    - The condition under which the popup menu item should be available, i.e., the condition under which the action is relevant. For superclass-sensitive actions, the condition is the name of a superclass: if I'm creating an action that should only be invokable if the class implements "org.openide.windows.TopComponent", then that fully qualified name is the condition.
    - The position in the list of Java class popup menus where the new menu item should be found, relative to the existing menu items.
    - The display name.
    - The path to the action folder where the new action is registered in the Central Registry.
    - The code that should be executed when the action is invoked.

    In other words, the code for the enablement (which, in this case, means the visibility of the popup menu item when you right-click on the Java class) should be handled generically, under the hood, and not written over again in each action that needs this special kind of enablement.

    So, here's the usage of my newly created @SuperclassBasedActionAnnotation, where you should note that the DataObject must be in the Lookup, since the action will only be available to be invoked when you right-click on a Java source file (i.e., text/x-java) in an explorer view:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import org.netbeans.sbas.annotations.SuperclassBasedActionAnnotation;
    import org.openide.awt.StatusDisplayer;
    import org.openide.loaders.DataObject;
    import org.openide.util.NbBundle;
    import org.openide.util.Utilities;

    @SuperclassBasedActionAnnotation(
            position=30,
            displayName="#CTL_BrandTopComponentAction",
            path="File",
            type="org.openide.windows.TopComponent")
    @NbBundle.Messages("CTL_BrandTopComponentAction=Brand")
    public class BrandTopComponentAction implements ActionListener {

        private final DataObject context;

        public BrandTopComponentAction() {
            context = Utilities.actionsGlobalContext().lookup(DataObject.class);
        }

        @Override
        public void actionPerformed(ActionEvent ev) {
            String message = context.getPrimaryFile().getPath();
            StatusDisplayer.getDefault().setStatusText(message);
        }
    }

    That implies I've created (in a separate module to where it is used) a new annotation.
Here's the definition:

    package org.netbeans.sbas.annotations;

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    @Retention(RetentionPolicy.SOURCE)
    @Target(ElementType.TYPE)
    public @interface SuperclassBasedActionAnnotation {
        String type();
        String path();
        int position();
        String displayName();
    }

And here's the processor:

    package org.netbeans.sbas.annotations;

    import java.util.Set;
    import javax.annotation.processing.Processor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.lang.model.util.Elements;
    import org.openide.filesystems.annotations.LayerBuilder.File;
    import org.openide.filesystems.annotations.LayerGeneratingProcessor;
    import org.openide.filesystems.annotations.LayerGenerationException;
    import org.openide.util.lookup.ServiceProvider;

    @ServiceProvider(service = Processor.class)
    @SupportedAnnotationTypes("org.netbeans.sbas.annotations.SuperclassBasedActionAnnotation")
    @SupportedSourceVersion(SourceVersion.RELEASE_6)
    public class SuperclassBasedActionProcessor extends LayerGeneratingProcessor {

        @Override
        protected boolean handleProcess(Set annotations, RoundEnvironment roundEnv) throws LayerGenerationException {
            Elements elements = processingEnv.getElementUtils();
            for (Element e : roundEnv.getElementsAnnotatedWith(SuperclassBasedActionAnnotation.class)) {
                TypeElement clazz = (TypeElement) e;
                SuperclassBasedActionAnnotation mpm = clazz.getAnnotation(SuperclassBasedActionAnnotation.class);
                String teName = elements.getBinaryName(clazz).toString();
                String originalFile = "Actions/" + mpm.path() + "/" + teName.replace('.', '-') + ".instance";
                File actionFile = layer(e).file(originalFile).
                        bundlevalue("displayName", mpm.displayName()).
                        methodvalue("instanceCreate", "org.netbeans.sbas.annotations.SuperclassSensitiveAction", "create").
                        stringvalue("type", mpm.type()).
                        newvalue("delegate", teName);
                actionFile.write();
                File javaPopupFile = layer(e).file(
                        "Loaders/text/x-java/Actions/" + teName.replace('.', '-') + ".shadow").
                        stringvalue("originalFile", originalFile).
                        intvalue("position", mpm.position());
                javaPopupFile.write();
            }
            return true;
        }

    }

The "SuperclassSensitiveAction" referred to in the code above is unchanged from how I had it in yesterday's blog entry. 
When I build the module containing two action listeners that use my new annotation, the generated layer file looks as follows, which is identical to the layer file entries I hard-coded yesterday:

    <folder name="Actions">
        <folder name="File">
            <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance">
                <attr name="displayName" stringvalue="Process Action Listener"/>
                <attr methodvalue="org.netbeans.sbas.annotations.SuperclassSensitiveAction.create" name="instanceCreate"/>
                <attr name="type" stringvalue="java.awt.event.ActionListener"/>
                <attr name="delegate" newvalue="org.netbeans.sbas.impl.ActionListenerSensitiveAction"/>
            </file>
            <file name="org-netbeans-sbas-impl-BrandTopComponentAction.instance">
                <attr bundlevalue="org.netbeans.sbas.impl.Bundle#CTL_BrandTopComponentAction" name="displayName"/>
                <attr methodvalue="org.netbeans.sbas.annotations.SuperclassSensitiveAction.create" name="instanceCreate"/>
                <attr name="type" stringvalue="org.openide.windows.TopComponent"/>
                <attr name="delegate" newvalue="org.netbeans.sbas.impl.BrandTopComponentAction"/>
            </file>
        </folder>
    </folder>
    <folder name="Loaders">
        <folder name="text">
            <folder name="x-java">
                <folder name="Actions">
                    <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.shadow">
                        <attr name="originalFile" stringvalue="Actions/File/org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance"/>
                        <attr intvalue="10" name="position"/>
                    </file>
                    <file name="org-netbeans-sbas-impl-BrandTopComponentAction.shadow">
                        <attr name="originalFile" stringvalue="Actions/File/org-netbeans-sbas-impl-BrandTopComponentAction.instance"/>
                        <attr intvalue="30" name="position"/>
                    </file>
                </folder>
            </folder>
        </folder>
    </folder>

    Read the article

  • Silverlight 4 Twitter Client – Part 3

    - by Max
    Finally, Silverlight 4 RC is released, and Windows Phone 7 Series will rely heavily on the Silverlight platform for apps. This is really good news for Silverlight developers and designers. More information on this here. You can use SL 4 RC with VS 2010. SL 4 RC does not come with VS 2010; you need to download it separately and install it. So for the next part, be ready with VS 2010 and SL 4 RC, as we will start using them. With this momentum, let us go to the next part of our twitter client tutorial. This tutorial will cover setting your status in Twitter and also retrieving your friends timeline.

    1) As everything in Silverlight is asynchronous, we need some visual representation showing that something is going on in the background. So what I did was to create a progress bar with an indeterminate animation. The XAML is below.

    <ProgressBar Maximum="100" Width="300" Height="50" Margin="20" Visibility="Collapsed"
        IsIndeterminate="True" Name="progressBar1" VerticalAlignment="Center" HorizontalAlignment="Center" />

    2) I will be toggling this progress bar to show the background work, so I wrote this small method to toggle its visibility. Just pass a bool to this method and it will toggle the visibility based on the current status.

    public void toggleProgressBar(bool Option)
    {
        if (Option)
        {
            if (progressBar1.Visibility == System.Windows.Visibility.Collapsed)
                progressBar1.Visibility = System.Windows.Visibility.Visible;
        }
        else
        {
            if (progressBar1.Visibility == System.Windows.Visibility.Visible)
                progressBar1.Visibility = System.Windows.Visibility.Collapsed;
        }
    }

    3) Now let us create a grid to hold a textbox and an update button. The XAML will look something like this:

    <Grid HorizontalAlignment="Center">
        <Grid.RowDefinitions>
            <RowDefinition Height="50"></RowDefinition>
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="400"></ColumnDefinition>
            <ColumnDefinition Width="200"></ColumnDefinition>
        </Grid.ColumnDefinitions>
        <TextBox Name="TwitterStatus" Width="380" Height="50"></TextBox>
        <Button Name="UpdateStatus" Content="Update" Grid.Row="1" Grid.Column="2" Width="200" Height="50" Click="UpdateStatus_Click"></Button>
    </Grid>

    4) The click handler for this update button will again use the WebClient to post values. The code is:

    private void UpdateStatus_Click(object sender, RoutedEventArgs e)
    {
        toggleProgressBar(true);
        string statusupdate = "status=" + TwitterStatus.Text;
        WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);
        WebClient myService = new WebClient();
        myService.AllowReadStreamBuffering = true;
        myService.UseDefaultCredentials = false;
        myService.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
        myService.UploadStringCompleted += new UploadStringCompletedEventHandler(myService_UploadStringCompleted);
        myService.UploadStringAsync(new Uri("https://twitter.com/statuses/update.xml"), statusupdate);
        this.Dispatcher.BeginInvoke(() => ClearTextBoxValue());
    }

    5) In the above code, we have an event handler which will be fired once the request is completed (remember, SL is asynch!). So in myService_UploadStringCompleted, we will just toggle the progress bar and change some status text to say that it's done. StatusMessage is just another TextBlock conveniently positioned on the page.

    void myService_UploadStringCompleted(object sender, UploadStringCompletedEventArgs e)
    {
        if (e.Error != null)
        {
            StatusMessage.Text = "Status Update Failed: " + e.Error.Message.ToString();
        }
        else
        {
            toggleProgressBar(false);
            TwitterCredentialsSubmit();
        }
    }

    6) Now let us look at fetching the friends updates of the logged-in user and displaying them in a datagrid. Just define a data grid and set its AutoGenerateColumns to true (a sketch of this XAML appears at the end of this post).

    7) Let us first create a data structure for use with fetching the friends timeline. The code is something like this:

    namespace MaxTwitter.Classes
    {
        public class Status
        {
            public Status() {}
            public string ID { get; set; }
            public string Text { get; set; }
            public string Source { get; set; }
            public string UserID { get; set; }
            public string UserName { get; set; }
        }
    }

    You can add as many fields as you want; for the list of fields, have a look here. It will ask for your Twitter username and password; just provide them, and it will display the xml file. Go through it, pick and choose your desired fields, and include them in your data structure.

    8) The web client request for this is similar to the one we saw in step 4. Just change the uri in the last-but-one step to https://twitter.com/statuses/friends_timeline.xml. Be sure to change the event handler to something else, and within that we will use XLINQ to fetch the required details for us. Here is how that handler parses the details:

    public void parseXML(string text)
    {
        XDocument xdoc;
        if (text.Length > 0)
            xdoc = XDocument.Parse(text);
        else
            xdoc = XDocument.Parse(@"I USED MY OWN LOCAL COPY OF XML FILE HERE FOR OFFLINE TESTING");
        statusList = new List<Status>();
        statusList = (from status in xdoc.Descendants("status")
                      select new Status
                      {
                          ID = status.Element("id").Value,
                          Text = status.Element("text").Value,
                          Source = status.Element("source").Value,
                          UserID = status.Element("user").Element("id").Value,
                          UserName = status.Element("user").Element("screen_name").Value,
                      }).ToList();
        //MessageBox.Show(text);
        //this.Dispatcher.BeginInvoke(() => CallDatabindMethod(StatusCollection));
        //MessageBox.Show(statusList.Count.ToString());
        DataGridStatus.ItemsSource = statusList;
        StatusMessage.Text = "Datagrid refreshed.";
        toggleProgressBar(false);
    }

    In the event handler, we call this method with e.Result.ToString(). Parsing XML files using LINQ is super cool, I love it.

    I am stopping here for this post. I will post the completed files in the next post, as I've worked on a few more features on this page and don't want to confuse you. See you soon in my next post, where we will play with Twitter lists. Have a nice day!

    Technorati Tags: Silverlight,LINQ,XLINQ,Twitter API,Twitter,Network Credentials
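    As promised in step 6, here is a minimal sketch of the DataGrid markup. The sdk namespace mapping is the standard Silverlight 4 one; the grid name matches the DataGridStatus used in parseXML, and the sizes are placeholders:

    <!-- add to the UserControl element:
         xmlns:sdk="http://schemas.microsoft.com/winfx/2006/xaml/presentation/sdk" -->
    <sdk:DataGrid Name="DataGridStatus" AutoGenerateColumns="True" Width="600" Height="300" />

    With AutoGenerateColumns set to true, one column is created per public property of the Status class, so no column definitions are needed for a first pass.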

    Read the article

  • JAVA image transfer problem

    - by user579098
    Hi, I have a school assignment: send a jpg image, split it into groups of 100 bytes, corrupt it, use a CRC check to locate the errors, and re-transmit until it is eventually rebuilt into its original form. It's practically ready; however, when I check out the new images, they appear with errors. I would really appreciate it if someone could look at my code below and maybe locate this logical mistake, as I can't understand what the problem is; everything looks OK to me. The file with all the data needed, including photos and error patterns, can be downloaded from this link: http://rapidshare.com/#!download|932tl2|443122762|Data.zip|739 Thanks in advance, Stefan. P.S. Don't forget to change the paths in the code for the image and error files.

    package networks;

    import java.io.*; // for file reader
    import java.util.zip.CRC32; // CRC32 IEEE (Ethernet)

    public class Main {

        /**
         * Reads a whole file into an array of bytes.
         * @param file The file in question.
         * @return Array of bytes containing file data.
         * @throws IOException Message contains why it failed.
         */
        public static byte[] readFileArray(File file) throws IOException {
            InputStream is = new FileInputStream(file);
            byte[] data = new byte[(int) file.length()];
            is.read(data);
            is.close();
            return data;
        }

        /**
         * Writes (or overwrites if exists) a file with data from an array of bytes.
         * @param file The file in question.
         * @param data Array of bytes containing the new file data.
         * @throws IOException Message contains why it failed.
         */
        public static void writeFileArray(File file, byte[] data) throws IOException {
            OutputStream os = new FileOutputStream(file, false);
            os.write(data);
            os.close();
        }

        /**
         * Converts a long value to an array of bytes.
         * @param data The target variable.
         * @return Byte array conversion of data.
         * @see http://www.daniweb.com/code/snippet216874.html
         */
        public static byte[] toByta(long data) {
            return new byte[] {
                (byte)((data >> 56) & 0xff),
                (byte)((data >> 48) & 0xff),
                (byte)((data >> 40) & 0xff),
                (byte)((data >> 32) & 0xff),
                (byte)((data >> 24) & 0xff),
                (byte)((data >> 16) & 0xff),
                (byte)((data >> 8) & 0xff),
                (byte)((data >> 0) & 0xff),
            };
        }

        /**
         * Converts an array of bytes to a long value.
         * @param data The target variable.
         * @return Long value conversion of data.
         * @see http://www.daniweb.com/code/snippet216874.html
         */
        public static long toLong(byte[] data) {
            if (data == null || data.length != 8) return 0x0;
            return (long)(
                // (Below) convert to longs before shift because digits
                // are lost with ints beyond the 32-bit limit
                (long)(0xff & data[0]) << 56 |
                (long)(0xff & data[1]) << 48 |
                (long)(0xff & data[2]) << 40 |
                (long)(0xff & data[3]) << 32 |
                (long)(0xff & data[4]) << 24 |
                (long)(0xff & data[5]) << 16 |
                (long)(0xff & data[6]) << 8 |
                (long)(0xff & data[7]) << 0
            );
        }

        public static byte[] nextNoise() {
            byte[] result = new byte[100];
            // copy a frame's worth of data (or remaining data if it is less than frame length)
            int read = Math.min(err_data.length - err_pstn, 100);
            System.arraycopy(err_data, err_pstn, result, 0, read);
            // if read data is less than frame length, reset position and add remaining data
            if (read < 100) {
                err_pstn = 100 - read;
                System.arraycopy(err_data, 0, result, read, err_pstn);
            } else // otherwise, increase position
                err_pstn += 100;
            // return noise segment
            return result;
        }

        /**
         * Given some original data, it is purposefully corrupted according to a
         * second data array (which is read from a file). In pseudocode:
         * corrupt = original xor corruptor
         * @param data The original data.
         * @return The new (corrupted) data.
         */
        public static byte[] corruptData(byte[] data) {
            // get the next noise sequence
            byte[] noise = nextNoise();
            // finally, xor data with noise and return result
            for (int i = 0; i < 100; i++) data[i] ^= noise[i];
            return data;
        }

        /**
         * Given an array of data, a packet is created. In pseudocode:
         * frame = corrupt(data) + crc(data)
         * @param data The original frame data.
         * @return The resulting frame data.
         */
        public static byte[] buildFrame(byte[] data) {
            // pack = [data]+crc32([data])
            byte[] hash = new byte[8];
            // calculate crc32 of data and copy it to byte array
            CRC32 crc = new CRC32();
            crc.update(data);
            hash = toByta(crc.getValue());
            // create a byte array holding the final packet
            byte[] pack = new byte[data.length + hash.length];
            // create the corrupted data
            byte[] crpt = new byte[data.length];
            crpt = corruptData(data);
            // copy corrupted data into pack
            System.arraycopy(crpt, 0, pack, 0, crpt.length);
            // copy hash into pack
            System.arraycopy(hash, 0, pack, data.length, hash.length);
            // return pack
            return pack;
        }

        /**
         * Verifies frame contents.
         * @param frame The frame data (data+crc32).
         * @return True if frame is valid, false otherwise.
         */
        public static boolean verifyFrame(byte[] frame) {
            // allocate hash and data variables
            byte[] hash = new byte[8];
            byte[] data = new byte[frame.length - hash.length];
            // read frame into hash and data variables
            System.arraycopy(frame, frame.length - hash.length, hash, 0, hash.length);
            System.arraycopy(frame, 0, data, 0, frame.length - hash.length);
            // get crc32 of data
            CRC32 crc = new CRC32();
            crc.update(data);
            // compare crc32 of data with crc32 of frame
            return crc.getValue() == toLong(hash);
        }

        /**
         * Transfers a file through a channel in frames and reconstructs it into a new file.
         * @param jpg_file File name of target file to transfer.
         * @param err_file The channel noise file used to simulate corruption.
         * @param out_file The name of the newly-created file.
         * @throws IOException
         */
        public static void transferFile(String jpg_file, String err_file, String out_file) throws IOException {
            // read file data into global variables
            jpg_data = readFileArray(new File(jpg_file));
            err_data = readFileArray(new File(err_file));
            err_pstn = 0;
            // variable that will hold the final (transfered) data
            byte[] out_data = new byte[jpg_data.length];
            // holds the current frame data
            byte[] frame_orig = new byte[100];
            byte[] frame_sent = new byte[100];
            // send file in chunks (frames) of 100 bytes
            for (int i = 0; i < Math.ceil(jpg_data.length / 100); i++) {
                // copy jpg data into frame and init first-time switch
                System.arraycopy(jpg_data, i * 100, frame_orig, 0, 100);
                boolean not_first = false;
                System.out.print("Packet #" + i + ": ");
                // repeat getting same frame until frame crc matches with frame content
                do {
                    if (not_first) System.out.print("F");
                    frame_sent = buildFrame(frame_orig);
                    not_first = true;
                } while (!verifyFrame(frame_sent));
                // usually, you'd constrain this by time to prevent infinite loops (in
                // case the channel is so wacked up it doesn't get a single packet right)
                // copy frame to image file
                System.out.println("S");
                System.arraycopy(frame_sent, 0, out_data, i * 100, 100);
            }
            System.out.println("\nDone.");
            writeFileArray(new File(out_file), out_data);
        }

        // global variables for file data and pointer
        public static byte[] jpg_data;
        public static byte[] err_data;
        public static int err_pstn = 0;

        public static void main(String[] args) throws IOException {
            // list of jpg files
            String[] jpg_file = {
                "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo1.jpg",
                "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo2.jpg",
                "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo3.jpg",
                "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo4.jpg"
            };
            // list of error patterns
            String[] err_file = {
                "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 1.DAT",
                "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 2.DAT",
                "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 3.DAT",
                "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 4.DAT"
            };
            // loop through all jpg/channel combinations and run tests
            for (int x = 0; x < jpg_file.length; x++) {
                for (int y = 0; y < err_file.length; y++) {
                    System.out.println("Transfering photo" + (x + 1) + ".jpg using Pattern " + (y + 1) + "...");
                    transferFile(jpg_file[x], err_file[y], jpg_file[x].replace("photo", "CH#" + y + "_photo"));
                }
            }
        }
    }

    Read the article

  • Turning PHP page calling Zend functions procedurally into Zend Framework MVC-help!

    - by Joel
    Hi guys, I posted much of this question before, but I didn't include all the Zend stuff because I thought it'd be overkill; now I'm thinking it's not easy to figure out an OO way of doing this without that code... So with that said, please forgive the verbose code. I'm learning how to use MVC and OO in general, and I have a website that is all in PHP, but most of the pages are basic static pages. I have already converted them all to views in Zend Framework, and have the controller and layout set. All is good there. The one remaining page is the main reason I did this... it in fact uses the Zend library (for a gData connection, pulling info from a Google Calendar and displaying it on the page). I don't know enough about this to know where to begin refactoring the code to fit the Zend Framework MVC model. Any help would be greatly appreciated!! The .phtml view page:

    <div id="dhtmltooltip" align="left"></div>
    <script src="../js/tooltip.js" type="text/javascript"></script>
    <div id="container">
    <div id="conten">
    <a name="C4"></a>
    <?php
    function get_desc_second_part(&$value)
    {
        list(, $val_b) = explode('==', $value);
        $value = trim($val_b);
    }

    function filterEventDetails($contentText)
    {
        $data = array();
        foreach ($contentText as $row) {
            if (strstr($row, 'When: ')) {
                ## cleaning "when" string to get date in the format "May 28, 2009" ##
                $data['duration'] = str_replace('When: ', '', $row);
                list($when, ) = explode(' to ', $data['duration']);
                $data['when'] = substr($when, 4);
                if (strlen($data['when']) > 13)
                    $data['when'] = trim(str_replace(strrchr($data['when'], ' '), '', $data['when']));
                $data['duration'] = substr($data['duration'], 0, strlen($data['duration']) - 4); // trimming time zone identifier (UTC etc.)
            }
            if (strstr($row, 'Where: ')) {
                $data['where'] = str_replace('Where: ', '', $row);
                //pr($row);
                //$where = strstr($row, 'Where: ');
                //pr($where);
            }
            if (strstr($row, 'Event Description: ')) {
                $event_desc = str_replace('Event Description: ', '', $row);
                //$event_desc = strstr($row, 'Event Description: ');
                ## Filtering event description and extracting venue, ticket urls etc from it.
                //$event_desc = str_replace('Event Description: ','',$contentText[3]);
                $event_desc_array = explode('|', $event_desc);
                array_walk($event_desc_array, 'get_desc_second_part');
                //pr($event_desc_array);
                $data['venue_url'] = $event_desc_array[0];
                $data['details'] = $event_desc_array[1];
                $data['tickets_url'] = $event_desc_array[2];
                $data['tickets_button'] = $event_desc_array[3];
                $data['facebook_url'] = $event_desc_array[4];
                $data['facebook_icon'] = $event_desc_array[5];
            }
        }
        return $data;
    }

    // load library
    require_once 'Zend/Loader.php';
    Zend_Loader::loadClass('Zend_Gdata');
    Zend_Loader::loadClass('Zend_Gdata_ClientLogin');
    Zend_Loader::loadClass('Zend_Gdata_Calendar');
    Zend_Loader::loadClass('Zend_Http_Client');

    // create authenticated HTTP client for Calendar service
    $gcal = Zend_Gdata_Calendar::AUTH_SERVICE_NAME;
    $user = "[email protected]";
    $pass = "xxxxxxxx";
    $client = Zend_Gdata_ClientLogin::getHttpClient($user, $pass, $gcal);
    $gcal = new Zend_Gdata_Calendar($client);
    $query = $gcal->newEventQuery();
    $query->setUser('[email protected]');
    $secondary = true;
    $query->setVisibility('private');
    $query->setProjection('basic');
    $query->setOrderby('starttime');
    $query->setSortOrder('ascending');
    //$query->setFutureevents('true');
    $startDate = date('Y-m-d h:i:s');
    $endDate = "2015-12-31";
    $query->setStartMin($startDate);
    $query->setStartMax($endDate);
    $query->setMaxResults(30);
    try {
        $feed = $gcal->getCalendarEventFeed($query);
    } catch (Zend_Gdata_App_Exception $e) {
        echo "Error: " . $e->getResponse();
    }
    ?>
    <h1><?php echo $feed->title; ?></h1>
    <?php echo $feed->totalResults; ?> event(s) found.
    <table width="90%" border="3" align="center">
    <tr>
    <td width="20%" align="center" valign="middle"><b>DATE</b></td>
    <td width="25%" align="center" valign="middle"><b>VENUE</b></td>
    <td width="20%" align="center" valign="middle"><b>CITY</b></td>
    <td width="20%" align="center" valign="middle"><b>DETAILS</b></td>
    <td width="15%" align="center" valign="middle"><b>LINKS</b></td>
    </tr>
    <?php
    if ((int)$feed->totalResults > 0) { // checking if at least one event is there in this date range
        foreach ($feed as $event) { // iterating through all events
            //pr($event);die;
            $contentText = stripslashes($event->content->text); // stripping any escape character
            $contentText = preg_replace('/\<br \/\>[\n\t\s]{1,}\<br \/\>/', '<br />', stripslashes($event->content->text)); // replacing multiple breaks with a single break
            //die();
            $contentText = explode('<br />', $contentText); // splitting data by break tag
            $eventData = filterEventDetails($contentText);
            $when = $eventData['when'];
            $where = $eventData['where'];
            $duration = $eventData['duration'];
            $venue_url = $eventData['venue_url'];
            $details = $eventData['details'];
            $tickets_url = $eventData['tickets_url'];
            $tickets_button = $eventData['tickets_button'];
            $facebook_url = $eventData['facebook_url'];
            $facebook_icon = $eventData['facebook_icon'];
            $title = stripslashes($event->title);
            echo '<tr>';
            echo '<td width="20%" align="center" valign="middle" nowrap="nowrap">';
            echo $when;
            echo '</td>';
            echo '<td width="20%" align="center" valign="middle">';
            if ($venue_url != '') {
                echo '<a href="' . $venue_url . '" target="_blank">' . $title . '</a>';
            } else {
                echo $title;
            }
            echo '</td>';
            echo '<td width="20%" align="center" valign="middle">';
            echo $where;
            echo '</td>';
            echo '<td width="20%" align="center" valign="middle">';
            $details = str_replace("\n", "<br>", htmlentities($details));
            $duration = str_replace("\n", "<br>", $duration);
            $detailed_description = "<b>When</b>: <br>" . $duration . "<br><br>";
            $detailed_description .= "<b>Description</b>: <br>" . $details;
            echo '<a href="javascript:void(0);" onmouseover="ddrivetip(\'' . $detailed_description . '\')" onmouseout="hideddrivetip()" onclick="return false">View Details</a>';
            echo '</td>';
            echo '<td width="20%" valign="middle">';
            if (trim($tickets_url) != '' && trim($tickets_button) != '') {
                echo '<a href="' . $tickets_url . '" target="_blank"><img src="' . $tickets_button . '" border="0" ></a>';
            }
            if (trim($facebook_url) != '' && trim($facebook_icon) != '') {
                echo '<a href="' . $facebook_url . '" target="_blank"><img src="' . $facebook_icon . '" border="0" ></a>';
            } else {
                echo '......';
            }
            echo '</td>';
            echo '</tr>';
        }
    } else { // else show 'no event found' message
        echo '<tr>';
        echo '<td width="100%" align="center" valign="middle" colspan="5">';
        echo "No event found";
        echo '</td>';
    }
    ?>
    </table>
    <h3><a href="#pastevents">Scroll down for a list of past shows.</a></h3>
    <br />
    <a name="pastevents"></a>
    <ul class="pastShows">
    <?php
    $startDate = '2005-01-01';
    $endDate = date('Y-m-d');
    /*$gcal = Zend_Gdata_Calendar::AUTH_SERVICE_NAME;
    $user = "[email protected]";
    $pass = "silverroof10";
    $client = Zend_Gdata_ClientLogin::getHttpClient($user, $pass, $gcal);
    $gcal = new Zend_Gdata_Calendar($client);
    $query = $gcal->newEventQuery();
    $query->setUser('[email protected]');
    $query->setVisibility('private');
    $query->setProjection('basic');*/
    $query->setOrderby('starttime');
    $query->setSortOrder('descending');
    $query->setFutureevents('false');
    $query->setStartMin($startDate);
    $query->setStartMax($endDate);
    $query->setMaxResults(1000);
    try {
        $feed = $gcal->getCalendarEventFeed($query);
    } catch (Zend_Gdata_App_Exception $e) {
        echo "Error: " . $e->getResponse();
    }
    if ((int)$feed->totalResults > 0) { // checking if at least one event is there in this date range
        foreach ($feed as $event) { // iterating through all events
            $contentText = stripslashes($event->content->text); // stripping any escape character
            $contentText = preg_replace('/\<br \/\>[\n\t\s]{1,}\<br \/\>/', '<br />', stripslashes($event->content->text)); // replacing multiple breaks with a single break
            $contentText = explode('<br />', $contentText); // splitting data by break tag
            $eventData = filterEventDetails($contentText);
            $when = $eventData['when'];
            $where = $eventData['where'];
            $duration = $eventData['duration'];
            $title = stripslashes($event->title);
            echo '<li class="pastShows">' . $when . " - " . $title . ", " . $where . '</li>';
        }
    }
    ?>
    </ul>
    </div>
    </div>

    Read the article

  • Tor: Stuck at Connecting to Relay Directory

    - by Ghassan
    I have never worked with Tor before. The company where I work used to allow us access to any site we wished; nonetheless, as of the beginning of this month, they installed a proxy server to filter which sites can be accessed and which ones can't. The filter isn't only on URLs, but on IPs as well; even hex-encoded IPs won't work. So after some research, I decided to use Tor. The first day I installed it, everything went smoothly and I was accessing any website I wished. Just today, everything stopped. I try to start Vidalia, and it gets stuck at "Connecting to Relay Directory". I work on the Windows 7 platform. Please help me out! Thanks in advance.

    Read the article

  • Office 2013 OCT unhandled exception when saving .RSP

    - by user52874
    I'm trying to prepare a deployment of Office 2013 Pro Plus. If I deploy an existing .msp file that was left behind by the old analyst (typing from the client):

    PS C:\> \\deploybox\software\Office2013\setup.exe /adminfile \\deploybox\software\Office2013\SWKS.MSP

    things seem to deploy just fine. If I make any changes to the .msp file by doing (all from the client):

    PS C:\> \\deploybox\software\Office2013\setup.exe /admin

    * Open SWKS.MSP
    * Make changes
    * Save under a different name, SWKS1.MSP

    I get the following error box:

    Unhandled Exception: MsiGetSummaryInformation call failed.

    And if I try to deploy the new SWKS1.MSP,

    PS C:\> \\deploybox\software\Office2013\setup.exe /adminfile \\deploybox\software\Office2013\SWKS1.MSP

    it fails with the message:

    Path or file specified with /adminfile did not contain any customization patches that apply to this product or platform.

    If I even open the old known-good .msp file SWKS.MSP and immediately save it under a new name, SWKS1.MSP, making no changes, the same thing happens. So what stupid newbie mistake am I making here? Thanks!

    Read the article

  • Outlook Web Access: "Outlook Web Access has encountered a Web browsing error"

    - by Calum
    When one of my colleagues is accessing Outlook Web Access from IE, he frequently gets an error reported: "Outlook Web Access has encountered a Web browsing error". The error report includes the following:

    Client Information
    User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; GTB5; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.04506)
    CPU Class: x86
    Platform: Win32
    System Language: en-gb
    User Language: en-gb
    CookieEnabled: true
    Mime Types:

    Exception Details
    Date: Tue Apr 6 16:46:54 UTC+0100 2010
    Message: Automation server can't create object
    Url: https://example.com/owa/x.y.z.a/scripts/premium/uglobal.js
    Line: 85

    Any idea as to what might be causing such a problem? The only solution suggested so far is "Reinstall Windows", which he'd rather avoid.

    Read the article

  • Testing radius server from Mac OS X client

    - by Calvin Froedge
    I have a RADIUS server set up on a server running Ubuntu 11.04. I have configured my switch to use the authentication server's IP (192.168.1.2) for RADIUS / 802.1x authentication, and I created a connection to test connecting from my Mac OS X client. Here is my RADIUS configuration for the client:

    client 192.168.1.0/16 {
        secret = testing123
    }

    I can successfully authenticate using both 127.0.0.1 (localhost) and 192.168.1.2 (the IP of eth1), so I know RADIUS is getting those requests. I set up a connection to test from my MacBook, and my requests are timing out. http://screencast.com/t/tMhRLS3H7 Is there a better way to test the RADIUS connection from my MacBook? Thanks!

    UPDATE: I was able to successfully test on the Mac OS X client using RadPerf. This is available as a cross-platform command-line tool.
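    For anyone testing from the server side first: the FreeRADIUS suite ships with radtest, which exercises the same request path. A sketch, assuming a test user has been created in the users file (the user name and password below are placeholders):

    # run from the Ubuntu box, or any machine with freeradius-utils installed
    # usage: radtest <user> <password> <server>[:port] <nas-port-number> <secret>
    radtest testuser testpassword 192.168.1.2 0 testing123

    A reply of Access-Accept confirms the server and shared secret are good, which narrows a timeout down to the network path or client definition for the Mac.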

    Read the article

  • How to fix Solr - Server is shutting down issue?

    - by Krunal
    I had Solr 4.1 running on Windows Server 2008 R2, deployed on Tomcat. However, today it stopped suddenly, and accessing Solr now gives the following error:

    HTTP Status 503 - Server is shutting down
    type Status report
    message Server is shutting down
    description The requested service is not currently available.

    Looking further into the logs, we found the following.

    Log File: tomcat7-stderr.2013-05-09.txt

    May 09, 2013 8:00:40 PM org.apache.solr.core.CoreContainer finalize
    SEVERE: CoreContainer was not shutdown prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=2221663

    Log File: catalina.2013-05-09.txt

    May 09, 2013 7:59:25 PM org.apache.solr.core.SolrResourceLoader <init>
    INFO: new SolrResourceLoader for directory: 'c:\solrdir\'
    May 09, 2013 7:59:29 PM org.apache.solr.common.SolrException log
    SEVERE: Exception during parsing file: null:org.xml.sax.SAXParseException; systemId: file:/c:/solr/solr.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
    at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
    at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source)
    at org.apache.solr.core.Config.<init>(Config.java:121)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:428)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404)
    at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:336)
    at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:98)
    at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:281)
    at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262)
    at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:107)
    at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4656)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5309)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
    at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977)
    at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655)
    at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
    at java.util.concurrent.FutureTask.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    May 09, 2013 7:59:29 PM org.apache.solr.servlet.SolrDispatchFilter init
    SEVERE: Could not start Solr. Check solr/home property and the logs
    May 09, 2013 7:59:29 PM org.apache.solr.common.SolrException log
    SEVERE: null:org.apache.solr.common.SolrException:
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:431)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404)
    at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:336)
    at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:98)
    at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:281)
    at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262)
    at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:107)
    at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4656)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5309)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
    at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977)
    at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655)
    at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
    at java.util.concurrent.FutureTask.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    Caused by: org.xml.sax.SAXParseException; systemId: file:/c:/solrdir/solr.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
    at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
    at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source)
    at org.apache.solr.core.Config.<init>(Config.java:121)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:428)
    ... 20 more
    May 09, 2013 7:59:29 PM org.apache.solr.servlet.SolrDispatchFilter init
    INFO: SolrDispatchFilter.init() done
    May 09, 2013 7:59:29 PM org.apache.catalina.startup.HostConfig deployDirectory
    INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\docs
    May 09, 2013 7:59:30 PM org.apache.catalina.startup.HostConfig deployDirectory
    INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\manager
    May 09, 2013 7:59:30 PM org.apache.catalina.startup.HostConfig deployDirectory
    INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\ROOT
    May 09, 2013 7:59:30 PM org.apache.coyote.AbstractProtocol start
    INFO: Starting ProtocolHandler ["http-bio-8983"]
    May 09, 2013 7:59:30 PM org.apache.coyote.AbstractProtocol start
    INFO: Starting ProtocolHandler ["ajp-bio-8009"]
    May 09, 2013 7:59:30 PM org.apache.catalina.startup.Catalina start
    INFO: Server startup in 9578 ms
    May 09, 2013 8:00:40 PM org.apache.solr.core.CoreContainer finalize
    SEVERE: CoreContainer was not shutdown prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=2221663

    Any idea what could be wrong and how to fix?
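    Not an official diagnosis, but the trace itself is fairly specific: a SAXParseException at line 2, column 6 of solr.xml complaining that "the processing instruction target matching [xX][mM][lL] is not allowed" is the standard Xerces error when the <?xml ... ?> declaration is not the very first thing in the file, typically because a blank line, stray bytes, or a UTF-8 BOM ended up in front of it (a sudden stop can leave the file in this state, since Solr rewrites a persistent solr.xml on shutdown). A healthy Solr 4.x solr.xml begins like this, with the declaration at byte zero; the core names are placeholders:

    <?xml version="1.0" encoding="UTF-8" ?>
    <solr persistent="true">
      <cores adminPath="/admin/cores" defaultCoreName="collection1">
        <core name="collection1" instanceDir="collection1" />
      </cores>
    </solr>

    Opening c:\solrdir\solr.xml in a hex-capable editor and removing anything before the first "<" would be the first thing to try, restoring the file from backup the second.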

    Read the article

  • Sharepoint "List does not exist" after site import

    - by Jeff Sacksteder
    I'm trying to copy a site from a production WSS 3.0 server to a testing server in a different farm to try out some changes. The export using stsadm works as expected; I create the target site in the test farm, and the import finishes without error. The site in question makes use of some of the 'SharePoint Boost' products, and those are installed and activated on the destination server. When I try to access the site, I receive the error:

    "List does not exist. The page you selected contains a list that does not exist. It may have been deleted by another user."

    Any specific guidance would be helpful, but this is the most useless message I've seen. What's the process for troubleshooting this? Is there a more verbose log somewhere that contains more information?

    Read the article

  • How to migrate from Natara DayNotez for Pocket PC / Windows Mobile

    - by piggymouse
    I've been using DayNotez as my notes manager since the old Palm PDA days. When I moved to Windows Mobile, I installed DayNotez there and migrated from the Palm version. Now I wish to move away from DayNotez altogether (I currently consider Evernote a decent cross-platform tool). The problem is, DayNotez doesn't let me export the notes (unless I want to transfer them one by one, which is a pain). Natara offers an export tool for Windows, but it only works with Palm HotSync (as it reads from the backed-up PDB file). DayNotez Desktop for Windows stores its local DB under the "My Documents\Natara\DayNotez\" directory in a file named "[device name] DayNotez.dnz". A quick look inside the file turns up the string "Standard Jet DB" near the beginning, but I couldn't open it as a regular JET/MDB file. Any help would be greatly appreciated.

    Read the article

  • Can't install SQL Express 2008 R2 - caspol.exe application error - the application failed to initialize

    - by Nir
    I'm trying to install SQL Server Express 2008 R2 on Windows Server 2003 (Enterprise Edition). I get the following error message:

    Title: caspol.exe - Application Error
    Text: The application failed to initialize properly (0x000007b). Click on OK to terminate the application.

    I get the same error message both when downloading the installer and running it, and when using the Web Platform Installer. All the pages on the internet I've found about similar problems say it's a corrupt .NET installation issue. This server runs multiple .NET apps and I've never had any problems with any of them. I've uninstalled and reinstalled .NET (causing a painful outage) and nothing changed. Does anyone here have any idea what might cause this?

    Update 1: additional information I forgot to include: 32-bit version of Windows running in a virtual machine, no antivirus.
    Update 2: when running caspol.exe from the command line I get the same error.

    Read the article

  • IIS 7.5 URL Rewrite - missing “route to server” option

    - by Martin
    (Question moved from StackOverflow) I am running Windows 7 Ultimate and have activated IIS (version 7.5.7600.16385). I have installed the following modules using Microsoft Web Platform Installer 2.0:

    * IIS URL Rewrite Module 2
    * Microsoft Application Request Routing Version 2 for IIS 7
    * Microsoft Web Farm Framework Version 1 for IIS 7
    * Microsoft External Cache Version 1 for IIS 7

    Now I am trying to configure a URL Rewrite rule using the "Route to Server" option. However, in the Edit Inbound Rule page, there is no such option; all I have available are:

    * Rewrite
    * None
    * Redirect
    * Custom Response
    * Abort Request

    Why is there no "Route to Server" option?

    Read the article

  • The VPS is running, but the domain name is down?

    - by Tom
    Hi, I created a VPS on vps.net and installed CentOS 5.4 x64 LAMP. I chose the root password and the server started running. I added my domain name in the VPS domain section, and I also added the two name servers at the registrar where I bought the domain name. The problem is that I still can't access the domain, so I think that's because of a mistake in the DNS template that is created automatically by the VPS provider. Here it is:

    TYPE | NAME    | TARGET                | MX PRIORITY | ACTIONS
    -----+---------+-----------------------+-------------+--------
    SOA  | @       | dns1.vps.net.         |             |
    NS   | @       | dns1.vps.net.         |             |
    NS   | @       | dns2.vps.net.         |             |
    MX   | @       | mail.www.XXXXXXX.com. | 10          | delete
    A    | @       | 85.128.137.292        |             | delete
    A    | www     | 85.128.137.292        |             | delete
    A    | mail    | 85.128.137.292        |             | delete
    A    | webmail | 85.128.137.292        |             | delete
    A    | ftp     | 85.128.137.292        |             | delete

    The VPS is working fine, and I can access the Webmin panel via the IP, so what's wrong with my parameters? Thanks

    Read the article

  • Free Dynamic DNS Nameservers

    - by Maxim Zaslavsky
    I recently set up a home server that I want to use as my primary hosting platform. So far, I've mapped some domains to it by setting up A records that point to my home IP. However, as my home IP can change randomly and without notice, I'm worried about downtime. Thus, I'm looking for a dynamic DNS solution. So far, I've set up DynDNS, but I haven't found a way to use dynamic DNS with an existing domain. Are there any free dynamic DNS nameserver services available?

    Read the article

  • Oracle Internet Directory - Required Schemas already loaded

    - by Brandon Kreisel
    After a successful installation of Oracle Internet Directory (OID), I attempted to remove and re-install it with a different configuration. First I ran ./runInstaller.sh -deinstall from the OID middleware directory to uninstall. Then I ran the installer again, but it complained at the Specify Schema Database step. Upon connecting to the database, the installer threw:

    INST-5174: Required schemas are already loaded in the specified database.

    I connected to the target database and removed the OID schemas I knew of:

    SQL> drop user ODS cascade;
    SQL> drop user ODSSM cascade;

    That did not work, and the error still appears. What steps am I missing?

    Note: the database is 11g and was brand new before installing OID, so there is no other data. select * from all_users doesn't show any other schemas related to OID; from what I can tell, the latest user creation date is Oct 2010.
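    One hedged suggestion for narrowing this down (the registry table name is from memory of how 11g RCU-provisioned installs track their schemas, so treat it as an assumption): the installer may be consulting a schema registry rather than the user list, and sorting users by creation date can also surface leftovers that a plain all_users scan obscures:

    -- list users newest-first to spot leftover OID accounts
    SELECT username, created FROM dba_users ORDER BY created DESC;

    -- if present, this registry is what RCU-style installers consult;
    -- leftover ODS/ODSSM rows here would explain INST-5174
    SELECT * FROM schema_version_registry;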

    Read the article

  • How to easily change columns into rows in Excel?

    - by DavidD
    I have a huge Excel table that I need to transform into paragraphs for a Word report, and I can't find an efficient way to do it. The source looks like this: (screenshot not preserved). And I would ultimately need something like this, i.e. through a pivot table: (screenshot not preserved). Note that "Item C", which doesn't have any description values, is skipped. Now, to get there I believe I need to transform my source into this intermediate format, which has one description per line: (screenshot not preserved). How do I get from the source to the intermediate format in an efficient way? Or maybe there is an easier way to produce the target format that I don't know of? Any help is welcome!
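    Since getting to the intermediate format boils down to an unpivot, here is a minimal VBA sketch of that transformation. The sheet names, and the assumption that the item sits in column A with its descriptions in columns B through D, are placeholders to adjust to the real layout:

    Sub Unpivot()
        Dim src As Worksheet, dst As Worksheet
        Dim r As Long, c As Long, outRow As Long
        Set src = Worksheets("Sheet1")   ' source table (assumed layout)
        Set dst = Worksheets("Sheet2")   ' intermediate output
        outRow = 1
        For r = 2 To src.Cells(src.Rows.Count, 1).End(xlUp).Row  ' each source row
            For c = 2 To 4                                       ' each description column
                If src.Cells(r, c).Value <> "" Then
                    dst.Cells(outRow, 1).Value = src.Cells(r, 1).Value  ' item name
                    dst.Cells(outRow, 2).Value = src.Cells(r, c).Value  ' one description
                    outRow = outRow + 1
                End If
            Next c
        Next r
    End Sub

    Items with no description values (like "Item C") produce no output rows, which matches the desired behavior. In Excel 2010 and later, the Power Query add-in's Unpivot Columns command does the same thing without code.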

    Read the article

< Previous Page | 290 291 292 293 294 295 296 297 298 299 300 301  | Next Page >