Search Results

Search found 29110 results on 1165 pages for 'open leadership'.

  • Grid pathfinding with a lot of entities

    - by Vee
    I'd like to explain this problem with a screenshot from a released game, DROD: Gunthro's Epic Blunder, by Caravel Games. The game is turn-based and tile-based. I'm trying to create something very similar (a clone of the game), and I've got most of the fundamentals done, but I'm having trouble implementing pathfinding. Look at the screenshot. The guys in yellow are friendly, and want to kill the roaches. Every turn, every guy in yellow pathfinds to the closest roach, and every roach pathfinds to the closest guy in yellow. By closest I mean the target with the shortest path, not a simple distance calculation. All of this without any kind of slowdown when loading the level or when passing turns. And all of the entities change position every turn. Also (not shown in screenshot), there can be doors that open and close and change the level's layout. Impressive. I've tried implementing pathfinding in my clone. First attempt was making every roach find a path to a yellow guy every turn, using a breadth-first search algorithm. Obviously incredibly slow with more than a single roach, and it would get exponentially slower with more than a single yellow guy. Second attempt was making every yellow guy generate a pathmap (still breadth-first search) every time he moved. Worked perfectly with multiple roaches and a single yellow guy, but adding more yellow guys made the game slow and unplayable. Last attempt was implementing JPS (jump point search). Every entity would individually calculate a path to its target. Fast, but with a limited number of entities. Having less than half the entities in the screenshot would make the game slow. And also, I had to get the "closest" enemy by calculating distance, not shortest path. I've asked on the DROD forums how they did it, and a user replied that it was breadth-first search. The game is open source, and I took a look at the source code, but it's C++ (I'm using C#) and I found it confusing. I don't know how to do it. Every approach I tried isn't good enough. And I believe that DROD generates global pathmaps, somehow, but I can't understand how every entity finds the best individual path to other entities that move every turn. What's the trick? This is a reply I just got on the DROD forums: Without having looked at the code I'd wager it's two (or so) pathmaps for the whole room: One to the nearest enemy, and one to the nearest friendly for every tile. There's no need to make a separate pathmap for every entity when the overall goal is "move towards nearest enemy/friendly"... just mark every tile with the number of moves it takes to the nearest target and have the entity choose the move that takes it to the tile with the lowest number. To be honest, I don't understand it that well.
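    A minimal C# sketch of the pathmap idea from that reply: a multi-source breadth-first flood fill that seeds the queue with every target tile, so one pass marks each tile with its distance to the nearest target, and every entity simply steps to the adjacent tile with the lowest value. This assumes a rectangular grid of walkable tiles; all names here are illustrative, not taken from DROD's source.

    using System.Collections.Generic;

    static class Pathmap
    {
        // Builds ONE distance map shared by all entities hunting these targets:
        // dist[x, y] = number of moves from (x, y) to the nearest target.
        public static int[,] Build(bool[,] walkable, IEnumerable<(int x, int y)> targets)
        {
            int w = walkable.GetLength(0), h = walkable.GetLength(1);
            var dist = new int[w, h];
            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                    dist[x, y] = int.MaxValue;

            var queue = new Queue<(int x, int y)>();
            foreach (var t in targets)          // seed the BFS with every target at once
            {
                dist[t.x, t.y] = 0;
                queue.Enqueue(t);
            }

            var dirs = new (int dx, int dy)[] { (1, 0), (-1, 0), (0, 1), (0, -1) };
            while (queue.Count > 0)
            {
                var (cx, cy) = queue.Dequeue();
                foreach (var (dx, dy) in dirs)
                {
                    int nx = cx + dx, ny = cy + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (!walkable[nx, ny] || dist[nx, ny] != int.MaxValue) continue;
                    dist[nx, ny] = dist[cx, cy] + 1;   // one more move than the tile we came from
                    queue.Enqueue((nx, ny));
                }
            }
            return dist; // rebuild once per turn (or when a door changes the layout)
        }
    }

    Each roach (or yellow guy) then just compares the dist values of its four neighbors and moves to the lowest one, so the per-entity cost is constant no matter how many entities share the map.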

    Read the article

  • How do I revert updates/tweaks to get to a usable GUI?

    - by Frankenmartin
    I just installed 12.04 the other day and then ran into trouble upon restarting after installing updates. What I did before the problem occurred: I did not make many changes before this problem occurred. Changes I did make included: downloading and installing Adobe Flash Player (off topic, but: I am under the impression that Java, "C&C" and Shockwave cannot be run in Ubuntu. Could anybody verify this?). I also installed gnome-tweak-tool and used it to install several themes. These themes worked well until restarting after the update. Is it possible that one of these themes caused the problem (in combination with the update or because of the restart)? I also installed 215 updates from update manager and restarted my system. Current situation: Unity 3D is unusable since restarting after running updates. When I log in after entering my password the following things happen: the overhead panel disappears and the screen goes black for a minute; my wallpaper flashes for a couple seconds but then the screen goes black again; after another minute the wallpaper reappears but nothing else does, and I am not able to open anything or even right-click; after 5 minutes I can finally get a right-click menu; eventually a box comes up warning about a Compiz failure and asking to let it quit--which I did. Using the right-click functionality I was able to create a new folder on the desktop and use this to open a file browser. In doing so I noticed that the downloads I had made were missing (music, image files, etc., even after unpacking several .zip and .rar files) even though I believe that everything should still be there. Any new windows that I create are un-closable/-minimizable/-movable/etc., because the window bars are missing. I have tried rebooting several times but the results are the same. I was able to browse some of the System Settings windows by clicking on the wallpaper link in the right-click menu. In doing so I navigated into the update manager and noticed that updates were selected to be accepted from some "unsupported sources". I do not recall setting these options myself and wonder why these--potentially dangerous--options would be selected by default. Unity 2D is usable but not free of bugs--I stumbled across the ability to log into a Unity 2D session while trying to log into Unity 3D. So far I have only noticed one bug in Unity 2D: the close, minimize and maximize buttons are invisible--however, they are still usable despite being invisible. What I need: I'm very new to Linux and Ubuntu and am still in the feeling-out stages. As such I will have some trouble answering clarifying questions. I haven't used the terminal yet and would probably not be comfortable using it without very clear instructions. What I do need is to know how I can roll back/remove all those updates so I can use my computer regularly again. I do believe that I could follow step-by-step instructions as long as they are clear and concise, if someone knows what my problem is.

    Read the article

  • Company wants to write a custom project management tool, rather than use a third-party product.

    - by Jason Evans
    At the company where I work, we really want to get into the agile methodology for developing software. One thing that I'm not excited about is the fact that management wants us to build a custom project management feature inside the company's Intranet. I think this is a total waste of time. There are many great third-party tools available (e.g. Axosoft OnTime) that can do everything we need, and more. For how much development time it would cost us to build our own project management module, we could buy numerous licences for a third-party product. One concern is that, whilst we are writing code for a client, and using our custom Intranet project management module, we find bugs in the module that need fixing ASAP. That means having to stop work on the client code to fix the Intranet. That just puts shivers down my spine. Another worry I have is lack of functionality. This custom module is going to be so basic that it will just feel really crap to use. That might sound a bit snooty, but for goodness sake, many third-party tools are so feature rich that the idea of having to write our own tool makes me feel very uneasy. In fact, I can't be bothered. What do you guys think? I'm going to raise this issue with my boss, since I feel it's such an important topic to talk about. EDIT: Thanks for the great responses, much appreciated. To summarize some of them: Money: naturally my boss does want to save money, by not forking out a few hundred £'s for licences. However, for us to write a custom tool, it will take x number of days, multiplied by approx £500, which is our daily cost. I don't see the business value in this. Management have mentioned that they want to sell the Intranet as a product in the future, but it's so custom to our needs (and downright basic) that in order to give it to another client, I can see us having to fork a version of the code and rebuild the majority of it anyway. So it's not like we're gaining anything there in reuse. Features: having our own custom module means no feature bloat - only the functionality we require will be in the product. My issue is that there are plenty of free, open-source project management tools out there with minimal features already. So even if cost is an issue, we could look into open source. Again it all boils down to the fact that I don't see the point in writing a project management tool in this day and age. It's a bit like writing your own web browser - why? What's the point? Although management are asking for this tool, just because they are, it does not mean I'm going to please them and do it just because they asked for it. If something does not make sense, then I will raise it as a concern. At the end of the day, it's the developers who write the code; it's the developers who make money for a business. Thus, as far as I'm concerned, the devs have a very big role in deciding how a company should manage projects and what tools are used. "I am Spartan, argh!" :) Hmm, I've not been able to make this question a wiki for some reason, thus I'm going to have to pick an answer to accept. Cheers. Jas.

    Read the article

  • More Free Apps Bound for the Marketplace

    - by Scott Kuhl
    Microsoft has announced they are raising the limit on free applications a developer can submit from 5 to 100. But what does that really mean? First, let's look at the reason for the limitation. The iTunes Store and the Android Market both have a lot more applications available than the Windows Phone Marketplace. But that says nothing about the quality of those applications. I attended a couple of pre-launch events and Microsoft representatives were clearly told to send a message: we don't want a bunch of junky applications that do nothing but spam the marketplace. That was the reason for the 5 free application limit. Okay, so what has the result been? Well, there are still fart apps, but there is no sign of a developer flooding the market with 1500 wallpaper applications or 1000 copies of the same application all pointed at different RSS feeds. On the other hand, there are developers who want to release real free apps but are constrained by the 5 app limit. So why did Microsoft change its mind? Is it to get the count of applications up, or is it to make developers happy? Windows Phone Marketplace is growing fast but it's a long way behind the other guys. I don't think Microsoft wants to have 100,000 apps show up in the next 3 months if they are loaded with copycat apps. Those numbers will get picked apart quickly and the press will start complaining about the same problems the Android Market has. I do think the bump was at developer request. Microsoft is usually good about listening to developer feedback, but has been pretty slow about it at times. And from a financial perspective, there will be more apps that Microsoft has to review that they will see no profit on. At least not until they bake in an advertising model connected to Bing. Ultimately, what does this mean for the future? Well, there are developers out there looking to release more than 5 simple free apps, so I think we will see more hobby apps. And there are developers out there trying to make money from advertising instead of sales, so I think we will see more of those also. But the category that I think will grow the fastest is free versions of paid applications that are the same as the trial version of the application. While technically that makes no sense, it's purely a marketing move. Free apps get downloaded a lot more than paid apps, even with a trial mode. It always surprises me how little consumers are willing to spend on mobile apps. How many reviews of applications have you seen that say something like "a bit pricey at $1.99"? Really? Have you looked at how much you spend on your phone and plan? I always thought the trial mode baked into Windows Marketplace was a good idea. So I'm not sure how the more open free market will play out. In the long run though, I won't be surprised to see a Bing mobile ad model show up so Microsoft can capitalize on the more open and free Windows Marketplace. Bonus: The Oatmeal on How I Feel About Buying Apps

    Read the article

  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
    Eighteen months into my time as a PASS Director I'm especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community. Virtual Chapters In the first six months of 2013 VCs have hosted more than 50 webinars, offering free technical education to over 6200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their "Summer Performance Palooza", an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VC's web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment in the future of the VCs. A few new VCs are in the planning stages, including one focused on Security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013. Be sure to ask your VC leader for the code to save $200 on Summit registration. 24 Hours of PASS The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now. And we will be using the GoToWebinar platform for 24HOP also. Business Analytics Conference April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world's growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I'm very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place. Global SQL Community Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts. Budgets The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I'm satisfied with the decisions we made and think we are investing in the right activities and programs. Next Up The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24.
If you are considering running for the Board you can visit the PASS elections site to learn more about the election process. And I encourage anyone considering running to reach out to current and past Board members to learn about what the role entails. Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.

    Read the article

  • Stepping outside Visual Studio IDE [Part 2 of 2] with Mono 2.6.4

    - by mbcrump
    Continuing part 2 of my Stepping outside the Visual Studio IDE series is the open-source Mono Project. Mono is a software platform designed to allow developers to easily create cross-platform applications. Sponsored by Novell (http://www.novell.com/), Mono is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime. A growing family of solutions and an active and enthusiastic contributing community is helping position Mono to become the leading choice for development of Linux applications. So, to clarify: you can use Mono to develop .NET applications that will run on Linux, Windows or Mac. It's basically an IDE that has roots in Linux. Let's first look at the compatibility. Compatibility If you already have an application written in .NET, you can scan it with the Mono Migration Analyzer (MoMA) to determine if your application uses anything not supported by Mono. The current release version of Mono is 2.6 (released December 2009). The easiest way to describe what Mono currently supports is: everything in .NET 3.5 except WPF and WF, with limited WCF. Here is a slightly more detailed view, by .NET framework version:
    Implemented: C# 3.0, System.Core, LINQ, ASP.NET 3.5, ASP.NET MVC, C# 2.0 (generics), Core Libraries 2.0 (mscorlib, System, System.Xml), ASP.NET 2.0 (except WebParts), ADO.NET 2.0, Winforms/System.Drawing 2.0 (does not support right-to-left), C# 1.0, Core Libraries 1.1 (mscorlib, System, System.Xml), ASP.NET 1.1, ADO.NET 1.1, Winforms/System.Drawing 1.1
    Partially implemented: LINQ to SQL (mostly done, but a few features missing), WCF (Silverlight 2.0 subset completed)
    Not implemented: WPF (no plans to implement), WF (WF 4 will be implemented instead in future versions of Mono), System.Management (does not map to Linux), System.EnterpriseServices (deprecated)
    Links to documentation: The Official Mono FAQs. Links to binaries: Mono IDE, latest version 2.6.4. That's it; nothing more is required to compile and run .NET code in Linux. Installation After landing on the Mono project home page, you can select which platform you want to download. I typically pick the Virtual PC image since I spend all of my day using Windows 7. Go ahead and pick whatever version is best for you. The Virtual PC image comes with SUSE Linux. Once the image is launched, you will see the following: I'm not going to go through each option, but it's best to start with the "Start Here" icon. It will provide you with information on new projects or existing VS projects. After you get Mono installed, it's probably a good idea to run a quick Hello World program to make sure everything is set up properly. This lets you know that your Mono installation is working before you try writing or running a more complex application. To write a "Hello World" program, follow these steps:
    1. Start Mono Development Environment.
    2. Create a new project: File->New->Solution.
    3. Select "Console Project" in the category list.
    4. Enter a project name into the Project name field, for example, "HW Project".
    5. Click "Forward".
    6. Click "Packaging", then OK.
    You should have a screen very similar to a VS console app. Click the "Run" button in the toolbar (Ctrl-F5). Look in the Application Output and you should see "Hello World!" Your screen should look like the screen below. That should do it for a simple console app in Mono.
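    For reference, the console project those steps generate boils down to a few lines of C#; a minimal version looks like the sketch below (the namespace and class names are just typical template defaults, so yours may differ):

    using System;

    namespace HWProject
    {
        class MainClass
        {
            public static void Main(string[] args)
            {
                // The template's single statement: print to the Application Output pane.
                Console.WriteLine("Hello World!");
            }
        }
    }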
    To test out an ASP.NET application, simply copy your code to a new directory in /srv/www/htdocs, then visit the following URL: http://localhost/directoryname/page.aspx, where directoryname is the directory where you deployed your application and page.aspx is the initial page for your software. Databases: you can continue to use a SQL Server database, or use MySQL, PostgreSQL, Sybase, Oracle, IBM's DB2 or SQLite. Conclusion: I hope this brief look at the Mono IDE helps someone get acquainted with development outside of VS. As always, I welcome any suggestions or comments.

    Read the article

  • Why Is Hibernation Still Used?

    - by Jason Fitzpatrick
    With the increased prevalence of fast solid-state hard drives, why do we still have system hibernation? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Moses wants to know why he should use hibernate on a desktop machine: I've never quite understood the original purpose of the Hibernation power state in Windows. I understand how it works, what processes take place, and what happens when you boot back up from Hibernate, but I've never truly understood why it's used. With today's technology, most notably with SSDs, RAM and CPUs becoming faster and faster, a cold boot on a clean/efficient Windows installation can be pretty fast (for some people, mere seconds from pushing the power button). Standby is even faster, sometimes instantaneous. Even SATA drives from 5-6 years ago can accomplish these fast boot times. Hibernation seems pointless to me [on desktop computers] when modern technology is considered, but perhaps there are applications that I'm not considering. What was the original purpose behind hibernation, and why do people still use it? Quite a few people use hibernate, so what is Moses missing in the big picture? The Answer SuperUser contributor Vignesh4304 writes: Normally hibernate mode saves your computer's memory (open documents, running applications, and so on) to your hard disk and shuts down the computer, using zero power. Once the computer is powered back on, it will resume everything where you left off. You can use this mode if you won't be using the laptop/desktop for an extended period of time and you don't want to close your documents. Simple usage and purpose: save electric power and resume your documents. In simple terms: you sleep, but your memories are still present. Why it's used: let me describe one sample scenario. Imagine your battery is low on power in your laptop, and you are working on important projects on your machine. You can switch to hibernate mode - it will result in your documents being saved, and when you power on, the actual state of the application gets restored. Its main usage is like an emergency shutdown with an auto-resume of your documents. MagicAndre1981 highlights the reason we use hibernate every day: Because it saves the status of all running programs. I leave all my programs open and can resume working the next day very easily. Doing a real boot would require starting all programs again, loading all the same files into those programs, getting to the same place that I was at before, and putting all my windows in exactly the same place. Hibernating saves a lot of work pulling these things back up again. It's not unusual to find computers around the office here that have been hibernated day in and day out for months without an actual full system shutdown and restart. It's enormously convenient to freeze your work space at the exact moment you stopped working and to turn right around and resume there the next morning. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform

    - by Michael Snow
    Oracle Customer: Openmatics s.r.o. | Location: Pilsen, Czech Republic | Industry: Automotive | Employees: 70
    Openmatics s.r.o. was founded in 2010 as a subsidiary of ZF Friedrichshafen AG, a global player in driveline and chassis technology. Its goal was to develop and operate a flexible, open telematics platform for automotive applications, independent from vehicle and component suppliers—recognizing that the fragmented telematics market was not meeting today's fleet management needs. Openmatics provides a rich product portfolio, and customers can extend the platform, as required, to meet their needs. Partners and third parties can develop their own applications using the Openmatics software development kit and can sell them via the Openmatics app shop. With 121 production companies and 650 service partners in 26 countries, ZF is among the top 10 largest automotive suppliers worldwide. Founded in 1915 to develop and produce transmissions for airships and vehicles, the group's product offerings now include transmissions and steering systems as well as chassis components and complete axle systems and modules. A word from Openmatics s.r.o.: "Oracle WebCenter Portal, together with the underlying Oracle Application Development Framework, provided the fundamental infrastructure for the Openmatics platform. Fleet managers can now reduce fuel consumption and operating costs, and more efficiently manage vehicle usage, maintenance, and safety. The standards-based platform allows third-party suppliers to deploy their own vehicle telematics services as Openmatics apps and creates a de facto standard for the automotive industry, independent from a single manufacturer or service provider." – Gero Strobel, Head of Development, Openmatics s.r.o.
    Challenges
    - Create an industry standard for vehicle telematics by establishing a customizable platform that enables access to telematics information, such as current and past fuel consumption, through a web browser to better meet automotive market and customer needs
    - Reduce fleet-management costs by eliminating the need to invest in isolated telematics hardware and software solutions per vehicle brand and vehicle component manufacturer
    - Establish an open platform where third-party providers—such as original equipment manufacturers (OEM), insurers, fleet operators, and individual developers—can deploy their own vehicle telematics services
    - Allow users to purchase targeted telematics services as single apps to reduce costs and ensure rapid growth of telematics services available on the platform
    - Enable users to configure their telematics apps with ease to make sure the platform meets individual fleet management requirements, such as analyzing past and current fuel consumption of a truck fleet
    Solutions
    - Deployed Oracle WebCenter Portal as a foundation for Openmatics, a standards-based automotive telematics platform that provides next-generation fleet management with unified digital communication from and to vehicles on the move
    - Used Oracle Application Development Framework as the development framework for Oracle WebCenter Portal's components and services, providing developers with ready-to-use software development kits with application programming interfaces, design templates, and visual tools that accelerated time to market
    - Used Oracle Enterprise Pack for Eclipse to simplify telematics application development in Java
    - Enabled fleet monitoring by recording vehicle data—such as fuel consumption information—through onboard units, delivering the information to Oracle Database, and making it accessible through a customizable app portfolio on any web browser
    - Stored vehicle telematics data—sent as encrypted information—in Oracle Database, ensuring data integrity and immediate availability for the platform's telematics applications
    - Enabled a wide range of telematics services suppliers, from vehicle component manufacturers to fleet application developers, to offer vehicle telematics services on the Openmatics platform, ensuring platform independence from OEMs
    - Provided Openmatics customers with the means to individually select the automotive telematics services that are relevant to their business requirements, eliminating the need to pay for superfluous information and reducing fleet management costs
    Oracle Products & Services: Oracle Application Development Framework, Oracle WebCenter Portal, Oracle SOA Suite, Oracle Enterprise Pack for Eclipse, Oracle Database, Oracle Consulting

    Read the article

  • Move on and look elsewhere, or confront the boss?

    - by Meister
    Background: I have my Associates in Applied Science (Comp/Info Tech) with a strong focus in programming, and I'm taking university classes to get my Bachelors. I was recently hired at a local company to be a Software Engineer I on a team of about 8, and I've been told they're looking to hire more. This is my first job, and I was offered what I feel to be an extremely generous starting salary ($30/hr essentially + benefits and yearly bonus). What got me hired was my passion for programming and a strong set of personal projects. Problem: I had no prior experience when I interviewed, so I didn't know exactly what to ask them about the company when I was hired. I've spotted a number of warning signs and annoyances since then, such as: Four developers when I started, with everyone talking about "Ben" or "Ryan" leaving. One engineer hired thirty days before me, one hired two weeks after me. Most of the department has been hiring a large number of people since I started. Extremely limited internet access. I understand the idea from an IT point of view, but not only is Facebook blocked, so are YouTube, Twitter, and Pandora. I've also figured out that they block all access to non-DNS websites (http://xxx.xxx.xxx.xxx/) and, strangely enough, Miranda-IM. Low cubicles. Which is fine because I like my immediate coworkers, but they put the developers in a huge open room with the customer service, customer training, and QA departments. Noise, noise, noise, and people stop to chitchat all day long. Headphones only go so far. Several emails have been sent out by my boss since I started telling us programmers not to talk about non-work-related things like video games at our cubicles, despite us only spending maybe five minutes every few hours doing so. Further digging tells me that this is because someone keeps complaining that the programmers are "slacking off". People are looking over my shoulder all day. I was in the Freenode webchat to get help with a programming issue, and within minutes I had an email from my boss (to all the developers) telling us that we should NOT be connected to any outside chat servers at work. A version control system from 2005 that we must access with IE, keeping the Java 1.4 JRE installed to be able to use it. I accidentally updated to Java 6 one day and spent the next two days fighting with my PC to undo this "problem". No source control, no comments on anything, no standards, no code review, no unit testing, no common sense. I literally found a problem in how they handle string resource translations that stems from the simple fact that they don't trim excess white space, leading to developers doing getResource("Date: ") instead of getResource("Date") + ": ", and I was told to just add the excess white spaces back to the database instead of dealing with the issue directly. Some of these things I'd like to try to understand, but I like having IRC open to talk in a few different rooms during the day and keep in touch with friends/family over IM. They don't break my concentration (not NEARLY as much as the lady from QA stopping by to talk about her son), but because people are looking over my shoulder all day as they walk by, they complain when they see something that's not "programmer-looking work". I've been told by my boss and QA that I do good, fast work.
I should be judged on my work output and quality, not what I have up on my screen for the five seconds you're walking by. So, my question is, even though I'm just barely at my 90 days: How do you decide to move on from a job and look elsewhere, or when should you start working with your boss to resolve these issues? Is it even possible to get the boss to work with me on many of these things? This is the only place I heard back from even though I sent out several resumes a day for several months, and this place does pay well for putting up with its many flaws, but I'm just starting to get so miserable working here already. Should I just put up with it?

    Read the article

  • Simple Merging Of PDF Documents with iTextSharp 5.4.5.0

    - by Mladen Prajdic
    As we were working on our first SQL Saturday in Slovenia, we came to a point where we had to print out the so-called SpeedPASSes for attendees. This SpeedPASS file is a PDF and contains their raffle, lunch and admission tickets. The problem is we have to download one PDF per attendee and print that out. And printing more than 10 docs at once is a pain. So I decided to make a little console app that would merge multiple PDF files into a single file that would be much easier to print. I used an open source PDF manipulation library called iTextSharp, version 5.4.5.0. This is the console program I used. It's brilliantly named MergeSpeedPASS. It only has two methods and is really short. Don't let the name fool you: it can be used to merge any PDF files. The first parameter is the name of the target PDF file that will be created. The second parameter is the directory containing PDF files to be merged into a single file.

    using iTextSharp.text;
    using iTextSharp.text.pdf;
    using System;
    using System.IO;

    namespace MergeSpeedPASS
    {
        class Program
        {
            static void Main(string[] args)
            {
                if (args.Length == 0 || args[0] == "-h" || args[0] == "/h")
                {
                    Console.WriteLine("Welcome to MergeSpeedPASS. Created by Mladen Prajdic. Uses iTextSharp 5.4.5.0.");
                    Console.WriteLine("Tool to create a single SpeedPASS PDF from all downloaded generated PDFs.");
                    Console.WriteLine("");
                    Console.WriteLine("Example: MergeSpeedPASS.exe targetFileName sourceDir");
                    Console.WriteLine("         targetFileName = name of the new merged PDF file. Must include .pdf extension.");
                    Console.WriteLine("         sourceDir      = path to the dir containing downloaded attendee SpeedPASS PDFs");
                    Console.WriteLine("");
                    Console.WriteLine(@"Example: MergeSpeedPASS.exe MergedSpeedPASS.pdf d:\Downloads\SQLSaturdaySpeedPASSFiles");
                }
                else if (args.Length == 2)
                {
                    CreateMergedPDF(args[0], args[1]);
                }

                Console.WriteLine("");
                Console.WriteLine("Press any key to exit...");
                Console.Read();
            }

            static void CreateMergedPDF(string targetPDF, string sourceDir)
            {
                using (FileStream stream = new FileStream(targetPDF, FileMode.Create))
                {
                    Document pdfDoc = new Document(PageSize.A4);
                    PdfCopy pdf = new PdfCopy(pdfDoc, stream);
                    pdfDoc.Open();
                    var files = Directory.GetFiles(sourceDir);
                    Console.WriteLine("Merging files count: " + files.Length);
                    int i = 1;
                    foreach (string file in files)
                    {
                        Console.WriteLine(i + ". Adding: " + file);
                        pdf.AddDocument(new PdfReader(file));
                        i++;
                    }
                    if (pdfDoc != null)
                        pdfDoc.Close();
                    Console.WriteLine("SpeedPASS PDF merge complete.");
                }
            }
        }
    }

    Hope it helps you and have fun.

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized. A first approach can be looking in the DataDir of Analysis Services for the folder containing the database. Then, look for the biggest files in all subfolders and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very optimized. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in a descending way:
    SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC
    You can look at the first rows in order to understand which are the most expensive columns in your tabular model. The interesting data provided are: TABLE_ID: the name of the object - it can also be a dictionary or an index; COLUMN_ID: the name of the column the object belongs to - you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes; RECORDS_COUNT: the number of rows in the column; USED_SIZE: the memory used for the object. By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are: Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model. Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (i.e. the temperature), round the number to the nearest decimal that is relevant to you. Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article. Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with a small number of distinct values) and going to higher density columns (those with high cardinality) is the technique that provides the best compression ratio. After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.
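    If you would rather automate this check than run the query interactively in SSMS, a small sketch using the ADOMD.NET client (Microsoft.AnalysisServices.AdomdClient) can run the same DMV and print the bytes-per-row ratio for each column. The server and catalog names in the connection string are illustrative placeholders, not values from the article:

    using System;
    using Microsoft.AnalysisServices.AdomdClient;

    class ColumnSizes
    {
        static void Main()
        {
            // Placeholder connection string - point it at your own Tabular instance.
            using (var conn = new AdomdConnection(@"Data Source=localhost\TABULAR;Catalog=MyTabularDb"))
            {
                conn.Open();
                var cmd = conn.CreateCommand();
                cmd.CommandText =
                    "SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS " +
                    "ORDER BY used_size DESC";
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        long usedSize = Convert.ToInt64(reader["USED_SIZE"]);
                        long records = Convert.ToInt64(reader["RECORDS_COUNT"]);
                        // High bytes-per-row values point at the columns worth optimizing.
                        double bytesPerRow = records > 0 ? (double)usedSize / records : 0;
                        Console.WriteLine("{0} / {1}: {2} bytes, {3:F2} bytes/row",
                            reader["TABLE_ID"], reader["COLUMN_ID"], usedSize, bytesPerRow);
                    }
                }
            }
        }
    }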

    Read the article

  • Welcome To The Nashorn Blog

    - by jlaskey
    Welcome to all. Time to break the ice and instantiate The Nashorn Blog. I hope to contribute routinely, but we are very busy, at this point, preparing for the next development milestone and, of course, getting ready for open source. So, if there are long gaps between postings please forgive. We're just coming back from JavaOne and are stoked by the positive response to all the Nashorn sessions. It was great for the team to have the front and centre slide from Georges Saab early in the keynote. It seems we have support coming from all directions. Most of the session videos are posted. Check out the links.
    Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM. Unfortunately, Marcus, the code generation juggernaut, got saddled with the first session of the first day. Still, he had a decent turnout. The talk focused on issues relating to optimizations we did to get good performance from the JVM. Much yet to be done but looking good.
    Nashorn: JavaScript on the JVM. This was the main talk about Nashorn. I delivered the little bit of this and a little bit of that session with an overview, a follow-up on the open source announcement, a run through a few of the Nashorn features and some demos. The room was SRO, about 250±. High points: Sam Pullara, from Twitter, came forward to describe how painless it was to get Mustache.js up and running (20x over Rhino), and John Ceccarelli, from NetBeans, came forward to describe how Nashorn has become an integral part of NetBeans. A healthy Q & A at the end was very encouraging.
    Meet the Nashorn JavaScript Team. Michel, Attila, Marcus and myself hosted a Q & A. There was only a handful of people in the room (we assume it was because of a conflicting session ;-).) Most of the questions centred around Node.jar, which leads me to believe Nashorn + Node.jar is what has the most interest. Akhil, Mr. Node.jar, sitting in the audience, fielded the Node.jar questions.
    Nashorn, Node, and Java Persistence. Doug Clarke, Akhil and myself discussed the title topics, followed by a lengthy Q & A (security had to hustle us out.) 80 or so in the room. Lots of questions about Node.jar. It was great to see Doug's use of Nashorn + JPA. Nashorn in action, with such elegance and grace.
    Putting the Metaobject Protocol to Work: Nashorn's Java Bindings. Attila discussed how he applied Dynalink to Nashorn. Good turnout for this session as well. I have a feeling that once people discover and embrace this hidden gem, great things will happen for all languages running on the JVM.
    Finally, there were quite a few JavaOne sessions that focused on non-Java languages and their impact on the JVM. I've always believed that one's tool belt should carry a variety of programming languages, not just for domain/task applicability, but also to enhance your thinking and approaches to problem solving. For the most part, future blog entries will focus on 'how to' in Nashorn, but if you have any suggestions for topics you want discussed, please drop a line. Cheers.

    Read the article

  • Snap App Windows to Pre-Defined Screen Sections with Acer GridVista

    - by Asian Angel
    The window snapping feature in Windows 7 and the ability to organize monitor(s) into specific gridded sections have both become popular lately. If you love the idea of having both combined in a single piece of software, then join us as we look at Acer GridVista. Note: Acer GridVista works with Windows XP, Vista, & 7. It will also work with dual monitors. Setup Acer GridVista comes in a zip file format and at first you might assume that it is portable in nature, but it is not. Once you unzip the enclosed folder you will need to double-click on "Setup.exe" to install the program. Acer GridVista in Action Once you have installed the program and started it up, all that you will notice at first is the new "System Tray Icon". Here you can see the "Context Menu"… The only menu command that you will likely use most of the time is the "Grid Configuration Command". Notice that for our single monitor setup it lists "Display 1". The "Single Setting" is enabled by default and you can easily choose the layout that best suits your needs. The enabled layout style will always be highlighted in yellow for easy reference. For our example we chose the "Triple (primary at right)" layout style. Each section will be specifically numbered as shown here. Do not worry… the grid and numbers only appear for a moment and then become invisible again until you move an app window into that section/area of your screen. On every regular app window that you open you will notice three new buttons in the upper right corner. Here is what each of these new buttons does:
    - Acer GridVista Extensions (Transparent, Send To Window Grid, About Acer GridVista): viewable in a drop-down menu
    - Lock To Grid (Enable/Disable): enabled by default. Note: set to disable on a particular window to keep it free of the grid-locking function
    - Always On Top (Enable/Disable): disabled by default
    A good look at the "Extensions Drop-Down Menu", where you can set an app window to be transparent or send it to a specific screen section on your monitor(s). If you open an app it will not automatically lock into a specific section. To lock the window into a specific section, drag-and-drop the app window into the desired section. Notice the red outline and highlighted number on "Section 2" below. The red outline and highlighted number serve as an indicator that if you release the app window at that moment it will lock into the outlined/highlighted section. Now that Notepad is locked into "Section 2" you can see that it is maximized within that section. Continue to drag-and-drop your app windows into the appropriate sections as desired… apps can still be reduced to the "Taskbar" the same as before. Options These are the options available for Acer GridVista… Conclusion If you have been wanting the ability to "snap" windows and organize them into specific screen areas, then Acer GridVista is definitely a program that you should try out.
    Links: Download Acer GridVista at Softpedia | View detailed information at the Acer GridVista Homepage

    Read the article

  • I am trying to create a Windows application watcher [migrated]

    - by Broken_Code
    I recently started coding in C# (in May this year) and I find it best to learn by working with existing code, in this case this application: http://www.c-sharpcorner.com/UploadFile/satisharveti/ActiveApplicationWatcher01252007024921AM/ActiveApplicationWatcher.aspx. I am trying to recreate it; however, mine will save the information into a SQL database (new at this as well). I am having some coding problems, though, as it does not do what I expect it to do. This is the main code I am using:

    private void GetTotalTimer()
    {
        DateTime now = DateTime.Now;
        IntPtr hwnd = APIFunc.getforegroundWindow();
        Int32 pid = APIFunc.GetWindowProcessID(hwnd);
        Process p = Process.GetProcessById(pid);
        appName = p.ProcessName;
        const int nChars = 256;
        int handle = 0;
        StringBuilder Buff = new StringBuilder(nChars);
        handle = GetForegroundWindow();
        appltitle = APIFunc.ActiveApplTitle().Trim().Replace("\0", "");
        //if (GetWindowText(handle, Buff, nChars) > 0)
        //{
        //    string strbuff = Buff.ToString();
        //    StrWindow = strbuff;
        #region insert statement
        try
        {
            if (Conn.State == ConnectionState.Closed)
            {
                Conn.Open();
            }
            if (Conn.State == ConnectionState.Open)
            {
                SqlCommand com = new SqlCommand("Select top 1 [Window Title] From TimerLogs ORDER BY [Time of Event] DESC", Conn);
                SqlDataReader reader = com.ExecuteReader();
                startTime = DateTime.Now;
                string time = now.ToString();
                if (!reader.HasRows)
                {
                    reader.Close();
                    cmd = new SqlCommand("insert into [TimerLogs] values(@time,@appName,@appltitle,@Elapsed_Time,@userName)", Conn);
                    cmd.Parameters.AddWithValue("@time", time);
                    cmd.Parameters.AddWithValue("@appName", appName);
                    cmd.Parameters.AddWithValue("@appltitle", appltitle);
                    cmd.Parameters.AddWithValue("@Elapsed_Time", blank.ToString());
                    cmd.Parameters.AddWithValue("@userName", userName);
                    cmd.ExecuteNonQuery();
                    Conn.Close();
                }
                else if (reader.HasRows)
                {
                    reader.Read();
                    if (appltitle != reader.ToString())
                    {
                        reader.Close();
                        endTime = DateTime.Now;
                        appduration = endTime.Subtract(startTime);
                        cmd = new SqlCommand("insert into [TimerLogs] values (@time,@appName,@appltitle,@Elapsed_Time,@userName)", Conn);
                        cmd.Parameters.AddWithValue("@time", time);
                        cmd.Parameters.AddWithValue("@appName", appName);
                        cmd.Parameters.AddWithValue("@appltitle", appltitle);
                        cmd.Parameters.AddWithValue("@Elapsed_Time", appduration.ToString());
                        cmd.Parameters.AddWithValue("@userName", userName);
                        cmd.ExecuteNonQuery();
                        reader.Close();
                        Conn.Close();
                    }
                }
            }
        }
        catch (Exception) { }
        //}
        #endregion
        ActivityTimer.Start();
        Processing = "Working";
    }

    Unfortunately, it is not saving the data as I expect it to. What am I doing wrong? I had thought that with the SQL reader it would first check for a value and only save if the values do not match, but it is saving whether there is a match or not.
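    For what it's worth, one likely culprit in the code above is the comparison appltitle != reader.ToString(): calling ToString() on a SqlDataReader returns the type name, never the column value, so the titles never match and every tick inserts a row. A sketch of the comparison reading the actual [Window Title] column instead (a hypothetical fix, untested against the rest of the application):

    reader.Read();
    // reader.ToString() yields "System.Data.SqlClient.SqlDataReader",
    // not the stored title; read the first selected column instead.
    string lastTitle = reader.GetString(0);   // the [Window Title] column
    if (appltitle != lastTitle)
    {
        // ... insert the new row as before ...
    }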

    Read the article

  • Wireless device bug on 13.10. BCM4313 registers as eth1 instead of wlan0 and no internet access

    - by user205691
    My hotel WiFi requires me to log in with a username & password after connecting to the hotspot. So, my browser would open a page with username & password fields to log in and then connect to the internet. But unfortunately, Firefox & Chromium don't seem to work. I don't think it is browser related, but rather a setting of the wifi router or driver which is creating this issue. I am using the Broadcom 802.11 STA wireless driver (proprietary). I tried open source as well, but got the same result!! The image linked below shows my wifi connection setting & Chromium. The login page itself comes up after a long time, and after entering the credentials, it keeps loading for ever!! It is the same case for every other browser... so I don't think it's a browser issue, but something to do with the wifi setting or network manager stuff. Interestingly, I am able to connect to WiFi networks with a WPA key without any issue. An adhoc hotspot is the problem, and that is my regular home network :( .. I hope I can get some help solving this issue! I have tried repeating the same hotspot after login from my android, by creating a virtual repeater with a WPA key, and it works. I can browse on ubuntu using this method, but can't be doing this regularly! I tried loading the same login page of the hotel wifi while browsing through my repeater wifi created on mobile and the screen shot is attached below: the page loads up quick and easy... so this means something is wrong with the way network manager handles adhoc connectivity & login?? I installed wicd0 but it crashes on startup and is not helpful at all! Screenshot of Chromium page | Login page with repeated hotspot. ifconfig in my terminal results:
    krishna@krishna-HP-ENVY-4-Notebook-PC:~$ ifconfig
    eth0      Link encap:Ethernet  HWaddr 28:92:4a:1d:54:fa
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    eth1      Link encap:Ethernet  HWaddr e0:06:e6:89:fa:49
              inet addr:10.24.1.71  Bcast:10.24.1.255  Mask:255.255.255.0
              inet6 addr: fe80::e206:e6ff:fe89:fa49/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:10940 errors:0 dropped:0 overruns:0 frame:348431
              TX packets:6611 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:7669631 (7.6 MB)  TX bytes:864195 (864.1 KB)
              Interrupt:17

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:2146 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2146 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:166120 (166.1 KB)  TX bytes:166120 (166.1 KB)

    I wonder why the wireless is configured under eth1? I think this is a bug with earlier Ubuntu versions, but is this normal in 13.10, or is there a wrong configuration here? The wireless device in my pc is a BCM4313 and I have installed bcmwl-kernel-source and wireless-tools to support the device. I also reinstalled the bcmwl kernel module as suggested on the Broadcom website, via Synaptic package manager. Nothing has changed this situation! I tried booting into a liveUSB and there the ifconfig results show wireless under wlan0. But then the wireless connects and loads the login page. So is the problem with the device configuration now? I really want to get this fixed before I start configuring the other stuff like ATI graphics and such on the laptop for overheating... lack of internet access is too bad a bug for me :P any help is appreciated!

    Read the article

  • Server-Sent Events using GlassFish (TOTD #179)

    - by arungupta
    Bhakti blogged about Server-Sent Events on GlassFish and I've been planning to try it out for the past few days. Finally, I took some time out today to learn about it and build a simplistic example showcasing the touch points. Server-Sent Events is developed as part of the HTML5 specification and provides push notifications from a server to a browser client in the form of DOM events. It is defined as a cross-browser JavaScript API called EventSource. The client creates an EventSource by requesting a particular URL and registers an onmessage event listener to receive the event notifications. This can be done as shown below:
    var url = 'http://' + document.location.host + '/glassfish-sse/simple';
    eventSource = new EventSource(url);
    eventSource.onmessage = function (event) {
        var theParagraph = document.createElement('p');
        theParagraph.innerHTML = event.data.toString();
        document.body.appendChild(theParagraph);
    }
    This code subscribes to a URL, receives the data in the event listener, adds it to an HTML paragraph element, and displays it in the document. This is where you'll parse JSON and do other processing if some other data format is received from the URL. The URL to which the EventSource is subscribed is updated on the server side, and there are multiple ways to do that. GlassFish 4.0 provides support for Server-Sent Events, and it can be achieved by registering a handler as shown below:
    @ServerSentEvent("/simple")
    public class MySimpleHandler extends ServerSentEventHandler {
        public void sendMessage(String data) {
            try {
                connection.sendMessage(data);
            } catch (IOException ex) { . . . }
        }
    }
    And then events can be sent to this handler using a singleton session bean as shown:
    @Startup
    @Stateless
    public class SimpleEvent {
        @Inject @ServerSentEventContext("/simple")
        ServerSentEventHandlerContext<MySimpleHandler> simpleHandlers;

        @Schedule(hour="*", minute="*", second="*/10")
        public void sendDate() {
            for(MySimpleHandler handler : simpleHandlers.getHandlers()) {
                handler.sendMessage(new Date().toString());
            }
        }
    }
    This stateless session bean injects ServerSentEventHandlers listening on the "/simple" path. Note, there may be multiple handlers listening on this path. The sendDate method triggers every 10 seconds and sends the current timestamp to all the handlers. The client-side browser simply displays the string. The HTTP request headers look like:
    Accept: text/event-stream
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Cache-Control: no-cache
    Connection: keep-alive
    Cookie: JSESSIONID=97ff28773ea6a085e11131acf47b
    Host: localhost:8080
    Referer: http://localhost:8080/glassfish-sse/faces/index2.xhtml
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5
    And the response headers as:
    Content-Type: text/event-stream
    Date: Thu, 14 Jun 2012 21:16:10 GMT
    Server: GlassFish Server Open Source Edition 4.0
    Transfer-Encoding: chunked
    X-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 4.0 Java/Apple Inc./1.6)
    Notice, the MIME type of the messages from server to the client is text/event-stream and that is defined by the specification.
    The code in Bhakti's blog can be further simplified by using the recently introduced Twitter API for Java, as shown below:

        @Schedule(hour="*", minute="*", second="*/10")
        public void sendTweets() {
            for(MyTwitterHandler handler : twitterHandler.getHandlers()) {
                String result = twitter.search("glassfish", String.class);
                handler.sendMessage(result);
            }
        }

    The complete source explained in this blog can be downloaded here and tried on GlassFish 4.0 build 34. The latest promoted build can be downloaded from here, and the complete source code for the API and implementation is here. I tried this sample on Chrome version 19.0.1084.54 on Mac OS X 10.7.3.
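    As a small variation on the sample above, the sketch below pushes a JSON payload instead of a plain timestamp string. It reuses only the annotations and types already shown in this post; the /stock path, the class names, and the quote-generation logic are invented for illustration, so treat it as an untested outline rather than part of the original sample:

        import java.io.IOException;
        import javax.ejb.Schedule;
        import javax.ejb.Startup;
        import javax.ejb.Stateless;
        import javax.inject.Inject;
        // The SSE types (ServerSentEvent, ServerSentEventHandler, ServerSentEventContext,
        // ServerSentEventHandlerContext) come from the same GlassFish SSE API used above.
        // Each public class goes in its own source file.

        @ServerSentEvent("/stock")
        public class StockHandler extends ServerSentEventHandler {
            public void sendMessage(String data) {
                try {
                    connection.sendMessage(data);
                } catch (IOException ex) {
                    // The client has most likely disconnected; nothing to do here.
                }
            }
        }

        @Startup
        @Stateless
        public class StockEvent {
            @Inject @ServerSentEventContext("/stock")
            ServerSentEventHandlerContext<StockHandler> handlers;

            // Pushes a fake quote to every connected client every 5 seconds.
            @Schedule(hour = "*", minute = "*", second = "*/5")
            public void pushQuote() {
                String json = "{\"symbol\": \"XYZ\", \"price\": "
                        + (30 + Math.random() * 5) + "}";
                for (StockHandler handler : handlers.getHandlers()) {
                    handler.sendMessage(json);
                }
            }
        }

    On the client side, event.data then carries the JSON string, so the onmessage listener would call JSON.parse(event.data) before rendering, as hinted at earlier.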

    Read the article

  • New PeopleSoft HCM 9.1 On Demand Standard Edition provides a complete set of IT services at a low, predictable monthly cost

    - by Robbin Velayedam
    At Oracle Open World last month, Oracle announced that we are extending our On Demand offerings with the general availability of PeopleSoft On Demand Standard Edition. Standard Edition represents Oracle's commitment to providing customers a choice of solutions, technology, and deployment options commensurate with their business needs and future growth. The Standard Edition offering complements the traditional On Demand offerings (Enterprise and Professional Editions) by focusing on a low, predictable monthly cost model that scales with the size of your business. As part of Oracle's open cloud strategy, customers can freely move PeopleSoft licensed applications between on-premise and the various On Demand options as business needs arise.

    In today's business climate, aggressive and creative business objectives demand more of IT organizations. They are expected to provide technology-based solutions to streamline business processes, enable online collaboration and multi-tasking, facilitate data mining and storage, and enhance worker productivity. As IT budgets remain tight in a recovering economy, the challenge becomes how to meet these demands with limited time and resources. One way is to eliminate the variable costs of projects so that your team can focus on the high-priority functions and better predict funding and resource needs two to three years out. Variable costs and changing priorities can derail the best-laid project and capacity plans. The prime culprits of variable costs in any IT organization include disaster recovery, security breaches, technical support, and changes in business growth and priorities. Customers have an immediate need for solutions that are cheaper, predictable in cost, and flexible enough for long-term growth or capacity changes.

    The Standard Edition deployment option fulfills that need by allowing customers to take full advantage of the rich business functionality that is inherent to PeopleSoft HCM, while delegating all application management responsibility, such as future upgrades and product updates, to Oracle technology experts, at an affordable and expected price. Standard Edition provides the advantages of the secure Oracle On Demand hosted environment, the complete set of PeopleSoft HCM configurable business processes, and timely management of regular updates and enhancements to the application functionality and underlying technology. Standard Edition has a convenient monthly fee that is scalable by number of employees, which helps align the customer's overall cost of ownership with its size and anticipated growth and business needs. In addition to providing PeopleSoft HCM applications' world-class business functionality and Oracle On Demand's embassy-grade security, Oracle's hosted solution distinguishes itself from competitors by offering customers the ability to transition between different deployment and service models at any point in the application ownership lifecycle. As our customers' business and economic climates change, they are free to transition their applications back to on-premise at any time. HCM On Demand Standard Edition is based on configurability options rather than customizations, requiring no additional code to develop or maintain. This keeps the cost of ownership low and the time to production under a month on average. Oracle On Demand offers the highest standard of security and performance by leveraging a state-of-the-art data center with dedicated databases, servers, and secured URLs, all within a private cloud.
Customers will not share databases, environments, platforms, or access portals with other customers, because we know how mission-critical your data are to your business. Oracle On Demand also provides a full breadth of disaster recovery services, giving customers peace of mind that their data are secure and that backup operations are in place to keep their businesses up and running in case of an emergency. Currently, over 50 PeopleSoft customers entrust us with the management of their applications through Oracle On Demand. If you are a customer interested in learning more about PeopleSoft HCM 9.1 Standard Edition and how it can help your organization minimize variable IT costs and free up resources to work on other business initiatives, contact Oracle or your Account Services Representative today.

    Read the article

  • Gnome Install Error (1)

    - by Guy1984
    I'm trying to install GNOME on my Ubuntu 12.04 Precise Pangolin and getting the following errors:

        root@***:~# sudo apt-get install gnome-core gnome-session-fallback
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        gnome-core is already the newest version.
        gnome-session-fallback is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
        5 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up bluez (4.98-2ubuntu7) ...
        start: Job failed to start
        invoke-rc.d: initscript bluetooth, action "start" failed.
        dpkg: error processing bluez (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of gnome-bluetooth:
         gnome-bluetooth depends on bluez (>= 4.36); however:
          Package bluez is not configured yet.
        dpkg: error processing gnome-bluetooth (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-shell:
         gnome-shell depends on gnome-bluetooth (>= 3.0.0); however:
          Package gnome-bluetooth is not configured yet.
        dpkg: error processing gnome-shell (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-user-share:
         gnome-user-share depends on gnome-bluetooth; however:
          Package gnome-bluetooth is not configured yet.
        dpkg: error processing gnome-user-share (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-core:
         gnome-core depends on gnome-bluetooth (>= 3.0); however:
          Package gnome-bluetooth is not configured yet.
         gnome-core depends on gnome-shell (>= 3.0); however:
          Package gnome-shell is not configured yet.
         gnome-core depends on gnome-user-share (>= 3.0); however:
          Package gnome-user-share is not configured yet.
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        dpkg: error processing gnome-core (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         bluez
         gnome-bluetooth
         gnome-shell
         gnome-user-share
         gnome-core
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Syslog:

        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Bluetooth daemon 4.98
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Starting SDP server
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: opening L2CAP socket: Address family not supported by protocol
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Server initialization failed
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init alert plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init time plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init proximity plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to open control socket: Address family not supported by protocol (97)
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Can't init bnep module
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init network plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Unable to start SCO server socket
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init audio plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init gatt_example plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Can't open HCI socket: Address family not supported by protocol (97)
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: adapter_ops_setup failed
        Oct 5 16:04:17 ks34900 kernel: init: bluetooth main process (5176) terminated with status 1
        Oct 5 16:04:17 ks34900 kernel: init: bluetooth main process ended, respawning
        Oct 5 16:04:17 ks34900 bluez: Stopping uarts
        Oct 5 16:04:17 ks34900 bluez: Stopping rfcomm

    Any thoughts?

    Read the article

  • Squibbly: LibreOffice Integration Framework for the Java Desktop

    - by Geertjan
    Squibbly is a new framework for Java desktop applications that need to integrate with LibreOffice or, more generally, need office features as part of a Java desktop solution that could include, for example, JavaFX components. Here's what it looks like, right now, on Ubuntu 13.04:

    Why is the framework called Squibbly? Because I needed a unique-ish name, because "squibble" sounds a bit like "scribble" (which is what one does with text documents, etc.), and because of the many absurd definitions in the Urban Dictionary for the apparently real word "squibble", e.g., "A name for someone who is squibblish in nature." And, another e.g., "A squibble is a small squabble. A squabble is a little skirmish." But the real reason is the first definition (and definitely not the fourth definition): "Taking a small portion of another persons something, such as a small hit off of a pipe, a bite of food, a sip of a drink, or drag of a cigarette." In other words, I took (or "squibbled") a small portion of LibreOffice, i.e., the OfficeBean, and integrated it into a NetBeans Platform application. Now anyone can add new features to it, to do anything they need, such as create a legislative software system, as Propylon has done with their own solution on the NetBeans Platform.

    For me, the starting point was Chuk Munn Lee's similar solution from some years ago. However, he uses reflection a lot in that solution, because he didn't want to bundle the related JARs with the application. I understand that benefit, but I find it even more beneficial not to require the user to specify the location of the LibreOffice installation, since all the necessary JARs and native libraries (currently 32-bit Linux only, by the way) are bundled with the application. Plus, hundreds of lines of reflection code, as in Chuk's solution, are not fun to work with at all.

    Switching between applications is done like this:

    It's a work in progress, a proof of concept only; just the result of a few hours of work to get the basic integration working. Several problems remain, some of them potentially unsolvable, starting with these (others will be added here as I identify them):

    Window management problems. I'd like to let the user have multiple LibreOffice applications and documents open at the same time, each in a new TopComponent. However, I haven't figured out how to do that; right now, each application is opened into the same TopComponent, replacing the currently open application. I also don't know the OfficeBean API well enough: e.g., should a single OfficeBean be shared among multiple TopComponents, or should each of them have its own instance?

    Focus problems. When putting the application behind other applications and then switching back to it, typing text becomes impossible. When closing a TopComponent and reopening it, the content is lost completely. Somehow the loss of focus, and then the return of focus, disables something. No idea how to fix that.

    The project is checked into this location, which isn't public yet, so you can't access it yet. Once it's publicly available, it would be great to get some code contributions and tweaks, etc.: https://java.net/projects/squibbly

    Here's the source structure, showing especially how the OfficeBean JARs and native libraries (currently for Linux 32-bit only) fit in:

    Ultimately, it would be cool to integrate or share code with http://joeffice.com!

    Read the article

  • Caesar Cipher Program In C++ [migrated]

    - by xaliap81
    I am trying to write a Caesar cipher program in C++. This is my code template:

        int chooseKEY() {
            // choose key shift from 1-26
        }

        void encrypt(char *w, char *e, int key) {
            // Encryption function: *w is the original text, *e is the encrypted text
            // Encryption is applied only to letters, not to numbers or punctuation
            // *e = *w + key
        }

        void decrypt(char *e, char *w, int key) {
            // Decryption function: *e is the original text, *w is the decrypted text
            // Decryption is applied only to letters, not to numbers or punctuation
            // *w = *e - key
        }

        void Caesar(char *inputFile, char *outputFile, int key, int mode) {
            // Read the input file which contains some data. If mode == 1 the data
            // is encrypted, else if mode == 0 the data is decrypted, and the
            // result is written to the output file
        }

        void main() {
            // call the Caesar function
        }

    The program has four functions. The chooseKey function has to return an int, a shift key from 1-26. The encrypt function has three parameters: *w is the original text, *e is the encrypted text, and the key comes from chooseKey. For encryption, only letters are to be encrypted (not numbers or punctuation), and the letters wrap around cyclically. The decrypt function has three parameters: *e is the encrypted text, *w is the original text, and the key. The Caesar function has four parameters: the input file containing the original text, the output file that receives the encrypted text, the key, and the mode (mode == 1 means encryption, mode == 0 means decryption). This is my sample code:

        #include <iostream>
        #include <fstream>
        using namespace std;

        int chooseKey()
        {
            int key_number;
            cout << "Give a number from 1-26: ";
            cin >> key_number;
            while (key_number < 1 || key_number > 26)
            {
                cout << "Your number has to be from 1-26. Retry: ";
                cin >> key_number;
            }
            return key_number;
        }

        void encryption(char *w, char *e, int key)
        {
            char *ptemp = w;
            while (*ptemp) {
                if (isalpha(*ptemp)) {
                    if (*ptemp >= 'a' && *ptemp <= 'z')
                    {
                        *ptemp -= 'a';
                        *ptemp += key;
                        *ptemp %= 26;
                        *ptemp += 'A';
                    }
                }
                ptemp++;
            }
            w = e;
        }

        void decryption(char *e, char *w, int key)
        {
            char *ptemp = e;
            while (*ptemp) {
                if (isalpha(*ptemp))
                {
                    if (*ptemp >= 'A' && *ptemp <= 'Z')
                    {
                        *ptemp -= 'A';
                        *ptemp += 26 - key;
                        *ptemp %= 26;
                        *ptemp += 'a';
                    }
                }
                ptemp++;
            }
            e = w;
        }

        void Caesar(char *inputFile, char *outputFile, int key, int mode)
        {
            ifstream input;
            ofstream output;
            char buf, buf1;
            input.open(inputFile);
            output.open(outputFile);
            buf = input.get();
            while (!input.eof())
            {
                if (mode == 1) {
                    encryption(&buf, &buf1, key);
                } else {
                    decryption(&buf1, &buf, key);
                }
                output << buf;
                buf = input.get();
            }
            input.close();
            output.close();
        }

        int main()
        {
            int key, mode;
            key = chooseKey();
            cout << "1 or 0: ";
            cin >> mode;
            Caesar("test.txt", "coded.txt", key, mode);
            system("pause");
        }

    I am trying to run the code and it crashes (Debug Assertion Failed). I think the problem is somewhere inside the Caesar function, in how the encrypt and decrypt functions are called, but I don't know what exactly it is. Any ideas?
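    A note for readers hitting the same assertion: a likely culprit is that Caesar passes the address of a single char (&buf) to functions that treat it as a NUL-terminated string, so while (*ptemp) and ptemp++ read past the one-byte variable into unrelated stack memory, which is exactly the kind of error a debug-build assertion catches. The sketch below restructures the cipher around a per-character helper so no string-walking is needed; it is an untested illustration of that idea, not necessarily what the original assignment template intends (note it also preserves the case of each letter, unlike the sample code, which maps lowercase to uppercase while shifting):

        #include <cctype>
        #include <fstream>

        // Shift one character by 'key' positions, preserving case;
        // non-letters pass through unchanged.
        char shiftChar(char c, int key, bool encrypting)
        {
            if (!isalpha(static_cast<unsigned char>(c)))
                return c;
            char base = isupper(static_cast<unsigned char>(c)) ? 'A' : 'a';
            int offset = c - base;
            offset = encrypting ? (offset + key) % 26 : (offset + 26 - key) % 26;
            return static_cast<char>(base + offset);
        }

        void Caesar(const char *inputFile, const char *outputFile, int key, int mode)
        {
            std::ifstream input(inputFile);
            std::ofstream output(outputFile);
            char c;
            while (input.get(c))               // reads until EOF, one char at a time
                output.put(shiftChar(c, key, mode == 1));
        }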

    Read the article

  • Executing Stored Procedures in Visual Studio LightSwitch.

    - by dataintegration
    A LightSwitch project is a very easy way to visualize and manipulate information directly from one of our ADO.NET Providers. But when it comes to executing stored procedures, it can be a bit more complicated. In this article, we will demonstrate how to execute a stored procedure in LightSwitch. For the purposes of this article, we will be using the RSSBus Email Data Provider, but the same process will work with any of our ADO.NET Providers.

    Creating the RIA Service

    Step 1: Open Visual Studio and create a new WCF RIA Service Class Project.

    Step 2: Add the reference to the RSSBus Email Data Provider dll in the (ProjectName).Web project.

    Step 3: Add a new Domain Service Class to the (ProjectName).Web project.

    Step 4: In the new Domain Service Class, create a new class with the attributes needed for the stored procedure's parameters. In this demo, the stored procedure we are executing is called SendMessage. The parameters we will need are as follows:

        public class NewMessage
        {
            [Key]
            public int ID { get; set; }
            public string FromEmail { get; set; }
            public string ToEmail { get; set; }
            public string Subject { get; set; }
            public string Text { get; set; }
        }

    Note: The created class must have an ID, which will serve as the key value.

    Step 5: Create a new method that will be executed when the insert event fires. Inside this method you can use standard ADO.NET code to execute the stored procedure:

        [Insert]
        public void SendMessage(NewMessage newMessage)
        {
            try
            {
                EmailConnection conn = new EmailConnection(connectionString);
                EmailCommand comm = new EmailCommand("SendMessage", conn);
                comm.CommandType = System.Data.CommandType.StoredProcedure;
                if (!newMessage.FromEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@From", newMessage.FromEmail));
                if (!newMessage.ToEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@To", newMessage.ToEmail));
                if (!newMessage.Subject.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Subject", newMessage.Subject));
                if (!newMessage.Text.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Text", newMessage.Text));
                comm.ExecuteNonQuery();
            }
            catch (Exception exc)
            {
                Console.WriteLine(exc.Message);
            }
        }

    Step 6: Create a query method. We are not going to be using getNewMessages(), so it does not matter what it returns for the purpose of our example, but you will need to create a method for the query event as well:

        [Query(IsDefault=true)]
        public IEnumerable<NewMessage> getNewMessages()
        {
            return null;
        }

    Step 7: Rebuild the whole solution.

    Creating the LightSwitch Project

    Step 8: Open Visual Studio and create a new LightSwitch Application Project.

    Step 9: In Data Sources, add a new data source. Choose a WCF RIA Service.

    Step 10: Choose to add a new reference and select the (Project Name).Web.dll generated by the RIA Service.

    Step 11: Select the entities you would like to import. In this case, we are using the recently created NewMessage entity.

    Step 13: In the Screens section, create a new screen and select the NewMessage entity as the Screen Data.

    Step 14: After you run the project, you will be able to add a new record and save it. This will execute the stored procedure and send the new message. If you create a screen to check the sent messages, you can refresh this screen to see the mail you sent.

    Sample Project

    To help you get started using stored procedures in LightSwitch, download the fully functional sample project. You will also need the RSSBus Email Data Provider to make the connection. You can download a free trial here.
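    One optional tweak to Step 6, not part of the original walkthrough: returning an empty sequence instead of null avoids surprising any client code that enumerates the default query. A minimal variant (assuming a using System.Linq; directive at the top of the Domain Service Class) would be:

        [Query(IsDefault=true)]
        public IEnumerable<NewMessage> getNewMessages()
        {
            // An empty sequence is safer for callers than a null reference.
            return Enumerable.Empty<NewMessage>();
        }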

    Read the article

  • How can I work efficiently on a desktop sharing workflow?

    - by OSdave
    I am a freelance Magento developer, based in Spain. One of my clients is a Germany-based web development company, and they're asking me for something I think is impossible. OK, maybe not impossible, but definitely not a preferred way of doing things. One of their clients has a Magento Enterprise installation, which is the paid (and I think proprietary) version of Magento. Their client has forbidden them to download the files from his server. My client is now asking me to study one particular module of the application in order to interact with it from a custom module I'll have to develop. As they have read-only SSH access to their client's server, they came up with this solution: set up a desktop/screen-sharing session between one of their developers' stations and mine, alongside a Skype conversation. Their idea is that I'll say to the developer, "show me file foo.php"; the developer will then open foo.php in his IDE, and I'll have to ask him to show me the bar method, the parent class, etc.

    Remember that it's a read-only session, so forget about putting a Zend_Debug::log() anywhere, and don't even think about an xDebug breakpoint (they don't use any kind of debugger, sic). Their client has also forbidden them to use any version control system...

    My first reaction when they explained this to me was (and I actually did say it out loud to them): "Well, find another client." But they took it as a joke. I understand that from a business point of view rejecting a client is not good practice, but I think the conditions of this assignment make it impossible to complete. At least according to my workflow. I mean, the way I work on or learn a new framework/program is:

    1. download all the files and a copy of the db to my PC
    2. create a git repository and a branch
    3. run the application locally
    4. use breakpoints
    5. use Zend_Debug::log()
    6. write the code and tests
    7. commit to the git repo
    8. upload to the server (test/staging first if there is one, production if not)

    I have agreed to try the desktop-sharing session, although I think it will be a waste of time. On one hand I don't mind, since they pay me for that time, but I know myself and I don't like the feeling of losing my time. On the other hand, I have other clients for whom I can work according to my workflow. I am about to tell them that I cannot (don't want to) do it. Well, I'll try the desktop-sharing session first: maybe I'm wrong and it can actually work. But I like to consider myself a professional, and I know that I don't know everything. So I try to keep an open mind and I am always willing to learn new stuff.

    So my questions are: Can this desktop-sharing workflow work? What should be done in order to get the most out of it? Taking into account all the obstacles (geographic locations, no local copy, no git), is there another way for me to work on that project?

    Read the article

  • Introducing the Metro User Interface on Windows 2012

    - by andywe
    Although I am a big fan of using PowerShell to do many of my server operations, that aspect is well covered by those far more knowledgeable than I, and there is vast information around the web on it already. The new Metro interface, though, and getting around both Windows 8 and Windows Server 2012, is relatively new, even for those who ran the previews.

    What is this? A blank desktop! Where did the Start button go? Well, it is still there... sort of. It is hidden and acts like an auto-hidden component that appears only when the mouse hovers over the lower left corner of the screen. Those familiar with GNOME or OS X can relate this to the "Hot Corners" functions. To get to the Start button, hover your mouse in the very left corner of the taskbar. Let it sit there a moment, and a small blue square with colored tiles in it, called Start, will appear. Click it.

    I clicked it and now I have all these tiles. What is this? Welcome to the Metro interface. This is a much more modern look, and although it seems weird and cumbersome at first, I have actually found it a bit more extensible, allowing greater organization and customization than the older Explorer desktop. If you look closely, you'll see each box represents either a program or a program group.

    First, a few basics about using the Start view. First and foremost, a right mouse click brings up a bar at the bottom, with an icon towards the right. Notice it is titled "All Apps". An even easier way in many places is to hover your mouse in the exact opposite corner, the upper right. A sidebar opens and exposes what used to be a widget bar (remember Vista?), with options for Search, Start, and Settings.

    OK, great, but where is everything? It's all there... click the All Apps icon. Look better? Notice the scroll bar at the bottom. Move it right; your desktop is sized to your content, so you can have a smaller or larger number of programs exposed. Each icon can be secondary-clicked (right mouse click for most of us), and an options bar at the bottom, rather than the old small context menu, opens with some very familiar options.

    Notice that the top of the Windows Explorer window has some new features too. You still have your right mouse click functions, but since the shortcuts for these items already exist, you can just copy them. There are many ways, but here is a long way, to show you more of the interface:

    1. Right mouse click a program icon and select the Open File Location option.
    2. Trusty File Explorer opens, but if you look closely at the top edge of the window, you'll see a nifty enhancement: an orange-colored box titled Shortcut Tools and a lavender box titled Application Tools. Each of these adds options at the top of the file manager window to make selection easy. Of course, you can still secondary-click an item in the listing window too.
    3. Click Shortcut Tools, right click your app shortcut, and copy it. Then simply paste it onto the desktop outside the File Explorer window.

    Also note some of the newer features, such as the large icons up top, below the menu, covering many common operations. The options change as you select each menu item. Well, that's it for this installment. I hope this helps you out.

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and add-ins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd-party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed: Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast, since it is only a matter of extracting the files within the VSIX package to the local extension folder.

    But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world; you still want an easy way to distribute them to your developers and partners. This is where private extension galleries come into play. In Visual Studio 2012, it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content, of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery.

    Visual Studio galleries use Atom feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of an extension. Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom feed XML? Well, search no more: this is exactly what the Inmeta Visual Studio Gallery Service does for you :-)

    Here you can see that in addition to the standard Online galleries there is an Inmeta gallery that contains two extensions (our WiX templates and our custom TFS checkin policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service

    Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward: just select the web site, application pool, and (optionally) a virtual directory where you want to install the service. Note: If you want to run it in the web site root, just leave the application name blank. Press Next and finish the installer.

    Open web.config in a text editor, locate the <applicationSettings> element, and edit the following setting values:

    FeedTitle: The name that is shown if you browse to the service using a browser. Not used by Visual Studio.
    BaseURI: When Visual Studio downloads an extension, it will be given this URI plus the name of the extension that you selected. This value should be of the following format: http://SERVER/[VDIR]/gallery/extension/
    VSIXAbsolutePath: The path where you will deploy your extensions. This can be a local folder or a remote share; you just need to make sure that the application pool identity account has read permissions in this folder.

    Save web.config to finish the installation. Open a browser and enter the URL of the service. It should show an empty feed page:

    Adding the Private Gallery in Visual Studio 2012

    Now you need to add the gallery in Visual Studio. This is very easy:

    1. Go to Tools -> Options and select Environment -> Extensions and Updates.
    2. Press Add to add a new gallery.
    3. Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step.
    4. Press OK to save the settings.

    Deploying an Extension

    This one is easy: just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio Gallery.

    I hope that you will find this service useful. Please contact me if you have questions or suggestions for improvements!
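    To give a concrete picture of what the service returns to Visual Studio, here is a rough, hand-written sketch of an Atom feed describing a single extension, modeled on the static-file format the MSDN page above documents. The IDs, titles, and URL are made up for illustration, and the exact elements the service emits may differ:

        <?xml version="1.0" encoding="utf-8"?>
        <feed xmlns="http://www.w3.org/2005/Atom">
          <title type="text">Inmeta Gallery</title>
          <id>urn:uuid:3c3e5d6a-0000-0000-0000-000000000000</id>
          <updated>2012-11-01T12:00:00Z</updated>
          <entry>
            <id>Inmeta.CheckinPolicies</id>
            <title type="text">Inmeta TFS Checkin Policies</title>
            <summary type="text">Custom checkin policies used at Inmeta.</summary>
            <!-- BaseURI + the VSIX file name, as configured in web.config -->
            <content type="application/octet-stream"
                     src="http://SERVER/vsgallery/gallery/extension/Inmeta.CheckinPolicies.vsix" />
            <Vsix xmlns="http://schemas.microsoft.com/developer/vsx-syndication-schema/2010">
              <Id>Inmeta.CheckinPolicies</Id>
              <Version>1.0.0.0</Version>
            </Vsix>
          </entry>
        </feed>

    When a new VSIX is dropped into the VSIXAbsolutePath folder, the service regenerates this feed with the new version number, which is what triggers the update notification in Visual Studio.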

    Read the article

< Previous Page | 398 399 400 401 402 403 404 405 406 407 408 409  | Next Page >