Search Results

Search found 20187 results on 808 pages for 'team oracle'.

Page 572 of 808

  • Webcast WebCenter Content, April 10th, 2012

    - by rituchhibber
    Our next WebCenter Content webcast will be on April 10th, 2012. This webcast will help you prepare for the WebCenter Content Certified Implementation Specialist exam. Webcast details:
    Date: April 10th, 2012
    Topic: WebCenter Content Refresh Course
    Speaker: Markus Neubauer, Silbury, WebCenter Content Specialized Partner
    Time: 12:00 - 15:00 CET (break around 13:30)
    Web call details: Join Webcast
    Intercall details: Dial-in numbers CC/SP: 1579222/9221; Conference ID/Key: 9819145/1004
    For more details, please click here.

    Read the article

  • Salon du E-commerce et Social CRM B2B

    - by Valérie De Montvallon
    We took part in the Salon du E-commerce et Social CRM B2B last September, and here is the video produced by Les décideurs de la relation client. Hear customer-relations experts share their views and learn more about B2B Social CRM. In B2B, managing the customer relationship seems simple enough when it comes to gathering information from phone calls, face-to-face meetings, or emails. On social networks, however, the task gets harder. Are these platforms really suited to B2B? How do you proceed when getting started? What are the pitfalls to avoid? What suggests that B2B Social CRM is a genuine trend in customer relations? These are all questions to which the experts interviewed offer some answers. You will find the interview with our expert, Khalid Madarbokus, who discusses how information gathered from social media flows back to the departments of a B2B company (at 3:20).

    Read the article

  • Don't Call it a Comeback

    - by Chris Haaker
    I received the email, like most of you, about Jeff and crew stepping down and selling the blog to another company. Given that it's a long-time associate and friend of the team we have all grown to know and love, I feel much better about the move. Who cares, Chris, you haven't blogged religiously in ages! I know, and it's a crime. Blame life, Twitter, my kids, laziness or whatever else you can think of. I always tell myself I am going to make a comeback - "Don't call it a comeback - I been here for years." But after a few posts I seem to lose my steam. It's hard to explain; hell, I can't explain it. But we'll see what happens this time. Just don't call it a comeback.
    2012 rMBP 15" Quad Core 2.33 GHz, 16GB memory, 258GB SSD, MarsEdit 3.5 (Please, Microsoft Live team - make LiveWriter for OS X)

    Read the article

  • JavaOne Content Catalog Live!

    - by programmarketingOTN
    The JavaOne Content Catalog—the central repository for information on sessions, demos, labs, user groups, exhibitors, and more for San Francisco 2012—is live! In the Content Catalog you can search on tracks, session types, session categories, keywords, and tags. Or, you can search for your favorite speakers to see what they're presenting this year. And, directly from the catalog, you can share sessions you're interested in with friends and colleagues through a broad array of social media channels. Start checking out JavaOne content now to plan your week at the conference. Then you'll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. Happy browsing!

    Read the article

  • Enterprise Trade Compliance: Changing Trade Operations around the World

    - by John Murphy
    We live in a world of incredible bounty and speed where any product can be delivered anywhere on earth. However, our world is also filled with challenges for business – where volatility, uncertainty, risk, and chaos are our daily companions. To prosper amid the realities of this new world, organizations cannot rely on old strategies; they need new business models. Key trends within the global economy are mandating that companies fully integrate global trade management best practices within broader supply chain management strategies, rather than simply leaving it as a discrete event at the end of the order or procurement cycle. For example, many companies face a complicated and changing compliance environment. This is directly linked to the speed and configuration of the supply chain, particularly with the explosion of new markets, shorter service cycles and ship times, accelerating rates of globalization and outsourcing, and increasing product complexity and regulation. Read More...

    Read the article

  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4)

    - by hinkmond
    And now here's the Java code that you'll need to read your ghost sensor on your Raspberry Pi. The general idea is that you are using Java code to access a GPIO pin on your Raspberry Pi, where the ghost sensor (a JFET transistor) detects minute changes in the electromagnetic field near the Raspberry Pi and drives the GPIO pin high (+3 volts) when something is detected; otherwise the pin stays at ground. Here's that Java code (the original post was truncated mid-loop by the blog formatting; the final loop and the try/catch closure below are reconstructed to match what the surrounding comments describe):

        try {
            /*** Init GPIO port(s) for input ***/
            // Open file handles to GPIO port unexport and export controls
            FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
            FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
            for (String gpioChannel : GpioChannels) {
                System.out.println(gpioChannel);
                // Reset the port
                File exportFileCheck = new File("/sys/class/gpio/gpio" + gpioChannel);
                if (exportFileCheck.exists()) {
                    unexportFile.write(gpioChannel);
                    unexportFile.flush();
                }
                // Set the port for use
                exportFile.write(gpioChannel);
                exportFile.flush();
                // Open file handle to input/output direction control of port
                FileWriter directionFile =
                    new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");
                // Set port for input
                directionFile.write(GPIO_IN);
            }

            /*** Read data from each GPIO port ***/
            RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length];
            int sleepPeriod = 10;
            final int MAXBUF = 256;
            byte[] inBytes = new byte[MAXBUF];
            String inLine;
            int zeroCounter = 0;
            // Get current timestamp with Calendar()
            Calendar cal;
            DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss.SSS");
            String dateStr;
            // Open RandomAccessFile handle to each GPIO port
            for (int channum = 0; channum < raf.length; channum++) {
                raf[channum] = new RandomAccessFile(
                        "/sys/class/gpio/gpio" + GpioChannels[channum] + "/value", "r");
            }
            // (The polling loop that reads each port and timestamps any hits
            // follows here in the original post.)
        } catch (Exception e) {
            e.printStackTrace();
        }

    And then we just load up our Java SE Embedded app, place each Raspberry Pi with a ghost sensor attached in strategic locations around our Santa Clara office (which apparently is very haunted by ghosts from the Agnews Insane Asylum after the 1906 earthquake), and watch our analytics for any ghosts. Easy peasy. See the previous posts for the full series on the steps to this cool demo:
    Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 1)
    Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 2)
    Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3)
    Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4)
    Hinkmond

    Read the article

  • 101 Ways to Participate...and make the future Java

    - by heathervc
    In case you missed it earlier today, and as promised in BOF6283, here are the 101 Ways to Improve (and Make the Future) Java... thanks to Bruno Souza of SouJava and Martijn Verburg of the London Java Community for their contributions!
    1. Join or create a JUG
    2. Come to the meetings
    3. Help promote your JUG: Twitter, Facebook, etc.
    4. Find someone who can give a talk
    5. Get your company to sponsor (a meeting, an event)
    6. Organize an activity (meetings, hackathons, dojos, etc.)
    7. Answer questions on a mailing list (or simply join!)
    8. Volunteer for small, one-time tasks (creating a web page, helping with an activity)
    9. Come early to an event, and help to carry the piano
    10. Moderate a list or add things to the wiki
    11. Participate in the organization meetings or mailing lists
    12. Take pictures of an event or meeting and publish them online
    13. Write a blog about an event or meeting, to help promote the group
    14. Help record and post a session online
    15. Present your JavaOne experience when you get back
    16. Repeat the best talk you saw at JavaOne at a JUG meeting
    17. Send this list of ideas to other Java developers in your area so they can help out too!
    18. Present a step-by-step tutorial
    19. Present GreenFoot and Alice to school students
    20. Present BlueJ and Alice to university students
    21. Teach those tools to teachers and professors
    22. Write a step-by-step tutorial on your blog or for a magazine
    23. Create a page that lists resources
    24. Give a talk about your favorite Java feature or technology
    25. Learn a new Java API and present it to your co-workers
    26. Then present it at a JUG meeting, then at an event in your area, and submit it to JavaOne!
    27. Create a study group to get certified or to learn some new Java technology
    28. Teach a non-Java developer how to download the basic tools and where to find more information
    29. Download and use an open source project
    30. Improve the documentation
    31. Write an article or a blog post about the project
    32. Write an FAQ
    33. Join and participate on the mailing list
    34. Describe a bug in detail and submit a bug report
    35. Fix a bug and submit it to the project
    36. Give a talk about it at a JUG meeting
    37. Teach your co-workers how to use the project
    38. Sign up to Adopt a JSR
    39. Test regular builds of the Reference Implementation (RI)
    40. Report bugs in the RI
    41. Submit Feature Requests to the spec
    42. Triage issues on the issue tracker
    43. Run a hack day to discuss the API
    44. Moderate mailing lists and forums
    45. Create an FAQ or Wiki
    46. Evangelize a specification on Twitter, G+, Hacker News, etc.
    47. Give a lightning talk
    48. Help build the RI
    49. Help build the Technical Compatibility Kit (TCK)
    50. Create a podcast
    51. Learn Latin - e.g. legal language - and translate it to English
    52. Sign up to Adopt OpenJDK
    53. Run a Bugathon
    54. Fix javac compiler warnings
    55. Build virtual images
    56. Add tests to Java
    57. Submit Javadoc patches
    58. Give a webinar
    59. Teach someone to build OpenJDK
    60. Hold a brown bag session at work
    61. Fix the oldest known bug
    62. Overhaul Javadoc to use HTML
    63. Load the OpenJDK into different IDEs
    64. Run a build farm node
    65. Test your code on a nightly build
    66. Learn how to read Java byte code
    67. Visit JCP.org
    68. Follow jcp_org on Twitter
    69. Friend JCP on Facebook
    70. Read the JCP Blog
    71. Register on the JCP.org site
    72. Create a JSR Watch List
    73. Review JSRs in progress
    74. Comment on JSRs in progress; write and track bug reports, use cases, etc.
    75. Review JSRs in Maintenance
    76. Comment on JSRs in Maintenance
    77. Implement Final JSRs
    78. Review the transparency of JSRs in progress and provide feedback to the PMO and Spec Lead/community
    79. Become a JCP Member or associate with a current JCP member
    80. Nominate to serve on an Expert Group (EG)
    81. Serve on an EG
    82. Submit a JSR proposal and become Spec Lead
    83. Take a Spec Lead role in an Inactive or Dormant JSR
    84. Nominate for an Executive Committee (EC) seat
    85. Vote in the EC elections
    86. Vote in EC Special Elections
    87. Review EC Meeting Summaries
    88. Attend Spec Lead calls
    89. Write blogs and articles on your experiences
    90. Join the EC project on java.net
    91. Join JCP.Next on java.net/JSR 358
    92. Participate on the JCP forums and join JSR projects on java.net
    93. Suggest agenda items for open EC meetings
    94. Attend public EC teleconferences (2x per year)
    95. Attend open EC meetings at JavaOne
    96. Nominate for JCP Annual Awards
    97. Attend the annual JavaOne and JCP Annual Awards Ceremony
    98. Attend JCP-related BOF sessions and give your feedback to the Program Office
    99. Invite JCP Program Office members to your JUG or meetup
    100. Invite JSR Spec Leads to your JUG or meetup
    101. And always - hold a party!

    Read the article

  • Selecting Items in a GeoToolkit Driven Map

    - by Geertjan
    When you take a look at all the tools provided by GeoToolkit, you'll be quite impressed. For example, within the US map shown in yesterday's blog entry, you can drill down into individual states by selecting them via the mouse, as shown below: With that, the basis of a more complex application is laid, since all the map-related functionality is handed to you out of the box. The sample referred to yesterday has been updated; if you check it out and run it (assuming you've taken the additional steps mentioned yesterday), you'll see the above. http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/tutorials/geospatial/geotoolkit/MyGeospatialSystem

    Read the article

  • Draggable & Resizable Editors

    - by Geertjan
    Thanks to a cool tip from Steven Yi (here in the comments to a blog entry), I was able to make a totally pointless but fun set of draggable and resizable editors: What you see above are two JEditorPanes within JPanels. The JPanels are within ComponentWidgets provided by the NetBeans Visual Library, which is also where the special border comes from. The ComponentWidgets are within a Visual Library Scene, which is within a JScrollPane in a TopComponent. Each editor has the following, which binds the NetBeans Java Editor to the JEditorPane:

        jEditorPane1.setContentType("text/x-java");
        EditorKit kit = CloneableEditorSupport.getEditorKit("text/x-java");
        jEditorPane1.setEditorKit(kit);
        jEditorPane1.getDocument().putProperty("mimeType", "text/x-java");

    A similar thing is done in the other JEditorPane, i.e., it is bound to the XML Editor. While the XML Editor also has code completion, in addition to syntax coloring, as can be seen above, this is not the case for the JEditorPane bound to the Java Editor, since the JEditorPane doesn't have a Java classpath, which is needed for Java code completion to work.
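    For context, here is a rough sketch of how those pieces could fit together, using the real Visual Library classes (Scene, ComponentWidget, ActionFactory); the factory class itself and its method name are invented for illustration, and the editor-kit binding shown above would be applied to the editor pane as well:

        // Sketch: wrap a JPanel-hosted editor in a ComponentWidget and make it
        // draggable and resizable inside a Visual Library Scene.
        import javax.swing.JComponent;
        import javax.swing.JEditorPane;
        import javax.swing.JPanel;
        import javax.swing.JScrollPane;
        import org.netbeans.api.visual.action.ActionFactory;
        import org.netbeans.api.visual.widget.ComponentWidget;
        import org.netbeans.api.visual.widget.Scene;

        public class EditorSceneFactory {
            public static JScrollPane createEditorScene() {
                Scene scene = new Scene();

                JEditorPane editorPane = new JEditorPane();
                editorPane.setContentType("text/x-java");
                JPanel panel = new JPanel();
                panel.add(editorPane);

                // The ComponentWidget hosts the Swing panel; the resize action is
                // added before the move action so border drags resize rather than move.
                ComponentWidget widget = new ComponentWidget(scene, panel);
                widget.getActions().addAction(ActionFactory.createResizeAction());
                widget.getActions().addAction(ActionFactory.createMoveAction());
                scene.addChild(widget);

                // The scene's view is what goes into the JScrollPane in the TopComponent.
                JComponent view = scene.createView();
                return new JScrollPane(view);
            }
        }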

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated, as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
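    To make the "measure at a given load, then extrapolate" approach concrete, here is a deliberately tiny Java sketch of a linear sizing model. Every number in it is an invented placeholder, not a Coherence measurement; a real sizing exercise would substitute figures captured from your own cluster:

        // Toy linear extrapolation of resource usage from a measured load to a target load.
        // All inputs are hypothetical placeholders for illustration only.
        public class LinearSizing {
            public static void main(String[] args) {
                double measuredRequestsPerSec = 5_000;
                double measuredCpuUtil = 0.30;      // 30% CPU observed at the measured load
                double measuredNetworkMBps = 40;    // observed network bandwidth
                double targetRequestsPerSec = 20_000;

                double scale = targetRequestsPerSec / measuredRequestsPerSec;
                System.out.printf("Projected CPU: %.0f%% of current capacity%n",
                        measuredCpuUtil * scale * 100);
                System.out.printf("Projected network: %.0f MB/s%n",
                        measuredNetworkMBps * scale);
                // A projection well above current capacity suggests adding members. And as
                // noted above, the linear model breaks down at extreme scale, where
                // cluster-management work (joins, leaves, rebalancing) stops scaling out.
            }
        }

    As the article stresses, this only yields a starting estimate; the model still has to be validated by testing at full scale.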

    Read the article

  • Maxco Quickly Implements JD Edwards World A9.1

    David Bryant, Vice President and CFO of Maxco, explains to Cliff why Maxco chose to be one of the first to implement JD Edwards World A9.1, how the implementation is going to be a huge competitive advantage for Maxco and its customers, and the value Bryant sees in being part of the Quest User Group community.

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background
    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:
    - shares, for proportional CPU allocation (if you have twice as many shares as I do, and we are competing for CPU, you'll get about twice as many CPU cycles)
    - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example)
    - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU)
    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management
    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully sizing every server for the peak workload it might experience), that's not really workload management. With resource management, one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if it didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much - we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art
    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so I am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management
    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service levels vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications
    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular, I can see events like "receive a web hit" and "respond to that web hit", so I can get the transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary
    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
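    To illustrate the closing idea - hold the service level constant and vary the resource allocation - here is a minimal, hypothetical Java sketch of such a feedback loop. The LatencyProbe and ResourceControl interfaces are invented stand-ins for real instrumentation (e.g., DTrace-fed response-time metrics) and real resource controls (e.g., Solaris CPU shares); none of this is an actual Solaris API:

        // Hypothetical feedback loop: compare measured response time to the objective
        // and nudge CPU shares up or down until the objective is met.
        public class WorkloadManager {
            interface LatencyProbe { double avgResponseMillis(); } // e.g., fed by DTrace probes
            interface ResourceControl {
                int cpuShares();
                void setCpuShares(int shares);
            }

            private final double objectiveMillis;
            private final LatencyProbe probe;
            private final ResourceControl control;

            WorkloadManager(double objectiveMillis, LatencyProbe probe, ResourceControl control) {
                this.objectiveMillis = objectiveMillis;
                this.probe = probe;
                this.control = control;
            }

            void adjustOnce() {
                double measured = probe.avgResponseMillis();
                int shares = control.cpuShares();
                int step = Math.max(1, shares / 10);         // move ~10% at a time
                if (measured > objectiveMillis * 1.1) {
                    control.setCpuShares(shares + step);     // missing the objective: add CPU
                } else if (measured < objectiveMillis * 0.9 && shares > step) {
                    control.setCpuShares(shares - step);     // comfortably ahead: give CPU back
                }                                            // inside the dead band: leave it alone
            }
        }

    A real manager would also have to detect which resource is actually the bottleneck (CPU, RAM, I/O) before adjusting anything - exactly the "CPU boost" pitfall described above.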

    Read the article

  • My JavaOne 2012

    - by Geertjan
    I received a JavaOne speaker invitation for the following sessions and BOFs. Only one involves me on my own:
    Session ID: CON2987
    Session Title: Unlocking the Java EE 6 Platform
    The rest are combo packages, i.e., you get multiple speakers for the price of one.
    Sessions and BOFs together with others:
    Session ID: BOF4227 (together with Zoran Sevarac)
    Session Title: Building Smart Java Applications with Neural Networks, Using the Neuroph Framework
    Session ID: BOF5806 (together with Manfred Riem)
    Session Title: Doing JSF Development in NetBeans 7.1
    Session ID: CON3160 (together with Allan Gregersen and others)
    Session Title: Dynamic Class Reloading in the Wild with Javeleon
    Discussion panels:
    Session ID: CON4952 (together with several NetBeans Platform developers)
    Session Title: NetBeans Platform Panel Discussion
    Session ID: CON6139 (together with several NetBeans IDE users)
    Session Title: Lessons Learned in Building Enterprise and Desktop Applications with the NetBeans IDE

    Read the article

  • RPi and Java Embedded GPIO: Connecting LEDs

    - by hinkmond
    Next, we need some low-level peripherals to connect to the Raspberry Pi GPIO header. So, we'll do what's called a "Fry's Run" in Silicon Valley, which means we go shop at the local Fry's Electronics store for parts. In this case, we'll need some breadboard jumper wires (blue wires in the photo), some LEDs, and some resistors (for the RPi GPIO, 150 ohms - 300 ohms would work for the 3.3V output of the GPIO ports). And, if you want to do other projects, you might as well buy a breadboard, which is a development board with lots of holes in it. Ask a Fry's clerk for help. Or, better yet, ask the customer standing next to you in the electronics components aisle for help. (Might be faster.) So, go to your local hobby electronics store, or go to Fry's if you have one close by, and come back here for the next blog post to see how to hook these parts up. Hinkmond
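    As a sanity check on that 150-300 ohm recommendation, here is the standard series-resistor arithmetic, R = (Vsupply - Vforward) / I, in a small Java sketch. The LED forward voltage and target current are assumed typical values for a red LED, not figures from the post:

        // Series resistor for an LED on a 3.3V GPIO pin: R = (Vsupply - Vforward) / I.
        public class LedResistor {
            public static void main(String[] args) {
                double vSupply = 3.3;    // RPi GPIO high level, per the post
                double vForward = 2.0;   // typical red LED forward drop (assumption)
                double currentA = 0.008; // ~8 mA, a gentle load for a GPIO pin (assumption)
                double ohms = (vSupply - vForward) / currentA;
                System.out.printf("Suggested series resistor: ~%.0f ohms%n", ohms);
                // Prints ~163 ohms, which is why the post's 150-300 ohm range works:
                // the low end runs the LED brighter, the high end dimmer and safer.
            }
        }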

    Read the article

  • Simple Merging Of PDF Documents with iTextSharp 5.4.5.0

    - by Mladen Prajdic
    As we were working on our first SQL Saturday in Slovenia, we came to a point when we had to print out the so-called SpeedPASSes for attendees. This SpeedPASS file is a PDF and contains their raffle, lunch and admission tickets. The problem is we have to download one PDF per attendee and print that out. And printing more than 10 docs at once is a pain. So I decided to make a little console app that would merge multiple PDF files into a single file that would be much easier to print. I used an open source PDF manipulation library called iTextSharp, version 5.4.5.0. This is the console program I used. It's brilliantly named MergeSpeedPASS. It only has two methods and is really short. Don't let the name fool you: it can be used to merge any PDF files. The first parameter is the name of the target PDF file that will be created. The second parameter is the directory containing PDF files to be merged into a single file.

        using iTextSharp.text;
        using iTextSharp.text.pdf;
        using System;
        using System.IO;

        namespace MergeSpeedPASS
        {
            class Program
            {
                static void Main(string[] args)
                {
                    if (args.Length == 0 || args[0] == "-h" || args[0] == "/h")
                    {
                        Console.WriteLine("Welcome to MergeSpeedPASS. Created by Mladen Prajdic. Uses iTextSharp 5.4.5.0.");
                        Console.WriteLine("Tool to create a single SpeedPASS PDF from all downloaded generated PDFs.");
                        Console.WriteLine("");
                        Console.WriteLine("Example: MergeSpeedPASS.exe targetFileName sourceDir");
                        Console.WriteLine("         targetFileName = name of the new merged PDF file. Must include .pdf extension.");
                        Console.WriteLine("         sourceDir      = path to the dir containing downloaded attendee SpeedPASS PDFs");
                        Console.WriteLine("");
                        Console.WriteLine(@"Example: MergeSpeedPASS.exe MergedSpeedPASS.pdf d:\Downloads\SQLSaturdaySpeedPASSFiles");
                    }
                    else if (args.Length == 2)
                        CreateMergedPDF(args[0], args[1]);

                    Console.WriteLine("");
                    Console.WriteLine("Press any key to exit...");
                    Console.Read();
                }

                static void CreateMergedPDF(string targetPDF, string sourceDir)
                {
                    using (FileStream stream = new FileStream(targetPDF, FileMode.Create))
                    {
                        Document pdfDoc = new Document(PageSize.A4);
                        PdfCopy pdf = new PdfCopy(pdfDoc, stream);
                        pdfDoc.Open();
                        var files = Directory.GetFiles(sourceDir);
                        Console.WriteLine("Merging files count: " + files.Length);
                        int i = 1;
                        foreach (string file in files)
                        {
                            Console.WriteLine(i + ". Adding: " + file);
                            pdf.AddDocument(new PdfReader(file));
                            i++;
                        }
                        if (pdfDoc != null)
                            pdfDoc.Close();
                        Console.WriteLine("SpeedPASS PDF merge complete.");
                    }
                }
            }
        }

    Hope it helps you and have fun.

    Read the article

  • Can we put percentage on amount of work of a certain role in project's lifecycle?

    - by deviDave
    The title may be confusing, but I will elaborate here. I am trying to figure out how much time and effort each person spends during a project. I divided the roles into:
    - junior developer (works mainly on UI and some light things)
    - senior developer (develops complex logic, database structures, etc.)
    - lead developer (leads the team; usually the most experienced person)
    - negotiator/resolver (a person who talks directly to the client, trying either to negotiate terms and timeframe or to clarify vagueness presented by the team leader)
    My aim is to calculate the percentage of each role's involvement based on quality, not time (obviously the junior will spend the most time on the project, but with the least quality). In the end I would get a table which may look like this:
    Total: 100%
    Junior: 10%
    Senior: 50%
    Lead: 30%
    Negotiator: 10%
    Can this be achieved? Has anyone found any source which may help me?
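    For what it's worth, once the subjective part (scoring each role's quality of contribution) is settled, turning the scores into the table above is simple normalization. A minimal Java sketch with invented weights reproduces the example table:

        // Normalize per-role quality scores into percentages that sum to 100.
        // The scores are invented for illustration; choosing them is the hard part.
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class RoleInvolvement {
            public static void main(String[] args) {
                Map<String, Double> qualityScore = new LinkedHashMap<>();
                qualityScore.put("Junior", 1.0);
                qualityScore.put("Senior", 5.0);
                qualityScore.put("Lead", 3.0);
                qualityScore.put("Negotiator", 1.0);

                double total = qualityScore.values().stream()
                        .mapToDouble(Double::doubleValue).sum();
                qualityScore.forEach((role, score) ->
                        System.out.printf("%s: %.0f%%%n", role, 100 * score / total));
            }
        }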

    Read the article

  • Data Governance (Veri Yönetisimi)

    - by Arda Eralp
    Data governance is a system of responsibilities for operations on data. The foundation of this system is policies, standards, and procedures. Through these policies, standards, and procedures, the system decides when, under what conditions, in which operations, with which methods, and by whom data may be used. For the system to work successfully in an organization, awareness must first be established across the organization. Once awareness is established, the organization must adopt a culture of governance and architecture. Only under these conditions can the system operate successfully. For these reasons, data governance is not a short process; on the contrary, it is a process that runs for as long as the organization exists. This tells us that data governance is not a project but a program.

    At the start of the program, the fundamentals are clarifying the organization's needs and establishing awareness. The target audience is everyone who works with data directly or indirectly. Therefore, at the start of the program, meetings are held with the teams that make up the target audience. These meetings both establish awareness and clarify the teams' needs, conveyed first-hand by the teams themselves. Once the target audience's needs have been clarified, this continuously running process is planned. Planning the process requires prioritizing the needs, because each team's needs may differ and it may not be possible to meet all needs at the same time; this expectation rests on how the teams' needs are interconnected. After prioritizing the needs, the size of the organization must be taken into account: if the organization is too large to be governed all at once as a whole, governance starts with the teams whose needs have priority, so that the process is brought into operation across the whole organization at a steady pace.

    Once the needs have been identified and the relevant teams selected, planning of the program can begin. The first step in the planning phase is to plan the Data Governance Office, which will control the stages of the process and maintain control once the process is operating within the organization. Along with planning the office, the roles in the process and their responsibilities are defined. The planning phase covers the Data Governance Office, roles and responsibilities, security, and the systems in which data is stored. When the planning phase is complete, the program moves into the execution phase with the selected teams and identified needs.

    In the execution phase, work is done on security and on the data storage systems in line with each team's needs. This work is documented as a process, and when the process ends, the same process is run with another team for another need; in this way the process is standardized across the organization. On the security side, data access security and data usage security are addressed. Work on the data storage systems ensures that they are created and managed according to the standards defined within the program.

    After the execution phase, the program moves to the monitoring phase. By this point the Data Governance Office has been formed and the policies, standards, and procedures have been defined. The Data Governance Office staff, within their roles and responsibilities, monitor the operation of the program and, when necessary, make changes to the policies, standards, and procedures.

    Read the article

  • Finding which activities will execute next in a process instance

    - by Mark Nelson
      We have had a few queries lately about how to find out what activity (or activities) will be the next to execute in a particular process instance.  It is possible to do this, however you will need to use a couple of undocumented APIs.  That means that they could (and probably will) change in some future release and break your code.  If you understand the risks of using undocumented APIs and are prepared to accept that risk, read on… READ MORE >>

    Read the article

  • PASS Budget Posted

    - by Bill Graziano
    If you’re a member of PASS you can view our FY2011 budget at http://www.sqlpass.org/AboutPASS/Governance.aspx.  Our detailed budget is 29 pages long and provides an incredibly detailed snapshot of where our money comes from and how we spend it.  I’ve also written a summary highlighting some of the changes from last year.  If you have any questions about the budget you can ask them here or on the PASS site.

    Read the article

  • JavaFX Makeover for JFugue Music NotePad

    - by Geertjan
    Bengt-Erik Fröberg from Sweden, one of the developers working on ProSang, the leading Scandinavian blood bank system (and based on the NetBeans Platform), is reworking the user interface of the JFugue Music NotePad. In particular, the Score window (named ScoreFX window below) contains components that are now quite clearly JavaFX, instead of Swing. Looks a lot better and also performs better. The sliders in the Keyboard window are candidates for being similarly redone to use JavaFX instead of Swing. Want to do something similar? Here's all the info you need: http://platform.netbeans.org/tutorials/nbm-javafx.html

    Read the article

  • DotNetNuke 5.4.1 Released

    I am happy to announce the release of DotNetNuke 5.4.1, which corrects the major issues that slipped through the QA process for 5.4. While we try to do a good job of testing our releases, our recent efforts for 5.3 and 5.4 have fallen short of the mark. We are currently working with a small team of commercial module developers and the core team to put a better public beta testing process in place that will help augment our own internal testing. Ultimately, community testing is the only testing that...

    Read the article

  • Lifecycle of an ASP.NET MVC 5 Application

    Here you can download a PDF document that charts the lifecycle of every ASP.NET MVC 5 application, from receiving the HTTP request to sending the HTTP response back to the client. It is designed both as an educational tool for those who are new to ASP.NET MVC and as a reference for those who need to drill into specific aspects of the application. The PDF document has the following features:
    - Relevant HttpApplication stages to help you understand where MVC integrates into the ASP.NET application lifecycle.
    - A high-level view of the MVC application lifecycle, where you can understand the major stages that every MVC application passes through in the request processing pipeline.
    - A detail view that drills down into the details of the request processing pipeline. You can compare the high-level view and the detail view to see how the lifecycle's details are collected into the various stages.
    - Placement and purpose of all overridable methods on the Controller object in the request processing pipeline. You may or may not have the need to override any one method, but it is important for you to understand their role in the application lifecycle so that you can write code at the appropriate lifecycle stage for the effect you intend.
    - Blown-up diagrams showing how each of the filter types (authentication, authorization, action, and result) is invoked.
    - A link to a useful article or blog from each point of interest in the detail view.

    Read the article

  • Groovy Refactoring in NetBeans

    - by Martin Janicek
    Hi guys, during NetBeans 7.3 feature development I spent quite a lot of time trying to bring some basic Groovy refactoring into the game. I've implemented Find Usages and Rename refactoring for some basic constructs (class types, fields, properties, variables and methods). It's certainly not perfect and it will definitely need a lot of fixes and improvements to get it one hundred percent reliable, but I need to start somehow :) I would like to ask all of you to test it as much as possible and file new tickets for the cases where it doesn't work as expected (e.g. some occurrence that should be among the usages isn't there, etc.). It's really important for me, because I don't have a real Groovy project and thus I can test only some simple cases. I can promise that with your help we can make it really useful for the next release. Also, please be aware that the current version focuses only on .groovy files. That means it won't find any usages from .java files (and the same applies to finding usages from Java files - it won't find any Groovy usages). I know it's not ideal, but as I said, we had to start somehow and it wasn't possible to make it all-in-one, so the only other option was to wait for NetBeans 7.4. I'll focus on better Java-Groovy integration in the next release (not only in refactoring, but also in navigation, code completion, etc.) BTW: I've created a new component with the surprising name "Refactoring" in our Bugzilla [1], so please put reported issues into this category. [1] http://netbeans.org/bugzilla/buglist.cgi?product=groovy;component=Refactoring

    Read the article

  • Real-Time Multi-User Gaming Platform

    - by Victor Engel
    I asked this question at Stack Overflow but was told it's more appropriate here, so I'm posting it again here. I'm considering developing a real-time multi-user game, and I want to gather some information about possibilities before I do some real development. I've thought about how best to ask the question, and for simplicity, the best way that occurred to me was to make an analogy to the field (or playground) game darebase.

    In the field game of darebase, there are two or more bases. To start, there is one team on each base. The game is a fancy game of tag. When two people meet out in the field, the person who left his base most recently timewise captures the other person. They then return to that person's base. Play continues until everyone is part of the same team.

    So, analogizing this to an online computer game, let's suppose there are an indefinite number of bases. When a person starts up the game, he has a team that is located at, for example, his current GPS coordinates. It could be a virtual world, but for the sake of argument, let's suppose the virtual world corresponds to the player's actual GPS coordinates. The game software then consults the database to see where the closest other base online is, and the two teams play their game of virtual tag. Note that the user of the other base could have a different base than the one run by the current user as the closest base to him, in which case he would be in two simultaneous battles, one with each base. When users go offline, the state of their players is saved on a server somewhere. Game logic calls for the players to have automaton logic of some sort, so they can fend for themselves in a limited way using basic rules until their user comes online again. The user doesn't control the players' movements directly, but issues general directives that influence the players' movement logic.

    I think this analogy is good enough to frame my question. What sort of platforms are available to develop this sort of game? I've been looking at SmartFoxServer, but I'm not convinced yet that it is the best option, or even that it will work at all. One possibility, of course, would be to roll out my own web server, but I'd rather not do that if there is an existing service out there that I could tap into. I will be developing for iOS devices at first. So any suggestions would be greatly appreciated. I think I need to establish the architecture first before proceeding with this project. Note that darebase is not the game I intend to implement, but, upon reflection, that might not be a bad idea either.

    Read the article
