Search Results


  • South Florida SQL Saturday 2010

    - by ScottKlein
    The South Florida SQL Server Users Group is proud to announce that the 2nd annual South Florida SQL Saturday will be held Saturday, July 31st at DeVry University (the same place it was held last year). Like last year, this will be a FREE event, aimed at everyone involved with SQL Server. For those who attended in 2009, this was a great event. We had nearly 500 attendees with 6 tracks and great speakers. We hope to do it bigger and better this year, with more tracks, more speakers, and more people! Our goal is to surpass the 500-attendee mark, with tracks for DBAs, SQL developers, plenty of BI information, and Azure! Last year I said seating was limited, but what the heck? No limitation. If you deal with SQL, or want to learn SQL, this is the place to be. To register, click on this link: http://www.sqlsaturday.com/40/eventhome.aspx We already have a lot of registered attendees and many sessions submitted. Many SQL Server experts and MVPs will be speaking, so here is a chance to learn from the BEST!

    Read the article

  • Efficiency concerning thread granularity

    - by MaelmDev
    Lately, I've been thinking of ways to use multithreading to improve the speed of different parts of a game engine. What confuses me is the appropriate granularity of threads, especially when dealing with single-instruction-multiple-data (SIMD) tasks. Let's use line-of-sight detection as an example. Each AI actor must be able to detect objects of interest around them and mark them. There are three basic ways to go about this with multithreading:
    1. Don't use threading at all.
    2. Create a thread for each actor.
    3. Create a thread for each actor-object combination.
    Option 1 is obviously going to be the least efficient method. However, choosing between the next two options is more difficult. Using only one thread per actor still runs through every object in series instead of in parallel. But are CPUs able to create and join threads at the granularity posed in Option 3 efficiently? It seems like that many calls to the OS could be really slow and could vary enormously across different hardware.
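
    For illustration, here is one more option the question doesn't list: a fixed pool of worker threads, created once and reused every frame, so nothing is spawned or joined per actor or per actor-object pair. A minimal Python sketch (the actor class and its can_see method are assumptions, not from the question):

        from concurrent.futures import ThreadPoolExecutor

        # Created once at engine start-up and reused every frame, so the
        # per-call OS thread-creation cost the question worries about is
        # paid only once.
        pool = ThreadPoolExecutor(max_workers=8)

        def detect_visible(actors, objects):
            # Option 2 granularity: one task per actor, run in parallel.
            def scan(actor):
                # actor.can_see() stands in for the real line-of-sight test.
                return actor, [obj for obj in objects if actor.can_see(obj)]
            return list(pool.map(scan, actors))

    With a pool, Option 3's finer granularity becomes a matter of submitting smaller tasks rather than creating more threads, which is cheap enough to tune empirically.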

    Read the article

  • Oracle Database Appliance and Remarketers - A Whole New Opportunity

    - by Martin Morganti
    We have recently had another exciting announcement for the remarketer initiative. The Oracle Database Appliance is now available for resale through your authorized Value Added Distributor. This means that with no fees, no barriers, and no upfront commitments, a remarketer can identify and close transactions for the Oracle Database Appliance without having to sign an Oracle contract. In addition, the remarketer is now authorized to sell the Oracle Database Appliance with Oracle Database 11g Enterprise Edition, RAC, or RAC One Node software included in the transaction. The availability of the Oracle Database Appliance for remarketers means that a broad base of customers can now start engaging with Oracle through remarketers by taking advantage of this engineered system technology: Oracle hardware and software engineered to work together to drive incremental revenue. The Oracle Database Appliance is a simple, reliable, and affordable way to rapidly deploy a database cluster: plug in the power and the network, then push the one-button install for the Appliance Manager software. Find out more about the Oracle Database Appliance, including a webinar by Oracle President Mark Hurd, as well as other material to help you better understand the opportunity available to you as remarketers. If you want Oracle to keep you informed about developments in Remarketer, complete the free registration, or talk to your local Oracle Remarketer authorized value added distributors. (A complete list is available on the Oracle Remarketer page.)

    Read the article

  • 8 Reasons to Attend Oracle OpenWorld 2012

    - by kgee
    Every year, the Oracle Hardware team recognizes the unique buzz that accompanies the season of OpenWorld. During the late nights kept possible by the grace of caffeine, combined with the stress and eagerness for the event to run smoothly, we like to remind ourselves of why all our hard work is going to pay off. So, now that we've registered, here are some of our top reasons that we're excited for Oracle OpenWorld 2012:
    1. The Keynotes - Just to name a few... Larry Ellison, Mark Hurd, Thomas Kurian, John Fowler and many more are speaking live. We're expecting to walk away from the keynotes with a new frame of reference on a vast array of hot topics.
    2. Networking - Whether it's through means of the OpenWorld Lounges, social media, or bars and cafes around Moscone Center, we'll be surrounded by people who are experts in the hardware field.
    3. Hardware Sessions - There are enough sessions to satisfy every Oracle hardware knowledge need.
    4. Hardware Experts in General - So many experts that we wish we could be in two places at once sometimes.
    5. Pearl Jam & Kings of Leon - Rock out with these two legendary bands at the Oracle Appreciation Event!
    6. Oracle Music Festival - Joss Stone, Macy Gray, the Hives, and Jimmy Cliff will be welcome escapes at the end of each day at OpenWorld, and are just a couple more reasons these all-nighters before OpenWorld are worth it.
    7. ORACLE TEAM USA and the America's Cup trophy - After the sailors take on San Francisco Bay for Fleet Week, we'll be soliciting them for autographs and taking pictures with them at OpenWorld.
    8. Location, Location, Location - The Moscone Center is beautiful and in the best location in San Francisco. We know the OpenWorld hype will get to us sometimes, and it's nice to know that we have pretty much everything San Francisco has to offer at our fingertips.
    Why are you excited for #OOW? Tell us why!

    Read the article

  • Topeka Dot Net User Group (DNUG) Meeting – April 6, 2010

    - by Robz / Fervent Coder
    Topeka DNUG is free for anyone to attend! Mark your calendars now!
    SPEAKER: Troy Tuttle is a self-described pragmatic agilist and Kanban practitioner, with more than a decade of experience in delivering software in the finance and health industries and as a consultant. He advocates teams improve their performance through pursuit of better practices like continuous integration and automated testing. Troy is the founder of the Kansas City Limited WIP Society and is a speaker at local area groups on team-related topics. He currently works as a Project Lead Consultant with AdventureTech Group of Kansas City, KS.
    TOPIC: Why Kanban? Kanban is receiving a large amount of attention recently. What does it offer compared to other approaches? Answering that question may require you to hit the "reset" button on previously held biases and assumptions. Kanban blends Lean thought with ideas from first-generation agile methodologies. To get started with Kanban, we will examine what steps are necessary to establish a transparent, work-limited, pull system. We will highlight the perils of allowing too much work-in-progress and how it affects development performance. Once established, Kanban teams need only a few metrics and tools to monitor their performance and improvement.
    WHERE: Federal Home Loan Bank Topeka on the Security Benefit Campus - Directions?
    WHEN: 11:30 AM - 1:00 PM on April 6th, 2010
    REGISTER: http://topekadotnet.wufoo.com/forms/topeka-dnug-meeting-attendance/
    ADDITIONAL INFO: As always, please sign in and out of FHLBank to help them with their accountability. Please park in the visitors section at the front of the building when you arrive. If there are no spots in visitors, you may park in the overflow lot at the far east end of the facility. Lunch will be provided and we will have some great door prizes!

    Read the article

  • How to represent an agile project to people focused on waterfall [closed]

    - by ahsteele
    Our team has been asked to represent our development efforts in a project plan. No one is unhappy with our work or questioning our ability to deliver; we are just participating in an IT cattle call for project plans. The trouble is we are an agile team and haven't thought about our work in terms of a formal project plan. While we have a general idea of what we are working on next, we aren't 100% sure until we plan an iteration. Until now our team has largely operated in a vacuum and has not been required to present our methodology or metrics to outside parties. We follow most of the practices espoused in Extreme Programming. We hold quarterly planning meetings to get a general idea of the stories we are going to work on for a quarter. That said, our stories are documented on 3x5 cards and are only estimated at the beginning of the iteration in which they are going to be worked. After estimation we document the story in Team Foundation Server. During an iteration, we attach code to stories and mark stories as completed once finished. From this data we are able to generate burn-down and velocity charts. Most importantly, we know our average velocity for an iteration, keeping us from biting off more than we can chew. I am not looking to modify the way we do development but want to present our development activities in a report that someone only familiar with waterfall will understand. In "What Does an Agile Project Plan Look Like," Kent McDonald does a good job laying out the differences between agile and waterfall project plans. He specifies the differences in consumable bullets:
    - An agile project plan is feature based
    - An agile project plan is organized into iterations
    - An agile project plan has different levels of detail depending on the time frame
    - An agile project plan is owned by the team
    Being able to explain the differences is great, but how best to present the data?

    Read the article

  • OpenGL profiling with AMD PerfStudio 2

    - by Aurus
    I'm rendering just a really small number of polygons for my UI, but I still tried to increase the FPS. In the end I removed redundant calls, which increased the FPS. I really don't want to lose FPS for nothing, so I keep looking for more improvements. The first thing I noticed is the "huge" time where no calls are made before SwapBuffers (the black one). I know that OpenGL works asynchronously, so SwapBuffers has to wait until everything is done. But shouldn't PerfStudio mark this time as black too? Correct me if I am wrong. The second thing I noticed is that some glUniform2f calls just take longer (the brown ones). They should all upload two floats to the GPU, so how can the time differ so much from call to call? The program isn't even changed or anything like that. I also looked at other programs like gDebugger or CodeXL, but they often crashed, and they show fewer statistics (only number of calls, redundant calls, etc.). EDIT: I also realized that the draw calls have different durations, which was obvious to me, but sometimes drawing more vertices is faster than drawing fewer vertices.
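
    Worth noting alongside this (not from the question): CPU-side timings of individual GL calls mostly measure driver bookkeeping, because the GPU executes the command stream asynchronously; GPU timer queries give a truer picture of where draw time actually goes. A minimal PyOpenGL sketch of the idea, assuming a GL 3.3+ context is already current (treat the exact return shapes as assumptions to verify):

        from OpenGL.GL import (GL_QUERY_RESULT, GL_TIME_ELAPSED,
                               glBeginQuery, glEndQuery, glGenQueries,
                               glGetQueryObjectuiv)

        query = glGenQueries(1)  # one query object id

        glBeginQuery(GL_TIME_ELAPSED, query)
        # ... the draw calls or glUniform2f sequence being measured ...
        glEndQuery(GL_TIME_ELAPSED)

        # Blocks until the GPU result is ready; production code would poll
        # GL_QUERY_RESULT_AVAILABLE instead of stalling the pipeline.
        elapsed_ns = glGetQueryObjectuiv(query, GL_QUERY_RESULT)
        print("GPU time: %.3f ms" % (elapsed_ns / 1e6))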

    Read the article

  • EMEA Engineered Systems Partner Update Call – October 30th 2013

    - by JuergenKress
    EMEA Engineered Systems Partner Update Call: Engineered Systems (including Exalogic) updates from Oracle OpenWorld, on 30th October 2013 at 15:00 CET (UTC/GMT +1 hour). We are pleased to invite you to the next webcast in our Engineered Systems Partner Update Series. This time it will be all about "Engineered Systems updates from Oracle OpenWorld - all the news from Exalogic included" on Wednesday 30th October 2013 at 15:00 CET (UTC/GMT +1 hour). Once again, San Francisco hosted Oracle OpenWorld in September. Every year, thousands of partners and customers attend this event to discover new products and solutions, improve their technical proficiency and knowledge, learn tips and tricks for currently installed products, and understand where the industry is headed. In case you could not make it to San Francisco this time, we want to provide you with the key updates announced at Oracle OpenWorld around Engineered Systems. Please mark your diaries. You can also watch Larry's keynote on the Oracle Database 12c In-Memory Database and M6 Big Memory Machine, and many more, on the Oracle OpenWorld On Demand website.
    Agenda:
    - Overview of the latest Engineered Systems, including Exalogic, and how Oracle Fusion Middleware performs on the machine
    - How to articulate their value to customers
    Webcast joining details: To join the webcast CLICK HERE. For audio reception please use the following details: Global Dial-in Numbers, Session/Conference ID: 595 534 979, Password: 12385.
    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • links for 2011-03-16

    - by Bob Rhubart
    InfoQ: Randy Shoup on Evolvable Systems - Randy Shoup discusses evolvable systems: how to run different versions of a system in parallel during migrations, decoupling a system with events, schemas at eBay, and much more. (tags: ping.fm)
    InfoQ: Heresy & Heretical Open Source: A Heretic's Perspective - Douglas Crockford presents the debate around XML and JSON, and the negative effect of intellectual property laws on open source software. (tags: ping.fm)
    Oracle Technology Network Architect Day: Toronto - Registration is now open for this day-long event, to be held at the Sheraton Centre Toronto on April 21. Registration is free, but seating is limited. (tags: oracle otn enterprisearchitecture cloudcomputing)
    Harry Foxwell: The Cloud is STILL too slow! - "Considering the exponentially growing expectations of what the Web, that is, 'the Cloud', is supposed to provide, today's Web/Cloud services are still way too slow." - Harry Foxwell (tags: oracle otn cloud)
    Architecture Standards - BPMN vs. BPEL for Business Process Management (Enterprise Architecture at Oracle) - Path Shepherd gives props to Mark Nelson. (tags: entarch oracle otn)
    ORCLville: Oracle Fusion Applications: If I Were An AppsTech - Oracle ACE Director Floyd Teter says: "If I were an Oracle AppsTech with an eye on Fusion Applications, there are three tools/technologies I'd want..." (tags: oracle otn oracleace fusionapplications)
    Events Overview: Your brain on #entarch - OTN Architect Day - Denver - March 23 - This free event includes sessions on Cloud Computing, Application Portfolio Rationalization, System Optimization, Event-Driven Architecture, plus food, beverages, and lots of peer networking. Seating is limited. (tags: oracle entarch otn)

    Read the article

  • Sprite Sheets in PyGame?

    - by Eamonn
    So, I've been doing some googling and haven't found a good solution to my problem. My problem is that I'm using PyGame, and I want to use a sprite sheet for my player. This would all be well and good if I weren't using a sprite-sheet strip. Basically, if you don't understand, I have a strip of 32x32 'frames'. These frames are all in one image, alongside each other. So, I have 3 frames in 1 image. I'd like to be able to use them as my sprite sheet and not have to crop them up. I have used an awesome, popular, and easy-to-use game framework for Lua called LÖVE. LÖVE has these things called "quads". They are similar to texture regions in libGDX, if you know what those are. Basically, quads allow you to get parts of an image. You define how large a quad is, and you define parts of an image that way, or 'regions' of an image. I would like to do something similar to this in PyGame: use a "for" loop to go through the entire image width and height, mark each 32x32 area (or whatever the user defines as their desired frame width and height), and store it in a list or something for use later on. I'd define an animation speed and stuff, but that's for later. I've been looking around on the web, and I can't find anything that will do this. I found one script on the PyGame website, but it crashed PyGame when I tried to run it. I tried for hours to fix it, but no luck. So, is there a way to do this? Is there a way to get regions of an image? Am I going about this the wrong way? Is there a simpler way to do this? Thanks! :-)
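
    For reference, PyGame's closest equivalent to LÖVE's quads is Surface.subsurface, which returns a view into a rectangular region of an image without copying or cropping anything. A minimal sketch, assuming a strip of equal-sized frames and an already-initialized display:

        import pygame

        def load_frames(path, frame_w=32, frame_h=32):
            # subsurface() shares pixels with the sheet, so the strip
            # never has to be cropped into separate files.
            sheet = pygame.image.load(path).convert_alpha()
            sheet_w, sheet_h = sheet.get_size()
            frames = []
            for y in range(0, sheet_h, frame_h):
                for x in range(0, sheet_w, frame_w):
                    frames.append(sheet.subsurface(
                        pygame.Rect(x, y, frame_w, frame_h)))
            return frames

    Each returned Surface can be blitted directly, on whatever animation timer gets defined later.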

    Read the article

  • What is the best way to learn C# programming and gain the capability to do anything with C#?

    - by MSU
    I browsed all the answers given on Stack Overflow and couldn't find my answer; that's why I am writing this, so please don't mark this as a duplicate. In my case, I want to gain the capability of doing anything in C#, from building applications to solving problems. I searched around and tried to read books. Then one of the experts said reading books will not do any good for you; to learn, you have to solve problems, i.e. real-world problems, in C#. He gave me some sample problems, which I had previously done in C++, and I started doing them. The thing is, I know the internal logic of solving each problem, or the algorithm to solve it, but I don't know how to implement it in C# efficiently. So the gap is: I can't think in C#. I know the message to be passed but not the exact way to pass it. I wrote a program to solve a problem, then found out there were much easier ways of doing what I had done the hard way. So, can you guys (C# experts) please enlighten me: what do I need to do to get hold of the language and gain the ability to code in C# proficiently?

    Read the article

  • Automatic Generalization

    - by Nick Harrison
    I have been interested in functional programming since college. I played around a little with LISP back then, but I have not had an opportunity since. Now that F# ships standard with VS 2010, I figured now is my chance. So, I was reading up on it a little over the weekend when I came across a very interesting topic. F# includes a concept called "automatic generalization". As I understand it, the compiler will look at your method and analyze how you are using parameters. It will automatically switch to a generic parameter if that is possible based on your usage. Wow! I am looking forward to playing with this. I have long been an advocate of using the most generic types possible, especially when developing library classes. Use the highest-level base class that you can get away with. Use an interface instead of a specific implementation. I don't advocate passing object around, but you get the idea. Tools like ReSharper, FxCop, and most static code analysis tools provide guidance to help you identify when a more generalized type is possible, but this is the first time I have heard about the compiler taking matters into its own hands. I like the sound of this. We'll see if it is a good idea or not. What are your thoughts? Am I missing the mark on what automatic generalization does in F#? How would this work in C#? Do you see any problems with this?

    Read the article

  • Get Started with JavaFX 2 and Scene Builder

    - by Janice J. Heiss
    Up on otn/java is a very useful article by Oracle Java/Middleware/Core Tech Engineer Mark Heckler, titled "How to Get Started (FAST!) with JavaFX 2 and Scene Builder." Heckler, who has development experience in numerous environments, shows developers how to develop a JavaFX application using Scene Builder "in less time than it takes to drink a cup of coffee, while learning your way around in the process". He begins with a warning and a reassurance: "JavaFX is a new paradigm and can seem a bit imposing when you first take a look at it. But remember, JavaFX is easy and fun. Let's give it a try." Next, after showing readers how to download and install JDK/JavaFX and Scene Builder, he informs the reader that they will "create a simple JavaFX application, create and modify a window using Scene Builder, and successfully test it in under 15 minutes." Then readers download some NetBeans files: "EasyJavaFX.java contains the main application class. We won't do anything with this class for our example, as its primary purpose in life is to load the window definition code contained in the FXML file and then show the main stage/scene. You'll keep the JavaFX terms straight with ease if you relate them to the theater: a platform holds a stage, which contains scenes. SampleController.java is our controller class that provides the 'brains' behind the graphical interface. If you open the SampleController, you'll see that it includes a property and a method tagged with @FXML. This tag enables the integration of the visual controls and elements you define using Scene Builder, which are stored in an FXML (FX Markup Language) file. Sample.fxml is the definition file for our sample window. You can right-click and Edit the filename in the tree to view the underlying FXML -- and you may need to do that if you change filenames or properties by hand - or you can double-click on it to open it (visually) in Scene Builder." Then Scene Builder enters the picture and the task is soon done. Check out the article here.

    Read the article

  • Acceptable placement of the composition root using dependency injection and inversion of control containers

    - by Lumirris
    I've read in several sources, including Mark Seemann's 'Ploeh' blog, that the appropriate placement of the composition root of an IoC container is as close as possible to the entry point of an application. In the .NET world, these applications seem to be commonly thought of as web projects, WPF projects, console applications - things with a typical UI (read: not library projects). Does it really go against this sage advice to place the composition root at the entry point of a library project, when that project represents the logical entry point of a group of library projects, and the client of such a project group is someone else's work, whose author can't or won't add the composition root to their own project (a UI project, or yet another library project, even)? I'm familiar with Ninject as an IoC container implementation, but I imagine many others work the same way, in that they can scan for a module containing all the necessary binding configurations. This means I could put a binding module in its own library project to compile with my main library project's output, and if the client wanted to change the configuration (an unlikely scenario in my case), they could drop in a replacement dll to replace the library with the binding module. This seems to avoid the most common clients having to deal with dependency injection and composition roots at all, and would make for the cleanest API for the library project group. Yet this seems to fly in the face of conventional wisdom on the issue. Is it just that most of the advice out there assumes the developer has some coordination with the development of the UI project(s) as well, rather than my case, in which I'm just developing libraries for others to use?
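
    As a language-neutral illustration of the pattern being debated (a Python sketch; none of these names come from the question), putting the composition root at the library group's logical entry point amounts to exposing one factory function that wires the object graph, so clients never touch the container or the graph:

        # Hypothetical library internals: classes with injected dependencies.
        class SqlRepository:
            def __init__(self, connection_string):
                self.connection_string = connection_string

        class ReportService:
            def __init__(self, repository):
                self.repository = repository

        def create_report_service(connection_string="app.db"):
            """The library group's composition root: the only place that
            knows how the graph fits together; clients just call this."""
            return ReportService(SqlRepository(connection_string))

    Swapping the wiring then means replacing this one module, which mirrors the drop-in-a-replacement-dll idea described above.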

    Read the article

  • Misconceptions about purely functional languages?

    - by Giorgio
    I often encounter the following statements/arguments:
    1. Pure functional programming languages do not allow side effects (and are therefore of little use in practice, because any useful program does have side effects, e.g. when it interacts with the external world).
    2. Pure functional programming languages do not allow one to write a program that maintains state (which makes programming very awkward, because in many applications you do need state).
    I am not an expert in functional languages, but here is what I have understood about these topics so far. Regarding point 1, you can interact with the environment in purely functional languages, but you have to explicitly mark the code (functions) that introduces the side effects (e.g. in Haskell by means of monadic types). Also, AFAIK, computing by side effects (destructively updating data) should also be possible (using monadic types?) but is not the preferred way of working. Regarding point 2, AFAIK you can represent state by threading values through several computation steps (in Haskell, again, using monadic types), but I have no practical experience doing this and my understanding is rather vague. So, are the two statements above correct in any sense, or are they just misconceptions about purely functional languages? If they are misconceptions, how did they come about? Could you write a (possibly small) code snippet illustrating the Haskell idiomatic way to (1) implement side effects and (2) implement a computation with state?

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page:
    https://example.com (default)
    https://example.com/main?l=fr_FR (French)
    I added a robots.txt to stop Google from crawling any of the language translations:
    # robots.txt generated at http://www.mcanerin.com
    User-agent: *
    Disallow:
    Disallow: /cgi-bin/
    Disallow: /*?l=
    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM This URL should not exist - I removed the language translations. The page loads when it should not! I played around. I typed example.com?whatever123 - it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's a parameter that needs to be de-indexed.
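
    The "add a 404" advice boils down to having the application validate the parameter itself instead of silently ignoring it. A minimal sketch of that idea (shown in Python/Flask purely for illustration; the asker's CMS is unknown, so every name here is an assumption):

        from flask import Flask, abort, request

        app = Flask(__name__)
        SUPPORTED_LOCALES = {"en_US", "fr_FR"}  # the only translations kept

        @app.route("/main")
        def main():
            locale = request.args.get("l", "en_US")
            if locale not in SUPPORTED_LOCALES:
                abort(404)  # removed translations now return a real 404
            return "main page rendered for " + locale  # real rendering here

    One caveat worth knowing: Google can only see that 404 if it is allowed to recrawl the URL, so the robots.txt Disallow on "?l=" would have to be lifted while the stale URLs drop out of the index.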

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In "What would you choose for your project between .NET and Java at this point in time?" I say that I would consider "Will you always deploy to Windows?" to be the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons:
    - OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK.
    - Mono trails the .NET releases. What .NET level is currently fully supported?
    - Do all GUI elements (WinForms?) work correctly in Mono?
    - Businesses may not want to depend on open-source frameworks as the official plan B.
    I am aware that with the new governance of Java by Oracle, the future is unsafe, but e.g. IBM provides JDKs for many platforms, including Linux. They are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications? Edit: Mark H summarized it as: "If the claim is that 'I have a Windows application written in .NET, it should run on Mono', then no, it's not a valid claim - but Mono has made efforts to make porting such applications simpler."

    Read the article

  • Price Drop for Processor-based Licenses on Exalytics

    - by Mike.Hallett(at)Oracle-BI&EPM
    - 33% reduction in the list "per processor" license pricing for the Oracle BI Foundation Suite.
    - New capacity-based licensing which allows customers to think big & start small, significantly lowering the entry price point for an Exalytics.
    Oracle BI Software List Price changes: In response to new powerful platforms like the in-memory Oracle Exalytics with 40 CPU cores (counted under Oracle pricing policy as 20 "processors"), the list price of "Oracle BI Foundation Suite" (BIFS) is reduced by 33%, from $450K per processor to $300K per processor.
    Capacity-based licensing on Exalytics (Trusted Partitions): "Capacity-based pricing" for the BIFS, Endeca, Essbase, and TimesTen for Exalytics software is now available for Exalytics systems. This is delivered using Oracle VM (OVM). We still ship a full Exalytics machine to all customers, but they may choose to use and license only a subset of the processors installed in the machine. Customers can license Exalytics software in units of 5 "processors": 5, 10, 15, or the full capacity of 20. As the customer's implementation and workload increase, it is a simple matter to license additional processors and, using OVM, make them available to the BI or EPM application.
    Endeca Information Discovery now available on Exalytics: Oracle has also announced the certification of Oracle Endeca Information Discovery (EID) on the Exalytics machine. EID can be licensed alone or in combination with BIFS and TimesTen for an Exalytics stack, and it also participates in the capacity-based pricing outlined above. The Exalytics hardware is the perfect platform for EID, and provides superb power and performance for this in-memory hybrid of text search and analytics.
    For more information: Oracle Price Lists; Oracle Partitioning Policy; discussion by Mark Rittman (Rittman Mead Consulting Ltd.) on Oracle Trusted Partitions for Oracle Engineered Systems, Oracle Exalytics, and Updated BI Foundation Pricing.

    Read the article

  • Arrow ECS Oracle OpenWorld update

    - by mseika
    A date to mark in your diaries now! On Tuesday 23rd October we will be holding our post-Oracle OpenWorld update. Covering all the important announcements from Oracle OpenWorld, this must-attend event will be held in the Royal Exchange, London. The full agenda and speaker line-up is being finalised, but we will cover all the major strategy and product announcements from Oracle OpenWorld, FY13 channel strategy, and partnering with Arrow ECS.
    - Oracle OpenWorld: a Channel Perspective - David Tweddle, Head of UK Alliance and Channel
    - Hardware Announcements - Christopher Lindsay, Oracle Hardware EMEA
    - Software Announcements - speaker to be confirmed
    - Arrow ECS Focus and Strategy - Simon Rushbrook, Business Development Manager, Arrow ECS
    - Summary and close - Nick Tinsley, Sales Director
    Who should attend? The event is subject to Oracle NDA and is aimed at sales and pre-sales personnel within your organisation. This event will commence at 12.30 with lunch; presentations will start at 1.30 and close at 4.30, followed by drinks and networking. Full details to follow shortly, but save the date and register now.
    Date & Time: Tuesday 23rd October 2012, 12.30pm - 4.30pm. Post-event drinks will be served from 4.30pm.
    Location: London Office, Entrance 2, Fourth Floor, The Royal Exchange, London, EC3V 3LN. Click here for directions >

    Read the article

  • Modelling work-flow plus interaction with a database - quick and accessible options

    - by cjmUK
    I want to model a (proposed) manufacturing line, with specific emphasis on its interaction with a traceability database. That is, various process engineers have already mapped the manufacturing process - I'm only interested in the various stations along the line that have to talk to the DB. The intended audience is a mixture of project managers, engineers, and IT people; the purpose is to identify:
    - points at which the line interacts with the DB (perhaps going so far as indicating the stored procs called at each point, perhaps even which parameters are passed)
    - the communication source (PC/handheld device/PLC)
    - the communication medium (wireless/fibre/copper)
    - control flow (if a leak test fails, the unit is diverted to a repair station)
    Basically, the model will be used to focus different groups on outstanding tasks; for example, I'm interested in the DB and any front-end app needed, process engineers need to be thinking about the workflow and liaising with the PLC suppliers, and the other IT guys need to make sure we have the hardware and comms in place. Obviously I could just improvise in Visio, but I was wondering if there was a particular modelling technique that might particularly suit my needs or my audience. I'm thinking of a visual model with supporting documentation (as little as possible, as much as is necessary). Clearly, I don't want something that will take me ages to (effectively) learn, nor one that will alienate non-technical members of the project team. So far I've had brief looks at BPMN, EPC diagrams, and standard flow diagrams... and I've forgotten most of what I used to know about UML... And I'm not against picking and mixing... as long as it is quick, clear, and effective.
    Conclusion: In the end, I opted for a quasi-workflow/dataflow diagram. I mapped out the parts of the manufacturing process that interact with the traceability DB and indicated, in a significantly simplified form, the data flows and DB activity. Alongside that, I have a supporting document which outlines each process, the data being transacted for each process (a 'data dictionary' of sorts), and details of the hardware and connectivity required. I can't decide whether it is a product of genius or a crime against established software development practices, but I do think it will hit the mark for this particular audience.

    Read the article

  • How to verify that all files are intact prior to install?

    - by Kalle H. Väravas
    I've been working on my CMS (on the PHP platform) for a long time now. The main program is done, and I'm currently developing the installer part. Installation itself will be fairly simple:
    1. Upload all files.
    2. Verify that the "content/" dir has correct permissions.
    3. Check that ALL files are intact and not modified. [This is the subject of this question]
    4. Insert the config data and first settings.
    5. Run the install (generate all DB tables, insert sample data, etc.).
    Now the question mark is at step 3. How do I verify ALL files? Verification itself should compare all files in the CMS root directories against a list from a remote location. The list should contain the filename, filesize, and filetype. This way the user can check that there are no unnecessary or corrupted files that could indicate a breach in the software. I have seen some software installers do this, but I cannot find any right now, and therefore I'm clueless about the most optimal method. Of course there is always the simple array trick, but surely there must be a better and faster method?!
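
    For what it's worth, a manifest of content hashes is a stronger check than filename/filesize/filetype alone, since size and type can match even after tampering. A minimal sketch of the comparison step - written in Python for brevity, though the same logic ports directly to PHP with hash_file() and RecursiveDirectoryIterator; the manifest format is an assumption:

        import hashlib
        import os

        def sha256_of(path):
            # Hash in chunks so large files never sit wholly in memory.
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def verify_tree(root, manifest):
            """manifest: dict of relative path -> expected sha256 digest,
            fetched from the remote location over HTTPS."""
            problems = []
            for rel, expected in manifest.items():
                full = os.path.join(root, rel)
                if not os.path.isfile(full):
                    problems.append(("missing", rel))
                elif sha256_of(full) != expected:
                    problems.append(("modified", rel))
            for dirpath, _, files in os.walk(root):
                for name in files:
                    rel = os.path.relpath(os.path.join(dirpath, name), root)
                    if rel not in manifest:
                        problems.append(("unexpected", rel))
            return problems  # empty list means the tree is intact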

    Read the article

  • notification icons in Gnome Shell cause lag

    - by Relik
    Before marking this as a duplicate, please read my question thoroughly. OK, I am having a bit of a problem with GNOME Shell (3.3.90): any time there are icons in the bottom notification bar, it causes massive lag when opening/closing and interacting with the Activities panel. If I close all applications that have a notification icon, the lag goes away 100%. In my case it's Dropbox, but any program that requires an icon in the notification area will cause this issue. I am using an AMD A6-3420M running Catalyst 11.11; however, I can confirm that this is not a driver issue. I had the same issue on a system running a GeForce 8600GT and an HD6850, and there is another question on here titled "Gnome Shell lags after a while" where the user is having the same exact issue, and he is also using an NVIDIA card, a 9800GT to be specific. Please, this question has been asked several times, and every time people say it's an fglrx issue and close the question, or they mark the question as a duplicate and link to a question that says it's an fglrx issue. This is not a driver issue. I understand that the AMD drivers caught heat for quite some time over GNOME Shell compatibility issues, but those issues have been resolved. Aside from this issue, GNOME Shell runs beautifully on my HD6850, my A6-3420M, and my 8600GT. Having said that, does anyone know how to correct this issue? Closing Dropbox is not an option for me either.

    Read the article

  • Why is Desktop Unity using the global application menu?

    - by Kazade
    It was announced in another question that the desktop version of Unity will keep the global menu by default. Here are the facts:
    - The global menu was introduced into UNE to save vertical screen space, because at netbook resolutions vertical space is limited.
    - On a modern desktop with a high resolution, there is ample vertical space, making this unnecessary.
    - On the announcement of UNE global menus, Mark Shuttleworth himself said the following: "There are outstanding questions about the usability of a panel-hosted menu on much larger screens, where the window and the menu could be very far apart."
    The benefits of a global menu don't seem to carry across to a high-resolution desktop; instead it seems to bring drawbacks (increased mouse travel, a large distance between the menu and its associated window). The other worrying factor is that applications seem to be moving away from having a menu bar, and instead of innovating on this and defining new guidelines for moving away from the menu, we are giving it prime place right at the top of the desktop. If applications continue moving away from the menu bar, we will have an inconsistent experience concerning where to locate application-related options/tools depending on which app you are using (e.g. Chrome). Finally, the current global menu bar implementation doesn't work for all apps, and doesn't even work for all apps in the default install. This means that the default desktop implementation will be inconsistent. So, there are a bunch of reasons why moving to a global menu is a bad idea, and we need some pretty convincing arguments for why it is a good idea. What are the reasons for the global menu implementation in the desktop version of Unity?

    Read the article

  • Why can't I install from software center?

    - by user64720
    There was a problem upgrading to Firefox 13. This error kept returning:
    /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb
    W: Waited for dpkg --assert-multi-arch but was not there - dpkgGo (10: There are no "child" processes).
    Now it seems that there is some problem with dpkg, and I can't install anything from the software center. I already tried to clean previous packages with sudo rm /var/lib/apt/lists/* -vf and then sudo apt-get update; it didn't work. When running sudo dpkg --configure -a, I get this:
    dpkg: problems with dependencies prevent the configuration of firefox-globalmenu:
     firefox-globalmenu depends on firefox (= 13.0+build1-0ubuntu0.12.04.1); however:
      The package is not installed.
    dpkg: error while processing firefox-globalmenu (--configure):
     problems with dependencies - leaving unconfigured
    Errors were found while processing:
     firefox-globalmenu
    What should I do to fix this??
    EDIT: I don't have the necessary expertise to understand why what I did worked or what was causing the conflict, but anyway: since there was a problem with firefox-globalmenu, I went to Synaptic Package Manager, removed this particular package, and reinstalled it. After that, I was able to install Firefox from Synaptic and also any other applications from the software center. However, there was still a problem: when running sudo apt-get update, the following kept returning:
    Failed to get gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages Verification code hash doesn't match.
    E: Some archives index failed at being downloaded. They have been ignored, or older copies are used instead.
    So I typed sudo rm /var/lib/apt/lists/* -vf in the terminal and then ran sudo apt-get update again, and everything is fine now. I did this before an answer was posted; anyway, I agree the problem was that particular package and its removal, so I'll mark the answer below as accepted.

    Read the article

  • Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory

    - by Darin Pendergraft
    Mark your calendars and register to join this webcast featuring Steve Giovanetti from Hub City Media, Albert Wu from UCLA, and our own Scott Bonnell as they discuss a directory upgrade project from Sun DSEE to Oracle Unified Directory.
    Date: Thursday, September 13, 2012
    Time: 10:00 AM Pacific
    Join us for this webcast and you will:
    - Learn from one customer that has successfully upgraded to the new platform
    - See what technology and business drivers influenced the upgrade
    - Hear about the benefits of OUD's elastic scalability and unparalleled performance
    - Get additional information and resources for planning an upgrade
    Register Now!

    Read the article
