Search Results

Search found 16560 results on 663 pages for 'high tech resources'.

Page 305/663 | < Previous Page | 301 302 303 304 305 306 307 308 309 310 311 312  | Next Page >

  • Answers to Conference Revenue Tweet Questions

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2014/05/27/156612.aspx

    I tweeted this the other day… …and I had some people tweet back questioning/asking about the profit number. So here's how I came to that figure.

    Total Revenue

    Let's talk total revenue first. This conference has a huge list of companies/organizations paying some amount for sponsorship:

    Platinum ($1500) x 5 = $7500
    Gold ($1000) x 3 = $3000
    Silver ($500) x 9 = $4500
    Bronze ($250) x 13 = $3250

    There's also a title sponsor level, but there's no mention of how much that is... more than $1500, though, so let's just say $2500.

    Total Sponsorship Revenue: $20,750.00

    For registrations, this conference is claiming over 300 attendees. We'll just calculate at 300 and the discounted "member rate" of $249.

    Total Registration Revenue: $74,700.00

    Booth space is also sold for a vendor area, but let's leave that out of the calculation.

    Total Event Revenue: $95,450.00

    Now that we know how much money we're playing with, let's knock out the costs for the event.

    Hard Costs

    Audio/Visual Services: $2000
    Conference Rooms (4 Breakouts + Plenary): $2500
    Insurance: $700
    Printing/Signage: $1500
    Travel/Hotel Rooms: $2000
    Keynotes: $2000

    Let's talk about these hard costs first. You may be asking about the audio/visual figure. Yes, those services can be that high; higher, actually. But since there's an A/V company touted as the official A/V provider, I have to think there's some discount for being branded as such. Conference rooms are set at an inflated $500 per room. Venues make money on the food they sell at events, not on room rentals: the more food, the cheaper the rooms tend to be offered at. Still, for the sake of argument, let's keep the rooms at $500 each, knowing they could be lower. As for travel and hotel rooms, it appears that most of the speakers at this conference are local, meaning no travel or hotel cost, but I wasn't too sure about a few of them, so let's factor in enough to cover two outside speakers (airfare and hotel). There are two keynotes for this event, and depending on the event those may be paid gigs. I'm not sure whether they are, but considering the closing one is a comedian, I'm adding some funds here just in case.

    Total Hard Costs: $10,700

    Food Costs

    The conference is providing a continental breakfast (YEEEESH!), some level of luncheon, and I have to assume coffee breaks in between. Let's look at those costs:

    Continental Breakfast: $12 per person
    Lunch Buffet: $18 per person
    Coffee Breaks (2): $6 per person (or $3 a cup)
    Snacks (2): $10 per person (or $5 each)

    Note that the lunch buffet assumes a *good* lunch buffet: two entrees, starch, vegetable, salads, and bread. Not sure if there'll be snacks during the coffee breaks, but let's assume so.

    Total Food Cost Per Person: $46
    Food Cost: $14,950
    Gratuity (18%): $2,691
    Total Food Cost: $17,641

    Total food cost is based on the $46 per-person cost x 325: 300 for attendance, 12 for speakers, and an extra 13 for volunteers/organizers.

    Grand Totals

    So let's sum things up.

    Total Costs
    Hard Costs: $10,700.00
    Food Costs: $17,641.00
    Total: $28,341.00
    Taxes: $3,685.00
    Grand Total: $32,026.00

    Total Revenue
    Sponsorship: $20,750.00
    Registration: $74,700.00
    Grand Total: $95,450.00

    Total Profit: $63,424.00

    Now, what if the registration numbers were lower and they only got 100 people to show up? In that scenario there'd still be a profit of just under $26,000.

    Closing Comments

    A couple of things to note:
    - I haven't factored in anything for prizes; not sure if any will be given out.
    - We didn't add in the booth space revenue.
    - We're assuming speakers aren't getting paid, but even if they were, at the high end it's $12,000 ($1,000 per session), which is probably an inflated number for local speakers.
    - All registrations were set to the "member" discounted price; the non-member registration price is higher. There is also an option for those who just want to attend the opening keynote.

    There you have it! Let me know if you have any questions.

    D
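
    Since the profit number is what people questioned, here is the whole calculation as a runnable sketch (Python; note the ~13% tax rate is inferred from the post's own tax line, it isn't stated explicitly):

        # Back-of-the-envelope check of the figures above. The 18% gratuity and
        # the ~13% tax rate are read off the post's numbers ($2,691 on $14,950;
        # $3,685 on $28,341).
        sponsorship = 1500*5 + 1000*3 + 500*9 + 250*13 + 2500  # platinum..bronze + title
        registration = 300 * 249                               # member rate
        revenue = sponsorship + registration                   # 20,750 + 74,700 = 95,450

        hard_costs = 2000 + 2500 + 700 + 1500 + 2000 + 2000    # A/V .. keynotes = 10,700
        food = (12 + 18 + 6 + 10) * (300 + 12 + 13)            # $46 x 325 heads = 14,950
        food_total = food * 1.18                               # + 18% gratuity = 17,641
        costs = (hard_costs + food_total) * 1.13               # + ~13% taxes = ~32,025

        print(f"profit: ${revenue - costs:,.0f}")              # ~$63,425; post rounds to $63,424

    Re-running the same numbers with 100 attendees (registration = 100 * 249, food heads = 125) gives the "just under $26,000" figure quoted above.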

    Read the article

  • Alternatives to voxel-based terrain

    - by Neomex
    Are there any alternatives to voxel-based terrain? Such terrain should be fully destructible, allow for arches and overhangs, preserve sharp features where needed, and keep consistent topology. Maybe you can explain the problem that makes you ask this question? Voxel-based terrain is basically just using a 3D grid to store the data. There are lots of ways to render that data, but it doesn't get much simpler for storing it. – Byte56 Current isosurface extraction methods aren't the most effective or bug-free. Cubical Marching Squares seems to solve most of the issues, but it is a relatively new method and there aren't many resources about it (I've found a single university paper). Even if we stick with CMS, when we want to add multi-material support we can either divide the surface into multiple meshes, or pass a texture array or texture atlas to the shaders; then we are limited to a set number of textures and additionally increase memory usage a lot.
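
    To make the storage side of the discussion concrete, here is what the "3D grid of data" Byte56 describes can look like with per-voxel material ids; a minimal NumPy sketch (sizes and material numbering are illustrative):

        import numpy as np

        # One byte per cell holding a material id (0 = air). A parallel float
        # array of densities could drive smooth isosurface extraction.
        SIZE = 64
        materials = np.zeros((SIZE, SIZE, SIZE), dtype=np.uint8)
        materials[:, :, :SIZE // 4] = 1          # a solid floor of material 1

        def destroy_sphere(grid, cx, cy, cz, r):
            """Set every voxel within radius r of (cx, cy, cz) to air."""
            x, y, z = np.ogrid[:grid.shape[0], :grid.shape[1], :grid.shape[2]]
            grid[(x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r * r] = 0

        destroy_sphere(materials, 32, 32, 16, 6)  # fully destructible terrain

    Multi-material rendering then reduces to either splitting the extracted surface per material id or indexing a texture array by it, which is exactly where the memory trade-off described above appears.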

    Read the article

  • Friday Spotlight: A Webcast You Do Not Want To Miss!

    - by Chris Kawalek
    Happy Friday! Today's Spotlight is about what promises to be an information-packed webcast next week. We're really excited about it and hope that you are, too! Oracle Managed Cloud Services uses Oracle VM to serve up thousands of Oracle applications to thousands of end users every day. To do this, they utilize nearly 20,000 instances of Oracle VM. It's an amazing story of high availability in an unrelenting customer environment, and it's all powered by Oracle. You can leverage this team's experience in your own deployments to gain valuable insight and best practices. If you'd like to understand how well Oracle VM can scale for your organization, you do not want to miss this webcast. It is coming up this Tuesday at 10AM Pacific Time. Click the banner below to register and we hope to see you there! Oracle VM: Design Considerations for Enterprise-Scale Deployment  Tuesday, June 10, 2014 10:00 AM PDT / 1:00 PM EDT

    Read the article

  • Azure Florida Association: Modern Architecture for Elastic Azure Applications

    - by Herve Roggero
    Join us on November 28th at 7PM, US Eastern Time, to hear Zachary Gramana talk about elastic scale on Windows Azure. REGISTER HERE: https://www3.gotomeeting.com/register/657038102 Description: Building horizontally scalable applications can be challenging. If you need to scale rapidly or adjust to high load variations, you are left with few options. Azure provides a fantastic platform for building elastic applications. Combined with recent advances in browser capabilities, some older architectural patterns have become relevant again. We will dust off one of them, the client-server architecture, and show how we can use its modern incarnation to bypass a class of problems normally encountered with distributed ASP.NET applications.

    Read the article

  • Becoming an expert vs boredom [closed]

    - by QAH
    I am a college student, and I love to program, period. I code all kinds of things in different kinds of languages. Although I enjoy programming, I have an extremely hard time sticking to one project for a long time. I attribute this shortcoming to my high level of curiosity, exploring different technologies, languages, libraries, etc. What would be best? Should I settle down more and spend time on becoming an expert in one or two programming fields, or should I be more of a jack of all trades, trying out all kinds of new technologies, languages, programming methods, etc.? I'm guessing that somewhere in the middle would be best. I'm always amazed at how many developers are able to create one or two projects, and develop on them for years. What techniques do you guys employ to help you stay focused on a project?

    Read the article

  • New technical whitepaper on Database-as-a-Service

    - by Javier Puerta
    High Availability Best Practices for Database Consolidation: The Foundation for Database-as-a-Service. An Oracle White Paper, April 2014. This paper provides MAA best practices for database consolidation using Oracle Multitenant. It describes standard HA architectures that are the foundation for DBaaS. It is most appropriate for a technical audience: architects, directors of IT, and database administrators responsible for the consolidation and migration of traditional database deployments to DBaaS. Recommended best practices are equally relevant to any platform supported by Oracle Database, except where explicitly noted as being an optimization or an example that applies only to Oracle Engineered Systems.

    Read the article

  • SAP opens its In-Memory platform to startups and organizes a series of events to build an ecosystem around HANA

    SAP opens its In-Memory platform to startups and is organizing a series of events to build a reliable ecosystem around HANA. SAP is organizing a series of events to help developers and startups who use the HANA In-Memory platform get the most out of it. SAP HANA (High-Performance Analytic Appliance) makes it possible to build supercharged data-warehouse environments that deliver customer data in real time. It also powers an online network and offers developers an open platform. The company wants a reliable ecosystem to be built around its platform through its program supporting startups around the...

    Read the article

  • ReSharper 5.0 Adds New Add Parameter Refactoring

    In this post, I'll show a simple example of how, when you add a parameter to a C# method, ReSharper gives you a simple prompt asking if you want to add a parameter to your method, or create an...

    Read the article

  • Why not write all tests at once when doing TDD? [closed]

    - by RichK
    Possible Duplicate: Why not write all tests at once when doing TDD? The Red-Green-Refactor cycle for TDD is well established and accepted: we write one failing unit test and make it pass as simply as possible. What are the benefits of this approach over writing many failing unit tests for a class and making them all pass in one go? The test suite still protects you against writing incorrect code or making mistakes in the refactoring stage, and code coverage should be just as high, so what's the harm? Sometimes it's easier to write all the tests first as a form of 'brain dump', to quickly write down all the expected behavior in one go.
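
    For concreteness, one red-green iteration might look like this (a minimal sketch using Python's unittest; the Stack class is a hypothetical example, not something from the question):

        import unittest

        class Stack:
            def __init__(self):
                self._items = []
            def push(self, item):
                self._items.append(item)
            def pop(self):
                return self._items.pop()

        class StackTest(unittest.TestCase):
            # Red: write only this test and watch it fail; green: implement
            # push/pop as simply as possible. The "brain dump" alternative
            # would add ten such tests before writing any production code.
            def test_pop_returns_last_pushed_item(self):
                s = Stack()
                s.push(42)
                self.assertEqual(s.pop(), 42)

        if __name__ == "__main__":
            unittest.main()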

    Read the article

  • How to recover my inclusion in google results after being penalized for receiving comment spam?

    - by UXdesigner
    My website had very high search engine rankings, especially in Google. But I left the website alone for a couple of months and didn't notice the comments had filled up with spam, about 20k spam comments. Then I checked my Google results and I'm out of Google! After years of good, spam-free results, how can I recover from that? The spam problem has been solved completely: no more spam, and the website is very legit and very nice. Well, at least I think I was penalized; I don't see any other reason.

    Read the article

  • What percentage should a consulting company take off the top of your pay?

    - by JasonStoltz
    Let's say that, hypothetically, a programmer is being paid $40/hour for a 6-month contract through a contracting agency, and that agency is being paid $85/hour by the client for every hour the programmer works. So the programmer only actually takes home 47% of what the client is paying per hour. Is this normal, or is the percentage unusually low? Another thing to consider: the consulting agency isn't paying benefits. P.S. If this is normal, I'd also be curious what the justification is for taking that high a percentage. And if it is NOT normal, what would a normal percentage be?
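
    The 47% figure is just the ratio of the two rates; a trivial sketch (the 30% cut in the second line is a margin sometimes cited for staffing firms, not a figure from this post):

        bill_rate, pay_rate = 85.0, 40.0
        print(f"contractor keeps {pay_rate / bill_rate:.0%} of the bill rate")  # 47%
        # Hypothetical: at a 30% agency cut, the pay would be:
        print(f"pay at a 30% cut: ${bill_rate * 0.70:.2f}/hr")                  # $59.50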

    Read the article

  • OpenStack + Ubuntu 12.04

    - by csgeek
    Supposedly OpenStack can be installed easily under Ubuntu 12.04 LTS. I've installed 32- and 64-bit versions of Ubuntu Server with the same behavior: I run sudo tasksel, check OpenStack, and hit OK, and then I get tasksel: aptitude failed (100). I've seen the http://www.hastexo.com/resources/docs/installing-openstack-essex-20121-ubuntu-1204-precise-pangolin and https://github.com/EmilienM/doc-openstack documentation, but I was hoping that since this is an LTS release and it's an option in tasksel, I was simply overlooking something obvious and it's just a matter of selecting the right checkbox and hitting OK. Too much wishful thinking?

    Read the article

  • READ_ME_FIRST: What Do I Do With All of Those SPARC Threads?

    - by user12608550
    New Oracle Technical White Paper: READ_ME_FIRST: What Do I Do With All of Those SPARC Threads? Executive Overview With an amazing 1,536 threads in an Oracle M5-32 system, the number of threads in a single system has never been so high. This offers tremendous processing capacity, but one may wonder how to make optimal use of all these resources. In this technical white paper, we explain how the heavily threaded Oracle T5 and M5 servers can be deployed to efficiently consolidate and manage workloads using virtualization through Oracle Solaris Zones, Oracle VM Server for SPARC, and Oracle Enterprise Manager Ops Center, as well as how to improve the performance of a single application through multi-threading. READ_ME_FIRST: What Do I Do With All of Those SPARC Threads?

    Read the article

  • Performance impact of the new mtmalloc memory allocator

    - by nospam(at)example.com (Joerg Moellenkamp)
    I have written on a number of occasions (here or here) that it can be really beneficial to use a different memory allocator for highly threaded workloads, as the standard allocator is, well... the standard, but not very effective as soon as many threads come into play. I didn't write about this at the time, as it was during my phase of silence, but there was some change in the allocator area: Solaris 10 got a revamped mtmalloc allocator in Solaris 10 8/11 (as described in "libmtmalloc Improvements"). The new memory allocator was introduced to Solaris development by PSARC case 2010/212. But what's the effect of this new allocator, and how does it work? Rickey C. Weisner wrote a nice article, "How Memory Allocation Affects Performance in Multithreaded Programs", explaining the inner mechanisms of various allocators, and he also publishes test results comparing Hoard, mtmalloc, umem, the new mtmalloc, and the libc malloc. A really interesting read and a must for people running applications on servers with a high number of threads.

    Read the article

  • After 10.10 -> 11.04 upgrade, can only login via Classic (No Effects)

    - by Ryan P.
    Yesterday I upgraded from 10.10 to 11.04. Everything seemed to go okay until immediately after login: the desktop goes into a "corrupted"-looking state (similar to having too high a resolution set). I can see some kind of movement by moving the mouse around/right-clicking, and I can reach text terminals via Ctrl + Alt + F1. It does this in both plain "Ubuntu" and "Ubuntu Classic", and only logs in/starts up properly with Ubuntu Classic (No Effects). I have checked my video card (Radeon X600) and run the Unity support test, which passes with all "yes" results (Unity supported: yes):

        /usr/lib/nux/unity_support_test -p

    I have tried re-installing my Ubuntu desktop:

        rm -rf .gnome .gnome2 .gconf .gconfd .metacity
        sudo apt-get remove ubuntu-desktop
        sudo apt-get install ubuntu-desktop

    ...with no success. I can work around it for now with Classic (No Effects), but I'd really like to find the root problem. Any suggestions on what else to try would be appreciated!

    Read the article

  • Why do exclusively outsourcing projects as a company?

    - by user19833
    A prospective employer told me they made a company-level decision to only do outsourcing projects. I do not understand why they made such a decision, and the guy I talked to did not elaborate; he said only that "their intention is to build software components". Since they are growing quite fast and have reached around 300 employees, shouldn't they at least be open to the possibility of having a project of their own? All the other companies I've been in contact with were at least open to having one in the future. I talked to a few of their employees, and some are working in parallel on more than 2 outsourced projects (dividing time something like 4 + 4 hours/day). It seems like a lot of projects, each running a few months to maybe half a year, come and go... Why would a company choose to provide only outsourcing services like that? And how does it work to keep hundreds of people on outsourced projects with a seemingly high project turnover rate?

    Read the article

  • Where can I learn to write my own database?

    - by Buttons840
    I'm interested in writing my own database, a triple-store. Are there any good resources to help with the challenges of such a project? Or, more generally: how can I learn to write my own database? Some specific issues I'm unsure of:
    - How is the data actually stored on the file system? A flat file seems easy enough, but a database is a lot more than a flat file.
    - What kinds of things are typically stored (or cached) in memory?
    - How are indexes created and stored?
    - How is ACID compliance achieved?
    - Etc.
    This is a big topic, but knowing how to store large amounts of data reliably seems well worth learning. (My investigation into existing triple-stores was summarized back in 2008; not much has changed in 4 years, it seems. This is why I want to write my own.)
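
    As a starting point on the storage question, here is a toy append-only triple store: one JSON triple per line on disk, with an in-memory index rebuilt on load (a sketch of the ideas, not a production design; real engines batch writes through a write-ahead log instead of fsyncing every add):

        import json, os

        class TripleStore:
            """Append-only log on disk; subject -> predicate -> objects index in memory."""

            def __init__(self, path):
                self.path = path
                self.spo = {}
                if os.path.exists(path):
                    with open(path) as f:
                        for line in f:
                            self._index(*json.loads(line))

            def _index(self, s, p, o):
                self.spo.setdefault(s, {}).setdefault(p, set()).add(o)

            def add(self, s, p, o):
                with open(self.path, "a") as f:
                    f.write(json.dumps([s, p, o]) + "\n")
                    f.flush()
                    os.fsync(f.fileno())   # crude durability, the "D" in ACID
                self._index(s, p, o)

            def objects(self, s, p):
                return self.spo.get(s, {}).get(p, set())

        store = TripleStore("facts.jsonl")
        store.add("ubuntu", "is_a", "operating_system")
        print(store.objects("ubuntu", "is_a"))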

    Read the article

  • Handling (many) multiple projects in Git in an enterprise environment

    - by Michael K
    One of the advantages of older version control systems such as CVS and SVN in enterprise development is that anyone can connect to source control and see all the projects the company has. This can make it easier to get a high-level view of what kind of development is happening outside your sprint, and it also keeps everything in one place and easy to find. However, distributed version control systems (Git, specifically) use the repository as their base unit. They work best with one project (or several closely related projects) per repository. This makes repository management more difficult in most enterprise environments, where it is not unusual to have 25-50 or more projects to support. As far as I have been able to determine, you have to keep a list somewhere else of all the repos you have. There is software available, like GitHub, that helps, but that is still an extra step beyond a single connection string and listing the contents of the repository. What is the best way to deal with the complexity of multiple repositories?
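
    Short of adopting a tool like GitHub, that "list somewhere else" can at least be generated rather than maintained by hand; a sketch that scans a server directory for bare repositories (the /srv/git path is illustrative):

        import os

        def find_bare_repos(root):
            """Yield paths that look like bare Git repos (*.git containing HEAD)."""
            for dirpath, dirnames, filenames in os.walk(root):
                if dirpath.endswith(".git") and "HEAD" in filenames:
                    yield dirpath
                    dirnames[:] = []   # don't descend into the repo itself

        for repo in sorted(find_bare_repos("/srv/git")):
            print(repo)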

    Read the article

  • How to get familiar with "what happens underneath of Java"?

    - by FidEliO
    I did not study CS or IT; I just became a developer, now working with Java. Since I now work at a big company writing highly scalable web applications, I think I need to get better with the details. I have no understanding of what happens underneath Java; "Java performance" and "server-side Java" might be the buzzwords?! I am very weak on those low-level details, but honestly I do not know where to look. I started searching for some keywords on Amazon and ended up reading books like "The Pragmatic Programmer", "Clean Code", and "Code Complete", which IMO are not what I am looking for. Could you please give me some learning resources (books, articles, blog posts, online trainings) for this? I also read this post: Approaching Java/JVM internals. But I think I need a pre-step before jumping into the OpenJDK, right?!

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager:

    - Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache.
    - Use of the TransactionMap interface via the included resource adapter.
    - Use of the new ACID transaction framework, introduced in Coherence 3.6.

    Each of these may have significant drawbacks for certain workloads.

    Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency and a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions), since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries), since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation.

    TransactionMap is generally used when Coherence acts as the system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be used either to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being:

    - The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing, which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects).
    - If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound, since the commit phase is usually very short (all locks having been previously acquired). Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity).
    - The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove().

    The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect is that this feature has not seen significant adoption, meaning that any use of it is subject to the usual headaches associated with being an early adopter (a greater chance of bugs and a greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature.

    In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.
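
    The optimistic concurrency control mentioned for the read-only-cache option amounts to a version-checked write. A language-neutral sketch in Python against a hypothetical versioned-row API (Coherence itself is Java, and none of these method names come from its API):

        def update_with_occ(db, cache, key, mutate, retries=3):
            """Read a versioned row, apply the change, and commit only if the
            version is unchanged; the cache stays read-only and is invalidated."""
            for _ in range(retries):
                row, version = db.read_with_version(key)
                new_row = mutate(row)
                # e.g. UPDATE t SET data=?, version=version+1
                #      WHERE key=? AND version=?
                if db.conditional_write(key, new_row, expected_version=version):
                    cache.invalidate(key)   # next read repopulates from the DB
                    return new_row
            raise RuntimeError(f"too much contention on {key!r}")

    This is what lets the cache serve stale data safely: a write based on a stale read simply fails the version check and retries.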

    Read the article

  • Distorted text in programs

    - by Teneff
    I've installed Ubuntu 11 with GNOME, and at some point the text in programs becomes unreadable, like this. It's not only the text; even the desktop background looks awful. I've tried adding a section to xorg.conf, but it didn't help:

        Section "Device"
            Identifier "g33/X3000"
            Driver "intel"
            BusID "PCI:0:2:0"
            Option "ModeDebug" "on"
            Option "MonitorLayout" "LCD,VGA"
            Option "DevicePresence" "true"
        EndSection

    And this is what lshw returns about the VGA:

        *-display
            description: VGA compatible controller
            product: 82945G/GZ Integrated Graphics Controller
            vendor: Intel Corporation
            physical id: 2
            bus info: pci@0000:00:02.0
            version: 02
            width: 32 bits
            clock: 33MHz
            capabilities: msi pm vga_controller bus_master cap_list rom
            configuration: driver=i915 latency=0
            resources: irq:16 memory:dfe00000-dfe7ffff ioport:8800(size=8) memory:e0000000-efffffff memory:dfe80000-d$

    Read the article

  • Books about Windows Os Programming [closed]

    - by LostInLib
    I'm trying to develop a desktop application similar to CCleaner, but I'm having trouble finding R&D resources... I can't find good books about Windows operating system programming. Example topics: explaining the Windows 7 (or even 8) registry; which registry entry turns "show desktop icons" on/off; what Windows registry defragmentation is and how you defrag the registry; how you optimize Windows startup (for Windows 7), etc. I googled my questions and found MSDN and some Stack Overflow topics, but I can't find a book with a low-level explanation of the current Windows 7 operating system... What am I missing? Thanks for any input... and sorry, I don't know if this is the right place to ask, but I asked anyway...
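
    For the one concrete example in the question, the "show desktop icons" toggle: the value commonly reported to control it is HideIcons under the Explorer Advanced key (an assumption worth verifying on your own machine). Reading it with Python's standard winreg module:

        import winreg  # Windows only

        # Hypothesis: HideIcons = 1 means desktop icons are hidden, 0 means shown.
        key_path = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path) as key:
            hide_icons, _ = winreg.QueryValueEx(key, "HideIcons")
            print("desktop icons hidden:", bool(hide_icons))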

    Read the article

  • Calculate initial velocity of a 3d vector-based projectile

    - by Frotty
    Okay, so I have a projectile with two vectors, position and velocity. I now want to calculate the initial velocity needed for it to reach a specific point on the map. Or, more precisely: how high does the starting z-velocity have to be (x and y are defined by a speed variable) for the projectile to hit the marked position? The projectile is influenced by a constant gravity vector. All calculations are done 32 times per second. I want this because I don't want to use a parabola function, so the projectile can still be influenced by other sources simply adding some velocity. I didn't really find anything on this topic and would be glad for every helping answer. Thanks.
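
    With constant gravity there is a closed form: take the flight time from the fixed horizontal speed, then solve z(t) = vz*t - g*t^2/2 for vz. A sketch (assuming gravity acts along -z; function and variable names are mine):

        import math

        def initial_velocity(start, target, horizontal_speed, gravity=9.8):
            """Return (vx, vy, vz) so a projectile under constant gravity
            along -z, launched from start, hits target."""
            dx, dy = target[0] - start[0], target[1] - start[1]
            dz = target[2] - start[2]
            dist = math.hypot(dx, dy)
            t = dist / horizontal_speed          # flight time from the x/y speed
            vz = dz / t + 0.5 * gravity * t      # solves z = vz*t - g*t^2/2
            return (dx / dist * horizontal_speed,
                    dy / dist * horizontal_speed,
                    vz)

        print(initial_velocity((0, 0, 0), (10, 0, 0), 5.0))  # lands in 2 s

    One caveat for the 32-updates-per-second loop: a plain Euler step will land slightly off this closed-form answer; a velocity Verlet step (exact for constant acceleration) or a small half-step correction removes the drift.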

    Read the article

  • Implementing RAID 1 in Ubuntu 10.04 Desktop [closed]

    - by Dibyendra
    I found many resources on implementing RAID 1 using two disk drives, but I am confused about implementing RAID 1 with 4 disks. Can we use two disks for storage and two for mirroring with RAID 1? I couldn't find a way to create a RAID disk using the gparted tool in Ubuntu 10.04 Desktop; maybe the desktop version doesn't support RAID. I am trying to implement RAID on an existing Ubuntu installation. I have added 4 x 2TB HDDs to the system, and I want RAID 1 implemented across these 4 drives, with 2 drives for storage and 2 for mirroring. Any help would be appreciated. Thanks! Update: I installed Ubuntu 12.04 LTS, followed this tutorial, and it works now: www.youtube.com/watch?v=z84oBqOxsD0

    Read the article
