Search Results

Search found 13357 results on 535 pages for 'revenue model'.


  • bluetooth fails to get enabled; Unknown symbol security_sk_clone (err 0) in dmesg

    - by Srivatsa Kanchi
    $ uname -a
    Linux ubuntu1110 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux
    Laptop model: Lenovo W520. Upon trying to enable Bluetooth, the LED turns on, but Bluetooth fails to get enabled in the preferences. I also see the error messages below in dmesg:
    [78183.389048] usb 1-1.4: new full speed USB device number 19 using ehci_hcd
    [78183.504129] bluetooth: Unknown symbol security_sk_clone (err 0)
    [78183.505084] bluetooth: Unknown symbol security_sk_clone (err 0)
    [78183.505189] bluetooth: Unknown symbol security_sk_clone (err 0)
    [78183.505294] bluetooth: Unknown symbol security_sk_clone (err 0)

    Read the article

  • Workflow 4.5 is Awesome, can't wait for 5.0!

    - by JoshReuben
    About 2 years ago I wrote a blog post describing what I would like to see in Workflow vnext: http://geekswithblogs.net/JoshReuben/archive/2010/08/25/workflow-4.0---not-there-yet.aspx At the time WF 4.0 was a little rough around the edges – the State Machine was on CodePlex and people were simulating state machines with Flowcharts. Last year I built a near-realtime machine management system using WF 4.0.1 – it's managing the internal operations of this device: http://landanano.com/products/commercial
    Well, WF 4.5 has come a long way – many of my gripes have been addressed:
    - C# expressions – no more VB 'AndAlso' clauses
    - State machine awesomeness – can query current state
    - Many designer improvements – Document Outline is so much more succinct than the Designer!
    - Separate WCF service contract interfaces and the ability to generate activities from contract operations
    - Ability to rehydrate to updated flow definitions via DynamicUpdateMap and WorkflowIdentity
    You can read about the new features here: http://msdn.microsoft.com/en-us/library/hh305677(VS.110).aspx
    2013 could be the year of Workflow evangelism for .NET, as it comes together as the DSL language. E.g. on Azure it could be used to graphically orchestrate between WebRoles, WorkerRoles, AppFabric Queues and the ServiceBus – that would be grand.
    Here's a list of things I'd like to see in Workflow 5.0:
    - Stronger parallelism support for true multithreaded workflows. A workflow executes on a single thread – wouldn't it be great if we had the ability to model TPL Dataflow? Parallel is not really parallel, it just allows AsyncCodeActivity.
    - Support for recursion
    - An ExpressionTree activity with an editor design surface
    - A math activity pack
    - Return of application-level protocol (3.51 WF services) – automatically expose a state machine as a WCF service, with bookmark Receive activities generated from OperationContract and automatically placed in state transition triggers
    - A new HTML5 ActivityDesigner control – with different CSS3 skinnable hooks and remote connectivity (had to roll my own)
    - A data flow view – crucial to understanding the big picture
    - Ability to refactor a Sequence to a custom activity in a separate .xaml file – like Expression Blend does for UserControl
    - State machine global error handling – if all states go to an error state, you quickly get visual spaghetti. Now you could nest a state machine, but what if you want an application-level protocol whereby each state exposes certain WCF ops?
    - DSL RAD editing – make the Document Outline into a DSL editor for adding activities. For WF to really succeed as a higher level of abstraction, it needs to be more productive than raw coding – drag & drop on the designer is currently too slow compared to just typing code.
    - Extensible wizard API – for a pluggable WF editor experience
    - Other execution models beyond Sequence, Flowchart & StateMachine: SSIS, behavior trees, Wolfram Model tool – surprise us!
    - Improvements to the designer debugging API – SourceLocation is tied to XAML file line number and char position, and ModelService access seems convoluted – why not leverage WPF LogicalTreeHelper / VisualTreeHelper?
    Workflow Team, keep on rocking!

    Read the article

  • Ideas for time-keeping in a web-based RPG?

    - by ashy_32bit
    I've been assigned the preliminary research for a web-based MMO RPG. My biggest problem here is "web-based" vs "MMO RPG". I did some research on time-keeping systems and I'm totally confused as to how exactly something as real-time as an MMO RPG can work on a pull-only (unidirectional) platform like HTTP. I know there is also a turn-based alternative to time keeping, but can it work in an MMO setting?
    EDIT: Take a battle, for example: player A (human) wants to attack player B (also human) in the open. How does it work when player A issues the "attack" command on player B? How do I inform player B that he is being attacked? And then how exactly does the battle go on between the two over an HTTP-based communication channel? To my knowledge this is impossible unless you resort to another technology (HTTP is one-way: you can ask the server and get a response, but the server can't update you unless asked to; this is very well known and simply explained). So I thought maybe I could somehow change the whole time-keeping model from real-time to a more non-real-time model (towards a turn-based RPG, for example) and somehow work around the whole problem of "interactivity".
    EDIT2: It's not that I don't want to use any server-side technologies. For sure it is not going to work client-side-only even for the most trivial of multi-player games, let alone an RPG. So sure, there would be a (probably complex) server-side component (the so-called game engine, I suppose). The problem is not the technology that implements the logic (game mechanics) but the communication technology and how it limits what the game mechanics can do (like how real-time or turn-based it can be). HTTP is a request-response protocol, meaning you get served only if you ask for it (explicitly send a GET or POST request to the server). An HTTP server cannot inform you when anything of interest happens in the game world unless you refresh the page (as some suggested) or use some bi-directional tech (a totally different animal) like Flash, WebSockets, HTML5, etc. So maybe the question is: Is it possible to implement an MMORPG using only HTML5/PHP and no periodic page refreshes? If so, what would be the rules to make it an MMO RPG? Can't explain it any clearer. Sorry :D
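
    The limitation described above is really HTTP's request-response model rather than HTML itself, and the usual workaround that stays within plain HTTP is long polling: the client still asks, but the server holds the request open until something happens. Below is a minimal, purely illustrative Python sketch of that idea (the /attack and /events endpoints, the player names, and the single shared queue are my own assumptions, not from the question; a real game server would need sessions and a per-player event queue):

        # Hypothetical long-polling sketch: approximates "push" over plain HTTP
        # by holding the GET open until a battle event arrives.
        import json
        import queue
        from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

        events = queue.Queue()  # battle events waiting to be delivered to player B

        class GameHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # Player A issues an attack: record an event for player B.
                if self.path == "/attack":
                    events.put({"type": "attack", "from": "playerA", "to": "playerB"})
                    self.send_response(204)
                    self.end_headers()

            def do_GET(self):
                # Player B long-polls: block up to 25 seconds waiting for an event.
                if self.path == "/events":
                    try:
                        body = json.dumps(events.get(timeout=25)).encode()
                    except queue.Empty:
                        body = b"{}"  # nothing happened; the client simply re-polls
                    self.send_response(200)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(body)

        if __name__ == "__main__":
            ThreadingHTTPServer(("localhost", 8000), GameHandler).serve_forever()

    Player B's page would call GET /events in a loop from JavaScript, so nothing ever reloads; WebSockets or Server-Sent Events (both part of the HTML5-era stack) remove even that re-polling step and are the usual answer for real-time combat.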

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering 2 main approaches here:
    - an async approach, basically based on signals, or just an async approach as described by many papers and languages like the new C# 5.0 for example, with a "companion thread" that manages the policy of your pipeline
    - a concurrent, or multi-threading, approach
    I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these 2 paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources.
    I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads like 3-4-5 can be a problem, and the application is unresponsive and just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either; my application just waits for the result and doesn't hang, it's responsive, and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed.
    On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated or that newer CPUs have more than 2 cores is no bargain for me.
    From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, both paradigms offer no guarantee about when the task or job will be done at a certain point in time, making them really similar from a functional point of view.
    Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? Also, why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
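
    For what it's worth, the structural difference the question draws can be shown compactly in Python (illustrative only, and deliberately not the C++ the question targets): the same waiting-dominated workload written once with a thread pool and once on a single-threaded async event loop. Neither version is "more parallel" for pure waiting; the async one simply avoids per-thread stacks and OS context switches:

        # Illustrative comparison: thread pool vs. single-threaded async event loop
        # for an I/O-bound (waiting-dominated) workload.
        import asyncio
        import time
        from concurrent.futures import ThreadPoolExecutor

        def blocking_job(i):
            time.sleep(0.5)            # stands in for network or disk latency
            return i * i

        async def async_job(i):
            await asyncio.sleep(0.5)   # the event loop runs other tasks meanwhile
            return i * i

        def run_threaded(n):
            with ThreadPoolExecutor(max_workers=n) as pool:
                return list(pool.map(blocking_job, range(n)))

        async def run_async(n):
            return await asyncio.gather(*(async_job(i) for i in range(n)))

        if __name__ == "__main__":
            t0 = time.perf_counter()
            run_threaded(8)
            print(f"threads: {time.perf_counter() - t0:.2f}s")

            t0 = time.perf_counter()
            asyncio.run(run_async(8))
            print(f"asyncio: {time.perf_counter() - t0:.2f}s")

    Both versions finish in roughly half a second here, which is the asker's point about waiting-bound work; the case usually made for multithreading is CPU-bound work spread across physical cores, which this sketch deliberately does not model.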

    Read the article

  • Customizing ASP.NET MVC 2 - Metadata and Validation

    In this article, you will learn about two major extensibility points of ASP.NET MVC 2, the ModelMetadataProvider and the ModelValidatorProvider. These two APIs control how templates are rendered, as well as server-side and client-side validation of your model objects.

    Read the article

  • What Should I Do? [closed]

    - by Laxmidi
    What is a reasonable goal in terms of traffic for my Flex 3 site, www.brainpinata.com? Since I began a couple of months ago, I've gotten roughly 5500 ad views and 280 ad clicks, and the ad revenue is a whopping $4.80 (I don't use Google AdSense). I advertise my site using Google AdWords to try to build traffic; my budget is $10/day. What should I do?
    a) Push the marketing. Add a blog. Try to get backlinks, contact blogs, start a Facebook page, tweet, etc.
    b) Google is only indexing the static content in the SWF. The questions/answers are pulled from a MySQL database, so Google doesn't index 99% of the content. Should I re-do the site in HTML/JavaScript and hard-code the questions for each puzzle? (This would be a challenge as I don't know JavaScript worth squat.) Or should I hard-code the questions in XML and put them in the Flex app? If I put the questions in an XML file, it's roughly 500 KB. Other ideas?
    c) Should I switch ad networks? (I currently get about 100 visitors a day.) My ad network pays so little that if I were to make even $500/month, I would need 550,000 ad views/month, which seems impossible. If I go ahead and switch ad networks, I need to find one that allows iframes, as I've got a Flex website. Which ad networks permit their ads to be shown in iframes?
    d) Should I cut and run? I put a lot of work into this project and it would really stink to get nothing out of it.
    I'm looking for some good advice. Looking forward to your suggestions. Thank you. -Laxmidi

    Read the article

  • Panda Antivirus Pro 2012 and Secunia Windows Updater

    As with other offerings in the Panda Security portfolio, the core of Panda Antivirus Pro 2012's reliability comes from its innovative Collective Intelligence technology. This security model automatically analyzes, classifies, and fixes the approximately 73,000 files PandaLabs receives on a daily basis to offer users the highest protection possible against malware that is not only known, but also unknown. Best of all, the protection is provided with little impact on system performance to ensure a user-friendly experience. Speaking of user-friendly, Panda Antivirus Pro 2012 is described as the...

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, it is in need of a major refresh.
    The existing system:
    - A Microsoft Access project performs dashboard duties
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project
    - 3rd-party libraries referenced by Access directly interface with a control system (read: motors, conveyors, and counters)
    - Individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) at each facility
    The issue: This system started as a home-brewed inventory tracking scheme with a single internal sponsor who is still in charge of the technical direction. The original sponsor is prescribing the desired deliverables called for in the current RFP, and the RFP describes a system based around a single Access project. Any suggestion that Access is ill-suited for a project of this scope is shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article

  • Headaches using distributed version control for traditional teams?

    - by J Cooper
    Though I use and like DVCS for my personal projects, and can totally see how it makes managing contributions to your project from others easier (e.g. your typical Github scenario), it seems like for a "traditional" team there could be some problems over the centralized approach employed by solutions like TFS, Perforce, etc. (By "traditional" I mean a team of developers in an office working on one project that no one person "owns", with potentially everyone touching the same code.) A couple of these problems I've foreseen on my own, but please chime in with other considerations. In a traditional system, when you try to check your change in to the server, if someone else has previously checked in a conflicting change then you are forced to merge before you can check yours in. In the DVCS model, each developer checks in their changes locally and at some point pushes to some other repo. That repo then has a branch of that file that 2 people changed. It seems that now someone must be put in charge of dealing with that situation. A designated person on the team might not have sufficient knowledge of the entire codebase to be able to handle merging all conflicts. So now an extra step has been added where someone has to approach one of those developers, tell him to pull and do the merge and then push again (or you have to build an infrastructure that automates that task). Furthermore, since DVCS tends to make working locally so convenient, it is probable that developers could accumulate a few changes in their local repos before pushing, making such conflicts more common and more complicated. Obviously if everyone on the team only works on different areas of the code, this isn't an issue. But I'm curious about the case where everyone is working on the same code. It seems like the centralized model forces conflicts to be dealt with quickly and frequently, minimizing the need to do large, painful merges or have anyone "police" the main repo. So for those of you who do use a DVCS with your team in your office, how do you handle such cases? Do you find your daily (or more likely, weekly) workflow affected negatively? Are there any other considerations I should be aware of before recommending a DVCS at my workplace?

    Read the article

  • Many-to-many relationships in pharmacology

    - by John Paul Cook
    When I was in my pharmacology class this morning, I realized that the instructor was presenting a classic relational database management system problem: the many-to-many relationship. He said that all of us in nursing school must know our drugs backwards and forwards. I know how to model that! There are so many things in both healthcare and higher education that could benefit from an appropriate application of technology. As a student, I'd like to be able to start with a drug, a disease, a name of...(read more)
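
    For readers outside the database world, the "drug treats disease" relationship is the textbook many-to-many case: one drug treats many diseases and one disease is treated by many drugs, so the standard relational answer is a junction table holding one row per pairing. A small sketch using Python's built-in sqlite3 (the table names, columns, and sample rows are my own illustrations, not from the article):

        # Classic junction-table design for a many-to-many relationship.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE drug    (drug_id    INTEGER PRIMARY KEY, name TEXT NOT NULL);
            CREATE TABLE disease (disease_id INTEGER PRIMARY KEY, name TEXT NOT NULL);

            -- one row per (drug, disease) pairing resolves the many-to-many
            CREATE TABLE drug_disease (
                drug_id    INTEGER REFERENCES drug(drug_id),
                disease_id INTEGER REFERENCES disease(disease_id),
                PRIMARY KEY (drug_id, disease_id)
            );
        """)
        conn.executemany("INSERT INTO drug VALUES (?, ?)",
                         [(1, "metformin"), (2, "lisinopril")])
        conn.executemany("INSERT INTO disease VALUES (?, ?)",
                         [(1, "type 2 diabetes"), (2, "hypertension")])
        conn.executemany("INSERT INTO drug_disease VALUES (?, ?)", [(1, 1), (2, 2)])

        # Start from a drug and list the diseases it treats (or query the other way).
        print(conn.execute("""
            SELECT dr.name, di.name
            FROM drug dr
            JOIN drug_disease dd ON dd.drug_id = dr.drug_id
            JOIN disease di ON di.disease_id = dd.disease_id
        """).fetchall())

    Starting from a drug, a disease, or anything else the post mentions is then just a matter of which side of the junction table you join from.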

    Read the article

  • Next Best Action: an emerging engagement paradigm can elevate customer experience to the next level

    - by Richard Lefebvre
    As customer interactions increase across an expanding number of communication channels, business leaders are struggling to understand and engage with each customer effectively. To address this challenge, leading organizations are adopting strategies around “next best action,” a decision-support model that systematically identifies the next best step to take in the customer conversation—whether that action is providing additional information or targeted services, presenting a unique offer, or taking no action at all... Read the complete article - by Mark A. Stevens (vice president, Insight and Customer Strategy, at Oracle) - here

    Read the article

  • PASS: SQLRally Thoughts

    - by Bill Graziano
    The PASS Board recently decided that we wouldn't put another US-based SQLRally on the calendar until we had a chance to review the program. I wanted to provide some of my thinking around this. Keep in mind that this is the opinion of one Board member.
    The Board committed to complete two SQLRally events to determine if an event modeled between SQL Saturday and the Summit was viable. We've completed the two events and now it's time to step back and review the program. This is my seventh year on the PASS Board. Over that time people have asked me why PASS does certain things. Many, many times my answer has been "Because that's the way we did it last year". And I am tired of giving that answer. We need to take a step back and review the US-based SQLRally before we schedule another one. It would be irresponsible for me as a Board member to commit resources to this without validating that what we're doing makes sense for the organization and our members. I have no doubt that this was a great event for the attendees. We just need to validate it's the best use of our resources.
    Please keep in mind that we haven't cancelled the event. We've just said we need to review it before scheduling another one. My opinion is that some fairly serious changes are needed to the model before we consider it again – IF we do it again. I've come to that conclusion after speaking with the Dallas organizers, our HQ team, our Marketing team, other Board members (including one of the Orlando organizers), attendees in Orlando and Dallas, and visiting other similar events. I should point out that their views aren't unanimous on nearly any part of this event -- which is one of the reasons I want to take some time and think about this before continuing.
    I think it's helpful to look at the original goals of what we were trying to accomplish. Andy Warren wrote these up in August of 2010. My summary of these goals and some thoughts on each one is below. Many of these thoughts revolve around the growth of SQL Saturdays. In the two years since that document was written these events have grown significantly. The largest SQL Saturdays are now over 500 people, which means they are nearly the same size as our recent SQLRally. Our goals included:
    - Geographic diversity. We wanted an event in an area of the country that was away from any given Summit location. I think that's still a valid goal. But we also have SQL Saturdays all over the country. What does SQLRally bring to this that SQL Saturday doesn't?
    - Speaker growth. One of the stated goals was to build a "farm club" for speakers. This gives us a way for speakers to work up to speaking at Summit by speaking in front of larger crowds. What does SQLRally bring to this that the larger SQL Saturdays aren't providing? Pre-conference speakers is one obvious answer here.
    - Lower price. On a per-day basis, SQLRally is roughly 1/4th the price of the Summit. We wanted a way for people to experience something Summit-like at a lower price point. The challenge is that we are very budget constrained at that lower price point.
    - International event model. (I need to write more about this but I'm out of time. I'll cover it in the next installment.)
    There are a number of things I really like about SQLRally. I love the smaller conferences. They give me a chance to meet more people than at something the size of Summit. I like the two-day format. That gives you two evenings to be at social events with people. Seeing someone a second day is a great way to build a bond with that person. That's more difficult to do at a SQL Saturday.
    We also need to talk about the financial aspects of the event. Last year generated a small $17,000 profit on revenues of $200,000. Percentage-wise that's reasonable, but on an absolute basis it's not a huge amount in our budget. We think this year will lose between $30,000 and $50,000 and take roughly 1,000 hours of HQ time. We don't have detailed financials back yet but that's our best guess at this point. Part of that was driven by using a convention center instead of a hotel. Until we get detailed financials back we won't have the full picture around the financial impact.
    This event also takes time and mindshare from our Marketing team. This may sound like a small thing but please don't underestimate it. Our original vision for this was something that would take very little time from our Marketing team and just a few mentions in the Connector. It turned out to need more than that. And all those mentions and emails take up space we could use to talk about other events and other programs.
    Last, I wanted to talk about some of the things I'm thinking about. I don't think it's as simple as saying if we just fix "X" it all gets better.
    - Is this that much better of an event than SQL Saturdays? What if we gave a few SQL Saturdays some extra resources? When SQL Saturdays were around 250 people that wasn't as viable. With some of those events over 500 we need to reconsider this.
    - We need to get back to a hotel venue. That will help with cost and networking.
    - Is this the best use of the 1,000 HQ hours that we invested in the event?
    - Is our price point correct? I'm leaning toward raising our price closer to Summit on a per-day basis. I think this will let us put on a higher quality event and alleviate much of the budget pressure.
    - Should growing speakers be a focus? Having top-line pre-conference speakers helps market the event. It will also have an impact on pricing and overall profit. We should also ask if it actually does grow speakers. How many of these people will eventually register for Summit? Attend chapters? Is SQLRally a driver into PASS, or is it something that chapters, etc. drive people to?
    - Should we have one paid day and one free instead of two paid days? This is a very interesting model that is used by SQLBits in the UK. It gives you the two-day aspect as well as offering options for paid and free attendees. I'm very intrigued by this.
    - Should we focus on a topic? Buried in the minutes is a discussion of whether PASS should have a Business Analytics conference separate from Summit. This is an interesting question to consider. Would making SQLRally focused on a particular topic make it more attractive? Would that even be a SQLRally? Can PASS effectively manage the two events? (FYI - Probably not.) Would it help differentiate it from Summit and SQL Saturday?
    These are all questions that I think should be asked and answered before we do this event again. And we can't do that if we don't take time to have the discussion. I wanted to get this published before I take off for a few days of vacation. When I get back I'd like to write more about why the international events are different and talk about where we go from here.

    Read the article

  • Are UML class diagrams adequate to design JavaScript systems?

    - by Vandell
    Given that UML is oriented towards a more classic approach to object orientation, is it still usable in a reliable way to design JavaScript systems? One specific problem that I can see is that class diagrams are, in fact, a structural view of the system, while JavaScript is more behaviour-driven; how can you deal with that? Please keep in mind that I'm not talking about the real-world domain here; it's a model for the solution that I'm trying to achieve.

    Read the article

  • PowerPivot, Stocks Exchange and the Moving Average

    - by AlbertoFerrari
    In this post I want to analyze a data model that performs some basic computations over stocks and uses one of the most commonly needed formulas there (and in many other business scenarios): a moving average over a specified period in time. Moving averages can be modeled very easily in PowerPivot, but some care is needed and, in this post, I try to provide a solid background on how to compute moving averages. In general, a moving average over period T1..T2 is the average of all the values...(read more)
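
    The definition quoted above translates almost directly into code. As a language-neutral reference for what the PowerPivot/DAX measure has to compute, here is a short Python sketch (the helper name and the sample dates and prices are made up): the moving average at date T is simply the mean of every value whose date falls inside the trailing window ending at T.

        # Moving average over a trailing calendar window, per the post's definition.
        from datetime import date, timedelta

        def moving_average(series, period_days):
            """series: list of (date, value) pairs, not necessarily one per day."""
            out = []
            for t, _ in series:
                window = [v for d, v in series
                          if t - timedelta(days=period_days) < d <= t]
                out.append((t, sum(window) / len(window)))
            return out

        prices = [(date(2012, 1, d), close) for d, close in
                  [(2, 100.0), (3, 101.5), (4, 99.0), (5, 102.0), (9, 103.5)]]
        for t, avg in moving_average(prices, period_days=5):
            print(t, round(avg, 2))

    Note that the window is defined by calendar dates rather than by a fixed number of rows, which is one of the details that typically needs care in a DAX version as well (gaps for non-trading days, for instance).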

    Read the article

  • Animate multiple entities

    - by Robert
    I'm trying to animate multiple (3) entities using one model (IQM format). It's working, but performance is really bad because I'm calling the animate function for each entity in my game loop (I think the problem is there). What's the best way to animate multiple entities (with different animations, of course) in OpenGL? I think I could try building one VBO per entity for better performance, but I don't think that's the best way to do it.
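
    One commonly suggested direction (an assumption on my part, not something stated in the question) is to make sure the expensive pose evaluation runs once per distinct (animation, frame) pair rather than once per entity, and to keep a single shared VBO while uploading only per-entity bone matrices for skinning in the vertex shader. The GPU side is out of scope here, but the caching idea itself is easy to sketch in Python with hypothetical names:

        # Pose cache sketch: entities sharing an animation and frame reuse one pose.
        from functools import lru_cache

        @lru_cache(maxsize=256)
        def evaluate_pose(animation_name, frame_index):
            # Placeholder for the expensive step: sampling IQM keyframes and
            # building the per-bone matrices for this animation at this frame.
            return f"bone matrices for {animation_name}@{frame_index}"

        class Entity:
            def __init__(self, animation, fps=24.0):
                self.animation = animation
                self.fps = fps                        # animation frames per second

            def pose_at(self, time_seconds):
                frame = int(time_seconds * self.fps)  # quantize time to a frame
                return evaluate_pose(self.animation, frame)

        entities = [Entity("walk"), Entity("walk"), Entity("attack")]
        for tick in (0.0, 1 / 60, 2 / 60):            # three game-loop iterations
            poses = [e.pose_at(tick) for e in entities]
            # The two "walk" entities hit the cache; only distinct
            # (animation, frame) pairs pay the evaluation cost.

    With genuinely different animations per entity the cache helps less, which is why the per-entity work is usually reduced to uploading bone matrices and letting the vertex shader do the skinning.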

    Read the article

  • Aberdeen 10/25 Webcast: Service Excellence and the Path to Business Transformation

    - by Charles Knapp
    The uncertain economy has had a sustained impact on service organizations and processes. The impact has contributed to new complexities - new customer engagement channels, enhanced user and customer expectations, rapidly evolving technologies, increased competition, and increased compliance and regulatory mandates. Yet many organizations have embraced these challenges by investing in and transforming customer service to evolve, differentiate, and thrive under current constraints. What is their secret? Transforming Support Centers into Profit Centers According to the recent Aberdeen research report, “Service Excellence and the Path to Business Transformation”, service is now viewed as a strategic profit center at nearly 70% of organizations. As customers demand improved service, in terms of speed, efficiency and reliability, an organization's success has become increasingly dependent on optimizing the customer ownership experience. Those service organizations focused on providing easy, consistent, and relevant interactions across the customer lifecycle, including service and support delivery, are experiencing higher levels of customer acquisition and retention and are achieving better revenue and margin growth rates.  Don't miss this opportunity to learn how to transform to provide the next generation of service offerings. Click here to register now for the webcast and download a complimentary copy of this informative new research paper.

    Read the article

  • What is the canonical resource on multi-tenancy web applications using Ruby + Rails?

    - by AlexC
    What is the canonical resource on multi-tenancy web applications using Ruby + Rails? There are a number of ways to develop Rails apps using cloud capabilities with real elastic properties, but there seems to be a lack of clarity on how to achieve multi-tenancy, specifically at the model/data level. Is there a canonical resource on options for developing multi-tenancy Rails applications with the data separation, security, concurrency, and contention characteristics required by an enterprise-level cloud application?
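
    At the model/data level the options usually reduce to either one schema (or database) per tenant, or a single shared schema where every row carries a tenant key and every query is scoped to it. The second option is easy to illustrate; the sketch below uses Python with SQLite purely for brevity (the table names and the scoping helper are my own, not from any Rails library):

        # Shared-schema row-level tenancy: every query is scoped by tenant_id.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
            CREATE TABLE projects (
                id        INTEGER PRIMARY KEY,
                tenant_id INTEGER NOT NULL REFERENCES accounts(id),
                title     TEXT NOT NULL
            );
        """)
        conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                         [(1, "acme"), (2, "globex")])
        conn.executemany("INSERT INTO projects (tenant_id, title) VALUES (?, ?)",
                         [(1, "acme rollout"), (2, "globex migration")])

        def projects_for(tenant_id):
            # The tenant filter lives in one place; application code never runs
            # an unscoped query against a multi-tenant table.
            return conn.execute(
                "SELECT id, title FROM projects WHERE tenant_id = ?",
                (tenant_id,)).fetchall()

        print(projects_for(1))   # returns only acme's rows

    Data separation then hinges on that scoping being enforced everywhere (in Rails, typically through default scopes or middleware that sets the current tenant), while schema-per-tenant trades that discipline for heavier operational overhead.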

    Read the article

  • What standards to use in Business Process Modelling?

    - by user1268690
    There are several approaches to modeling a business process in software applications (BPM software). For instance, a process can be described in BPMN, EPC, IDEF0, SOMF, etc. Additionally, different process execution languages such as BPEL, RPC, and Wf-XML are available. If I were to develop software for the BPM market, which standards should I implement or focus on? Which standards are most suitable if my BPM software is going to be integrated into my clients' IT systems?

    Read the article
