Search Results

Search found 7211 results on 289 pages for 'enable'.


  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and with customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update on Oracle and Clouds, from private to public and hybrid models. In preparing this session, I thought a good starting point would be a status of Cloud Computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.

From Traditional IT to Clouds

One size doesn't fit all, and for big companies that already have an IT estate in place, there will be parts eligible for external (public) cloud and parts that will be required to stay inside the firewall, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the Cloud Computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end-users are now used to in their day-to-day lives, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a Global Service Provider", or for some into "IT 'is' the Business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models towards a Private Cloud Service Provider. In this journey, there is still a big difference between most existing external Clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are highly specialized, but at the end of the day very few business processes rely on only one application. So CIOs have to combine everything, external and internal. And for the internal parts that they will have to evolve to a Private Cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's have a look at the different Cloud models, what types of users they address, what value they bring and, most importantly, what needs to be done by the Cloud Provider and what is left over to the user.

IaaS, PaaS, SaaS: what's provided and what needs to be done

First of all, the Cloud Provider has to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT has to deliver and integrate: from disks to applications. As we can see in the picture above, providing pure IaaS leaves a lot for the end-user to cover, which is why the end-users targeted by this Cloud Service are IT people. If you want to bring more value to developers, you need to provide them a development platform ready to use, which is what PaaS stands for: providing not only processor power, storage and OS, but also the Database and Middleware platform. SaaS is the last mile of the Cloud, providing an application ready to use by business users; the remaining part for the end-users is configuring and specifying the application for their specific usage.

In addition to that, there are common challenges encompassing all types of Cloud Services:

- Security: covering all aspects, not only user management but also data flows and data privacy
- Charge back: measuring what is used and by whom
- Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, through the Database and Middleware for PaaS, to a full Business Application for SaaS
- Scalability: the ability to evolve ALL the components of the Cloud Provider stack as needed
- Availability: the ability to cover “always on” requirements
- Efficiency: providing an infrastructure that leverages shared resources efficiently and still complies with SLAs (performance, availability, scalability, and the ability to evolve)
- Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades, ...)
- Management: providing monitoring, configuration and self-service up to the end-users

Oracle Strategy and Clouds

For CIOs, succeeding in their Private Cloud implementation means covering all those aspects for the life-cycle of each component they selected to build their Cloud. That's where a multi-vendor layered approach comes up short in terms of efficiency, and that's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient Cloud Services solutions for IaaS, PaaS and SaaS. We go as far as embedding software functions in hardware (at the storage and processor level, ...) to ensure the best SLAs with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components you are running in-house today are exactly the same ones we use to build Clouds, bringing you flexibility, reversibility and a fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically, when talking about Cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and the ability to evolve by design. To give you a feeling of what that brings just in terms of implementation project timeline, with Oracle SPARC SuperCluster, for example, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes the Oracle Database, OS, virtualization, database storage (Exadata Storage Cells in this case), application storage, and all network configuration. This strategy enables CIOs to build Cloud Services very quickly, taking out not only the complexity of integrating everything together but also the complexity and cost of automation and evolution. I invite you to discuss all those aspects in the context of your particular situation face-to-face on November 28th.


  • Microsoft BUILD 2013 Day 1 – Keynote

    - by Tim Murphy
Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx

This one is going to be a little long because the keynote was jam-packed, so bear with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer. He made it very clear that Microsoft's focus is on accelerating its time to market with products and product updates. His quote was that "rapid release" is the new norm. He continued by showing off several new Lumias that have been buzzing around the internet for a while and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV.

Ballmer is known for repeating words or phrases for effect. This time it was "Rapid release, rapid release" and "Touch, touch, touch, touch, touch, …". This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8" tablet. SCORE!

The next subject Ballmer focused on was new apps. The three new ones were Flipboard, Facebook and NFL Fantasy Football. I liked the first two because these are ones that people coming from other platforms are missing. The NFL app is great just because it targets a demographic that can be fanatical. If these types of apps keep coming, then the missing-app argument goes away.

While many Negative Nancys are describing Windows 8.1 as Windows 180, Steve Ballmer chose to call it a "refined blend", as in a coffee that has been improved with a new mix. This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community.

Steve Ballmer was followed by Julie Larson-Green, who from my point of view spent her time on stage selling us on Windows 8 all over again. That is something I would not have thought was needed until I listened to some other attendees who had a number of concerns and complaints. She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience. I guess only time will tell. I did like the fact that the UI implementation to bring up "All Apps" now mirrors that of Windows Phone. The consistency is a big step forward that I hope to see continue. The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the Xbox One. This seamless experience, I believe, is what is really needed for any future platform to be relevant.

I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5,000 new APIs. How that can be, or how anyone would ever use all of them, is another question. His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits. One of the features of VS2013 that he demonstrated is the power consumption profiler. With battery life being a key factor for consumer devices, this is a welcome addition. He didn't limit his presentation to VS2013 features, though. He showed how the Store has been redesigned to enable better search and discoverability of apps, and how Windows 8.1 can automatically scale apps across multiple screen sizes depending on the resolution of the device. The last feature he demoed was the real-time video streaming API, which he made sure we understood by attaching a Surface to a little robot. Oh, but there was one more thing.

Antoine and Julie announced that all attendees would also be getting Surface Pros. BONUS! How much more could there be? Gurdeep Singh Pall was about to pile on. He introduced us to Bing as a platform (BaaP?). He said that if they (Microsoft) could do something good with an API, 3rd-party developers could do something dynamite, and he showed us some of the tools they had produced. These included natural user interface improvements such as voice commands that looked to put Siri to shame. Add to that 3D, OCR and translation capabilities, and the future looks to be full of opportunities.

Ballmer then came out to show us one last thing. Project Spark is a game design environment that will be available for Windows 8.1, Xbox 360 and Xbox One. All I can say is that if my kids get their hands on this, they are going to be able to learn some of what dad does in a much more enjoyable way.

At the end of it all I was both exhausted and energized by what I saw. What could they have possibly left for the day 2 keynote? I hear it will feature Scott Hanselman. If that is right, we are in for a treat. See you there.

del.icio.us Tags: BUILD 2013,Windows 8.1,Windows Phone,XAML,Keynote,Bing,Visual Studio 2013,Project Spark


  • Solaris 11.2: Functional Deprecation

    - by alanc
In Solaris 11.1, I updated the system headers to enable the use of several attributes on functions, including noreturn and printf format, to give compilers and static analyzers more information about how they are used and so produce better warnings when building code. In Solaris 11.2, I've gone back in and added one more attribute to a number of functions in the system headers: __attribute__((__deprecated__)). This is used to warn people building software that they’re using function calls we recommend no longer be used.

While in many cases the Solaris Binary Compatibility Guarantee means we won't ever remove these functions from the system libraries, we still want to discourage their use. I made passes through both the POSIX and C standards, and some of the Solaris architecture review cases, to come up with an initial list which the Solaris architecture review committee accepted to start with. This set is by no means a complete list of obsolete function interfaces, but should be a reasonable start at functions that are well documented as deprecated and seem useful to warn developers away from. More functions may be flagged in the future as they get deprecated, or if further passes are made through our existing deprecated functions to flag more of them.

Header | Interface | Deprecated by | Alternative | Documented in
------ | --------- | ------------- | ----------- | -------------
<door.h> | door_cred(3C) | PSARC/2002/188 | door_ucred(3C) | door_cred(3C)
<kvm.h> | kvm_read(3KVM), kvm_write(3KVM) | PSARC/1995/186 | Functions on kvm_kread(3KVM) man page | kvm_read(3KVM)
<stdio.h> | gets(3C) | ISO C99 TC3 (removed in ISO C11), POSIX:2008/XPG7/Unix08 | fgets(3C) | gets(3C) man page, and just about every gets(3C) reference online from the past 25 years, since the Morris worm proved bad things happen when it’s used
<unistd.h> | vfork(2) | PSARC/2004/760, POSIX:2001/XPG6/Unix03 (removed in POSIX:2008/XPG7/Unix08) | posix_spawn(3C) | vfork(2) man page
<utmp.h> | All functions from getutent(3C) man page | PSARC/1999/103 | utmpx functions from getutentx(3C) man page | getutent(3C) man page
<varargs.h> | varargs.h version of va_list typedef | ANSI/ISO C89 standard | <stdarg.h> | varargs(3EXT)
<volmgt.h> | All functions | PSARC/2005/672 | hal(5) API | volmgt_check(3VOLMGT), etc.
<sys/nvpair.h> | nvlist_add_boolean(3NVPAIR), nvlist_lookup_boolean(3NVPAIR) | PSARC/2003/587 | nvlist_add_boolean_value, nvlist_lookup_boolean_value | nvlist_add_boolean(3NVPAIR) & (9F), nvlist_lookup_boolean(3NVPAIR) & (9F)
<sys/processor.h> | gethomelgroup(3C) | PSARC/2003/034 | lgrp_home(3LGRP) | gethomelgroup(3C)
<sys/stat_impl.h> | _fxstat, _xstat, _lxstat, _xmknod | PSARC/2009/657 | stat(2) | old functions are undocumented remains of SVR3/COFF compatibility support

If the above table is cut off when viewing in the blog, try viewing this standalone copy of the table.

To See or Not To See

To see these warnings, you will need to be building with either gcc (versions 3.4, 4.5, 4.7, & 4.8 are available in the 11.2 package repo), or with Oracle Solaris Studio 12.4 or later (which, like Solaris 11.2, is currently in beta testing).

For instance, take this oversimplified (and obviously buggy) implementation of the cat command:

#include <stdio.h>

int main(int argc, char **argv)
{
    char buf[80];

    while (gets(buf) != NULL)
        puts(buf);
    return 0;
}

Compiling it with the Studio 12.4 beta compiler will produce warnings such as:

% cc -V
cc: Sun C 5.13 SunOS_i386 Beta 2014/03/11
% cc gets_test.c
"gets_test.c", line 6: warning: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221

The exact warning given varies by compiler, and the compilers also have a variety of flags to either raise the warnings to errors, or silence them. Of course, the exact form of the output is Not An Interface that can be relied on for automated parsing; it is just shown here as an example.

gets(3C) is actually a special case — as noted above, it is no longer part of the C Standard Library in the C11 standard, so when compiling in C11 mode (i.e. when __STDC_VERSION__ >= 201112L), the <stdio.h> header will not provide a prototype for it, causing the compiler to complain that it is unknown:

% gcc -std=c11 gets_test.c
gets_test.c: In function ‘main’:
gets_test.c:6:5: warning: implicit declaration of function ‘gets’ [-Wimplicit-function-declaration]
 while (gets(buf) != NULL)
 ^

The gets(3C) function of course is still in libc, so if you ignore the error or provide your own prototype, you can still build code that calls it; you just have to acknowledge you’re taking on the risk of doing so yourself.

Solaris Studio 12.4 Beta

% cc gets_test.c
"gets_test.c", line 6: warning: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221
% cc -errwarn=E_DEPRECATED_ATT gets_test.c
"gets_test.c", line 6: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221
cc: acomp failed for gets_test.c

This warning is silenced in the 12.4 beta by cc -erroff=E_DEPRECATED_ATT. No warning is currently issued by Studio 12.3 & earlier releases.

gcc 3.4.3

% /usr/sfw/bin/gcc gets_test.c
gets_test.c: In function `main':
gets_test.c:6: warning: `gets' is deprecated (declared at /usr/include/iso/stdio_iso.h:221)

The warning is completely silenced with gcc -Wno-deprecated-declarations.

gcc 4.7.3

% /usr/gcc/4.7/bin/gcc gets_test.c
gets_test.c: In function ‘main’:
gets_test.c:6:5: warning: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Wdeprecated-declarations]
% /usr/gcc/4.7/bin/gcc -Werror=deprecated-declarations gets_test.c
gets_test.c: In function ‘main’:
gets_test.c:6:5: error: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Werror=deprecated-declarations]
cc1: some warnings being treated as errors

The warning is completely silenced with gcc -Wno-deprecated-declarations.

gcc 4.8.2

% /usr/bin/gcc gets_test.c
gets_test.c: In function ‘main’:
gets_test.c:6:5: warning: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Wdeprecated-declarations]
 while (gets(buf) != NULL)
 ^
% /usr/bin/gcc -Werror=deprecated-declarations gets_test.c
gets_test.c: In function ‘main’:
gets_test.c:6:5: error: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Werror=deprecated-declarations]
 while (gets(buf) != NULL)
 ^
cc1: some warnings being treated as errors

The warning is completely silenced with gcc -Wno-deprecated-declarations.


  • Cookbook: SES and UCM setup

    - by George Maggessy
The purpose of this post is to guide you through setting up the integration between UCM and SES. In my next post I'll show different approaches to integrating WebCenter Portal, UCM and SES based on some common scenarios. Let's get started.

WebCenter Content Configuration

WebCenter Content has a component that adds functionality to the content server to allow it to be searched via Oracle SES. To enable the component installation, go to Administration -> Admin Server and select SESCrawlerExport. Click the Update button and restart the UCM_server1 managed server. Once the managed server is back, we'll configure the component. In the menu, under Administration, you should see SESCrawlerExport. Click on the link and you'll see the window below. Click on Configure SESCrawlerExport and configure the following values:

- Hostname: SES hostname.
- Feed Location: Directory where data feeds will be saved.
- Metadata List: List of metadata that will be searchable by SES.

After updating the values click on the Update button. Come back to the SESCrawlerExport Administration UI and click on the Take Snapshot button. It will create the data feeds in the specified Feed Location. To check that the configuration is correct, access the following URL: http://<ucm_server>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default. It should download a config file in the format below:

<?xml version="1.0" encoding="UTF-8"?>
<rsscrawler xmlns="http://xmlns.oracle.com/search/rsscrawlerconfig">
  <feedLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONTROL&source=default]]></feedLocation>
  <errorFileLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_STATUS&IsJava=1&source=default&StatusFeed=]]></errorFileLocation>
  <feedType>controlFeed</feedType>
  <sourceName>default</sourceName>
  <securityType>attributeBased</securityType>
  <securityAttribute name="Account" grant="true"/>
  <securityAttribute name="DocSecurityGroup" grant="true"/>
  <securityAttribute name="Collab" grant="true"/>
</rsscrawler>

Make sure the Account and DocSecurityGroup values are true.

SES Configuration

Let's start by configuring the Identity Plug-ins in SES. Go to Global Settings -> System -> Identity Management Setup. Select Oracle Content Server, click the Activate button, and populate the following values:

- HTTP endpoint for authentication: URL to WebCenter Content. Notice that /cs/idcplg is added at the end of the URL.
- Admin User: UCM Admin user. This user must have access to all CPOE content.
- Password: Password for the Admin user.
- Authentication Type: NATIVE.

Go back to the Home tab and click on Sources at the top left. Select Oracle Content Server on the right and click the Create button.

- Configuration URL: URL that points to the configuration file. Example: http://<ucm_hostname>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default.
- User ID: UCM Admin user.
- Password: Password for the Admin user.

Click on the Authorization tab and add the appropriate values to the fields below. Make sure you see the ACCOUNT and DOCSECURITYGROUP security attributes at the end of the page.

- HTTP endpoint for authorization: http://<ucm_hostname>:<port>/cs/idcplg.
- Display URL prefix: http://<ucm_hostname>:<port>/cs.
- Administrator user: UCM Admin user.
- Administrator password.

On the Document Types tab, add the documents that should be indexed by SES. As our last step, we'll configure the Federation Trusted Entities under Global Settings.

- Entity Name: The user must be present in both the identity management server configured for your WebCenter application and the identity management server configured for Oracle SES. For instance, I used weblogic in my sample.
- Password: Entity user password.

Now you are ready to test the integration on the SES UI: http://<ses hostname>:<port>/search/query/.
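As a side note, the configuration-download check described above can also be scripted rather than done in a browser. The following is only an illustrative sketch (not part of the original cookbook); it assumes the content server will accept the admin credentials over basic authentication, and the host, port and credentials shown are placeholders you would replace:

using System;
using System.Net;
using System.Xml.Linq;

class CrawlerConfigCheck
{
    static void Main()
    {
        // Hypothetical host, port and credentials -- replace with your own values.
        string url = "http://ucm_server:16200/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default";

        using (var client = new WebClient())
        {
            client.Credentials = new NetworkCredential("sysadmin", "password");
            string xml = client.DownloadString(url);

            XNamespace ns = "http://xmlns.oracle.com/search/rsscrawlerconfig";
            var doc = XDocument.Parse(xml);

            // Print each securityAttribute so you can confirm Account and
            // DocSecurityGroup are granted, as the cookbook requires.
            foreach (var attr in doc.Descendants(ns + "securityAttribute"))
            {
                Console.WriteLine("{0} grant={1}",
                    (string)attr.Attribute("name"),
                    (string)attr.Attribute("grant"));
            }
        }
    }
}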


  • Seizing the Moment with Mobility

    - by Divya Malik
Empowering people to work where they want to work is becoming more critical now with the consumerization of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective, which is typically on the road and while they are still with the customer.

Oracle has invested many years of research in understanding customers' mobile requirements. “The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful,” said Arin Bhowmick, Director, CRM, for the Applications UX team. “We did that by talking to and analyzing the needs of a lot of people in different roles.” The team studied real-life sales teams. “We wanted to study salespeople in context with their work,” Bhowmick said. “We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go.”

Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps, which was featured on the Oracle Applications Blog.

Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They are also interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes, and IT has to figure out how to adapt.

Blurring the Lines between Work and Personal Life

My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing.

The lean and agile workforce is definitely the future. The notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovation in that way are in the dinosaur age.

Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared.

How to Survive and Thrive in Today’s Marketplace

The notion of Facebook isn’t new. We lived it in pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT, with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There is a small number of companies right now (I think it's closer to 20% than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected.

By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, you typically have better things to do than to be on Facebook.

This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience.

Hernan Capdevila
Vice President, Oracle Applications Development


  • SharePoint 2010 Diagnostic Studio Remote Diag

    - by juanlarios
I have had some time this week to try out some tools that I have been meaning to try out. This week I am trying out the SharePoint 2010 Diagnostic Studio. I installed it successfully and tried it on my development environment. I was able to build a report and a snapshot of the environment. I then decided to turn my attention to my employer's intranet environment. This would allow me to analyze it and measure it against benchmarks. I didn't want to install the Diagnostic Studio on the production environment; lucky for me, the Diagnostic Studio can be run remotely, well... kind of.

Issue

My development environment is a stand-alone, full installation of SharePoint 2010 Server. It has Office 2010, SQL 2008 Enterprise, a DC... well, you get the point, it's jam-packed! But more importantly it's a stand-alone, self-contained VM environment. Microsoft has instructions as to how to connect remotely with Diagnostic Studio here. The deceiving part of this is that SP2010DS prompts you for credentials, so I thought I was getting the right account to run the reports. I tried all the PowerShell commands in the link above but I still ended up getting the following errors:

06/28/2011 12:50:18 Connecting to remote server failed with the following error message : The WinRM client cannot process the request... If the SPN exists, but CredSSP cannot use Kerberos to validate the identity of the target computer and you still want to allow the delegation of the user credentials to the target computer, use gpedit.msc and look at the following policy: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Fresh Credentials with NTLM-only Server Authentication. Verify that it is enabled and configured with an SPN appropriate for the target computer. For example, for a target computer name "myserver.domain.com", the SPN can be one of the following: WSMAN/myserver.domain.com or WSMAN/*.domain.com. Try the request again after these changes. For more information, see the about_Remote_Troubleshooting Help topic.

06/28/2011 12:54:47 Access to the path '\\<targetserver>\C$\Users\<account logging in>\AppData\Local\Temp' is denied.

You might also get an error message like this:

The WinRM client cannot process the request. A computer policy does not allow the delegation of the user credentials to the target computer.

Explanation

After looking at the event logs on the target environment, I noticed that there were several security exceptions. After looking at the specifics around who was denied access, I was able to see the account that was being denied access: it was the client machine administrator account. Well, of course that was never going to work!!! After some quick Googling, the last error message above will lead you to edit the Local Group Policy on the client server. And although there are instructions from Microsoft around doing this, it really will not work in this scenario. Notice the description and how it only applies to the authentication mentioned?

Resolution

I can tell you what I did, but I wish there was a better way; I simply don't know if it's doable any other way. Because my development environment had its own DC, I didn't really want to mess with Kerberos authentication. It would also not be smart to connect that server to the domain, considering it has its own DC. I ended up installing SharePoint 2010 Diagnostic Studio on another Windows 7 dev environment I have, and connected that machine to the domain. I ran all the necessary remote credentials commands mentioned here. Those commands add the group policy for you! Once I did this I was able to authenticate properly and I was able to get the reports.

Conclusion

You can run SharePoint 2010 Diagnostic Studio remotely, but it will require some specific scenarios. A couple of things I should mention: as far as I understand, SP2010DS will install agents on your target environment to run tests and retrieve the data. I was a Farm Administrator, and also a Server Admin on the SharePoint Server. I am not 100% sure if you need all those permissions, but that's just what I have on my internal intranet. Ideally I would like to have a machine with SharePoint 2010 Diagnostic Studio installed that I can run against client environments. It appears that I will not be able to do that, unless I enable Kerberos on my Windows 7 machine now. If you have it installed in the way I would like to have it, please let me know; I'll keep trying to get what I'm after. Hope this helps someone out there doing the same.


  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
Entity Framework 4.3.1 includes a new feature called code first migrations. We are adding support for this feature in our upcoming 6.6 release of Connector/Net. In this walk-through we'll see the workflow of code-based migrations when you have an existing application and you would like to upgrade to the EF 4.3.1 version and use this approach, so you can keep track of the changes that you make to your database.

The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager. You can read more about why EF is not part of the .NET framework here.

Adding EF 4.3.1 to our existing application

Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console; this will open the PowerShell host window where we can work with all the EF commands. In order to install this library into your existing application you should type:

Install-Package EntityFramework

This will make some changes to your application, so let's check them. In your .config file you'll see a <configSections> element which contains the version you have of EntityFramework, and the <entityFramework> section was also added, as shown below. This section is by default configured to use SQL Express, which won't be necessary for this case, so you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version, which is the one that has this support, as shown in the previous image.

At this point we face one issue: in order to be able to work with migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database.

Getting a migration-history table into an existing database

The first thing we need to do to enable migrations in our existing application is to create our configuration class, which will set up the MySqlClient provider as our SQL generator. So we have to add it with the following code:

using System.Data.Entity.Migrations;   // add this at the top of your cs file

public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>  // make sure to use the name of your existing DbContext
{
    public Configuration()
    {
        // Set automatic migrations to false since we'll be applying the migrations manually in this case.
        this.AutomaticMigrationsEnabled = false;
        SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());
    }
}

This code sets up the configuration that we'll be using when executing all the migrations for our application. Once we have done this we can build our application to check that everything is fine.

Creating our initial migration

Now let's add our initial migration. In the Package Manager Console, execute "add-migration InitialCreate"; you can use any other name, but I like to set this as our initial create for future reference. After we run this command, some changes are made to our application:

- A new Migrations folder is created.
- A new migration class called InitialCreate is added, which in most cases should have empty Up and Down methods as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that would create an entity which already exists in your database. I found this easier when you don't have any pending updates to apply to your database.
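For reference, this is roughly what the generated InitialCreate class looks like when the database already matches the model — a minimal sketch with a hypothetical namespace, not copied verbatim from the tooling output:

namespace YourApp.Migrations   // hypothetical namespace; the scaffolder uses your project's default namespace
{
    using System.Data.Entity.Migrations;

    public partial class InitialCreate : DbMigration
    {
        public override void Up()
        {
            // Empty: the existing database already matches the model,
            // so this migration only records the starting point.
        }

        public override void Down()
        {
        }
    }
}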
Now we have our empty migration that will make no changes to our database and represents how things are at the beginning of our migrations. Finally, let's create our __MigrationHistory table. Optionally, you can add SQL code to delete the EdmMetadata table, which is not needed anymore:

public override void Up()
{
    // Just make sure that you used EF 4.1 or a later version
    Sql("DROP TABLE EdmMetadata");
}

From our Package Manager Console let's type:

Update-Database

If you would like to see the operations made on each Update-Database command, you can add the flag -Verbose after Update-Database. This will make two important changes. It will execute the Up method in the initial migration, which makes no changes to the database. And second, and very important, it will create the __MigrationHistory table necessary to keep track of your changes. The next time you make a change to your database it will compare the current model to the one stored in the Model column of this table.

Conclusion

The important thing in this walk-through is that we must create our initial migration before we start making any changes to our model. This way we'll be adding the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we make in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments, and please check our forums here where we keep answering questions for the community. Happy MySQL/Net Coding!


  • Most Innovative IDM Projects: Awards at OpenWorld

    - by Tanu Sood
On Tuesday at Oracle OpenWorld 2012, Oracle recognized the winners of the Innovation Awards 2012 at a ceremony presided over by Hasan Rizvi, Executive Vice President at Oracle. Oracle Fusion Middleware Innovation Awards recognize customers for achieving significant business value through innovative uses of Oracle Fusion Middleware offerings. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. This year’s award honors customers for their cutting-edge solutions driving business innovation and IT modernization using Oracle Fusion Middleware. The program has grown over the past 6 years, receiving a record number of nominations from customers around the globe. The winners were selected by a panel of judges that ranked each nomination across multiple scoring categories.

Congratulations to both Avea and ETS for winning this year’s Innovation Award for Identity Management. See last year's winners here.

Identity Management Innovation Award 2012 Winner – Avea

Company: Founded in 2004, Avea is the sole GSM 1800 mobile operator of Turkey and has reached a nationwide customer base of 12.8 million as of the end of 2011.
Region: Turkey (EMEA)
Products: Oracle Identity Manager, Oracle Identity Analytics, Oracle Access Management Suite

Business Drivers:
- Manage the agility and scale required for GSM operations and enable call center efficiency by allowing agents to change their identity profiles (accounts and entitlements) rapidly based on call load.
- Enhance user productivity and call center efficiency with self-service password resets.
- Enforce compliance and audit reporting.
- Provide seamless identity management between Avea and parent company Turk Telecom.

Innovation and Results:
- One of the first Sun2Oracle identity management migrations, designed for high-performance provisioning and trusted reconciliation, built with connectors developed on the ICF architecture that provide custom user interfaces for dynamic and rapid management of roles and entitlements, along with entitlement-level attestation using closed-loop remediation between Oracle Identity Manager and Oracle Identity Analytics.
- Dramatic reduction in identity administration and call center password reset tasks, leading to a 20% reduction in administration costs and a 95% reduction in password-related calls.
- Enhanced user productivity by up to 25% to date.
- Enforced enterprise security and reduced risk.
- Cost-effective compliance management.
- Looking to seamlessly and securely integrate with parent and sister companies’ infrastructure.

Identity Management Innovation Award 2012 Winner – Educational Testing Service (ETS)

Company: ETS is a private nonprofit organization devoted to educational measurement and research, primarily through testing.
Region: U.S.A. (North America)
Products: Oracle Access Manager, Oracle Identity Federation, Oracle Identity Manager

Business Drivers: ETS develops and administers more than 50 million achievement and admissions tests each year in more than 180 countries, at more than 9,000 locations worldwide. As the business becomes more globally based, having a robust solution to security and user management issues becomes paramount. The organization was looking for:
- A simplified user experience for over 3,000 company users and a dynamic student and staff population of more than 6 million.
- Infrastructure and administration cost reduction.
- Managed security risk by controlling 3rd-party access to ETS systems.
- Enforced compliance and audit reporting.
- Automated on-boarding and decommissioning of user accounts to improve security, reduce administration costs and enhance user productivity.
- An improved user experience with simplified sign-on and user self-service.

Innovation and Results:
1. Manage risk
   - Centralized system to control user access.
   - Provided a secure way of accessing service providers' applications using federated SSO.
   - Provides reporting capability for auditing, governance and compliance.
2. Improve efficiency
   - Real-time provisioning to target systems.
   - Centralized provisioning system for user management and access controls.
   - Enabling user self-service.
3. Reduce cost
   - Re-using common shared services for provisioning, SSO and access by application, reducing development cost and time.
   - Reducing infrastructure and maintenance cost by decommissioning legacy/redundant IDM services.
   - Reducing the time and effort to implement security functionality in business applications (“onboard” instead of new development).

ETS was able to fold in new and evolving requirements in addition to the initial stated goals, realizing quick ROI and successfully meeting business objectives. Congratulations to the winners once again. We will be sure to bring you more from these Innovation Award winners over the next few months.


  • Webcor Builders Coordinates Construction Schedules and Mitigates Potential Delays More Efficiently with Integrated Project Management

    - by Sylvie MacKenzie, PMP
With more than 40 years of commercial construction experience, Webcor Builders is a leading builder of distinguished, high-profile projects, including high-rise condominiums and hotels, laboratories, healthcare centers, and public works projects. Webcor is also known for its award-winning concrete, interior construction, historic restoration, and seismic renovation work. The company has completed more than 50 million square feet of projects to date.

Considering the variety and complexity of the construction projects Webcor undertakes, an integrated project management solution is critical to ensuring optimal efficiency and completing client projects on time and on budget. The company previously used a number of scheduling systems for its various building projects. These packages provided different levels of schedule detail and required schedulers, engineers, and other employees to learn multiple systems. From an IT cost and complexity perspective, the company had to manage multiple scheduling systems and pay for multiple sets of licenses. The company looked to standardize on an enterprise project management system, and selected Oracle’s Primavera P6 Enterprise Project Portfolio Management.

Webcor uses the solution’s advanced capabilities to schedule complex projects, analyze delays, model and propose multiple scenarios to demonstrate and mitigate delays and cost overruns, and process that information efficiently to deliver the scheduling precision that public and private projects require. In fact, the solution was instrumental in helping the company’s expansion into public sector projects during the recent economic downturn; with Primavera P6 in place, the company can deliver the precise scheduling and milestone reporting required for large public projects.

The solution is instrumental in managing the high-profile University of California, Berkeley Memorial Stadium project. Webcor was hired as construction manager and general contractor for the stadium renovation project, which is a fast-paced project located near the seismically active Hayward Fault Zone. Due to the University of California’s football schedule, meeting the University’s deadline for the coming season placed Webcor in a situation where risk awareness and early warning of issues would be paramount. Webcor and the extended project team needed a solution that could instantly analyze alternate scenarios to mitigate potential delays; Primavera would deliver those answers. The team also needed to enable multiple stakeholders to use an internet-based platform to access the schedule from various locations, and to model complicated sequencing requirements where swift decisions would be made to keep the project on track. The schedule is an integral part of Webcor’s construction management process for the stadium project.

Rather than providing the client with the industry-standard monthly update, Webcor updates the critical path method (CPM) schedule on a weekly basis. The project team also reviews the schedule and updates it weekly to confirm that progress and forecasted performance are accurate. Hired by the University for its ability to deliver in high-risk environments, the Webcor team was hit recently with a design supplement that could have added up to 70 days to the project. Using Oracle Primavera P6, the team sprang into action analyzing multiple “what if” scenarios to review mitigation means and methods. Determined to make sure the Bears could take the field in the coming season, the project team nearly eliminated the impact with their creative analysis in working the schedule. The total time from the issuance of the final design supplement to an agreed mitigation response was less than one week; leveraging the Oracle Primavera solution, Webcor was able to deliver superior customer value.

With the ability to efficiently manage projects and schedules, Webcor can ensure it completes its projects on time and on budget, as well as inform clients about what changes to plans will mean in terms of delays and additional costs.

Read the complete customer case study at: http://www.oracle.com/us/corporate/customers/customersearch/webcor-builders-1-primavera-ss-1639886.html


  • OpenGL 3 and the Radeon HD 4850x2

    - by rotard
A while ago, I picked up a copy of the OpenGL SuperBible fifth edition and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. Making things more challenging, I am primarily a .NET developer, so I was working in Mono with the OpenTK OpenGL wrapper. On my laptop, I put together a program that let the user walk around a simple landscape using a couple of shaders that implemented per-vertex coloring and lighting and texture mapping. Everything was working brilliantly until I ran the same program on my desktop. Disaster! Nothing would render!

I have chopped my program down to the point where the camera sits near the origin, pointing at the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop, coloring, lighting, texturing and all, but the desktop renders a small distorted non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured. I suspect the graphics card is at fault, because I get the same result whether I am booted into Ubuntu 10.10 or Win XP.

I did find that if I pare the vertex shader down to ONLY outputting the positional data, and the fragment shader to ONLY outputting a solid color (white), the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader), the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out, so you can get an idea what I was trying to do. I'm a noob at GLSL, so the code could probably be a lot better.

My laptop is an old Lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card running Ubuntu 10.10. My desktop has an i7 with a Radeon HD 4850 x2 (single card, dual GPU) from Sapphire, dual booting into Ubuntu 10.10 and Windows XP. The problem occurs in both XP and Ubuntu. Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2?

string vertexShaderSource = @"
#version 330

precision highp float;

uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
//uniform mat4 normal_matrix;
//uniform mat4 cmv_matrix;     //Camera modelview. Light sources are transformed by this matrix.
//uniform vec3 ambient_color;
//uniform vec3 diffuse_color;
//uniform vec3 diffuse_direction;

in vec4 in_position;
in vec4 in_color;
//in vec3 in_normal;
//in vec3 in_tex_coords;

out vec4 varyingColor;
//out vec3 varyingTexCoords;

void main(void)
{
    //Get surface normal in eye coordinates
    //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0);

    //Get vertex position in eye coordinates
    //vec4 vPosition4 = modelview_matrix * vec4(in_position, 0);
    //vec3 vPosition3 = vPosition4.xyz / vPosition4.w;

    //Get vector to light source in eye coordinates
    //vec3 lightVecNormalized = normalize(diffuse_direction);
    //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz);

    //Dot product gives us diffuse intensity
    //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir.xyz));

    //Multiply intensity by diffuse color, force alpha to 1.0
    //varyingColor.xyz = in_color * diff * diffuse_color.xyz;

    varyingColor = in_color;
    //varyingTexCoords = in_tex_coords;

    gl_Position = projection_matrix * modelview_matrix * in_position;
}";

string fragmentShaderSource = @"
#version 330
//#extension GL_EXT_gpu_shader4 : enable

precision highp float;

//uniform sampler2DArray colorMap;

//in vec4 varyingColor;
//in vec3 varyingTexCoords;

out vec4 out_frag_color;

void main(void)
{
    out_frag_color = vec4(1,1,1,1);
    //out_frag_color = varyingColor;
    //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, varyingTexCoords.st);
    //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, vec3(varyingTexCoords.st, 0));
    //out_frag_color = vec4(varyingColor, 1) * texture2DArray(colorMap, varyingTexCoords);
}";

Note that in this code the color data is accepted but not actually used. The geometry is output the same (wrong) way whether the fragment shader uses varyingColor or not. Only if I comment out the line varyingColor = in_color; does the geometry output correctly. Originally the shaders took in vec3 inputs; I only modified them to take vec4s while troubleshooting.
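For completeness, here is a trimmed OpenTK-style sketch of how shaders like these are typically compiled and linked, with the attribute locations bound explicitly before linking; this is illustrative only, not my exact program, and the attribute indices are assumptions that must match whatever the vertex buffers feed:

using System;
using OpenTK.Graphics.OpenGL;

static class ShaderUtil
{
    public static int CompileShader(ShaderType type, string source)
    {
        int shader = GL.CreateShader(type);
        GL.ShaderSource(shader, source);
        GL.CompileShader(shader);
        Console.WriteLine(GL.GetShaderInfoLog(shader));   // surface any driver warnings/errors
        return shader;
    }

    public static int BuildProgram(string vertexShaderSource, string fragmentShaderSource)
    {
        int vs = CompileShader(ShaderType.VertexShader, vertexShaderSource);
        int fs = CompileShader(ShaderType.FragmentShader, fragmentShaderSource);

        int program = GL.CreateProgram();
        GL.AttachShader(program, vs);
        GL.AttachShader(program, fs);

        // Pin the attribute slots before linking so every driver agrees
        // on which vertex buffer feeds which "in" variable.
        GL.BindAttribLocation(program, 0, "in_position");
        GL.BindAttribLocation(program, 1, "in_color");

        GL.LinkProgram(program);
        Console.WriteLine(GL.GetProgramInfoLog(program)); // check for link errors
        return program;
    }
}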


  • ASP.NET Multi-Select Radio Buttons

    - by Ajarn Mark Caldwell
“HERESY!” you say, “Radio buttons are for single-select items!  If you want multi-select, use checkboxes!”  Well, I would agree, and that is why I consider this a significant bug that ASP.NET developers need to be aware of.  Here’s the situation.

If you use ASP:RadioButton controls on your WebForm, then you know that in order to get them to behave properly, that is, to define a group in which only one of them can be selected by the user, you use the GroupName attribute and set the same value on each one.  For example:

1: <asp:RadioButton runat="server" ID="rdo1" GroupName="GroupName" Checked="true" />
2: <asp:RadioButton runat="server" ID="rdo2" GroupName="GroupName" />

With this configuration, the controls will render to the browser as HTML input tags of type radio, and when the user selects one, the browser will automatically deselect the other one so that only one can be selected (checked) at any time. BUT, if you use server-side code to manipulate the Checked attribute of these controls, it is possible to set them both to believe that they are checked.

1: rdo2.Checked = true; // Does NOT change the Checked attribute of rdo1 to false.

As long as you remain in server-side code, the system will believe that both radio buttons are checked (you can verify this in the debugger).  Therefore, if you later have code that looks like this

1: if (rdo1.Checked)
2: {
3:     DoSomething1();
4: }
5: else
6: {
7:     DoSomethingElse();
8: }

then it will always evaluate the condition to true and take the first action.  The good news is that if you return to the client with multiple radio buttons checked, the browser tries to clean that up for you and make only one of them really checked.  It turns out that the last one on the screen wins, so in this case you will in fact end up with rdo2 checked, and if you then make a trip to the server to run the code above, it will appear to be working properly.  However, if your page initializes with rdo2 checked and in code you set rdo1 to checked also, then when you go back to the client rdo2 will remain checked, again because it is the last one and the last one checked “wins”.

And this gets even uglier if you ever set these radio buttons to be disabled.  In that case, although the client browser renders the radio buttons as though only one of them is checked, the system actually retains the value of both of them as checked, and your next trip to the server will really frustrate you because the browser showed rdo2 as checked, but your DoSomething1() routine keeps getting executed.

The following is sample code you can put into any WebForm to test this yourself.

1: <body>
2: <form id="form1" runat="server">
3: <h1>Radio Button Test</h1>
4: <hr />
5: <asp:Button runat="server" ID="cmdBlankPostback" Text="Blank Postback" />
6: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
7: <asp:Button runat="server" ID="cmdEnable" Text="Enable All" OnClick="cmdEnable_Click" />
8: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
9: <asp:Button runat="server" ID="cmdDisable" Text="Disable All" OnClick="cmdDisable_Click" />
10: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
11: <asp:Button runat="server" ID="cmdTest" Text="Test" OnClick="cmdTest_Click" />
12: <br /><br /><br />
13: <asp:RadioButton ID="rdoG1R1" GroupName="Group1" runat="server" Text="Group 1 Radio 1" Checked="true" /><br />
14: <asp:RadioButton ID="rdoG1R2" GroupName="Group1" runat="server" Text="Group 1 Radio 2" /><br />
15: <asp:RadioButton ID="rdoG1R3" GroupName="Group1" runat="server" Text="Group 1 Radio 3" /><br />
16: <hr />
17: <asp:RadioButton ID="rdoG2R1" GroupName="Group2" runat="server" Text="Group 2 Radio 1" /><br />
18: <asp:RadioButton ID="rdoG2R2" GroupName="Group2" runat="server" Text="Group 2 Radio 2" Checked="true" /><br />
19:
20: </form>
21: </body>

1: protected void Page_Load(object sender, EventArgs e)
2: {
3:
4: }
5:
6: protected void cmdEnable_Click(object sender, EventArgs e)
7: {
8:     rdoG1R1.Enabled = true;
9:     rdoG1R2.Enabled = true;
10:     rdoG1R3.Enabled = true;
11:     rdoG2R1.Enabled = true;
12:     rdoG2R2.Enabled = true;
13: }
14:
15: protected void cmdDisable_Click(object sender, EventArgs e)
16: {
17:     rdoG1R1.Enabled = false;
18:     rdoG1R2.Enabled = false;
19:     rdoG1R3.Enabled = false;
20:     rdoG2R1.Enabled = false;
21:     rdoG2R2.Enabled = false;
22: }
23:
24: protected void cmdTest_Click(object sender, EventArgs e)
25: {
26:     rdoG1R2.Checked = true;
27:     rdoG2R1.Checked = true;
28: }
29:
30: protected void Page_PreRender(object sender, EventArgs e)
31: {
32:
33: }

After you copy the markup and code-behind into the appropriate files, I recommend you set a breakpoint on Page_Load as well as cmdTest_Click, and add each of the radio button controls to the Watch list so that you can walk through the code and see exactly what is happening.  Use the Blank Postback button to cause a postback to the server so you can inspect things without making any changes.

The moral of the story is: if you do server-side manipulation of the Checked status of RadioButton controls, then you need to set ALL of the controls in a group whenever you want to change one.
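To make that moral concrete, here is a minimal sketch of one way to keep a group consistent from the code-behind; the helper name is my own and is not part of the ASP.NET API, and it assumes it lives in the same code-behind class as the sample above:

// Sketch: when checking one RadioButton from server-side code, explicitly
// clear every other button in the same group so the server-side state
// matches what the browser will eventually render.
protected void CheckExclusively(RadioButton target, params RadioButton[] group)
{
    foreach (RadioButton rdo in group)
    {
        rdo.Checked = (rdo == target);   // true only for the chosen button
    }
}

// Example usage inside cmdTest_Click, mirroring the sample markup above:
// CheckExclusively(rdoG1R2, rdoG1R1, rdoG1R2, rdoG1R3);
// CheckExclusively(rdoG2R1, rdoG2R1, rdoG2R2);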

    Read the article

  • Multiple Audio Issues

    - by Lerp
    I am having issues with my audio on Ubuntu 12.04, I will try and give as much detail as possible so sorry if there's too much detail. The Problem Audio plays from both speakers and headphone regardless of what connector I choose and regardless of the profile I use. The microphone is constantly being played through headphones & speakers. The headphone audio is extremely quiet but plays from both ears when I select "Headphones" for the connector in Sound Settings. The headphone audio only plays from one ear and is quiet (but not as quiet as above) when I select "Analogue Output" for the connector in Sound Settings. I can only select "Headphones" as the connector in Sound Settings if I set the profile to either "Analogue Stereo Output/Duplex", all others only allow me to choose "Analogue Output" for the connector. Despite the headphone sound issues, the speaker sound is fine apart from the fact that I am not able to select which output is used, they just both play. My headphone and microphone are plugged into the front and my speakers are plugged into the back. What I have tried I have put everything in alsamixer to 100 apart from "Front Mic Boost" which I have set to 0. Command Output aplay -l **** List of PLAYBACK Hardware Devices **** card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog] Subdevices: 0/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: AD198x Digital [AD198x Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 2: AD198x Headphone [AD198x Headphone] Subdevices: 1/1 Subdevice #0: subdevice #0 arecord -l **** List of CAPTURE Hardware Devices **** card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog] Subdevices: 2/3 Subdevice #0: subdevice #0 Subdevice #1: subdevice #1 Subdevice #2: subdevice #2 cat /proc/asound/cards 0 [Intel ]: HDA-Intel - HDA Intel HDA Intel at 0xf7ff8000 irq 70 cat /proc/asound/modules 0 snd_hda_intel cat /proc/asound/card*/codec* | grep "Codec" Codec: Analog Devices AD1989B cat /etc/modprobe.d/alsa-base.conf # autoloader aliases install sound-slot-0 /sbin/modprobe snd-card-0 install sound-slot-1 /sbin/modprobe snd-card-1 install sound-slot-2 /sbin/modprobe snd-card-2 install sound-slot-3 /sbin/modprobe snd-card-3 install sound-slot-4 /sbin/modprobe snd-card-4 install sound-slot-5 /sbin/modprobe snd-card-5 install sound-slot-6 /sbin/modprobe snd-card-6 install sound-slot-7 /sbin/modprobe snd-card-7 # Cause optional modules to be loaded above generic modules install snd /sbin/modprobe --ignore-install snd $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-ioctl32 ; /sbin/modprobe --quiet --use-blacklist snd-seq ; } # # Workaround at bug #499695 (reverted in Ubuntu see LP #319505) install snd-pcm /sbin/modprobe --ignore-install snd-pcm $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-pcm-oss ; : ; } install snd-mixer /sbin/modprobe --ignore-install snd-mixer $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-mixer-oss ; : ; } install snd-seq /sbin/modprobe --ignore-install snd-seq $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; /sbin/modprobe --quiet --use-blacklist snd-seq-oss ; : ; } # install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; : ; } # Cause optional modules to be loaded above sound card driver modules install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist 
snd-emu10k1-synth ; } install snd-via82xx /sbin/modprobe --ignore-install snd-via82xx $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq ; } # Load saa7134-alsa instead of saa7134 (which gets dragged in by it anyway) install saa7134 /sbin/modprobe --ignore-install saa7134 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist saa7134-alsa ; : ; } # Prevent abnormal drivers from grabbing index 0 options bt87x index=-2 options cx88_alsa index=-2 options saa7134-alsa index=-2 options snd-atiixp-modem index=-2 options snd-intel8x0m index=-2 options snd-via82xx-modem index=-2 options snd-usb-audio index=-2 options snd-usb-caiaq index=-2 options snd-usb-ua101 index=-2 options snd-usb-us122l index=-2 options snd-usb-usx2y index=-2 # Ubuntu #62691, enable MPU for snd-cmipci options snd-cmipci mpu_port=0x330 fm_port=0x388 # Keep snd-pcsp from being loaded as first soundcard options snd-pcsp index=-2 # Keep snd-usb-audio from beeing loaded as first soundcard options snd-usb-audio index=-2 Hopefully I have provided enough information, I will happily provide anymore information needed. Thank you. Update Reinstalling alsa-base and pulseaudio fixed the headphone issues I was having.
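    For anyone hitting the same symptoms, the reinstall mentioned in the update would normally be done along these lines (a hedged sketch of the usual commands on Ubuntu 12.04, not a transcript of what the poster actually ran):
    sudo apt-get install --reinstall alsa-base pulseaudio
    # optionally clear the per-user PulseAudio state before rebooting,
    # in case an old profile keeps being restored at login
    rm -rf ~/.pulse ~/.pulse-cookie
    sudo reboot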

    Read the article

  • how to label a cuboid using open gl?

    - by usha
    hi this is how my 3dcuboid looks ,i have attached complete code , i want to label this cuboid using different name across sides how is it possible using opengl in android...plz help me out public class MyGLRenderer implements Renderer { Context context; Cuboid rect; private float mCubeRotation; // private static float angleCube = 0; // Rotational angle in degree for cube (NEW) // private static float speedCube = -1.5f; // Rotational speed for cube (NEW) public MyGLRenderer(Context context) { rect = new Cuboid(); this.context = context; } public void onDrawFrame(GL10 gl) { // TODO Auto-generated method stub gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); // Reset the model-view matrix gl.glTranslatef(0.2f, 0.0f, -8.0f); // Translate right and into the screen gl.glScalef(0.8f, 0.8f, 0.8f); // Scale down (NEW) gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f); // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f); // rotate about the axis (1,1,1) (NEW) rect.draw(gl); mCubeRotation -= 0.15f; //angleCube += speedCube; } public void onSurfaceChanged(GL10 gl, int width, int height) { // TODO Auto-generated method stub if (height == 0) height = 1; // To prevent divide by zero float aspect = (float)width / height; // Set the viewport (display area) to cover the entire window gl.glViewport(0, 0, width, height); // Setup perspective projection, with aspect ratio matches viewport gl.glMatrixMode(GL10.GL_PROJECTION); // Select projection matrix gl.glLoadIdentity(); // Reset projection matrix // Use perspective projection GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f); gl.glMatrixMode(GL10.GL_MODELVIEW); // Select model-view matrix gl.glLoadIdentity(); // Reset } public void onSurfaceCreated(GL10 gl, EGLConfig config) { // TODO Auto-generated method stub gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set color's clear-value to black gl.glClearDepthf(1.0f); // Set depth's clear-value to farthest gl.glEnable(GL10.GL_DEPTH_TEST); // Enables depth-buffer for hidden surface removal gl.glDepthFunc(GL10.GL_LEQUAL); // The type of depth testing to do gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); // nice perspective view gl.glShadeModel(GL10.GL_SMOOTH); // Enable smooth shading of color gl.glDisable(GL10.GL_DITHER); // Disable dithering for better performance }} public class Cuboid{ private FloatBuffer mVertexBuffer; private FloatBuffer mColorBuffer; private ByteBuffer mIndexBuffer; private float vertices[] = { //width,height,depth -2.5f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, -1.0f, -2.5f, 1.0f, -1.0f, -2.5f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -2.5f, 1.0f, 1.0f }; private float colors[] = { // R,G,B,A COLOR 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f }; private byte indices[] = { // VERTEX 0,1,2,3,4,5,6,7 REPRESENTATION FOR FACES 0, 4, 5, 0, 5, 1, 1, 5, 6, 1, 6, 2, 2, 6, 7, 2, 7, 3, 3, 7, 4, 3, 4, 0, 4, 7, 6, 4, 6, 5, 3, 0, 1, 3, 1, 2 }; public Cuboid() { ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mVertexBuffer = byteBuf.asFloatBuffer(); mVertexBuffer.put(vertices); mVertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(colors.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mColorBuffer = byteBuf.asFloatBuffer(); mColorBuffer.put(colors); mColorBuffer.position(0); mIndexBuffer = ByteBuffer.allocateDirect(indices.length); 
mIndexBuffer.put(indices); mIndexBuffer.position(0); } public void draw(GL10 gl) { gl.glFrontFace(GL10.GL_CW); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer); gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_COLOR_ARRAY); } } public class Draw3drect extends Activity { private GLSurfaceView glView; // Use GLSurfaceView // Call back when the activity is started, to initialize the view @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); glView = new GLSurfaceView(this); // Allocate a GLSurfaceView glView.setRenderer(new MyGLRenderer(this)); // Use a custom renderer this.setContentView(glView); // This activity sets to GLSurfaceView } // Call back when the activity is going into the background @Override protected void onPause() { super.onPause(); glView.onPause(); } // Call back after onPause() @Override protected void onResume() { super.onResume(); glView.onResume(); } }

    Read the article

  • How do I rotate my monitor using xorg?

    - by user1106405
    I have just installed KUbuntu 12.10, and I am attempting to rotate my monitor 90 deg to the left. When I add the option to rotate, the monitor seems to ignore the directive. I'm currently using dual 02:00.0 VGA compatible controller: NVIDIA Corporation GF104 [GeForce GTX 460] (rev a1) 03:00.0 VGA compatible controller: NVIDIA Corporation GF104 [GeForce GTX 460] (rev a1) and NVidia driver version 310 My xorg.conf is as follows: # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 304.51 (buildd@komainu) Fri Oct 12 12:53:49 UTC 2012 # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 310.14 ([email protected]) Tue Oct 9 13:04:01 PDT 2012 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 1280 0 Screen 1 "Screen1" RightOf "Screen0" Screen 2 "Screen2" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "1" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Samsung SyncMaster" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 60.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor1" VendorName "Unknown" ModelName "DELL 1908WFP" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 EndSection Section "Monitor" Identifier "Monitor2" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 460" BusID "PCI:2:0:0" Screen 0 EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 460" BusID "PCI:2:0:0" Screen 1 EndSection Section "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 460" BusID "PCI:3:0:0" EndSection Section "Screen" # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: 1920x1200 +0+0; DFP-0: 1920x1200_60 +0+0; DFP-0: 1600x1200 +0+0; DFP-0: 1600x1200_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select @1920x1080 +0+0; DFP-0: nvidia-auto-select @1920x720 +0+0" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: 1920x1200 +0+0; DFP-0: 1920x1200_60 +0+0; DFP-0: 1600x1200 +0+0; DFP-0: 1600x1200_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-2: nvidia-auto-select +0+0" 
Option "Rotate" "left" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen2" Device "Device2" Monitor "Monitor2" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Extensions" Option "Composite" "Enable" EndSection Edit: If I delete the xorg.conf and reboot, I am able to rotate my monitor, however, my third monitor is not recognized: Screen 0: minimum 8 x 8, current 3360 x 1200, maximum 16384 x 16384 DVI-I-0 disconnected (normal left inverted right x axis y axis) DVI-I-1 disconnected (normal left inverted right x axis y axis) DVI-I-2 connected 1920x1200+0+0 (normal left inverted right x axis y axis) 518mm x 324mm 1920x1200 60.0*+ 1600x1200 60.0 1280x1024 60.0 1280x960 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 HDMI-0 disconnected (normal left inverted right x axis y axis) DVI-I-3 connected 1440x900+1920+0 (normal left inverted right x axis y axis) 408mm x 255mm 1440x900 59.9*+ 75.0 1280x1024 75.0 60.0 1280x800 59.8 1152x864 75.0 1024x768 75.0 70.1 60.0 800x600 75.0 72.2 60.3 56.2 640x480 75.0 72.8 59.9

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time. We’re probably all familiar with the concept of “Done Done” that represents what *actually* being done a feature means. Not just when a developer says he’s done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of “Done Done”, they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done. The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoD’s for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoD’s. Here’s some examples: Code Reviews StyleCop FxCop User Manuals Updated Architecture Docs Updated Tested by QA Tested by UAT Installers Created Support Knowledge Base Updated Deployment Instructions (for Ops) written Automated Unit Tests Run Automated Integration Tests Run Then we start by arranging these activities into the place they occur today (e.g. Do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team, is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down to User Story immediately, but as a team we figure out what we think we’re capable of moving down to the Sprint cycle, and Story cycle immediately, and that becomes our starting DoD’s. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating User Manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision that, and strive to keep driving the activities down closer to the User Story cycle s they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time. 
Sure there’s other aspects to maturity outside of this, but it’s a great technique, that’s easy to visualize, to drive agility into the team. Just keep moving those activities (aka “gates”) down the board from Release->Sprint->Story. I’ll try to give an example of what a recent client of mine had for their DoD’s (this is from memory, so probably not 100% accurate): Release Create/Update deployment Instructions For Ops Instructional Videos Updated Run manual regression test suite UAT Testing In this case that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn’t break anything outside our system Sprint Deploy to UAT Environment But not necessarily actually request UAT testing occur User Guides updated Sprint Features Video Created In this case we decided to create a video each sprint showing off the progress (video version of Sprint Demo) User Story Manual Test scripts developed and run Tested by BA Deployed in shared QA environment Using automated deployment process Peer Code Review Code Check-In Compiled (warning-free) Passes StyleCop Passes FxCop Create installer packages Run Automated Tests Run Automated Integration Tests PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end-date rolls around (time-boxed), isn’t a DoD on a sprint meaningless – it’s done on the end-date regardless of whether those other activities are complete or not? My answer is that while that statement is true – the sprint is done regardless when the end date rolls around – if the DoD activities haven’t been completed I would consider the Sprint a failure (similar to not completing what was committed/planned – failure may be too strong a word but you get the idea). In the Retrospective that will become an agenda item to discuss and understand why we weren’t able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • Experience Oracle Enterprise Data Quality at OpenWorld 2012

    - by Mala Narasimharajan
    This year Enterprise Data Quality features sessions designed to cover topics above and beyond product strategy and roadmap. Specifically, we have planned sessions around data governance and how EDQ enables data acquisition, migration, and integration. In addition, check out our hands-on lab to see for yourself the new enhancements and integrations to EDQ. Finally, no OpenWorld is complete without fully exploring the DemoGrounds – come and learn how enterprise data quality is critical to enterprise success and fit-for-purpose data.
    SESSIONS
    Mon Oct 1, 1:45 PM - 2:45 PM: Oracle Enterprise Data Quality: Product Overview and Roadmap (Moscone West – 3006)
    Wed Oct 3, 1:15 PM - 2:15 PM: Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform (Moscone West – 3000)
    Thurs Oct 4, 12:45 PM - 1:45 PM: Data Acquisition, Migration, and Integration with the Oracle Enterprise Data Quality Platform (Moscone West – 3005)
    HANDS-ON LAB
    Mon Oct 1, 4:45 PM - 5:45 PM: Introduction to Oracle Enterprise Data Quality Application (Marriott Marquis - Salon ½)
    DEMO
    Trusted Data with Oracle Enterprise Data Quality (Moscone South, Right - S-243)

    Read the article

  • My rhythm game runs choppy even with high frame rate

    - by felipedrl
    I'm coding a rhythm game and the game runs smoothly with uncapped fps. But when I try to cap it around 60 the game updates in little chunks, like hiccups, as if it was skipping frames or at a very low frame rate. The reason I need to cap frame rate is because in some computers I tested, the fps varies a lot (from ~80 - ~250 fps) and those drops are noticeable and degrade response time. Since this is a rhythm game this is very important. This issue is driving me crazy. I've spent a few weeks already on it and still can't figure out the problem. I hope someone more experienced than me could shed some light on it. I'll try to put here all the hints I've tried along with two pseudo codes for game loops I tried, so I apologize if this post gets too lengthy. 1st GameLoop: const uint UPDATE_SKIP = 1000 / 60; uint nextGameTick = SDL_GetTicks(); while(isNotDone) { // only false when a QUIT event is generated! if (processEvents()) { if (SDL_GetTicks() > nextGameTick) { update(UPDATE_SKIP); render(); nextGameTick += UPDATE_SKIP; } } } 2nd Game Loop: const uint UPDATE_SKIP = 1000 / 60; while (isNotDone) { LARGE_INTEGER startTime; QueryPerformanceCounter(&startTime); // process events will return false in case of a QUIT event processed if (processEvents()) { update(frameTime); render(); } LARGE_INTEGER endTime; do { QueryPerformanceCounter(&endTime); frameTime = static_cast<uint>((endTime.QuadPart - startTime.QuadPart) * 1000.0 / frequency.QuadPart); } while (frameTime < UPDATE_SKIP); } [1] At first I thought it was a timer resolution problem. I was using SDL_GetTicks, but even when I switched to QueryPerformanceCounter, supposedly less granular, I saw no difference. [2] Then I thought it could be due to a rounding error in my position computation and since game updates are smaller in high FPS that would be less noticeable. Indeed there is an small error, but from my tests I realized that it is not enough to produce the position jumps I'm getting. Also, another intriguing factor is that if I enable vsync I'll get smooth updates @60fps regardless frame cap code. So why not rely on vsync? Because some computers can force a disable on gfx card config. [3] I started printing the maximum and minimum frame time measured in 1sec span, in the hope that every a few frames one would take a long time but still not enough to drop my fps computation. It turns out that, with frame cap code I always get frame times in the range of [16, 18]ms, and still, the game "does not moves like jagger". [4] My process' priority is set to HIGH (Windows doesn't allow me to set REALTIME for some reason). As far as I know there is only one thread running along with the game (a sound callback, which I really don't have access to it). I'm using AudiereLib. I then disabled Audiere by removing it from the project and still got the issue. Maybe there are some others threads running and one of them is taking too long to come back right in between when I measured frame times, I don't know. Is there a way to know which threads are attached to my process? [5] There are some dynamic data being created during game run. But It is a little bit hard to remove it to test. Maybe I'll have to try harder this one. Well, as I told you I really don't know what to try next. Anything, I mean, anything would be of great help. What bugs me more is why at 60fps & vsync enabled I get an smooth update and at 60fps & no vsync I don't. Is there a way to implement software vsync? I mean, query display sync info? Thanks in advance. 
I appreciate the ones that got this far and yet again I apologize for the long post. Best Regards from a fellow coder.
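    One approach that often helps in this situation is a fixed-timestep loop with an accumulator, so the simulation always advances in constant increments while rendering runs as often as the display allows. The sketch below reuses the QueryPerformanceCounter timing from the loops above; updateGame and renderGame are placeholder names, and the interpolation factor passed to renderGame is an assumption about how drawing is organized:
    const double UPDATE_STEP = 1000.0 / 60.0;   // fixed simulation step in milliseconds
    LARGE_INTEGER frequency, prev, now;
    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&prev);
    double accumulator = 0.0;
    while (isNotDone)
    {
        QueryPerformanceCounter(&now);
        accumulator += (now.QuadPart - prev.QuadPart) * 1000.0 / frequency.QuadPart;
        prev = now;
        processEvents();
        // Advance the simulation in fixed steps, independent of the render rate.
        while (accumulator >= UPDATE_STEP)
        {
            updateGame(UPDATE_STEP);
            accumulator -= UPDATE_STEP;
        }
        // Draw once per loop, passing how far we are into the next step so the
        // renderer can interpolate positions and keep motion smooth.
        renderGame(accumulator / UPDATE_STEP);
    }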

    Read the article

  • The Best BPM Journey: More Exciting Destinations with Process Accelerators

    - by Cesare Rotundo
    Oracle Open World (OOW) earlier this month has been a great occasion to discuss with our BPM customers. It was interesting to hear definite patterns emerging from those conversations: “BPM is a journey”, “experiences to share”, “our organization now understands what BPM is”, and my favorite (with some caveats): “BPM is like wine tasting, once you start, you want to try more”. These customers have started their journey, climbed up the learning curve, and reached a vantage point that allows them to see their next BPM destination. They see the next few processes they are going to tackle and improve with BPM. These processes/destinations target both horizontal processes where BPM replaces or coordinates manual activities, and critical industry processes that the company needs to improve to compete and deliver increasing value. Each new destination generates value, allowing the organization to reduce the cost of manual processes that were not supported by apps/custom development, and increase efficiency of end-to-end processes partially covered by apps/custom dev. The question we wanted to answer is how to help organizations experience deeper success with BPM, by increasing their awareness of the potential for reaching new targets, and equipping them with the right tools. We decided that we needed to identify destinations, and plot routes to show the fastest path to those destinations. In the end we want to enable customers to reach “Process Excellence”: continuously set new targets and consistently and efficiently reach them. The result is Oracle Process Accelerators (PA), solutions built using the rich functionality in Oracle BPM Suite. PAs offers a rapidly expanding list of exciting destinations. Our launch of the latest installment of Process Accelerators at Oracle Open World includes new Industry-focused solutions such as Public Sector Incident Reporting and Financial Services Loan Origination, and improved other horizontal PAs, including Travel Request Management, Document Routing and Approval, and Internal Service Requests. Just before OOW we had extended the Oracle deployment of Travel Request Management, riding the enthusiastic response from early adopters among travelers (employees), management and support (approvers). “Getting there first” means being among the first to extract value from the PA approach, while acquiring deeper insights into the customers’ perspective. This is especially noteworthy when it comes to PAs, a set of solutions designed to be quickly deployed and iteratively improved by customers. The OOW launch has generated immediate feedback from customers, non-customers, analysts, and partners. 
They all confirmed that both Business and IT at organizations benefit from PAs when it comes to exploring the potential for BPM to improve their business processes. PAs help customers visualize what can be done with BPM, and PAs are made to be extended: you can see your destination, change the path to fit your needs, and deploy. We're discovering new destinations/processes that the market wants us to support, generic enough across industries and within industries. We'll keep on building sets of requirements, deliver functional design, construct solutions using Oracle BPM, and test them not only functionally but for performance, scalability, clustering, making them robust, product-quality. Delivering BPM solutions with product-grade quality is the equivalent of following a tried-and-tested path on a map. Do you know of existing destinations in your industry? If yes, we can draw a path to innovative processes together.

    Read the article

  • Sass interface in HTML6 for upload files.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/sass-interface-in-html6-for-upload-files.aspx[This post is about experiment & imagination] From Windows XP (ever last OS I tried) I have seen a feature that is about send file to pen drive and make shortcut on Desktop. In XP, Win7 (Win8 have this too, not removed) just select the file right click > send to and you can send this file to many places. In my menu it’s show me Skype because I have installed it. this skype confirm that we can add our own app here to make it more easy for user to send file in our app. Nowadays Many people use Cloud or online site to store the file. In case of html5 drag and drop you need to have site opened and have opened that page which is about file upload. You need to select all  and drag and drop. after drag and drop file is simply uploaded to server and site show you on list (if no error happen). but this file upload is seriously not worthy since I have to open the site when I do this operation.   Through this post I want to describe a feature that can make this thing better.  This API is simply called SASS FILE UPLOAD API Through This API when you surf the site and come into file upload page then the page will tell you that we also have SASS FILE API support. Enable it for better result.   How this work This API feature are activated on 2 basis. 1. Feature are disabled by default on site (or you can change it if it’s not) 2. This API allow specific site to upload the files. Files upload may have some rule. For example (minimum or maximum size of file to uploaded, which format the site allowed you to upload). In case of resume site you will be allowed to use .doc (according to code of site)   How browser recognize that Site have SASS service. In HTML source of  the site, the code have a meta tag similar to this <meta name=”sass-upload-api” path=”/upload.json”/> Remember that upload.json is one file that has define the value of many settings {   "cookie_name": "ck_file",   "maximum_allowed_perday": 24,   "allowed_file_extensions","*.png,*.jpg,*.jpeg,*.gif",   "method": [       {           "get": "file/get",           "routing":"/file/get/{fileName}"       },       {           "post": "file/post",           "routing":"/file/post/{fileName}"       },       {           "delete": "file/delete",           "routing":"/file/delete/{fileName}"       },         {           "put": "file/put",           "routing":"/file/put/{fileName}"       },        {           "all": "file/all",           "routing":"/file/all/{fileName}"       }    ] } cookie name is simply a cookie which should be stored in browser and define in json. we define the cookie_name so we can easily share then with service in Windows system. This cookie will be accessible with the service so it’s security based safe. other cookie will not be shared.   The cookie will be post,put, get from this location. The all location will be simply about showing a whole list of file. This will gave a treeview kind of json to show the directories on sever.   for example example.com if you have activated the API with this site then you will seen a send to option in your explorer.exe when you send you will got a windows open which folder you want to use to send the file. The windows will also describe the limit and how much you can upload. This thing never required site to opened. When you upload the file this will be uploaded through FTP protocol. FTP protocol are better for performance.   How this API make thing faster. 
Suppose you want to ask a question and post an image with it. You could send the file ahead of time, and when you open stackoverflow.com the site would simply ask which of your already-uploaded files to attach to the question you are writing. A second use case is for people who use cloud apps.   There is no need for drag and drop anymore, and no need to have the site open either. This idea is still at the experimental stage; I will update this post when I make some progress on this API.

    Read the article

  • Oracle Cloud Applications: The Right Ingredients Baked In

    - by yaldahhakim
    Eggs, flour, milk, and sugar. The magic happens when you mix these ingredients together. The same goes for the hottest technologies fast changing how IT impacts our organizations today: cloud, social, mobile, and big data. By themselves they’re pretty good; combining them with a great recipe is what unlocks real transformation power. Choosing the right cloud can be very similar to choosing the right cake. First consider comparing the core ingredients that go into baking a cake and the core design principles in building a cloud-based application. For instance, if flour is the base ingredient of a cake, then rich functionality that spans complete business processes is the base of an enterprise-grade cloud. Cloud computing is more than just consuming an "application as service", and having someone else manage it for you. Rather, the value of cloud is about making your business more agile in the marketplace, and shortening the time it takes to deliver and adopt new innovation. It’s also about improving not only the efficiency at which we communicate but the actual quality of the information shared as well. Data from different systems, like ingredients in a cake, must also be blended together effectively and evaluated through a consolidated lens. When this doesn’t happen, for instance when data in your sales cloud doesn't seamlessly connect with your order management and other “back office” applications, the speed and quality of information can decrease drastically. It’s like mixing ingredients in a strainer with a straw – you just can’t bring it all together without losing something. Mixing ingredients is similar to bringing clouds together, and co-existing cloud applications with traditional on premise applications. This is where a shared services platform built on open standards and Service Oriented Architecture (SOA) is critical. It’s essentially a cloud recipe that calls for not only great ingredients, but also ingredients you can get locally or most likely already have in your kitchen (or IT shop.) Open standards is the best way to deliver a cost effective, durable application integration strategy – regardless of where your apps are deployed. It’s also the best way to build your own cloud applications, or extend the ones you consume from a third party. Just like using standard ingredients and tools you already have in your kitchen, a standards based cloud enables your IT resources to ensure a cloud works easily with other systems. Your IT staff can also make changes using tools they are already familiar with. 
Or even more ideal, enable business users to actually tailor their experience without having to call upon IT for help at all. This frees IT resources to focus more on developing new innovative services for the organization vs. run and maintain. Carrying the cake analogy forward, you need to add all the ingredients in before you bake it. The same is true with a modern cloud. To harness the full power of cloud, you can’t leave out some of the most important ingredients and just layer them on top later. This is what a lot of our niche competitors have done when it comes to social, mobile, big data and analytics, and other key technologies impacting the way we do business. The transformational power of these technology trends comes from having a strategy from the get-go that combines them into a winning recipe, and delivers them in a unified way. In looking at ways Oracle’s cloud is different from other clouds – not only is breadth of functionality rich across functional pillars like CRM, HCM, ERP, etc. but it embeds social, mobile, and rich intelligence capabilities where they make the most sense across business processes. This strategy enables the Oracle Cloud to uniquely deliver on all three of these dimensions to help our customers unlock the full power of these transformational technologies.

    Read the article

  • iOS Support with Windows Azure Mobile Services – now with Push Notifications

    - by ScottGu
    A few weeks ago I posted about a number of improvements to Windows Azure Mobile Services. One of these was the addition of an Objective-C client SDK that allows iOS developers to easily use Mobile Services for data and authentication.  Today I'm excited to announce a number of improvement to our iOS SDK and, most significantly, our new support for Push Notifications via APNS (Apple Push Notification Services).  This makes it incredibly easy to fire push notifications to your iOS users from Windows Azure Mobile Service scripts. Push Notifications via APNS We've provided two complete tutorials that take you step-by-step through the provisioning and setup process to enable your Windows Azure Mobile Service application with APNS (Apple Push Notification Services), including all of the steps required to configure your application for push in the Apple iOS provisioning portal: Getting started with Push Notifications - iOS Push notifications to users by using Mobile Services - iOS Once you've configured your application in the Apple iOS provisioning portal and uploaded the APNS push certificate to the Apple provisioning portal, it's just a matter of uploading your APNS push certificate to Mobile Services using the Windows Azure admin portal: Clicking the “upload” within the “Push” tab of your Mobile Service allows you to browse your local file-system and locate/upload your exported certificate.  As part of this you can also select whether you want to use the sandbox (dev) or production (prod) Apple service: Now, the code to send a push notification to your clients from within a Windows Azure Mobile Service is as easy as the code below: push.apns.send(deviceToken, {      alert: 'Toast: A new Mobile Services task.',      sound: 'default' }); This will cause Windows Azure Mobile Services to connect to APNS (Apple Push Notification Service) and send a notification to the iOS device you specified via the deviceToken: Check out our reference documentation for full details on how to use the new Windows Azure Mobile Services apns object to send your push notifications. Feedback Scripts An important part of working with any PNS (Push Notification Service) is handling feedback for expired device tokens and channels. This typically happens when your application is uninstalled from a particular device and can no longer receive your notifications. With Windows Notification Services you get an instant response from the HTTP server.  Apple’s Notification Services works in a slightly different way and provides an additional endpoint you can connect to poll for a list of expired tokens. As with all of the capabilities we integrate with Mobile Services, our goal is to allow developers to focus more on building their app and less on building infrastructure to support their ideas. Therefore we knew we had to provide a simple way for developers to integrate feedback from APNS on a regular basis.  This week’s update now includes a new screen in the portal that allows you to optionally provide a script to process your APNS feedback – and it will be executed by Mobile Services on an ongoing basis: This script is invoked periodically while your service is active. To poll the feedback endpoint you can simply call the apns object's getFeedback method from within this script: push.apns.getFeedback({       success: function(results) {           // results is an array of objects with a deviceToken and time properties      } }); This returns you a list of invalid tokens that can now be removed from your database. 
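    To make that concrete, the body of the success callback might walk the results and prune matching rows from wherever you store device tokens. The sketch below is illustrative only: it assumes the tables object is available in the feedback script as it is in other server scripts, and that tokens live in a hypothetical 'devices' table with a deviceToken column:
    push.apns.getFeedback({
        success: function(results) {
            var devices = tables.getTable('devices');   // hypothetical table of registered tokens
            results.forEach(function(feedback) {
                devices.where({ deviceToken: feedback.deviceToken }).read({
                    success: function(items) {
                        items.forEach(function(item) {
                            devices.del(item.id);       // drop the expired registration
                        });
                    }
                });
            });
        }
    });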
iOS Client SDK improvements Over the last month we've continued to work with a number of iOS advisors to make improvements to our Objective-C SDK. The SDK is being developed under an open source license (Apache 2.0) and is available on github. Many of the improvements are behind the scenes to improve performance and memory usage. However, one of the biggest improvements to our iOS Client API is the addition of an even easier login method.  Below is the Objective-C code you can now write to invoke it: [client loginWithProvider:@"twitter"                     onController:self                        animated:YES                      completion:^(MSUser *user, NSError *error) {      // if no error, you are now logged in via twitter }]; This code will automatically present and dismiss our login view controller as a modal dialog on the specified controller.  This does all the hard work for you and makes login via Twitter, Google, Facebook and Microsoft Account identities just a single line of code. My colleague Josh just posted a short video demonstrating these new features which I'd recommend checking out: Summary The above features are all now live in production and are available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using Mobile Services today. Visit the Windows Azure Mobile Developer Center to learn more about how to build apps with Mobile Services. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is by saying that if you would put each instance of a kind of data on its own web page, if it would make sense to add comments to it, or tags, or ratings, then it is content and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't and you can store it using somewhat simpler means that I will now describe. In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties: public class SettingsRecord { public virtual int Id { get; set;} public virtual string RuleType { get; set; } public virtual string Name { get; set; } public virtual string Criterion { get; set; } public virtual string Theme { get; set; } public virtual int Priority { get; set; } public virtual string Zone { get; set; } public virtual string Position { get; set; } } Each property has to be virtual for nHibernate to handle it (it creates derived classed that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described from a migration: public int Create() { SchemaBuilder.CreateTable("SettingsRecord", table => table .Column<int>("Id", column => column.PrimaryKey().Identity()) .Column<string>("RuleType", column => column.NotNull().WithDefault("")) .Column<string>("Name", column => column.NotNull().WithDefault("")) .Column<string>("Criterion", column => column.NotNull().WithDefault("")) .Column<string>("Theme", column => column.NotNull().WithDefault("")) .Column<int>("Priority", column => column.NotNull().WithDefault(10)) .Column<string>("Zone", column => column.NotNull().WithDefault("")) .Column<string>("Position", column => column.NotNull().WithDefault("")) ); return 1; } When we enable the feature, the migration will run, which will create the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class: private readonly IRepository<SettingsRecord> _repository; private readonly ISignals _signals; private readonly ICacheManager _cacheManager; public SettingsService( IRepository<SettingsRecord> repository, ISignals signals, ICacheManager cacheManager) { _repository = repository; _signals = signals; _cacheManager = cacheManager; } The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kind of Linq queries) as well as methods such as Delete and Create. 
Here's for example how I'm getting all the records in the table: _repository.Table.ToList() And here's how I'm deleting a record: _repository.Delete(_repository.Get(r => r.Id == id)); And here's how I'm creating one: _repository.Create(new SettingsRecord { Name = name, RuleType = ruleType, Criterion = criterion, Theme = theme, Priority = priority, Zone = zone, Position = position }); In summary, you create a record class, a migration, and you're in business and can just manipulate the data through the repository that the framework is exposing. You even get ambient transactions from the work context.
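    For completeness, updating an existing record follows the same pattern (a small sketch of my own; it assumes IRepository<T> exposes an Update method alongside the Get, Create and Delete calls used above):
    var record = _repository.Get(r => r.Id == id);
    record.Theme = theme;        // change whatever properties need to move
    record.Priority = priority;
    _repository.Update(record);  // persisted by the ambient work-context transaction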

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There’s a growing gap between 20th century marketing and a modern marketing way of doing business. I can’t think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13; customer-obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In weeks prior, I heard about the mystique but didn’t know what to expect. What I’ve come to understand with more clarity is everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind: 1. Targeting: Really Know Your Buyer 2. Engagement: Create a 1:1 Relationship 3. Conversion: Visualize Guided Thinking 4. Analysis: Learn What’s Working 5. Marketing Technology: Enable and Extend the Cloud Product News from Eloqua Experience 2013 We made some announcements that John Stetic, VP of Products, Oracle Eloqua covers in this brief ‘Modern Marketing Minute’ video recorded after Wednesday’s keynote; summarized below, too: Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers’ wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only key accounts or prospects you want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities. Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers now can take data that’s not even stored in Eloqua to help targeted and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one, detailed view of the prospect to extend views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua. Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management. Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions. Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts; easily accessible in the Topliners Community. Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) deliver a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile. Engage Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks. 
Biggest and Best Eloqua Experience. There’s a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It’s not just a concept but reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year’s Eloqua Experience, exceptional, including Steve Woods’ presentation about the journey of modern marketers and Andrea Ward’s conversation with Vince Gilligan, creator of the Breaking Bad television series. The 2013 Markie Awards The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience. More on Erick: 20 years experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • How do I pass vertex and color positions to OpenGL shaders?

    - by smoth190
    I've been trying to get this to work for the past two days, telling myself I wouldn't ask for help. I think you can see where that got me... I thought I'd try my hand at a little OpenGL, because DirectX is complex and depressing. I picked OpenGL 3.x, because even with my OpenGL 4 graphics card, all my friends don't have that, and I like to let them use my programs. There aren't really any great tutorials for OpenGL 3, most are just "type this and this will happen--the end". I'm trying to just draw a simple triangle, and so far, all I have is a blank screen with my clear color (when I set the draw type to GL_POINTS I just get a black dot). I have no idea what the problem is, so I'll just slap down the code: Here is the function that creates the triangle: void CEntityRenderable::CreateBuffers() { m_vertices = new Vertex3D[3]; m_vertexCount = 3; m_vertices[0].x = -1.0f; m_vertices[0].y = -1.0f; m_vertices[0].z = -5.0f; m_vertices[0].r = 1.0f; m_vertices[0].g = 0.0f; m_vertices[0].b = 0.0f; m_vertices[0].a = 1.0f; m_vertices[1].x = 1.0f; m_vertices[1].y = -1.0f; m_vertices[1].z = -5.0f; m_vertices[1].r = 1.0f; m_vertices[1].g = 0.0f; m_vertices[1].b = 0.0f; m_vertices[1].a = 1.0f; m_vertices[2].x = 0.0f; m_vertices[2].y = 1.0f; m_vertices[2].z = -5.0f; m_vertices[2].r = 1.0f; m_vertices[2].g = 0.0f; m_vertices[2].b = 0.0f; m_vertices[2].a = 1.0f; //Create the VAO glGenVertexArrays(1, &m_vaoID); //Bind the VAO glBindVertexArray(m_vaoID); //Create a vertex buffer glGenBuffers(1, &m_vboID); //Bind the buffer glBindBuffer(GL_ARRAY_BUFFER, m_vboID); //Set the buffers data glBufferData(GL_ARRAY_BUFFER, sizeof(m_vertices), m_vertices, GL_STATIC_DRAW); //Set its usage glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3D), 0); glVertexAttribPointer(1, 4, GL_FLOAT, GL_TRUE, sizeof(Vertex3D), (void*)(3*sizeof(float))); //Enable glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); //Check for errors if(glGetError() != GL_NO_ERROR) { Error("Failed to create VBO: %s", gluErrorString(glGetError())); } //Unbind... glBindVertexArray(0); } The Vertex3D struct is as such... struct Vertex3D { Vertex3D() : x(0), y(0), z(0), r(0), g(0), b(0), a(1) {} float x, y, z; float r, g, b, a; }; And finally the render function: void CEntityRenderable::RenderEntity() { //Render... glBindVertexArray(m_vaoID); //Use our attribs glDrawArrays(GL_POINTS, 0, m_vertexCount); glBindVertexArray(0); //unbind OnRender(); } (And yes, I am binding and unbinding the shader. That is just in a different place) I think my problem is that I haven't fully wrapped my mind around this whole VertexAttribArray thing (the only thing I like better in DirectX was input layouts D:). This is my vertex shader: #version 330 //Matrices uniform mat4 projectionMatrix; uniform mat4 viewMatrix; uniform mat4 modelMatrix; //In values layout(location = 0) in vec3 position; layout(location = 1) in vec3 color; //Out values out vec3 frag_color; //Main shader void main(void) { //Position in world gl_Position = vec4(position, 1.0); //gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0); //No color changes frag_color = color; } As you can see, I've disable the matrices, because that just makes debugging this thing so much harder. I tried to debug using glslDevil, but my program just crashes right before the shaders are created... so I gave up with that. This is my first shot at OpenGL since the good old days of LWJGL, but that was when I didn't even know what a shader was. Thanks for your help :)
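    One detail worth double-checking with heap-allocated vertex arrays is the size handed to glBufferData: sizeof applied to a pointer yields the size of the pointer, not of the three vertices behind it. A generic sketch of computing the byte count explicitly is below; this is a common pitfall offered as a hedged suggestion, not a confirmed diagnosis of the poster's code:
    // Upload exactly m_vertexCount * sizeof(Vertex3D) bytes, computed from the element count.
    glBufferData(GL_ARRAY_BUFFER,
                 m_vertexCount * sizeof(Vertex3D),
                 m_vertices,
                 GL_STATIC_DRAW);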

    Read the article

  • Indexed Drawing in OpenGL not working

    - by user2050846
    I am trying to render two types of primitives: points (a point cloud) and triangles (a mesh). I am rendering the points without any index arrays and they render fine. To render the meshes I am using indexed drawing, with the face list array holding the indices of the vertices to be rendered as triangles. Vertices and their corresponding vertex colors are stored in their own buffers. But the indexed drawing command does not draw anything. The code is as follows.

    Main display function:

        void display()
        {
            simple->enable();
            simple->bindUniform("MV", modelview);
            simple->bindUniform("P", projection);

            // rendering Point Cloud
            glBindVertexArray(vao);
            // Vertex buffer Point Cloud
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
            // Color buffer Point Cloud
            glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
            // Render colored Point Cloud
            //glDrawArrays(GL_POINTS, 0, model->vertexCount);
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
            // ---------------- END ---------------------//

            // Floor rendering
            glBindBuffer(GL_ARRAY_BUFFER, fl);
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void *)48);
            glDrawArrays(GL_QUADS, 0, 4);
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
            // ---------------- END ---------------------//

            // Rendering the meshes
            //////////// PART OF CODE THAT IS NOT DRAWING ANYTHING ////////////////////
            glBindVertexArray(vid);
            for(int i = 0; i < NUM_MESHES; i++)
            {
                glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
                glEnableVertexAttribArray(0);
                glEnableVertexAttribArray(1);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
                glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void *)(meshes[i]->vertexCount * sizeof(glm::vec3)));
                //glDrawArrays(GL_TRIANGLES, 0, meshes[i]->vertexCount);
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
                //cout << gluErrorString(glGetError());
                glDrawElements(GL_TRIANGLES, meshes[i]->faceCount * 3, GL_FLOAT, (void *)0);
                glDisableVertexAttribArray(0);
                glDisableVertexAttribArray(1);
            }

            glUseProgram(0);
            glutSwapBuffers();
            glutPostRedisplay();
        }

    Point cloud buffer initialization:

        void initGLPointCloud()
        {
            glGenBuffers(1, &vertexbuffer);
            glGenBuffers(1, &colorbuffer);
            glGenBuffers(1, &fl);

            // Populates the position buffer
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3), &model->positions[0], GL_STATIC_DRAW);

            // Populates the color buffer
            glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
            glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3), &model->colors[0], GL_STATIC_DRAW);

            model->FreeMemory(); // Free memory that is no longer needed; the data has already been
                                 // copied to the graphics card and won't be used again.
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

    Mesh buffer initialization:

        void initGLMeshes(int i)
        {
            glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
            glBufferData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3) * 2, NULL, GL_STATIC_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0, meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->positions[0]);
            glBufferSubData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3), meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->colors[0]);

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshes[i]->faceCount * sizeof(glm::vec3), &meshes[i]->faces[0], GL_STATIC_DRAW);

            meshes[i]->FreeMemory();
            //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        }

    Rendering initialization (loads and creates the shader, then calls the mesh and point cloud initializers):

        void initRender()
        {
            simple = new GLSLShader("shaders/simple.vert", "shaders/simple.frag");

            // Point Cloud
            // Sets up the VAO
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);
            initGLPointCloud();

            // floorData
            glBindBuffer(GL_ARRAY_BUFFER, fl);
            glBufferData(GL_ARRAY_BUFFER, sizeof(floorData), &floorData[0], GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindVertexArray(0);

            // Meshes
            for(int i = 0; i < NUM_MESHES; i++)
            {
                if(i == 0) // Set up the new vertex array state for indexed drawing
                {
                    glGenVertexArrays(1, &vid);
                    glBindVertexArray(vid);
                    glGenBuffers(NUM_MESHES, mVertex);
                    glGenBuffers(NUM_MESHES, mColor);
                    glGenBuffers(NUM_MESHES, mFace);
                }
                initGLMeshes(i);
            }
            glEnable(GL_DEPTH_TEST);
        }

    Any help would be much appreciated; I have been breaking my head over this problem for three days, and it is still unsolved.
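
    The most likely culprit in the code above is the index type passed to glDrawElements: it only accepts GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT, or GL_UNSIGNED_INT, so GL_FLOAT raises GL_INVALID_ENUM and the call draws nothing. The index buffer also needs to hold unsigned integers rather than floats. A minimal sketch of the fix, assuming the face list is changed to integer index triples (glm::uvec3 here is an assumed change to the mesh structure, not taken from the original post):

        // In initGLMeshes(): upload integer indices (assumed stored as glm::uvec3)
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshes[i]->faceCount * sizeof(glm::uvec3), &meshes[i]->faces[0], GL_STATIC_DRAW);

        // In display(): the index type must be an unsigned integer type, not GL_FLOAT
        glDrawElements(GL_TRIANGLES, meshes[i]->faceCount * 3, GL_UNSIGNED_INT, (void *)0);

    Checking glGetError() right after the draw call should confirm whether GL_INVALID_ENUM was the error being raised.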

    Read the article
