Search Results

Search found 30252 results on 1211 pages for 'network programming'.


  • Ubuntu 12.04 can't detect internal mobile broadband (Gobi 2000)

    - by Anega
    Hi, I have been trying to get Ubuntu to detect and connect with the built-in mobile broadband hardware in my HP Mini 110 netbook, but so far nothing seems to work.
    Output of the lspci command:
      00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
      00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
      00:1f.2 SATA controller: Intel Corporation 82801GBM/GHM (ICH7-M Family) SATA Controller [AHCI mode] (rev 02)
      01:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)
      02:00.0 Ethernet controller: Atheros Communications Inc. AR8132 Fast Ethernet (rev c0)
    Output of the lsusb command:
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 001 Device 002: ID 1fea:0008
      Bus 005 Device 002: ID 03f0:2a1d Hewlett-Packard
      Bus 001 Device 005: ID 03f0:241d Hewlett-Packard Gobi 2000 Wireless Modem (QDL mode)
    So far I have been following what is suggested on several pages: updating the firmware using Wine and then moving the .mda (or whatever) files that the update package GobiInstaller.msi throws out. The result is always the same. Output of wine msiexec /a GobiInstaller.msi /qb TARGETDIR="c:\temp":
      fixme:advapi:GetCurrentHwProfileA (0x33fba8) semi-stub
      fixme:heap:HeapSetInformation (nil) 1 (nil) 0
      fixme:win:RegisterDeviceNotificationA (hwnd=0x13e250, filter=0xf7e984,flags=0x00000001) returns a fake device notification handle!
      fixme:heap:HeapSetInformation (nil) 1 (nil) 0
      fixme:heap:HeapSetInformation (nil) 1 (nil) 0
      fixme:advapi:RegisterEventSourceW ((null),L"Bonjour Service"): stub
      fixme:advapi:ReportEventA (0xcafe4242,0x0004,0x0000,0x00000064, (nil),0x0001,0x00000000,0x79e58c,(nil)): stub
      fixme:advapi:ReportEventW (0xcafe4242,0x0004,0x0000,0x00000064, (nil),0x0001,0x00000000,0x12e6d0,(nil)): stub
      fixme:winsock:WSAIoctl WS_SIO_UDP_CONNRESET stub
      fixme:winsock:WSAIoctl -> SIO_ADDRESS_LIST_CHANGE request: stub
      fixme:iphlpapi:DeleteIpForwardEntry (pRoute 0x79e920): stub
      fixme:iphlpapi:CreateIpForwardEntry (pRoute 0x79e958): stub
      fixme:advapi:ReportEventA (0xcafe4242,0x0004,0x0000,0x00000064, (nil),0x0001,0x00000000,0x79e58c,(nil)): stub
      fixme:advapi:ReportEventW (0xcafe4242,0x0004,0x0000,0x00000064, (nil),0x0001,0x00000000,0x12e6d0,(nil)): stub
      fixme:service:EnumServicesStatusW resume handle not supported
      fixme:service:EnumServicesStatusW resume handle not supported
      fixme:advapi:ReportEventA (0xcafe4242,0x0004,0x0000,0x00000064,(nil),0x0001,0x00000000,0x79e58c,(nil)): stub
      fixme:advapi:ReportEventW (0xcafe4242,0x0004,0x0000,0x00000064, (nil),0x0001,0x00000000,0x12e6d0,(nil)): stub
      fixme:netapi32:NetGetJoinInformation Semi-stub (null) 0x79e644 0x79e63c
      fixme:winsock:WSAIoctl WS_SIO_UDP_CONNRESET stub
      fixme:storage:create_storagefile Storage share mode not implemented.
      err:msi:ITERATE_Actions Execution halted, action L"_693CD41C_A4A2_4FA1_8888_FC56C9E6E13B" returned 1603
      err:rpc:I_RpcGetBuffer no binding
      err:rpc:I_RpcGetBuffer no binding
      andres@andres-HP-Mini-110-1100:~/.wine/drive_c/Qualcomm$ fixme:advapi:ReportEventA (0xcafe4242,0x0004,0x0000,0x00000064,(nil),0x0001,0x00000000,0x79e588,(nil)): stub
      fixme:advapi:ReportEventW (0xcafe4242,0x0004,0x0000,0x00000064, (nil),0x0001,0x00000000,0x12e6d0,(nil)): stub
    The installer only creates two empty folders. I have been trying hard, and I am not sure whether I am doing this the way it should be done. Thanks.

  • Why does aptitude want to remove a bunch of files?

    - by Mediterran81
    Recently I encountered dependency-resolution issues when using aptitude (it is my favorite). I have started to feel that aptitude does not behave as it should on 64-bit systems, while apt-get works fine. Can someone confirm that aptitude is buggy in Ubuntu 11.10 amd64?
    Edit: For example, when I tried to install ntfs-config using aptitude, it asked me to remove over 100 packages (Skype, for example), while apt-get worked fine.
      han@L502X:~$ sudo aptitude install ntfs-config
      [sudo] password for han:
      The following NEW packages will be installed:
        ntfs-3g{ab} ntfs-config
      0 packages upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
      Need to get 0 B/640 kB of archives. After unpacking 2,466 kB will be used.
      The following packages have unmet dependencies:
        ntfs-3g: Conflicts: ntfsprogs but 2.0.0-1ubuntu4 is installed.
      The following actions will resolve these dependencies:
      Remove the following packages:
        1) flashplugin-downloader 2) flashplugin-installer 3) libasound2 4) libasound2-plugins 5) libasyncns0 6) libatk1.0-0 7) libaudio2 8) libavahi-client3 9) libavahi-common3 10) libc6
        11) libcairo2 12) libcomerr2 13) libcups2 14) libcurl3 15) libdatrie1 16) libdb5.1 17) libdbus-1-3 18) libdbusmenu-qt2 19) libexpat1 20) libffi6
        21) libflac8 22) libfontconfig1 23) libfreetype6 24) libgcc1 25) libgcrypt11 26) libgdk-pixbuf2.0-0 27) libglib2.0-0 28) libgnutls26 29) libgpg-error0 30) libgssapi-krb5-2
        31) libgtk2.0-0 32) libice6 33) libidn11 34) libjack-jackd2-0 35) libjasper1 36) libjpeg62 37) libjson0 38) libk5crypto3 39) libkeyutils1 40) libkrb5-3
        41) libkrb5support0 42) liblcms1 43) libldap-2.4-2 44) libmng1 45) libnspr4 46) libnspr4-0d 47) libnss3 48) libnss3-1d 49) libogg0 50) libpango1.0-0
        51) libpcre3 52) libpixman-1-0 53) libpng12-0 54) libpulse0 55) libqt4-dbus 56) libqt4-declarative 57) libqt4-network 58) libqt4-script 59) libqt4-sql 60) libqt4-xml
        61) libqt4-xmlpatterns 62) libqtcore4 63) libqtgui4 64) librtmp0 65) libsamplerate0 66) libsasl2-2 67) libsasl2-modules 68) libselinux1 69) libsm6 70) libsndfile1
        71) libspeexdsp1 72) libsqlite3-0 73) libssl1.0.0 74) libstdc++6 75) libtasn1-3 76) libthai0 77) libtiff4 78) libuuid1 79) libvorbis0a 80) libvorbisenc2
        81) libwrap0 82) libx11-6 83) libxau6 84) libxcb-render0 85) libxcb-shm0 86) libxcb1 87) libxcomposite1 88) libxcursor1 89) libxdamage1 90) libxdmcp6
        91) libxext6 92) libxfixes3 93) libxft2 94) libxi6 95) libxinerama1 96) libxrandr2 97) libxrender1 98) libxss1 99) libxt6 100) libxv1
        101) nspluginviewer 102) nspluginwrapper 103) ntfsprogs 104) skype 105) sni-qt 106) zlib1g
      Leave the following dependencies unresolved:
        107) flashplugin-downloader recommends libasound2-plugins (>= 1.0.16)
      Accept this solution? [Y/n/q/?]

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I'd share it to save others who might encounter it the same annoyance I had.
    I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project), and whenever it hit the code to save it, it would give me:
      The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications.
    This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:
      [DataContract]
      public class MyAwesomeClass
      {
          [DataMember]
          public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

          [DataMember]
          public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

          public MyAwesomeClass()
          {
              GreatItems = new ObservableCollection<MyMagicItemsClass>();
              SuperbItems = new ObservableCollection<MyMagicItemsClass>();
          }
      }
    That's all well and fine. MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it, and it had DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should have been cool, but it wasn't, because I kept getting that "not public" exception.
    I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn't ObservableCollection<T>, trying to serialize the collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the "Debug -> Exceptions..." VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the "Thrown" checkbox for "Common Language Runtime Exceptions", started the project under the debugger, and voilà: the true problem revealed itself.
    Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute and applied the DataMember attribute to the underlying fields instead. All of those fields, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a "partial trust" environment, and outside of "full trust"-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being "not public". I changed all the private fields I was serializing to public and everything worked just fine.
    In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don't want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting and calling the appropriate methods, and cause some havoc by reflecting and setting the appropriate fields in certain circumstances).
    The fact that you cannot reflect your own assembly seems a bit heavy-handed, but then again I'm not a compiler writer or a framework designer, and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems it could present (if any).
    So, lesson learned. If you get an incomprehensible exception message, turn on break-on-all-thrown-exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you'll find the buried exception that actually explains what was going on. And if you're getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you're trying to serialize a private or protected field or property.
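    As a hedged illustration of the lesson above (the class and member names here are hypothetical, not from the original project): under partial trust, every member the serializer touches must be public, so the backing field carrying the [DataMember] attribute is made public while the elaborate property setter is skipped with [IgnoreDataMember].
      using System.Runtime.Serialization;

      [DataContract]
      public class MyMagicItemsClass
      {
          // Public auto-property: fine for DataContractSerializer in partial trust.
          [DataMember]
          public string Name { get; set; }

          // Elaborate setter we do not want the serializer to run.
          [IgnoreDataMember]
          public int Quality
          {
              get { return _quality; }
              set { _quality = value < 0 ? 0 : value; }
          }

          // If this backing field were private, WP7's partial-trust sandbox would
          // surface the misleading "not serializable because it is not public" error,
          // so it is declared public for serialization.
          [DataMember]
          public int _quality;
      }
    With this shape, serializing an ObservableCollection<MyMagicItemsClass> with DataContractSerializer succeeds, because reflection never has to reach a non-public member.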

  • Send raw data to USB parallel port after upgrading to 11.10

    - by zaphod
    I have a laser cutter connected via a generic USB-to-parallel adapter. The laser cutter speaks HPGL, as it happens, but since this is a laser cutter and not a plotter, I usually want to generate the HPGL myself, since I care about the ordering, speed, and direction of cuts and so on.
    In previous versions of Ubuntu, I was able to print to the cutter by copying an HPGL file directly to the corresponding USB "lp" device. For example:
      cp foo.plt /dev/usblp1
    Well, I just upgraded to Ubuntu 11.10 oneiric, and I can't find any "lp" devices in /dev anymore. D'oh! What's the preferred way to send raw data to a parallel port in Ubuntu?
    I've tried System Settings > Printing > Add, hoping that I might be able to associate my device with some kind of "raw printer" driver and print to it with a command like
      lp -d LaserCutter foo.plt
    But my USB-to-parallel adapter doesn't seem to show up in the list. What I do see are my HP Color LaserJet, two USB-to-serial adapters, "Enter URI", and "Network Printer".
    Meanwhile, over in /dev, I do see /dev/ttyUSB0 and /dev/ttyUSB1 devices for the two USB-to-serial adapters. I don't see anything obvious corresponding to the HP printer (which was /dev/usblp0 prior to the upgrade), except for generic USB stuff. For example, sudo find /dev | grep lp produces no output. I do seem to be able to print to the HP printer just fine, though. The printer setup GUI gives it a device URI starting with "hp:", which isn't much help for the parallel adapter. The CUPS administrator's guide makes it sound like I might need to feed it a device URI of the form parallel:/dev/SOMETHING, but of course if I had a /dev/SOMETHING I'd probably just go on writing to it directly.
    Here's what dmesg says after I disconnect and reconnect the device from the USB port:
      [  924.722906] usb 1-1.1.4: USB disconnect, device number 7
      [  959.993002] usb 1-1.1.4: new full speed USB device number 8 using ehci_hcd
    And here's how it shows up in lsusb -v:
      Bus 001 Device 008: ID 1a86:7584 QinHeng Electronics CH340S
      Device Descriptor:
        bLength 18
        bDescriptorType 1
        bcdUSB 1.10
        bDeviceClass 0 (Defined at Interface level)
        bDeviceSubClass 0
        bDeviceProtocol 0
        bMaxPacketSize0 8
        idVendor 0x1a86 QinHeng Electronics
        idProduct 0x7584 CH340S
        bcdDevice 2.52
        iManufacturer 0
        iProduct 2 USB2.0-Print
        iSerial 0
        bNumConfigurations 1
        Configuration Descriptor:
          bLength 9
          bDescriptorType 2
          wTotalLength 32
          bNumInterfaces 1
          bConfigurationValue 1
          iConfiguration 0
          bmAttributes 0x80 (Bus Powered)
          MaxPower 96mA
          Interface Descriptor:
            bLength 9
            bDescriptorType 4
            bInterfaceNumber 0
            bAlternateSetting 0
            bNumEndpoints 2
            bInterfaceClass 7 Printer
            bInterfaceSubClass 1 Printer
            bInterfaceProtocol 2 Bidirectional
            iInterface 0
            Endpoint Descriptor:
              bLength 7
              bDescriptorType 5
              bEndpointAddress 0x82 EP 2 IN
              bmAttributes 2
                Transfer Type Bulk
                Synch Type None
                Usage Type Data
              wMaxPacketSize 0x0020 1x 32 bytes
              bInterval 0
            Endpoint Descriptor:
              bLength 7
              bDescriptorType 5
              bEndpointAddress 0x02 EP 2 OUT
              bmAttributes 2
                Transfer Type Bulk
                Synch Type None
                Usage Type Data
              wMaxPacketSize 0x0020 1x 32 bytes
              bInterval 0
      Device Status: 0x0000 (Bus Powered)

  • Oracle Customer Experience Summit @ OpenWorld

    - by Christie Flanagan
    This first-ever Oracle Customer Experience Summit @ OpenWorld kicked off yesterday, bringing together established thought leaders and practitioners in customer experience. The first day saw noted marketing and customer experience thought leader Seth Godin take the stage to discuss how rapidly accelerating change and adoption are driving new behaviors and higher expectations in a massively disruptive transformation in which the customer now holds the power. His presentation gave us in-depth insight into this always-connected, always-sharing experience revolution we are witnessing.
    If you haven't yet made it over to the Oracle Customer Experience Summit at The Westin St. Francis and the recently made-over Oracle Square (aka Union Square), there's still time today and tomorrow to network with industry peers and hear best practices from those who have steered their ventures through the disruptive trends of customer experience and have proven, successful strategies to share for driving strategic customer-centric initiatives. If you're interested in learning how Oracle WebCenter helps businesses meet the demands of the customer experience revolution, be sure to check out these sessions at the Oracle Customer Experience Summit later today:
    Using the Online Customer Experience to Drive Engagement and Marketing Success
    Thursday, Oct 4, 4:15 PM - 5:15 PM - St. Francis - Georgian
      Mariam Tariq - Senior Director Product Management, Oracle
      Stephen Schleifer - Senior Principal Product Manager, Oracle
      Richard Backx - Business IT Architect/Consultant, KPN NL Netco CE Channels Online
    The online channel is a critical means of reaching and engaging customers. Online marketing efforts today must be targeted, interactive, and consistent to provide customers with a seamless experience. These efforts must include integrated management of Web, mobile, and social channels—supported by cross-channel customer data and campaigns—and integration with commerce to drive an engaging and differentiated online customer experience. Attend this session to learn how you can use the online channel to increase customer loyalty and drive the success of your marketing initiatives.
    Empowering Your Frontline Employees: Sales and Service Enterprise Collaboration
    Thursday, Oct 4, 5:30 PM - 6:30 PM - St. Francis - Elizabethan AB
      Stephen Fioretti - VP, Product Management, Oracle
      Peter Doolan - Group Vice President, Sales Engineering, Oracle
      Andrew Kershaw - Sr Director Business Development, Oracle
      Marty Marcinczyk - VP Customer Experience Engineering, Comcast
    A focus on the employee experience is critical, because it can make or break your customers' experiences, directly or indirectly. Engaged and empowered frontline employees become your best advocates and inspire your brand champions. This session explores proven approaches and tools, including social collaboration tools, that can help you empower and enable your frontline teams to improve customer and employee experiences.
    And before you go, you'll also want to explore the Innovation Tents in Oracle Square, which feature leading-edge customer experience demonstrations; attend our customer journey mapping workshop; and learn at sessions focused on innovating differentiated experiences that drive cross-functional alignment.

  • Essbase 11.1.2 - AgtSvrConnections Essbase Configuration Setting

    - by Ann Donahue
    AgtSvrConnections is a documented Essbase configuration setting used in conjunction with the AgentThreads and ServerThreads settings. Basically, when a user logs into Essbase, an AgentThreads thread handles the connection to the ESSBASE process, AgtSvrConnections then connects the ESSBASE process to the ESSSVR application process, and ServerThreads are used for end-user activities.
    In Essbase 11.1.2, the default value of the AgtSvrConnections setting was changed to 5; in previous Essbase releases, the default value is 1. It is recommended that tuning of AgtSvrConnections be done incrementally, by 1 or 2 at most, based on the number of concurrent Set Active/Clear Active calls. The Essbase DBA Guide and Technical Reference recommend not exceeding the value set for AgentThreads; however, we have found that most customers do not need to exceed a setting of 10. In general, it is OK to set AgtSvrConnections close to the AgentThreads setting, but there have been customers that needed an AgentThreads setting greater than 10, and we have found that an AgtSvrConnections setting higher than 5-10 can have a negative impact on Essbase because too many TCP ports are used unnecessarily. As with all essbase.cfg settings, it is best to set values to what is needed based on process load and not arbitrarily set them to high values.
    To monitor and tune the AgtSvrConnections setting, monitor the application log for logins and Set Active/Clear Active messages. If there are a lot of logins and Set Active/Clear Active messages happening in a short period of time, making it appear that logins are taking longer, incrementally increase the AgtSvrConnections setting by 1 or 2, which can help with login speed. The login performance tolerance differs from one customer environment to another, since other factors, e.g. network latency, can also affect it.
    What happens in Essbase when a user logs in:
      1. ESSBASE issues a Set Active to the ESSSVR process. Each application has its own ESSSVR process.
      2. Set Active then calls MultipleAsyncLogout and waits on the pipe connection.
      3. MultipleAsyncLogout goes back to ESSBASE.
      4. ESSBASE then needs to send the logout back to the ESSSVR process.
    When the AgtSvrConnections setting needs to be increased from the default of 5, it is because Essbase cannot find a free connection, since the previous connections are in use between ESSBASE and ESSSVR. In this example, we may want to increase AgtSvrConnections from 5 to 7 to improve login performance. Again, it is best to set Essbase settings to what is needed based on process load and not arbitrarily set them to high values. In general, stress or performance testing environments using automated tools may need higher-than-normal settings, because automated processes log in and out at high speed. Typically, in a real-life production environment, the settings are much closer to the default values.
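    For illustration only, here is how the three related settings might sit together in essbase.cfg; the values below are hypothetical examples rather than recommendations, and AgtSvrConnections should be tuned incrementally as described above:
      AgentThreads 30
      ServerThreads 25
      AgtSvrConnections 7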

  • Three Global Telecoms Soar With Siebel

    - by michael.seback
    Deutsche Telekom Group Selects Oracle's Siebel CRM to Underpin Next-Generation CRM Strategy
    The Deutsche Telekom Group (DTAG), one of the world's leading telecommunications companies and an Oracle customer since 2001, has invested in Oracle's Siebel CRM as the standard platform for its Next-Generation CRM strategy, a move to lower the cost of managing its 120 million customers across its European businesses. Oracle's Siebel CRM is planned to be deployed in Germany and in all of the company's European businesses within five years.
    "...Our Next-Generation strategy is a significant move to lower our operating costs and enhance customer service for all our European customers. Not only is Oracle underpinning this strategy, but is also shaping the way our company operates and sells to customers. We look forward to working with Oracle over the coming years as the technology is extended across Europe," said Dr. Steffen Roehn, CIO, Deutsche Telekom AG.
    "The telecommunications industry is currently undergoing some major changes. As a result, companies like Deutsche Telekom are needing to be more intelligent about the way they use technology, particularly when it comes to customer service. Deutsche Telekom is a great example of how organisations can use CRM to not just improve services, but also drive more commercial opportunities through the ability to offer highly tailored offers, while the customer is engaged online or on the phone," said Steve Fearon, vice president CRM, EMEA. Read more.
    Telecom Argentina S.A. Accelerates Time-to-Market for New Communications Products and Services
    Telecom Argentina S.A. offers basic telephone, urban landline, and national and international long-distance services. "With Oracle's Siebel CRM and Oracle Communication Billing and Revenue Management, we started a technological transformation that allows us to satisfy our critical business needs, such as improving customer service and quickly launching new phone and internet products and services." - Saba Gooley, Chief Information Officer, Wire Line and Internet Services, Telecom Argentina S.A. Read more.
    Türk Telekom Develops Benefits-Driven CRM Roadmap
    Türk Telekom Group provides integrated telecommunication services, from public switched telephone network (PSTN) and global systems for mobile communications technology (GSM) to broadband internet. "Oracle Insight provided us with a structured deployment approach that makes sense for our business. It quantified the benefits of the CRM solution, allowing us to engage with the relevant business owners; essential for a successful transformation program." - Paul Taylor, VP Commercial Transformation, Türk Telekom. Read more.

  • Agile PLM Highlights from Oracle OpenWorld 2012

    - by Kerrie Foy
    Thank you to everyone who joined us at Oracle OpenWorld this year, either in person or virtually (thanks for tweeting #oowplm)! From customer presentations to after-hours networking opportunities, there was a lot to see and do during the entire conference.
    Sessions
    It was our pleasure to feature several customer speakers during our PLM sessions at OpenWorld, from such companies as Starbucks, Coca-Cola, Facebook, Eli Lilly, and many more. Each had a unique perspective to share and fascinating insight into how they successfully leverage Agile PLM to facilitate profitable innovation, protect brand integrity, streamline operations, manage compliance, launch faster, etc. For example, during the Product Value Chain keynote session, CIO Chris Bedi of JDSU shared how they implemented Agile PLM to support business imperatives around rapid innovation, centralizing product information, and collaboration, and to eliminate the "Excel gymnastics" required to obtain global portfolio visibility. In just 120 days after implementing, JDSU employees reported significant improvements around product record management, new product introduction, engineering collaboration and more, which created a better work environment to enable critical innovation. I could write on and on about the almost 20 sessions! So to spare yourselves, please visit launch.oracle.com/?plmopenworld2012; it's a curated selection of PLM presentations from the OpenWorld Content Catalog, available on demand. Enjoy!
    Agile Innovation Management
    During OpenWorld, we announced an exciting new addition to the Agile PLM applications called Innovation Management that redefines the industry's scope of product lifecycle management. Our broad vision of complete enterprise PLM for the entire Product Value Chain already broke new ground by helping organizations extend PLM disciplines downstream by connecting product design to commercialization processes; now we are helping executives look farther upstream in the early innovation phases to ultimately close the gap between strategy and execution that so commonly nags innovation initiatives. More on this coming soon, so stay tuned!
    Unique Networking Opportunities
    We know it can be challenging during OpenWorld to find time to productively connect and network with your industry peers, so we hosted an Agile PLM "Birds of a Feather" networking brunch for the second year in a row. At a fine restaurant close to Moscone we hosted nine tables, each with only ten seats, to encourage active conversation. Furthermore, guests could select from a list of predetermined table topics sponsored by a specialized PLM partner to guarantee, even more so, that they were seated with like-minded company and optimizing their time at the conference. Everyone enjoyed the opportunity to easily connect with other PLM users during OpenWorld in a more casual setting.
    What's Next?
    Thank you again to all who joined us! If you haven't yet, mark your calendar to join us for the next Oracle Agile PLM conference at the Value Chain Summit in San Francisco, February 4-6, 2013! We'll have 40 sessions of PLM content in four tracks. Don't miss it! You can sign up to be notified when official registration opens by visiting www.oracle.com/goto/vcs.

  • What DX level does my graphics card support? Does it go to 11?

    - by Daniel Moth
    Recently I ran into a situation that I have run into quite a few times: someone encounters a machine and the question arises, "Is there a DirectX 11 card in this machine?" Typically the reason you are interested in that is because cards with DirectX 11 drivers fully support DirectCompute (and by extension C++ AMP) for GPGPU programming. The driver specifically is WDDM (1.1 on Windows 7; Windows 8 introduces WDDM 1.2 with cool new capabilities). There are many ways of figuring out whether you have a DirectX 11 card, so here are the approaches that you can use, with a bonus right at the end of the post.
    Run DxDiag
    WindowsKey + R, type DxDiag and hit Enter. That is the DirectX diagnostic tool, which, unfortunately, only tells you on the "System" tab what the highest version of DirectX installed on your machine is. So if it reports DirectX 11, that doesn't mean you have a DX11 driver! The "Display" tab has a promising "DDI version" label, but unfortunately that doesn't seem to be accurate on the machines I've tested it with (or I may be misinterpreting its use). Either way, this tool is not the one you want for this purpose, although it is good for telling you the WDDM version among other things.
    Use the Microsoft hardware page
    There is a Microsoft Windows 7 compatibility center that lists all hardware (tip: use the advanced search), and you could try to locate your device there... good luck.
    Use Wikipedia or the hardware vendor's website
    Use the Wikipedia page for the vendor's cards, for both NVIDIA and AMD. Often this information will also be in the specifications for the cards on the IHV site, but it is nice that Wikipedia has a single page per vendor that you can search. There is a column in the tables for API support where you can see the DirectX version.
    Check if it is one of these recommended DX11 cards
    You may not have a DirectX 11 card and are interested in purchasing one. While I am in no position to make recommendations, I will list here some cards from two big IHVs that we know are DirectX 11 capable.
    Some AMD (aka ATI) cards:
      Low end, inexpensive DX11 hardware: Radeon 5450, 5550, 6450, 6570
      Mid range (decent perf, single precision): Radeon 5750, 5770, 6770, 6790
      High end (capable of double precision): Radeon 5850, 5870, 6950, 6970
      Single precision APUs: AMD E-Series APUs, AMD A-Series APUs
    Some NVIDIA cards:
      Low end, inexpensive DX11 hardware: GeForce GT 430, GT 440, GT 520, GTS 450; Quadro 400, 600
      Mid range (decent perf, single precision): GeForce GTX 460, GTX 550 Ti, GTX 560, GTX 560 Ti; Quadro 2000
      High end (capable of double precision): GeForce GTX 480, GTX 570, GTX 580, GTX 590, GTX 595; Quadro 4000, 5000, 6000; Tesla C2050, C2070, C2075
    Get the DirectX SDK and run DirectX Caps Viewer
    Download and install the June 2010 DirectX SDK. As part of that you now have the DirectX Capabilities Viewer utility (find it in your start menu by searching for "DirectX Caps Viewer"; the filename is DXCapsViewer.exe). It will list all your devices (emulated and real hardware ones) under the first node. Expand the hardware entries and then expand the Direct3D 11 folder. If you see D3D_FEATURE_LEVEL_11_ under that, then your card supports feature level 11, which means it supports DirectCompute and C++ AMP. In the screenshot accompanying the original post (taken on one of my old laptops), the card only goes to feature level 10.
    Run a utility from the web that just tells you!
    Of course, writing some C++ AMP code that enumerates accelerators and lists the ones that are capable is trivial. However, that requires that you have redistributed the runtime, so a more broadly applicable approach is to use the DX APIs directly to enumerate the DX11-capable cards. That is exactly what the development lead for C++ AMP has done; he describes and shares that utility in this post. Comments about this post by Daniel Moth are welcome at the original blog.

  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest competitive advantage over others in the markets and industries in which it participates. This process involves making corporate decisions regarding the markets in which to provide goods and services, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of its shareholders and stakeholders through the creation of customer value.
    According to InformationWeek.com, Google has a distinctive technology advantage over its competitors like Microsoft, eBay, Amazon, and Yahoo. Google utilizes custom high-performance systems which are cost efficient because they can scale to extreme workloads. This hardware allows for a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google's programmers are 50%-100% more productive compared to programmers working for their competitors. He based this theory on Google's competitors having to spend up to four times as much just to keep up.
    In addition to Google's technological advantage, they have also developed a decentralized management scheme in which employees report directly to multiple managers and team project leaders. This allows the responsibility for the technology department to be shared amongst multiple senior-level engineers and removes the need for a single department head to oversee the activities of the department. This is a departure from the standard management style, in which a department head like a CIO or CTO would oversee the department's global initiatives and business functionality. Direction would then be passed down and administered through middle management and implemented by programmers, business analysts, network administrators and database administrators.
    It goes without saying that an IT professional's responsibilities would be shaped by Google's technological advantage and management strategy, simply because they work within the department, would have to design, develop, and support the high-performance systems, and would have to report to multiple managers and project leaders on a regular basis. Since Google was established and driven by new and emerging technology, all other departments are directly impacted by the technology department. In fact, they have to cater to the technology department, since it is a huge driving force in the success of Google.
    Reference:
      http://www.smbtn.com/smallbusinessdictionary/#b
      http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=

  • Display Driver Issue on an hp TX2-1160ea Notebook

    - by Sam
    I'm new to Linux and recently switched to Ubuntu 11.04 due to my project requirement. My laptop has been freezing and going to a black screen of death when I run anything related to display (sharing the desktop, streaming video, etc.). Today I went through the Ubuntu forum to install the appropriate graphics driver and, after doing so, I rebooted my PC. It gave an error before login saying "select the recovery mode"; after clicking OK it didn't give the same error on reboot, but I've lost the 11.04 graphical interface and all I see is the interface of Ubuntu v10 with slow visuals (even scrolling up/down in the browser is really slow). For reference, here's a desktop screenshot so that you can understand the situation. Also, the laptop is overheating. How can I fix this problem? How can I get the Ubuntu 11.04 view back? I also tried Google, but couldn't find any issue like this.
    Some general information:
      Laptop: HP TouchSmart TX2-1160ea
      Processor: AMD Turion TX2
      Memory: 4GB
      OS: Ubuntu 11.04
    Some debugging information:
      $ report-hw | grep controller
      lspci -knn: 00:11.0 SATA controller [0106]: ATI Technologies Inc SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] [1002:4391]
      lspci -knn: 00:14.3 ISA bridge [0601]: ATI Technologies Inc SB7x0/SB8x0/SB9x0 LPC host controller [1002:439d]
      lspci -knn: 01:05.0 VGA compatible controller [0300]: ATI Technologies Inc RS780M/RS780MN [Radeon HD 3200 Graphics] [1002:9612]
      lspci -knn: 08:00.0 Network controller [0280]: Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller [14e4:432b] (rev 01)
      lspci -knn: 09:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 02)
    And:
      $ dpkg -l '*fglrx*'
      Desired=Unknown/Install/Remove/Purge/Hold
      | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
      |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
      ||/ Name           Version        Description
      +++-==============-==============-============================================
      ii  fglrx          2:8.840-0ubunt Video driver for the ATI graphics accelerato
      ii  fglrx-amdcccle 2:8.840-0ubunt Catalyst Control Center for the ATI graphics
      un  fglrx-control  <none>         (no description available)
      un  fglrx-control- <none>         (no description available)
      ii  fglrx-dev      2:8.840-0ubunt Video driver for the ATI graphics accelerato
      un  fglrx-driver   <none>         (no description available)
      un  fglrx-driver-d <none>         (no description available)
      un  fglrx-kernel-s <none>         (no description available)
      un  fglrx-modalias <none>         (no description available)
      un  xfree86-driver <none>         (no description available)
      un  xfree86-driver <none>         (no description available)
      un  xorg-driver-fg <none>         (no description available)
      un  xorg-driver-fg <none>         (no description available)
    If you need any more information that could help, just ask.

  • ArchBeat Link-o-Rama for 2012-07-11

    - by Bob Rhubart
    Is the future of retail showrooming? | GigaOm
    "The digital shopper isn't just digital and she expects to be served seamlessly across all channels, physical and digital," reports GigaOm. Twenty years into the Internet era and the changes just keep coming. Solution architects take note...
    Agile Bureaucracy: When Practices become Principles | Jim Highsmith.com
    "Principles and values are a critical part of keeping individuals in organizations aligned and engaged," says Agile guru Jim Highsmith, "but the more pseudo-principles are piled on top of principles, the less and less organizations are able to adapt."
    Oracle Fusion Applications 11g Basics | Michel Schildmeijer
    "We are trying to build up a Oracle Fusion Apps environment on a Exalogic system, though still on bare metal, because officially there still is no Oracle VM available yet on Exalogic," says Michel Schildmeijer, an Oracle Fusion Middleware Architect at Qualogy. "It is a bit of a challenge, but getting to know the basics and which components the install, build and configure phase use, might bring you a step further on the way."
    Process Centric Banking: Loan Origination Solution | Manish Palaparthy
    This interesting, detailed post by Manish Palaparthy explains the process behind the execution of a proof-of-concept for a Fusion Middleware-based loan-origination solution for a bank. The solution incorporates Oracle BPM Suite, WebCenter, and ADF technologies in a SOA infrastructure.
    How eBay and Facebook are Cleaning Up Data Centers | Amy Gallo - HBR
    The Cloud has needs! As reported by Amy Gallo in an article in the Harvard Business Review, "The electricity demand of data centers and the telecommunications network is rivaling that of most nations. If the cloud were itself a country, it would rank fifth in the world on energy demand behind the U.S., China, Russia, and Japan."
    Do WebLogic configuration from ANT | Edwin Biemond
    "With WebLogic WLST you can script the creation of all your Application DataSources or SOA Integration artifacts (like JMS etc)," says Oracle ACE Edwin Biemond. "This is necessary if your domain contains many WebLogic artifacts or you have more then one WebLogic environment. If so, you want to script this so you can configure a new WebLogic domain in minutes and you can repeat this task with always the same result."
    Oracle Special-Edition E-Book: Cloud Architecture for Dummies
    Learn how to architect and model your cloud implementation to drive efficiency and leverage economies of scale with Cloud Architecture for Dummies, a free Oracle e-book. (Registration required.)
    Thought for the Day
    "One of the best things to come out of the home computer revolution could be the general and widespread understanding of how severely limited logic really is." — Frank Herbert
    Source: SoftwareQuotes.com

  • Best depth sorting method for a Top Down 2D game using a 3D physics engine

    - by Alic44
    I've spent many days googling this and still have issues with my game engine that I'd like to ask about, which I haven't seen addressed before. I think the problem is that my game is an unusual combination of a completely 2D graphical approach using XNA's SpriteBatch and a completely 3D engine (the amazing BEPU physics engine) with rotation mostly disabled.
    In essence, my question is similar to this one (the part about "faux 3D"), but the difference is that in my game the player, as well as every other creature, is represented by a 3D object, and they can all jump, pick up other objects, and throw them around. What this means is that sorting by one value, such as a Z position (how far north/south a character is on the screen), won't work, because as soon as a smaller creature jumps on top of a larger creature, or a box, and walks backwards, the moment its Z value is less than that other creature's, it will appear to be behind the object it is actually standing on.
    I originally solved this problem by splitting every object in the game into physics boxes which MUST have a Y height equal to their Z depth. I then based the depth-sorting value on the object's Y position (how high it is off the ground) PLUS its Z position (how far north or south it is on the screen). The problem with this approach is that it requires all moving objects in the game to be split graphically into chunks which match up with a physical box whose Y dimension equals its Z dimension. Which is stupid.
    So, I got inspired last night to rewrite with a fresh approach. My new method is a little more complex, but I think a little more sane: every object which needs to be sorted by depth in the game exposes the interface IDepthDrawable and is added to a list owned by the DepthDrawer object. IDepthDrawable contains:
      public interface IDepthDrawable
      {
          Rectangle Bounds { get; } // possibly change this to a class if struct copying of the XNA Rectangle type becomes an issue
          DepthDrawShape DepthShape { get; }
          void Draw(SpriteBatch spriteBatch);
      }
    The Bounds rectangle of each IDepthDrawable object represents the 2D axis-aligned bounding box it will take up when drawn to the screen. Anything that doesn't intersect the screen will be culled at this stage, and the remaining on-screen IDepthDrawables will be Bounds-tested for intersections with each other.
    This is where I get a little less sure of what I'm doing. Each group of collisions will be added to a list or other collection, and each list will sort itself based on its DepthShape property, which will have access to the object-to-be-drawn's physics information. For starting out, let's assume everything in the game is an axis-aligned 3D box shape. Boxes are pretty easy to sort. Something like:
      if (depthShape1.Back > depthShape2.Front)
      {
          // depthShape1 is in front of depthShape2, so depthShape1 goes on top.
      }
      else if (depthShape1.Bottom > depthShape2.Top)
      {
          // depthShape1 is above depthShape2, so depthShape1 goes on top.
      }
      // If neither of these is true, depthShape2 must be in front or above.
    So, by sorting draw order by several different factors from the physics engine, I believe I can get a really correct draw order.
    My question is: is this a good way of going about this, or is there some tried and true, tested way which is completely different and has somehow completely eluded me on the internets? And, if this does seem like a good way to remake my draw-order sorting, what's the right sorting algorithm for reordering the Bounds rectangle collision lists, and how do you deal with a Bounds rectangle colliding with two different objects which don't collide with each other? I know these are solved problems, but I've only been programming for a year, so any specific input here will be greatly appreciated. Thanks for reading this far, ye who made it -- sorry it was so long!
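    As a rough sketch of the box comparison described above, written as an IComparer so each overlap group can sort itself (DepthShape as a value type, its Front/Back/Top/Bottom fields, and BoxDepthComparer are hypothetical names, not from the original engine):
      using System.Collections.Generic;

      // Hypothetical extents pulled from the physics engine for one drawable box.
      public struct DepthShape
      {
          public float Front, Back, Top, Bottom;
      }

      // Orders two drawables within one overlap group: the one that should be
      // drawn later (on top) compares as greater.
      public class BoxDepthComparer : IComparer<DepthShape>
      {
          public int Compare(DepthShape a, DepthShape b)
          {
              if (a.Back > b.Front) return 1;   // a is entirely in front of b
              if (b.Back > a.Front) return -1;  // b is entirely in front of a
              if (a.Bottom > b.Top) return 1;   // a rests entirely above b
              if (b.Bottom > a.Top) return -1;  // b rests entirely above a
              return 0;                         // ambiguous: keep the existing order
          }
      }
    Note that this pairwise test is not a strict total order over arbitrary sets of boxes, which is exactly the follow-up issue the question raises; one common workaround is to build "draws-on-top-of" edges from the pairwise tests and topologically sort each overlap group.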

  • New R Interface to Oracle Data Mining Available for Download

    - by charlie.berger
    The R Interface to Oracle Data Mining (R-ODM) allows R users to access the power of Oracle Data Mining's in-database functions using the familiar R syntax. R-ODM provides a powerful environment for prototyping data analysis and data mining methodologies.
    R-ODM is especially useful for:
      Quick prototyping of vertical or domain-based applications where the Oracle Database supports the application
      Scripting of "production" data mining methodologies
      Customizing graphics of ODM data mining results (examples: classification, regression, anomaly detection)
    The R-ODM interface allows R users to mine data using Oracle Data Mining from the R programming environment. It consists of a set of function wrappers written in source R language that pass data and parameters from the R environment to the Oracle RDBMS Enterprise Edition as standard user PL/SQL queries via an ODBC interface. The R-ODM interface code is a thin layer of logic and SQL that calls through an ODBC interface. R-ODM does not use or expose any Oracle product code, as it is completely an external interface and not part of any Oracle product. R-ODM is similar to the example scripts (e.g., the PL/SQL demo code) that illustrate the use of Oracle Data Mining, for example, how to create data mining models, pass arguments, retrieve results, etc.
    R-ODM is packaged as a standard R source package and is distributed freely as part of the R environment's Comprehensive R Archive Network (CRAN). For information about the R environment, R packages and CRAN, see www.r-project.org. R-ODM is particularly intended for data analysts and statisticians familiar with R but not necessarily familiar with the Oracle database environment or PL/SQL. It is a convenient environment in which to rapidly experiment with and prototype data mining models and applications. Data mining models prototyped in the R environment can easily be deployed in their final form in the database environment, just like any other standard Oracle Data Mining model.
    What is R?
    R is a system for statistical computation and graphics. It consists of a language plus a run-time environment with graphics, a debugger, access to certain system functions, and the ability to run programs stored in script files. The design of R has been heavily influenced by two existing languages: Becker, Chambers & Wilks' S and Sussman's Scheme. Whereas the resulting language is very similar in appearance to S, the underlying implementation and semantics are derived from Scheme. R was initially written by Ross Ihaka and Robert Gentleman at the Department of Statistics of the University of Auckland in Auckland, New Zealand. Since mid-1997 there has been a core group (the "R Core Team") who can modify the R source code archive. Besides this core group, many R users have contributed application code, as represented in the nearly 1,500 publicly available packages in the CRAN archive (which has shown exponential growth since 2001; R News Volume 8/2, October 2008). Today the R community is a vibrant and growing group of tens of thousands of users worldwide. It is free software distributed under a GNU-style copyleft, and an official part of the GNU project ("GNU S").
    Resources:
      R website / CRAN
      R-ODM

  • SQLAuthority News – Why I am Going to Attend #SQLPASS Summit 2012 – Seattle

    - by pinaldave
    I am going to Seattle to once again attend SQLPASS this year. This will be my fourth SQLPASS. Lots of people ask me why I go to SQLPASS every year. Well, there are so many different reasons for that. I go to SQLPASS because – I love it! Here are a few of the reasons I go to SQLPASS:
      Meet friends whom I have never met before
      Meet the community at large – it is fun to hang around with like-minded people
      Meet Rick Morelan – my book co-author and friend
      Attend various SQL parties – there are so many parties around – see the list below
      Explore various new tools from various third-party vendors
      Meet fellow Chapter Leaders and Regional Mentors
      And of course attend SQL Server learning sessions from industry-known experts.
    The three-day event will be marked by a lot of learning, sharing, and networking, which will help me increase both my knowledge and my contacts. PASS Summit provides me a golden opportunity to build my network as well as to identify and meet potential customers or employees. If I were a consultant or vendor looking for better career opportunities, PASS Summit would be the perfect platform to meet and show my skills to new potential customers and employers. Further, breakfasts, lunches, and evening receptions, which are included with registration, are meant to provide even more networking opportunities.
    At PASS Summit, I not only gain new ideas but also draw inspiration from top professionals and experts. Learning new things about SQL Server, interacting with different kinds of professionals, and sharing issues and solutions will definitely improve my understanding and turn me into a better SQL Server professional who can leverage and optimize SQL Server to improve business.
    I am going – are you joining?
    Note: This is re-blogged, with modification, from my two-year-old blog post on a similar subject.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

  • Chargeback and showback...both a 'throw back'

    - by llaszews
    I've been getting asked again by customers and partners about chargeback and showback in the cloud, so I thought I would blog my response to this question.
    Chargeback background, information and industry analysis:
    Cloud computing is all about shared resources. These shared resources are computer servers (including memory and CPU), network devices, hard disk storage, database servers, application servers, cooling, floor space, electricity and more. These resources are shared by departments within a company, or by a number of companies, when resources are hosted in the public or hybrid cloud. Currently, hosting providers that run other companies on their cloud platforms do not have an accurate way to measure the shared computing resources used by a specific user, let alone by a specific customer. Additionally, companies running their own cloud data centers, for private or hybrid clouds, have no way of measuring and charging back the departments in the company that are using these shared cloud resources. In both cases, the inability to determine shared resource costs and charge them back to the company, department or user consuming those resources limits any clear measure of business benefit and impacts a company's ability to measure return on investment (ROI).
    An IT chargeback system is an accounting strategy that applies the costs of IT services, hardware or software to the business unit in which they are used. This system contrasts with traditional IT accounting models in which a centralized department bears all of the IT costs in an organization and those costs are treated simply as corporate overhead. Showback involves showing the IT costs to a department or customer but not actually charging them for their IT usage. Showback is a gradual method of introducing chargeback into an enterprise; most companies implement a showback mechanism before a full chargeback system is put in place.
    Oracle chargeback product:
    Oracle Enterprise Manager provides tools for defining detailed chargeback plans spanning different metrics collected for each type of resource, as well as defining cost centers for grouping costs across multiple developers. Chargeback plans can use not only usage-based costs, but also configuration-based costs (e.g. version of the platform) or fixed costs (e.g. a flat-rate management fee). Chargeback has rich out-of-the-box reports. Trending reports show how charge and resource consumption vary over time, while summary reports show the breakdown of charges or usage by different dimensions such as cost center or target type. These reports help consumers understand how their charges relate to their consumption and also assist the IT department with budgeting and planning activities. With BI Publisher, the reports can be made available in a variety of formats such as PDF, HTML, Word, Excel or PowerPoint.

  • Common Areas For Securing Web Services

    The only way to truly keep a web service secure is to host it on a web server and then turn off the server. In real life no web service is 100% secure, but there are methodologies for increasing the security around web services. Consumers of a web service must adhere to the service's Service-Level Agreement (SLA). An SLA is a digital contract between a web service and its consumer. This contract defines what methods and protocols must be used to access the web service, along with the defined data formats for sending and receiving data through the service. If either party does not abide by the contract, then the service will not be accessible for consumption.
    Common areas for securing web services:
      Universal Description, Discovery and Integration (UDDI)
      Web Services Description Language (WSDL)
      Application level
      Network level
    "UDDI is a specification for maintaining standardized directories of information about web services, recording their capabilities, location and requirements in a universally recognized format." (UDDI, 2010) WSDL, on the other hand, is a standardized format for defining a web service. A WSDL describes the allowable methods for accessing the web service along with what operations it performs.
    At the application level, a web service can control access to its data by implementing its own security through various methodologies, but the most common method is to have a consumer pass in a token along with a system identifier so that the system can validate the user's access to any data or actions they may be requesting. Security restrictions can also be applied to the host web server of the service by restricting access to the site by IP address or login credentials. Furthermore, companies can block access to a service at the network level by using firewall rules and only allowing access to specific services on certain ports coming from specific IP addresses. This last methodology may require consumers to obtain a static IP address and then register it with the web service host so that they will be provided access to the information they wish to obtain.
    It is important to note that these areas can be secured in any combination, based on the security tolerance dictated by the publisher of the web service. That being said, the bare minimum security implementation must be at the application level, within the web service itself. Typically I create a security layer within a web service's exposed interface that requires a consumer identifier and a consumer token. This information is then used to authenticate the requesting consumer before the actual request is performed.
    References:
      UDDI. (2010). Retrieved 11 13, 2011, from LooselyCoupled.com: http://www.looselycoupled.com/glossary/UDDI
      Service-Level Agreement (SLA). (n.d.). Retrieved 11 13, 2011, from SearchITChannel: http://searchitchannel.techtarget.com/definition/service-level-agreement
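    As a rough, minimal sketch of the application-level check described above (SecureServiceLayer, ValidateConsumer and the sample identifiers are hypothetical illustrations, not part of any specific product or of the original article):
      using System;
      using System.Collections.Generic;

      public class SecureServiceLayer
      {
          // Hypothetical store of registered consumers and their issued tokens.
          private readonly Dictionary<string, string> _issuedTokens =
              new Dictionary<string, string> { { "SYSTEM-42", "d41d8cd9-token" } };

          // Every exposed operation calls this before doing any real work.
          private bool ValidateConsumer(string consumerId, string token)
          {
              string expected;
              return _issuedTokens.TryGetValue(consumerId, out expected) && expected == token;
          }

          public string GetAccountSummary(string consumerId, string token, string accountId)
          {
              if (!ValidateConsumer(consumerId, token))
                  throw new UnauthorizedAccessException("Unknown consumer or invalid token.");

              // ...perform the actual request only after the consumer is authenticated...
              return "summary for " + accountId;
          }
      }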

  • Microsoft Build 2012 Day 1 Keynote Summary

    - by Tim Murphy
    So I have finally dried the tears after watching the Keynote for Build 2012.  This wasn’t because it was an emotional presentation, but because for the second year I missed the goodies.  Each on site attendee got a Surface RT, a Lumia 920 and a voucher for 100GB of SkyDrive storage. The event was opened with the announcement that in the three days since the launch of Windows 8 over 4 million upgrades have been sold.  I don’t care who you are that is an impressive stat.  Ballmer then spent a fair amount of time remaking the case for the Windows and Windows Phone platforms similar to what we have heard over the last to launch events. There were some cool, but non-essential demos.  The one that was the most fun was the Perceptive Pixel 82” slate device.  At first glance I wondered why I would ever want such a device, but then Ballmer explained it’s possible use for schools and boardrooms.  The actually made sense. Then things got strange.  Steve started explaining features that developers could leverage.  Usually this type of information is left to the product leads.  He focused on the integration with the Charms features such as Search and Share. Steve “Guggs” Guggenheim showed off an app that would appeal to my kids from Disney called “Agent P” which is base on Phineas and Ferb.  Then he got to the meat of the presentation.  We found out that you could add a tile that can be used to sell ad space.  In the same vein we also found out that you could use Microsoft’s, Paypal’s or any commerce engine of your own creation or choosing. For those who are interested in sports and especially developing sports apps you would have found the small presentation from Michael Bayle of ESPN.  He introduced the ESPN app which has tons of features.  For the developers in the crowd he also mentioned that ESPN has an API available at developer.espn.com. During the launch events we were told apps were coming.  In this presentation we were actually shown a scrolling list of logos and told about a couple of them.  Ballmer mentioned specifically Twitter, SAP and DropBox.  These are impressive names that were just a couple of the list impressive names. Steve Ballmer addressed the question of why you should develop for the Windows 8 platform.  He feels that Microsoft has the best commercial terms for developers, a better way to build apps than other platforms and a variety of form factors.  His key point though was the available volume of customers given the current Windows install base and assuming even a flat growth of the platform.  This he backed with a promise that Microsoft is going to do better at marketing and you won’t be able to avoid the ads that they are bringing out. The last section of the key note was present by Kevin Gallo from the Windows Phone team.  This was the real reason I tuned into the webcast.  He impressed upon those watching that the strength of developing for the Microsoft platform is the common programming model that now exist.  While there are difference between form factor implementations you can leverage code across them. He claimed that 90% of developer requests for Windows Phone 8 had been implemented.  
These include: more controls with better performance, better live tiles including lock screen integration, speech support in custom apps, easier submission to the marketplace, app camera integration, VOIP and chat support, Bluetooth and NFC support, native C++ development, and Direct3D development.  The quote from Kevin that stood out for me was that "Take your Dramamine and buckle your seatbelt type of games are coming to Windows Phone 8".  He backed this up by displaying a list of game development frameworks and then having Unity come out and do a demo. Ok, almost done … The last two things of note for me were the announcement that the SDK is immediately available at dev.windowsphone.com and that they were reducing the cost of an individual developer account to $8 for the next 8 days. Let the development commence. del.icio.us Tags: Build 2012,Windows 8,Windows Phone 8,Windows Phone

    Read the article

  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes have been made to the SQL queries in the cube's data source view (DSV). This can be a problem as the SQL editor in the DSV is not the best interface to review code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one-by-one. What is worse, your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.
    The Trick
    When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV's XML and obtaining the SQL from that. This is when the breakthrough happened. Inspecting the DSV's XML I saw the things I was interested in were called TableType, DbTableName, FriendlyName and QueryDefinition. Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects, which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously and made the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

        using System;
        using System.Data;
        using System.IO;
        using Microsoft.AnalysisServices;

        ... class code removed for clarity

        // connect to the OLAP server
        Server olapServer = new Server();
        olapServer.Connect(config.olapServerName);
        if (olapServer != null)
        {
            // connected to server ok, so obtain reference to the OLAP database
            Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
            if (olapDatabase != null)
            {
                Console.WriteLine(string.Format("Successfully connected to '{0}' on '{1}'",
                    config.olapDatabaseName, config.olapServerName));

                // export SQL from each data source view (usually only one, but can be many!)
                foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
                {
                    Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

                    // for each table in the DSV, export the SQL in a file
                    foreach (DataTable dt in dsv.Schema.Tables)
                    {
                        Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                        // get name of the table in the DSV
                        // use the FriendlyName as the user inputs this and therefore has control of it
                        string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                        string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                        // delete the sql file if it exists
                        ... file deletion code removed for clarity

                        // write out the SQL to a file
                        if (dt.ExtendedProperties["TableType"].ToString() == "View")
                        {
                            File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                        }
                        if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                        {
                            File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                        }
                    }
                }
                Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
            }
        }

    Of course, if you are following industry best practice, you should be basing your cube on a series of views. This will mean that this utility will be of limited practical value unless, of course, you are inheriting a project and want to check if someone did the implementation correctly.

    Read the article

  • Bluetooth firmware problem in Ubuntu 13.04

    - by chanzerre
    I have a Dell Inspiron 15R 5520 laptop. Bluetooth is not working at all. rfkill list all gives:
        0: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
        1: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        2: brcmwl-0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
    dmesg|grep -i bluetooth gives:
        [ 13.644428] Bluetooth: Core ver 2.16
        [ 13.644445] Bluetooth: HCI device and connection manager initialized
        [ 13.644453] Bluetooth: HCI socket layer initialized
        [ 13.644455] Bluetooth: L2CAP socket layer initialized
        [ 13.644461] Bluetooth: SCO socket layer initialized
        [ 15.861363] Bluetooth: hci0 command 0x1003 tx timeout
        [ 15.903443] Bluetooth: can't load firmware, may not work correctly
        [ 17.332535] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        [ 17.332538] Bluetooth: BNEP filters: protocol multicast
        [ 17.332544] Bluetooth: BNEP socket layer initialized
        [ 17.393768] Bluetooth: RFCOMM TTY layer initialized
        [ 17.393781] Bluetooth: RFCOMM socket layer initialized
        [ 17.393783] Bluetooth: RFCOMM ver 1.11
    hciconfig gives:
        hci0:   Type: BR/EDR  Bus: USB
                BD Address: E0:06:E6:D5:DB:46  ACL MTU: 1021:8  SCO MTU: 64:1
                UP RUNNING PSCAN ISCAN
                RX bytes:687 acl:0 sco:0 events:56 errors:0
                TX bytes:2024 acl:0 sco:0 commands:52 errors:0
    I have visited the site http://wireless.kernel.org/en/users/Drivers/b43 and according to it lspci -vnn -d 14e4: gives:
        08:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)
                Subsystem: Dell Wireless 1704 802.11n + BT 4.0 [1028:0016]
                Flags: bus master, fast devsel, latency 0, IRQ 17
                Memory at c1500000 (64-bit, non-prefetchable) [size=32K]
                Capabilities: <access denied>
                Kernel driver in use: wl
    So I got my PCI-ID as 14e4:4365, which it says is not supported. The alternative is wl. What should I do? My Wi-Fi is working normally without any problems, but Bluetooth is not working. sudo dpkg -i wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb gives the following error:
        (Reading database ... 208543 files and directories currently installed.)
        Unpacking wireless-bcm43142-dkms (from wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb) ...
        Setting up wireless-bcm43142-dkms (6.20.55.19-1) ...
        Loading new wireless-bcm43142-6.20.55.19 DKMS files...
        Building only for 3.8.0-23-generic
        Building initial module for 3.8.0-23-generic
        Traceback (most recent call last):
          File "/usr/share/apport/package-hooks/dkms_packages.py", line 22, in <module>
            import apport
        ImportError: No module named apport
        Error! Bad return status for module build on kernel: 3.8.0-23-generic (x86_64)
        Consult /var/lib/dkms/wireless-bcm43142/6.20.55.19/build/make.log for more information.

    Read the article

  • Why Healthcare Today Needs BPM and SOA by Avio

    - by JuergenKress
    Within the past couple of years, the Patient Protection and Affordable Care Act has led to significant changes in the healthcare industry. A highly complex supply chain between patients, providers, buyers and insurance companies has led to a lack of overall collaboration when it comes to processes. The first open enrollment deadline for products on the Health Insurance Exchange has passed. So what now? Let’s take a brief look at how things have changed and what organizations can do to stay in (and ahead of) the game. New requirements, new processes Organizations that have not adapted processes to meet new regulatory requirements will fall further behind. New regulatory requirements effectively make some legacy applications obsolete, require batch processes to move to real time, and more. Business Process Management (BPM) can help organizations bring data processes in line while helping IT redesign processes rather than change code or replace existing applications. BPM fills in application gaps and links critical information systems for a more visible, efficient and auditable organization. Social and mobile solutions BPM technology also facilitates social and mobile solutions that can help meet new needs. Patients are dependent on a network of doctors, pharmacists, families and others. Social solutions can connect members of the patient’s community in ways never seen before - enabling real-time, relevant communication. Likewise, mobile technology supports social solutions, and BPM is the most efficient way to make processes simple and role-based. It unties medical professionals from their offices by enabling them to access timely information and alerts anywhere. Why SOA is also needed Integrating BPM with Service-Oriented Architecture (SOA) also plays a critical role in the development of healthcare solutions that work. SOA can create a single end-to-end process, integrate applications and move them into a common workflow. While SOA enables the reuse of existing IT infrastructure, BPM supports the process optimization, monitoring and social aspects. SOA and BPM applications support business analysts as they model, create and monitor processes - providing real-time insight and a unified workflow of process activities. Read “New” Solutions for a New Healthcare Landscape on our blog to learn more. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Avio,Healthcare,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How to Mentor a Junior Developer

    - by Josh Johnson
    This title is a little broad but I may need to give a little background before I can ask my question properly. I know that similar questions have been asked here already. But in my case I'm not asking if I should be mentoring someone or if the person is a good fit for being a software developer. That is not my place to judge. I have not been asked outright, but it is apparent that I and other fellow senior developers are to mentor the new developers that start here. I have no problem with this whatsoever and, in many cases, it lends me a fresh perspective on things and I end up learning in the process. Also, I remember how beneficial it was in the beginning of my career when someone would take some time to teach me something. When I say "new developer" they could be anywhere from fresh out of college to having a year or two of experience. Recently and in the past we've had people start here who seem to have an attitude toward development/programming which is different from mine and hard for me to reconcile; they seem to extract just enough information to get the task done but not really learn from it. I find myself going over and over the same issues with them. I understand that part of this could be a personality thing, but I feel it's my job to do my best and sort of push them out of the nest while they're under my wing, so to speak. How can I impart just enough information so that they will learn but not give so much as to solve the problem for them? Or perhaps: what is the proper response to questions that are designed to take the path of least resistance, a response that, in essence, forces them to learn instead of taking the easy way out? These questions are probably more general teaching questions and don't have that much to do specifically with software development. Note: I do not get a say in what tasks they are working on. Management doles the tasks out, and it could be anything from a very simple bug fix to starting an entire application by themselves. While this is not ideal by any means and obviously presents its own gauntlet of challenges, I feel it's a topic best left for another question. So the best I can do is help them with the problem at hand, try to help them break it down into simpler problems, and also check their commit logs and point out mistakes that they made. My main objectives are to: help them out and give them the tools they need to start becoming more self-reliant; steer them in the right direction and break bad development habits early on; and lessen the amount of time I spend with them (the personality type described above seems to need much more one-on-one time and does not do well over IM or email. While that's generally fine, I can't always stop what I'm working on, break my stride, and help them debug an error on a moment's notice; I have my own projects that need to get done).

    Read the article

  • Come see us at JavaU at JavaOne!

    - by tmcginn
    In just a little under a month, JavaOne will be in full swing (no pun intended) and thousands of Java developers will gather to hear the latest Java news, immerse themselves in Java technology and learn some new things. This year, I am fortunate enough to be able to attend, along with my Java curriculum development colleagues Matt Heimer and Mike Williams. We start our week at JavaOne teaching a one-day session at JavaU on Sunday morning. If you have never attended a training session through JavaU, you should check it out. There are some terrific sessions this year, and it might help to justify your trip to JavaOne if you can say it was for training! This year I am teaching a one-day session on Java SE 7 New Features - a great session for anyone interested in the specific details of what is new in Java SE 7. Matt is teaching a one-day session on Developing Portable Java EE Applications with the Enterprise JavaBeans 3.1 API and Java Persistence 2.0 API, and Mike is doing a one-day session on developing rich client applications with Java SE 7 using JavaFX 2. I asked Matt and Mike to tell me what developers can expect from their sessions. Matt: "My session will get you up to speed on everything you need to know to create portable Java EE 6 applications using EJB 3.1 and JPA 2. I am going to cover why everyone can benefit from using EJBs (and why developers should relearn them if they haven't looked at them for years). Students who attend my session will see JPA examples showcasing how to use relational databases in enterprise applications without programming to JDBC and without writing SQL statements. EJB and JPA benefit from being paired together, so I will also show how transaction management is easier in a container. I encourage students to bring a laptop and code as they learn!" Mike: "My session covers how to develop a rich client application using JavaFX 2. Starting with the basic concepts of JavaFX, students will see how a JavaFX application is built from its layout, to its controls, to its data structures. In addition, more advanced controls like charts, smart tables, and transitions will be added to the application. Finally, a quick review of JavaFX concurrency and data binding is included. Blended with the core concepts, the session will include some of the latest JavaFX technology. This includes using Scene Builder to create a JavaFX UI and connecting your XML UI definition to Java code. In addition, packaging of the JavaFX application will be covered, with some examples of the new native packaging features." As I mentioned, my session covers the changes in Java SE 7, including the language changes that were voted into Java SE 7 from Project Coin. I will also look at how you can take advantage of the new I/O library (NIO.2) for writing applications that work with files, directories and file systems. We will also look at the changes in asynchronous I/O that are part of NIO.2. We will spend some time looking at the changes to the Java Virtual Machine as well, including support for dynamically typed languages (JSR-292). We will spend some time looking at the Java concurrency enhancements (JSR-166), including the new Fork/Join framework. And we'll round out the day with a look at changes in Swing, XML and a number of smaller changes in the APIs. And, if these topics aren't grabbing your interest, take a look at the other 10 sessions that range from topics on architecture to how to pass the Oracle Certified Programmer I and II exams. 
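    To give a flavour of the Java SE 7 material, here is a small, self-contained sketch that combines two of the Project Coin language changes (the diamond operator and try-with-resources) with the NIO.2 file API. The file name and the error-counting logic are made up purely for illustration; they are not taken from the session materials.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.List;

        public class Se7Sampler {
            public static void main(String[] args) throws IOException {
                // NIO.2 (JSR-203): Path and Files replace much of java.io.File
                Path log = Paths.get("server.log");   // hypothetical input file

                // Project Coin: the diamond operator infers the type argument
                List<String> errors = new ArrayList<>();

                // Project Coin: try-with-resources closes the reader automatically,
                // even if readLine() throws
                try (BufferedReader reader = Files.newBufferedReader(log, StandardCharsets.UTF_8)) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        if (line.startsWith("ERROR")) {
                            errors.add(line);
                        }
                    }
                }
                System.out.println(errors.size() + " error lines found");
            }
        }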
See you soon!

    Read the article

  • Responding to the page unload in a managed bean

    - by frank.nimphius
    Though ADF Faces provides an uncommitted data warning functionality, developers may have the requirement to respond to the page unload event within custom application code, programmed in a managed bean. The af:clientListener tag that is used in ADF Faces to listen for JavaScript and ADF Faces client component events does not provide the option to listen for the unload event. So this often recommended way of implementing JavaScript in ADF Faces does not work for this use case. To send an event from JavaScript to the server, ADF Faces provides the af:serverListener tag that you use to queue a CustomEvent that invokes a method in a managed bean. While this is part of the solution, during testing it turns out the browser's native JavaScript unload event itself is not very helpful for sending an event to the server using the af:serverListener tag. The reason for this is that when the unload event fires, the page already has been unloaded and the ADF Faces AdfPage object needed to queue the custom event already returns null. So the solution to the unload page event handling is the beforeunload event, which I am not sure all browsers support. I tested IE and FF and obviously they do. To register the beforeunload event, you use an advanced JavaScript programming technique that dynamically adds listeners to page events.

        <af:document id="d1" onunload="performUnloadEvent" clientComponent="true">
          <af:resource type="javascript">
            window.addEventListener('beforeunload',
                                    function (){performUnloadEvent()}, false)

            function performUnloadEvent(){
              //note that af:document must have clientComponent="true" set
              //for JavaScript to access the component object
              var eventSource = AdfPage.PAGE.findComponentByAbsoluteId('d1');
              //var x and y are dummy variables obviously needed to keep the page
              //alive for as long as it takes to send the custom event to the server
              var x = AdfCustomEvent.queue(eventSource,
                                           "handleOnUnload",
                                           {args:'noargs'}, false);
              //replace args:'noargs' with key:value pairs if your event needs to
              //pass arguments and values to the server side managed bean.
              var y = 0;
            }
          </af:resource>
          <af:serverListener type="handleOnUnload"
                             method="#{UnloadHandler.onUnloadHandler}"/>
          <!-- rest of the page goes here … -->
        </af:document>

    The managed bean method called by the custom event has the following signature:

        public void onUnloadHandler(ClientEvent clientEvent) {
        }

    I don't really have a good explanation for why the JavaScript variables "x" and "y" are needed, but this is how I got it working. To me it once again shows how fragile custom JavaScript development is and why you should stay away from using it whenever possible. Note: If the unload event is produced through navigation in JavaServer Faces, then there is no need to use JavaScript for this. If you know that navigation is performed from one page to the next, then the action you want to perform can be handled in JSF directly in the context of the lifecycle.
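    As a rough illustration of what the server side of this pattern can look like, here is a sketch of a managed bean matching the handler signature above. The bean name UnloadHandler corresponds to the af:serverListener binding; the getParameters() call is how I would expect to read the queued key/value pairs (check the ADF Faces Javadoc for the exact API), and the logging is an assumption rather than part of the original article.

        import java.util.Map;
        import oracle.adf.view.rich.render.ClientEvent;

        // Managed bean referenced as #{UnloadHandler.onUnloadHandler} in the
        // af:serverListener above (registered, for example, in adfc-config.xml).
        public class UnloadHandler {

            // Invoked when the "handleOnUnload" custom event reaches the server.
            public void onUnloadHandler(ClientEvent clientEvent) {
                // The key/value pairs queued by AdfCustomEvent.queue(...) arrive here.
                Map<String, Object> params = clientEvent.getParameters();
                Object args = params.get("args");   // 'noargs' in the sample above

                // Illustrative only: release locks, write an audit record, etc.
                System.out.println("Page unloading, args = " + args);
            }
        }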

    Read the article

  • Getting WLAN on my Laptop to work (Medion MD98300)

    - by Anand Böhmer
    Dear Ubuntu Community, I am having difficulties getting the WLAN adapter on my Medion 98300 laptop to work. The WLAN card seems to be connected through an internal USB interface and the card itself had shown up as a wireless network device while installing Ubuntu. I have tried a few things earlier, but none of my Google searches have brought me to a working solution... I am quite new to the Linux system and only know a couple of terminal commands so far, so I probably have missed out on a few possible solutions. Maybe you can help me? Thank you very much in advance! A few minimal technical details: AMD Turion 64 X2 Dual-Core Mobile Technology TL-50, NVIDIA GeForce Go 6150, SanDisk 64GB SSD, 2GB RAM DDR2 667, nForce chipset (I forgot the version, but deducible from the GPU I guess), WiFi: ZyDAS ZD1211B 802.11g. Thanks a lot again! :) UPDATE: I tried around a little myself and found a guide on the Linux Mint forums that helped! I already had tried to install the linux backport modules etc. What I finally did was update the linux firmware and run the following command: echo "options acer_wmi wireless=1" | sudo tee /etc/modprobe.d/acer_wmi.conf and rebooted. Now I found and could connect to networks, but unfortunately the link quality was very bad, around 40 to 50, even though my router is running at high power and is only 6 meters away! I then switched a few channels, but that did not improve much. Before, under Windows, I had a very good link quality and had the entire 16 Mbit/s internet connection at my disposal; now I can only get about 3-5 Mbit/s. Better than nothing, but still pretty bad! The "TX power" is fixed at 20dBm and iwconfig says that the "Power Management" is off... Maybe the power of the module is set too low?... UPDATE2: I figured that 20dBm is a normal power output. I even tried to change the power using iwconfig wlan0 txpower INTEGERHERE but obviously my card does not support more than 20. More would probably be illegal as well, so I won't even use more than 20. I guess that I will have to figure out a way, or maybe just switch cards. Are the mainboard USB connectors on a laptop of the same properties as the standard external ones? If so, I could simply solder a micro wireless N adapter onto the board :)

    Read the article
