Search Results

Search found 18450 results on 738 pages for 'mobile programming'.

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use - a type of Software as a Service (SaaS) for IT workers, not necessarily for end users. Among those offerings is the "Data Hub" - a codename for a project that, ironically, actually does what the codename says.

    In many of our organizations we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments, and even individuals, have stored the same data more than once, and in some cases made changes to one of the copies. It's difficult to know which location or version of the data is authoritative. Then there's the problem of accessing the data. It's fairly straightforward to publish a database, share, or other location internally to store the data. But then you have to figure out who owns it and how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally - bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from internal sources with external data, which raises the security, connection-string, and discovery questions all over again.

    Enter the Data Hub. This is an online offering where you assign an administrator and data stewards. You import the data into the service, and it's available to you - and only to you and your organization, if you wish. The basic steps are to set up the portal for your company, assign administrators and permissions, and then define data areas and import data into them. From there you make the data discoverable, and your users have multiple options for accessing it. You're then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy - but using the Data Hub is actually pretty simple. I'll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you'll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface - nothing to install, configure, update, or manage.

    After the data is entered and you've assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal, which has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel, which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are, given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924. You can make HTTP calls instead of code, and the data can even be exposed as an OData feed.

    As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
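
    As a rough illustration of the "simple API calls" and OData-feed options the post mentions: the sketch below is not taken from the Data Hub documentation - the service root, collection name, and account key are placeholders, and it assumes the feed follows ordinary OData conventions with Basic authentication (as the Azure Marketplace did). A C# client along these lines could pull a few rows and print the raw payload:

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Text;
        using System.Threading.Tasks;

        class DataHubFeedSample
        {
            // Placeholder values for illustration only; a real feed supplies its own
            // service root, collection name, and credentials.
            const string ServiceRoot = "https://example-datahub.local/odata/";
            const string Collection  = "SalesByRegion";
            const string AccountKey  = "<your-account-key>";

            static async Task Main()
            {
                using (var client = new HttpClient())
                {
                    // Assumption: Basic auth with the account key, Marketplace-style.
                    var token = Convert.ToBase64String(
                        Encoding.ASCII.GetBytes("accountKey:" + AccountKey));
                    client.DefaultRequestHeaders.Authorization =
                        new AuthenticationHeaderValue("Basic", token);
                    client.DefaultRequestHeaders.Accept.Add(
                        new MediaTypeWithQualityHeaderValue("application/json"));

                    // A simple OData query: the first 10 rows of the collection.
                    string url = ServiceRoot + Collection + "?$top=10";
                    string payload = await client.GetStringAsync(url);
                    Console.WriteLine(payload);
                }
            }
        }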

    Read the article

  • Oracle at ARM TechCon

    - by Tori Wieldt
    ARM TechCon is a technical conference for hardware and software engineers, held Oct. 30 - Nov. 1 in Santa Clara, California. Days two and three of the conference are geared towards systems designers and software developers - those interested in building ARM processor-based modules, boards, and systems. The hardware and software topics range from low-power design, networking and connectivity, and open source software to security.

    Oracle is a sponsor of ARM TechCon, and will present three Java sessions and a hands-on lab:

    "Do You Like Coffee with Your Dessert? Java and the Raspberry Pi" - The Raspberry Pi, an ARM-powered single-board computer running a full Linux distro off an SD card, has caused a huge wave of interest among developers. This session looks at how Java can be used on a device such as this. Using Java SE for embedded devices and a port of JavaFX, the presentation includes a variety of demonstrations of what the Raspberry Pi is capable of. The Raspberry Pi also provides GPIO line access, and the session covers how this can be used from Java applications. Prepare to be amazed at what this tiny board can do. (Angela Caicedo, Java Evangelist)

    "Modernizing the Explosion of Advanced Microcontrollers with Embedded Java" - This session explains why Oracle Java ME Embedded is the right choice for building small, connected, and intelligent embedded solutions, such as industrial control applications, smart sensing, wireless connectivity, e-health, or general machine-to-machine (M2M) functionality - extending your business to new areas, driving efficiency, and reducing cost. The new Oracle Java ME Embedded product brings the benefits of Java technology to microcontroller platforms. It is a full-featured, complete, compliant software runtime with value-add features targeted at the embedded space, and it has the ability to interface with additional hardware components, remote manageability, and over-the-air software updates. It is accompanied by a feature-rich set of tools free of charge. (Fareed Suliman, Java Product Manager)

    "Embedded Java in Smart Energy and Healthcare" - This session covers embedded Java products and technologies that enable smart and connected devices in the Smart Energy and Healthcare/Medical industries. (Kevin Lee)

    "Java SE Embedded Development on ARM Made Easy" - This hands-on lab aims to show that developers already familiar with the Java develop/debug/deploy lifecycle can apply those same skills to develop Java applications, using Java SE Embedded, on embedded devices. (Jim Connors)

    In the Oracle booth #603, you can see the following demos:

    Industry Solutions with Java - This exhibit consists of a number of industry solutions and how they can be powered by Java technology deployed on embedded systems. Examples in consumer devices, home gateways, mobile health, smart energy, industrial control, and tablets, all powered by applications running on the Java platform, are shown. Some of the solutions demonstrate the ability of Java to connect intelligent devices at the edge of the network to the datacenter or the cloud as a total end-to-end platform.

    Java in M2M with Qualcomm - This station exhibits a new M2M solutions platform co-developed by Oracle and Qualcomm that enables wireless communications for embedded smart devices powered by Java, and shares the types of industry solutions that are possible. In addition, a new platform for wearable devices based on the ARM Cortex M3 platform is exhibited.

    Why Java for Embedded? - Demonstration platforms show how traditional development environments, tools, and Java programming skills can be used to create applications for embedded devices. The advantages that Java provides because of the runtime's abstraction of software from hardware, modularity and scalability, security, and application portability and manageability are shared with attendees.

    Drop by and see why Java is an optimal applications platform for embedded systems.

    Read the article

  • BI&EPM in Focus November 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers
    · San Diego Unified School District Harnesses Attendance, Procurement, and Operational Data with Oracle Exalytics, Generating $4.4 Million in Savings
    · NilsonGroup chooses Oracle Exalytics In-Memory Machine as their solution to access critical data to keep its stores competitive with real-time Mobile BI: Video
    · Nykredit, in the Danish Financial Sector, describes their experiences from testing the Exalytics Business Intelligence Machine: Video
    · Sodexo chose Oracle Exalytics as their business analytics platform: Video
    · AstraZeneca (US, Canada, MedImmune) Improves Insight, Analytics, and Reporting, Enterprisewide with Unified Planning on a Single Platform
    · Experian Consolidates Reporting Systems for One, Global View of Financial Data and Improves Planning for Continued Growth
    · Munchkin Gives its Line of Children's Products Plenty of Room to Grow in an Upgraded Enterprise Application Environment
    · Top 20 EPM Customer Snapshots, in One Handy Booklet (link)
    · Customer and Partner Successes: Link to Complete Archive

    Enterprise Performance Management
    · Nov 15: Is Hope and Email the Core of your Reconciliation Process? (link)
    · Replay: Integrated Business Planning, Featuring Leggett & Platt (link)
    · Whitepaper: The New Competitive Advantage - Strategic CIO's Embrace the Cloud (link)
    · Press: Oracle Enterprise Performance Management Driving Significant Improvements in Budget Management and Reporting for Public Sector Organizations (link)
    · Enterprise Performance Management Video Feature Overviews, Now Available on YouTube (link)
    · NEW Solution Brief - Oracle Hyperion Planning on the Oracle Exalytics In-Memory Machine (link)
    · For Insurance sector: Datasheet for new release V2.0 - Oracle Quantitative Management and Reporting for Solvency II (link)
    · Whitepaper FSN 2012: Managing Risk and Uncertainty, an Executive's Guide to Integrated Business Planning (link)
    · NEW Datasheet for Oracle Planning and Budgeting Cloud Service (link)
    · Blog: Planning in the Cloud - For Real Business Intelligence
    · Press: Latest Release of Oracle Exalytics In-Memory Machine Software Enables Customers to View and Analyze Data at the Speed of Business (link)
    · Press: New Release (11.1.1.6.2BP1) of Oracle Business Intelligence Enables Users to Quickly Access and Analyze Key Business Information, Anytime, Anywhere (link)
    · Mark Hurd Interviewed on USA Today about Big Data & Analytics (link)
    · Whitepaper: Mastering Big Data - CFO Strategies to Transform Insight into Opportunity (link)
    · Nov 15: Improve Asset Utilization. Achieve Greater Profitability: Oracle Enterprise Asset Management Analytics (link)
    · Replay: Oracle Enterprise Asset Mgmt Analytics and Oracle Manufacturing Analytics (link)
    · Overload to Impact: An Industry Scorecard on Big Data Business Challenges (link)
    · Webcast Replay: Overview of Oracle Endeca Informational Discovery (link)
    · OBIEE 11g: Required and Recommended Patches and Patch Sets (link)

    Read the article

  • Wireless device bug on 13.10. BCM4313 registers as eth1 instead of wlan0 and no internet access

    - by user205691
    My hotel WiFi requires me to log in with a username and password after connecting to the hotspot: the browser opens a page with username and password fields, and after logging in I should get internet access. Unfortunately, Firefox and Chromium don't seem to work here. I don't think it is browser related, but rather a WiFi driver or network manager setting that is creating this issue. I am using the proprietary Broadcom 802.11 STA wireless driver; I tried the open source driver as well, but got the same result.

    The login page itself takes a very long time to appear, and after entering the credentials it keeps loading forever. It is the same in every other browser, so I don't think it's a browser issue but something to do with the WiFi settings or network manager. Interestingly, I am able to connect to WiFi networks with a WPA key without any issue; ad-hoc hotspots are the problem, and that is my regular home network. I have tried repeating the same hotspot after login from my Android phone, by creating a virtual repeater with a WPA key, and it works - I can browse on Ubuntu using this method, but I can't keep doing this regularly. I tried loading the same hotel login page while browsing through the repeater WiFi created on my phone (screenshot linked below), and the page loads quickly and easily. So does this mean something is wrong with the way network manager handles ad-hoc connectivity and login? I installed wicd, but it crashes on startup and is not helpful at all.

    Screenshot of Chromium page; Login page with repeated hotspot.

    ifconfig in my terminal gives:

        krishna@krishna-HP-ENVY-4-Notebook-PC:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 28:92:4a:1d:54:fa
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        eth1      Link encap:Ethernet  HWaddr e0:06:e6:89:fa:49
                  inet addr:10.24.1.71  Bcast:10.24.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::e206:e6ff:fe89:fa49/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:10940 errors:0 dropped:0 overruns:0 frame:348431
                  TX packets:6611 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7669631 (7.6 MB)  TX bytes:864195 (864.1 KB)
                  Interrupt:17

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:2146 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2146 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:166120 (166.1 KB)  TX bytes:166120 (166.1 KB)

    I wonder why the wireless is configured under eth1. I think this was a bug with earlier Ubuntu versions, but is this normal in 13.10, or is there a wrong configuration here? The wireless device in my PC is a BCM4313 and I have installed bcmwl-kernel-source and wireless-tools to support it. I also reinstalled the bcmwl kernel module as suggested on the Broadcom website, via Synaptic package manager. Nothing has changed the situation.

    I tried booting into a live USB, and there ifconfig shows the wireless under wlan0, the wireless connects, and the login page loads. So is the problem with the device configuration? I really want to get this fixed before I start configuring other things on the laptop, like the ATI graphics and overheating - lack of internet access is too bad a bug for me. Any help is appreciated!

    Read the article

  • How can I be prepared to join a company?

    - by Aerovistae
    There's more to it than that, but this title was the best way I could think of to sum it up. I'm a senior in a good computer science program, and I'm graduating early. About to start interviews and all whatnot. I'm not a super-experienced programmer, not one of those people who started in middle school. I'm decent at this, but I'm not among the best, not nearly. I have to do an awful lot of googling. So today I'm meeting some fellow for lunch at a campus cafe to discuss some front-end details when this tall, good-looking guy begs pardon, says he's new to campus, says he's wondering if we know where he can go to sign up for recruiting developers. Quickly evolves into long conversation: he's the CEO of a seems-to-be-doing-well start-up. Hiring passionate interns and full-times. Sounds great! I take one look at his site on my own computer later, immediately spot a major bug. No idea how to fix it, but I see it. I go over to the page code, and good god. It's the standard amount of code you would expect from a full-scale web application, a couple dozen pages of HTML and scripts. I don't even know where to start reading it. I've built sites from scratch, but obviously never on that scale, nor have I ever worked on one of that scale. I have no idea which bit might generate the bug. But that sets me thinking: How could someone like me possibly settle into an environment like that? A start-up is a very high-pressure working environment. I don't know if I can work at that pace under those constraints-- I would hate to let people down. And with only 10 employees, it's not like anyone has much time to help you get your bearings. Somewhere in there is a question. Can you see it? I'm asking for general advice here. Maybe even anecdotal advice. Is joining a start-up right out of college a scary process? Am I overestimating what it would take to figure out the mass of code behind this site? What's the likelihood a decent but only moderately-experienced coder could earn his pay at such a place? For instance, I know nothing of server-side/back-end programming. Never touched it. That scares me.

    Read the article

  • Windows Phone 7 and WS-Trust

    - by Your DisplayName here!
    A question that I often hear these days is: "Can I connect a Windows Phone 7 device to my existing enterprise services?". Well - since most of my services are typically issued-token based, this requires support for WS-Trust and WS-Security on the client. Let's see what's necessary to write a WP7 client for this scenario.

    First I converted the Silverlight library that comes with the Identity Training Kit to WP7. Some things are not supported in WP7 WCF (like message inspectors and some client runtime hooks), but besides that this was a simple copy+paste job. Very nice!

    Next I used the WSTrustClient to request tokens from my STS:

        private WSTrustClient GetWSTrustClient()
        {
            var client = new WSTrustClient(
                new WSTrustBindingUsernameMixed(),
                new EndpointAddress("https://identity.thinktecture.com/…/issue.svc/mixed/username"),
                new UsernameCredentials(_txtUserName.Text, _txtPassword.Password));
            return client;
        }

        private void _btnLogin_Click(object sender, RoutedEventArgs e)
        {
            _client = GetWSTrustClient();

            var rst = new RequestSecurityToken(WSTrust13Constants.KeyTypes.Bearer)
            {
                AppliesTo = new EndpointAddress("https://identity.thinktecture.com/rp/")
            };

            _client.IssueCompleted += client_IssueCompleted;
            _client.IssueAsync(rst);
        }

    I then used the returned RSTR to talk to the WCF service. Due to a bug in the combination of the Silverlight library and the WP7 runtime, symmetric key tokens seem to have issues currently; bearer tokens work fine. So I created the following binding for the WCF endpoint specifically for WP7:

        <customBinding>
          <binding name="mixedNoSessionBearerBinary">
            <security authenticationMode="IssuedTokenOverTransport"
                      messageSecurityVersion="WSSecurity11 WSTrust13 WSSecureConversation13 WSSecurityPolicy12 BasicSecurityProfile10">
              <issuedTokenParameters keyType="BearerKey" />
            </security>
            <binaryMessageEncoding />
            <httpsTransport/>
          </binding>
        </customBinding>

    The binary encoding is not necessary, but will speed things up a little for mobile devices. I then call the service with the following code:

        private void _btnCallService_Click(object sender, RoutedEventArgs e)
        {
            var binding = new CustomBinding(
                new BinaryMessageEncodingBindingElement(),
                new HttpsTransportBindingElement());

            _proxy = new StarterServiceContractClient(
                binding,
                new EndpointAddress("…"));

            using (var scope = new OperationContextScope(_proxy.InnerChannel))
            {
                OperationContext.Current.OutgoingMessageHeaders.Add(new IssuedTokenHeader(Globals.RSTR));
                _proxy.GetClaimsAsync();
            }
        }

    That works. The sample is available for download.

    Read the article

  • Moving abroad - Relocation advice

    - by Tim Koekkoek
    Oracle offers graduates from different European countries the opportunity to start their career abroad. Some already have experience with living abroad, having done an exchange semester or internship in another country; for others it is the first time they move abroad. Rui started in October 2011 as a Business Development Consultant in Dublin and moved from Portugal to Ireland to start his career. For those planning to leave their home country and who want to work abroad, he shares some tips and tricks in this article.

    When you're faced with an opportunity like this, lots of things will come to your mind. Sometimes it can be very exciting, or even stressful.

    1. First of all, try to relax. If you are certain you are moving abroad, all you need to do is some research about the country where you're going to live: get to know its culture (gastronomy, important dates and events, its economy, and effective ways to keep in touch with your family and friends, such as mobile companies and Internet services), and start to work out the best locations (with good access) where you could or should live. Don't forget that initially you may be limited by transport, so it is important to explore the ideal places for you. During this time, Oracle provides everything you'll need (papers, documents, etc.) to cross borders.

    2. When you arrive, you realize that you are in a new country, in a new place, where all things (or most) are unknown to you. Before you panic, try to see it as a new challenge where new opportunities will come. It's not always easy, I know, but the very best a new place has to give you is the opportunity to understand a new culture, get to know other people and other ways of working, and grow both as a person and professionally. So you have nothing to lose in this kind of experience.

    3. When you arrive at Oracle, there's a fantastic team that will help you with settling in: HR, Payroll, Relocation, IT. In my case, Oracle helped me with the relocation; they supported me in arranging everything, such as helping out with all the paperwork and finding a new apartment. As you can see, they will do their best to help you be successful!

    4. Engage with your new co-workers. Going to a place where you don't know anyone can be tough sometimes, but see it as an opportunity to meet people from all over the world and share experiences. Embrace it.

    5. Plan ahead, try to get as much information as possible, and use it. Oracle is a multinational enterprise that will let you get to know a new labour market and give you the flexibility you need to understand your view of employment and occupation, giving you the very best opportunities to join different teams and working areas, so that you can work where you fit best.

    Good luck! If you're thinking about starting a career abroad, the following article can be very useful to you: http://www.overseasdigest.com/movingtips.htm

    Interested in starting your career at Oracle like Rui has? Please have a look at https://campus.oracle.com for all of our latest vacancies.

    Read the article

  • Brain Teaser: How Did I Do This (Part 2: The Solution)

    - by Geertjan
    In Part 1: The Challenge, published this time last week, I introduced a "brain teaser". The brain teaser asks you to figure out how to allow images and other files to be meaningfully dropped onto a NetBeans Platform application, i.e., on the drop something useful should happen with the dropped file: if the file is an image, the image should open in the IDE; if the file is a PDF document, the PDF viewer should open externally; if the file is a text file, it should open as text in the IDE, etc.

    Solution. And here is the solution: http://bits.netbeans.org/dev/javadoc/org-openide-windows/org/openide/windows/ExternalDropHandler.html

    When an implementation of the "ExternalDropHandler" class is available in the global Lookup, and an object is being dragged over some part of the main window, the window system may call the methods of this class to decide whether it can accept or reject the drag operation. And when the object is actually dropped, this class will be asked to handle the drop.

    OK, so go ahead and implement the above class and put it into the Lookup. Or... guess what? The NetBeans Platform has a default implementation of the above class, appropriately named "DefaultExternalDropHandler". Not only is this useful for learning how to implement the ExternalDropHandler class (i.e., by reading the source here): you can simply include the module that contains this class in your own NetBeans Platform application, and then your application will be able to receive external drag/drop events and do something meaningful with them, thanks to the DefaultExternalDropHandler. Do this:

    1. Open your NetBeans Platform application in NetBeans IDE.
    2. Right-click the application in the Projects window and choose Properties.
    3. In the Libraries tab, expand the "ide" cluster, and select "User Utilities". (That's where "DefaultExternalDropHandler.java" is found and registered in the Lookup.)
    4. Click the "Resolve" button, if it appears, because some additional related modules may need to be included, if they haven't been included yet.
    5. Again in the "ide" cluster in the Libraries tab, select "Image". That's the Image Editor.
    6. Click OK.
    7. Run the application.
    8. Drag an image or some other type of file into your application, from outside the application, and you'll see the application try to handle the drop. If the file being dragged is an image, it will open in the Image Editor, which you included in step 5 of these instructions.

    Hurray, you're done. Without any programming at all, you've added a cool new feature to your application.

    Read the article

  • Attaching new animations onto skeleton via props, a good idea?

    - by Cardin
    I'm thinking of coding a game with an idea of mine. I've coded 2D games before, but I'm new to 3D programming, so I'd like to ask if this idea of mine is feasible or out of my depth.

    I'm making a game where there are many different characters for the player to choose from (JRPG style). So to save time, I have an idea of creating many varied characters using a completely naked body mesh and animation skeleton, standardised across all characters. For example, by placing different hair, boots, and armor props on the character mesh, new characters can be formed - kinda like playing dress-up with a barbie doll. I'm thinking this can be done by having a bone on the prop that I can programmatically attach to the main mesh (see the attachment sketch below).

    Also, I plan to have some props add new animations to the base skeleton, so equipping particular props would give it new attack, damage, and idle animations. This is because I can't expect the character to have the same swinging animation whether he has a big sword or an axe. I think this might be possible if the prop has its own instance of the animation skeleton with only the new animations, and the base body mesh is parented to this new skeleton. So the base body mesh only has the basic animations; other animations come from the props.

    My concerns are: 1) the props might not attach to the mesh properly and might jitter a lot; 2) since prop and body are animated differently, the props and base mesh will cause visual artefacts, like the naked thighs showing through the pants when the character walks; 3) a custom pipeline has to be developed to export skeletons without meshes, and also to attach the base body mesh to a new skeleton at runtime in the game.

    So my question: are these features considered 'easy' to code? Or am I trying to do something few have ever succeeded with on their own? It feels like all of this can be done given enough time, and I know I definitely have to do a bit of bone matrix calculation, but I really don't want to drag out the development timeline unnecessarily by coding mathematically intense things or analyzing how to parse 3D export formats. I'm currently only at the Game Design stage, so if these features aren't a good idea, I can simply change the design of the game.

    (Unrelated to the question) I could always, as a last resort, give the characters predetermined outfit and weapon selections so as to animate everything manually.
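
    For the first concern, engines that expose the skeleton as a transform hierarchy usually let you simply parent the prop to the bone it should follow, so it inherits that bone's animated motion every frame. The sketch below is illustrative only: it assumes a Unity-style engine, and the bone name, prefab, and class are hypothetical stand-ins rather than anything from the question.

        using UnityEngine;

        // Minimal sketch: parents a prop (e.g. a sword) to a named bone so it
        // follows that bone's animated position, rotation, and scale.
        public class PropAttacher : MonoBehaviour
        {
            public Transform characterRoot;     // root of the shared, standardised skeleton
            public GameObject swordPrefab;      // hypothetical prop prefab
            public string boneName = "hand_R";  // hypothetical bone name in the rig

            void Start()
            {
                // Find the bone transform inside the skeleton hierarchy.
                Transform bone = FindDeepChild(characterRoot, boneName);
                if (bone == null) return;

                // Instantiate the prop and parent it to the bone. Passing 'false'
                // keeps the prefab's authored local offset relative to the bone,
                // which is what lines the grip up with the hand.
                GameObject sword = Instantiate(swordPrefab);
                sword.transform.SetParent(bone, false);
            }

            // Simple recursive search for a child transform by name.
            static Transform FindDeepChild(Transform parent, string name)
            {
                foreach (Transform child in parent)
                {
                    if (child.name == name) return child;
                    Transform result = FindDeepChild(child, name);
                    if (result != null) return result;
                }
                return null;
            }
        }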

    Read the article

  • Innovation for Retailers

    - by David Dorf
    One of my main objectives for this blog is to point out emerging technologies and how they might apply to the retail industry.  But ideas are just the beginning; retailers either have to rely on vendors or have their own lab to explore these ideas and see which ones work.  (A healthy dose of both is probably the best solution.)  The Nordstrom Innovation Lab is a fine example of dedicating resources to cultivate ideas and test prototypes. The video below, from 2011, is a case study in which the team builds an iPad app that helps customers purchase sunglasses in the store.  Customers take pictures of themselves wearing different sunglasses, then can do side-by-side comparisons. There are a few interesting take-aways from their process.  First, they are working in the store alongside employees and customers.  There's no concept of documenting all the requirements then building the product.  Instead, they work closely with those that will be using the app in order to fully understand what's needed.  When they find an issue, they change the software onsite and try again.  This iterative prototyping ensures their product hits the mark.  Feels like Extreme Programming if you recall that movement. Second, they have time-boxed the project to one week.  Either it works or it doesn't, and either way they've only expended a week's worth of resources.  Innovation always entails failure, and those that succeed are often good at detecting failure quickly then adjusting.  Fail fast and fail often. Third, its not always about technology.  I was impressed they used paper designs to walk through user stories and help understand the needs of the customer.  Pen and paper is the innovator's most powerful tool. Our Retail Applied Research (RAR) team uses some of these concepts in our development process.  (Calling it a process is probably overkill.)  We try to give life to concepts quickly so the rest of organization can help us decide if we're heading the right direction.  It takes many failures before finding a successful product.

    Read the article

  • More elegant way to avoid hard coding the format of a CSV file?

    - by dsollen
    I know this is a trivial issue, but I just feel this can be more elegant. I need to write/read data files for my program; let's say they are CSV for now. I can implement the format as I see fit, but I may need to change that format later. The simple thing to do is something like:

        out.write(foo.getValue() + "," + bar.getMinValue() + "," + fi.toString());

    This is easy to write, but is obviously guilty of hard coding and the general 'magic number' issue. The format is hard-coded, figuring out the file format requires parsing the code, and changing the format requires changing multiple methods. I could instead have constants specifying the location where I want each variable saved in the CSV file, to remove some of the 'magic numbers', then save/load into an array at the locations specified by the constants:

        int FOO_LOCATION = 0;
        int BAR_MIN_VAL_LOCATION = 1;
        int FI_LOCATION = 2;
        int NUM_ARGUMENTS = 3;

        String[] outputArguments = new String[NUM_ARGUMENTS];
        outputArguments[FOO_LOCATION] = foo.getValue();
        outputArguments[BAR_MIN_VAL_LOCATION] = bar.getMinValue();
        outputArguments[FI_LOCATION] = fi.toString();
        writeAsCSV(outputArguments);

    But this is extremely verbose and still a bit ugly. It makes it easy to see the format of an existing CSV and to swap the location of variables within the file, but if I decide to add an extra value to the CSV I need to not only add a new constant, but also modify the read and write methods to add the logic that actually saves/reads the argument from the array; I still have to hunt down every method using these variables and change them by hand! If I use Java enums I can clean this up slightly, but the real issue is still present. Short of some sort of functional programming (and Java's inner classes are too ugly to be considered functional) I still have no obvious way of clearly expressing which variable is associated with each constant, short of writing (and maintaining) it in the read/write methods. For instance, I still need to state somewhere that FOO_LOCATION specifies the location of foo.getValue(). It seems as if there should be a prettier, easier-to-maintain manner of approaching this.

    Incidentally, I'm working in Java at the moment; however, I am interested conceptually in the design approach regardless of language. Some library in Java that does all the work for me is definitely welcome (though it may prove more hassle to get permission to add it to the codebase than to just write something by hand quickly), but what I'm really asking is more about how to write elegant code if you had to do this by hand.
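
    Since the question explicitly invites language-agnostic design ideas, here is one common shape for this, sketched in C# with hypothetical Record and column names (they mirror the question's foo/bar/fi, not real classes): keep the header name, column order, and value extractor together in a single list of column descriptors, so adding or reordering a column means touching exactly one place.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        // Hypothetical record type standing in for the foo/bar/fi trio in the question.
        class Record
        {
            public string FooValue  { get; set; }
            public int    BarMinVal { get; set; }
            public string Fi        { get; set; }
        }

        // One descriptor per CSV column: its header and how to extract its value.
        class CsvColumn
        {
            public string Header { get; }
            public Func<Record, string> Extract { get; }
            public CsvColumn(string header, Func<Record, string> extract)
            {
                Header = header;
                Extract = extract;
            }
        }

        static class CsvFormat
        {
            // The single place that defines column order, names, and values.
            public static readonly IReadOnlyList<CsvColumn> Columns = new[]
            {
                new CsvColumn("foo",     r => r.FooValue),
                new CsvColumn("bar_min", r => r.BarMinVal.ToString()),
                new CsvColumn("fi",      r => r.Fi),
            };

            // Writes a header row plus one row per record.
            // (No quoting/escaping of commas, for brevity.)
            public static void Write(TextWriter writer, IEnumerable<Record> records)
            {
                writer.WriteLine(string.Join(",", Columns.Select(c => c.Header)));
                foreach (var r in records)
                    writer.WriteLine(string.Join(",", Columns.Select(c => c.Extract(r))));
            }
        }

    Reading can walk the same list (header to setter) with a second delegate per column; the point is only that the format lives in one table instead of being scattered across the read and write methods.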

    Read the article

  • Admin Panel like Custom Framework

    - by bhuvin
    I want to create a framework, like an admin panel, which can control almost all aspects of what is shown on the frontend. For a (most basic) example: suppose the links to be shown in a navigation area are passed from the server, with their order, URLs, etc. The whole aim is to save time on the tedious tasks: you can just start creating menus and assigning pages to them - give a URL and the actual files to be rendered (in the case of static files; for dynamic pages, the appropriate file accordingly). All of this is fully manageable server side using different portlet-like pieces.

    So the basic roadmap is to have areas like:

    Header Area - which can contain logos, links, etc.
    Navigation Area - which can contain links and submenus.
    Content Area - this is where the tricky part is: it has zones like left, center, and right, and it holds the order in which items are displayed. So when someday we want to change the way articles appear on the page, we can do so easily, without any deployments. These zones can contain any number of internal elements, like a word cloud or an advertisement area.
    Footer Area - again, similar to the Header Area.

    Currently there is a pre-existing custom framework which uses XSLT files for pulling data out of the server side, and it has the above capabilities. For example, if there's a grid, a <table> tag is embedded in the XSLT file. Whatever the source of the data, we serialize it as XML, feed it to the XSLT, derive the HTML from that, and append it to a layer (div) in the page.

    The problems with this approach are:

    1. The XSLT conversion occurs on the server side, so the server is responsible for getting the data, running the XSLT transform, and appending the generated HTML to the layer div. In my view this isn't the server's concern.
    2. For larger applications this might be slower.
    3. Debugging isn't possible for the XSLT transformation, so whenever we face problems with data it's always a bit of a trial-and-error exercise.
    4. Maintaining it is an irksome job, i.e. styling changes and other such work.
    5. Adding dynamic values is hard: JavaScript can't easily be used with this, and we can't use jQuery or other libraries, since everything happens on the server.

    For now, what I have thought about is using a templating / JavaScript / JSON combination in place of XSLT: the work would be offloaded to the client and rendering would happen there. This could solve the above problems and could also add mobile support. The only problem I can think of is that it is a lot of work, and adding new portlets on the go needs to be looked into.

    What could be the alternatives to this? What kinds of problems are there with the JavaScript approach? What are the different ways to implement the same thing? Are there any existing frameworks for similar usage?

    Read the article

  • Reasons to Use a VM For Development

    - by George Stocker
    Background: I work at a start-up company, where one team uses Virtual Machines to connect to a remote server to do their development, and another team (the team I'm on) uses local IIS/SQL Server 2005/Visual Studio installations to do its work. Team VM is located about 1000 miles from Team Non-VM, and the servers the VMs run on are located near Team VM (latency, for those wondering, is about 50 ms). A person high in the company is pushing for Team Non-VM to use virtual machines for programming, development, and testing. The last point we agree on - we want virtual machines to test configurations and various aspects of the web application in a 'clean' state.

    The problem: What we don't agree on is having developers use RDP to connect to a remote desktop that contains Visual Studio, SQL Server, and IIS to do the same development we could do locally on our laptops. I've tried the VM set-up, and besides the color issue, there is a latency issue that is rather noticeable. Not to mention that, since we're a start-up, a good number of employees work from home on occasion with our work laptops, and this move would cut off the laptops - they'd be turned in.

    Reasons to use remote VMs for development (not testing!) - the stated reasons that this person wants us to use VMs:

    - They work for Team VM.
    - They keep the source code "safe".
    - If we want to work from home, we could just use our home PCs.
    - Licenses (I don't know what the argument is, only that it's been used).

    Reasons not to use remote VMs for development - the stated reasons why we don't want to use VMs:

    - We like working from home. We get a lot done on our own time.
    - We're not going to use our home PCs to do work-related stuff.
    - The latency is noticeable.
    - Support for the VMs (if they go down, or if we need a new VM) takes a while.
    - We don't have administrative privileges on the VM, and are unable to change settings as needed.

    What I'm looking for from the community is this: What reasons would you give for not using VMs for development? Keep in mind these are remote VMs - this isn't a VM running on a local desktop. It's using the laptop (or a desktop) as a thin client for a remote VM. Also, on the other side of the coin: Is there something we're missing that makes VMs more palatable for development?

    Edit: I think 'safe' is used in terms of corporate espionage, or more precisely the risk that if a laptop gets stolen, the thief would have access to our source code. The former, as we've pointed out, is always going to be a possibility - companies stop that with litigation; there isn't a technical solution (so far as I can see). The latter point (though I don't know its usefulness in a corporate scenario) is mitigated by TrueCrypt'ing the entire volume.

    Read the article

  • Where does a "Technical Programmer" fit in, and what does the title mean? [closed]

    - by Mike E
    Was: "What is a 'Technical Programmer'"? I've noticed in job posting boards a few postings, all from European companies in the games industry, for a "Technical Programmer". The job description was similar, having to do with tools development, 3d graphics programming, etc. It seems to be somewhere between a Technical Artist who's more technical than artist or who can code, and a Technical Director but perhaps without the seniority/experience. Information elsewhere on the position is sparse. The title seems redundant and I haven't seen any American companies post jobs by that name, exactly. One example is this job posting on gamedev.net which isn't exactly thorough. In case the link dies: Subject: Technical Programmer Frictional Games, the creators of Amnesia: The Dark Descent and the Penumbra series, are looking for a talented programmer to join the company! You will be working for a small team with a big focus on finding new and innovating solutions. We want you who are not afraid to explore uncharted territory and constantly learn new things. Self-discipline and independence are also important traits as all work will be done from home. Some the things you will work with include: 3D math, rendering, shaders and everything else related. Console development (most likely Xbox 360). Hardware implementations (support for motion controls, etc). All coding is in C++, so great skills in that is imperative. Revised Summarised Question: So, where does a programmer of this nature fit in to software development team? If I had these on my team, what tasks am I expecting them to complete? Can I ask one to build a new level editor, or optimize the rendering engine? It doesn't seem to be a "tools programmer" which focuses on producing artist tools, often in high-level languages like C#, Python, or Java. Nor does it seem to be working directly on the engine, nor a graphics programmer, as such. Yet, a strong C++ requirement, which was mirrored in other postings besides this one I quoted. Edited To Add As far as it being a low-level programmer, I had considered that but lacking from the posting was a requirement of Assembly. Instead, they tend to require familiarity with higher-level hardware APIs such as DirectX, or DirectInput. I wasn't fully clear in my original post. I think, however, that Mathew Foscarini has it right in his answer, so barring someone who definitely works with or as a "Technical Programmer" stepping in to provide a clearer explanation, I'll go with that. A generalist, which also fits the description of a more-technical-than-artist TA.

    Read the article

  • CFOs: Do You Have a Playbook for Growth?

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Oracle Midsize Programs

    In most global markets, CFOs are optimistic about their company's growth opportunities. Deloitte's CFO Signals Report, "Time to Accelerate", found that:

    - In the U.K., business optimism is at its highest level in three and a half years.
    - Optimism in North America rose from a strong +42% last quarter (Q2 to Q3 2013) to an even stronger +54%.
    - In the inaugural Southeast Asia survey, 44% of CFOs reported a positive outlook despite worries over the Chinese economy and political uncertainty.

    Sustainable and profitable business growth doesn't usually happen by accident. Companies need a playbook for growth that's owned by the CFO. And today, that playbook must leverage the six enabling technologies - Social, Big Data, Mobile, Cloud, Analytics, and the Internet of Things (or, as Oracle president Mark Hurd explains, "the Internet of the People").

    On Monday, June 9 at 2:00 pm Eastern, CFO.com is hosting a webcast, "The CFO Playbook on Growth: How CFOs Can Boost Efficiency and Performance with Automation".

    "Investing in technology begins with a business-metric-driven business case with clear, tangible business results expected," says John Lieblang, Affiliate Partner with Waterstone Management Group. "The progressive CFO has learned how to forge a partnership with the CIO to align everyone in the 'result value chain' to be accountable for the business results, not just for functional technology."

    Click HERE to register.

    Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter and sign up to receive the latest communications from Oracle's industry leaders and experts.

    Jim Lein: I evangelize Oracle's enterprise solutions for growing midsize companies. I recently celebrated 15 years with Oracle, having joined JD Edwards in 1999. I'm based in Evergreen, Colorado and love relating stories about creativity and innovation, whether they be about software, live music, or the mountains. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging in to my control panel after quite some time, noticed an abnormal spike (~50MB) in disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (in regex terms, core\.\d+).

    I downloaded one and checked the contents. There was a lot of balderdash (NUL mostly, but also a scattering of ETB, ETX, STX characters), but there's this block of readable text which says:

    This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values.

    Pretty self-explanatory. A few blocks above that text are some more readable messages that look like logs, sandwiched between non-printable characters. I've extracted some below:

        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found

    Now, a few concerns. Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail much - only for a vanity email address and an inbox for an outdated comments system. However, lately I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha, but every now and then something gets through. My vanity email has a spam filter, but it isn't as good as I'd like.) Next, if this is a feature, can I turn it off? Is it advisable to? I've only got 150MB, so you see why I'm fretting over a 50MB spike.

    Some final details: my only server-side scripts are in PHP. The directory which accumulated the most of these core files is the one containing the WordPress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files and, after some checking, nothing seems amiss in my websites or my mail. They are indeed the ones responsible for the ~50MB spike, as my disk space usage is back to what I expected.

    Read the article

  • How quickly to leave contract-to-hire gig where you don't want to be hired? [closed]

    - by nono
    So you move to a big new city with tons of software development opportunity, having taken a six month contract-to-hire job. The company treats you really well and has a good team and work environment. However, the recruiter assured you when offering the gig that it would be a good position in which you can advance your learning from more senior developers (a primary concern of yours) but you're starting to realize that a job recruiter isn't going to understand that the team in question isn't very up on modern software practices (you start to sympathize with this guy and read his post over and over again: http://stackoverflow.com/questions/1586166/career-killer-nhibernate-oop-design-patterns-domain-driven-design-test-driv) and that much of the company's software is very old and very very poorly architected, and the company (like so many others) seems to be only concerned with continually extending the software without investing in any structural improvements. You're absolutely dismayed at how long it takes your team (including) to fulfill simple feature requests (maybe 500-1000% longer than with better designed software that you've worked on in the past), but no one else there seems to think anything of it. You find that the work and the company's business are intensely uninteresting to you, but due to the convoluted design of their various software systems, fulfilling the work will require as much mental engagement as any other development position. You feel a bit naive about not having asked the right questions during your interview process, and for not having anticipated that your team at your former podunk company might possibly be light-years ahead of any team in Big Shiny City, but you know you don't want to stay at this place, and (were it not for your personal, after-hours studying and personal programming efforts) fear that you might actually give a worse interview after completing your 6 months than you did when you started at the place. You read about how hard of a time local companies are having filling their positions with qualified software development candidates. You read all sorts of fabulous sounding job postings online and feel like you're really missing out. In spite of the comfortable environment you feel like you would willingly accept a somewhat more demanding or aggressive lifestyle to feel like you are learning and progressing and producing something meaningful. My questions are: how quickly do you leave and how do you go about giving a polite reason for departing? The contract is written to allow them to "can" you and to allow you to leave with 2 weeks notice. Do you ethically owe the 6 months? Upon taking the position, the company told you they were not interested in candidates who were intending to only stay for 6 months and then leave (you were not intending to bail after 6 months, at that time), so perhaps they might be fine if you split now, knowing that you don't want to stick around for the full time hire?

    Read the article

  • JavaOne 2013: (Key) Notes of a conference – State of the Java platform and all the roadmaps by Amis

    - by JuergenKress
    Last week's JavaOne conference provided insights into the roadmap of the Java platform as well as into the current state of things in the Java community: the close relationship between Oracle and IBM concerning Java, the (continuing) lack of such a relationship with Google, the support from Microsoft for Java applications on its Azure cloud, and the vibrant developer community, with over 200 different Java User Groups in many countries of the world. There were no major surprises or stunning announcements. Java EE 7 (released in June) was celebrated, and the progress of Java 8 SE was explained, as well as the progress on Java Embedded and ME. The availability of NetBeans 7.4 RC1 and the JDK 8 Early Adopters release, as well as the open sourcing of project Avatar, were probably the only real news stories.

    The convergence of JavaFX and Java SE is almost complete; the upcoming alignment of Java SE Embedded and Java ME is the next big consolidation step that will lead to a unified platform where developers can use the same skills, development tools, and APIs for EE, SE, SE Embedded, and ME development. This means that anything that runs on ME will run on SE (Embedded) and EE - not necessarily the reverse, because not all SE APIs are part of the compact profile or the ME environment. However, the trimming down of the SE libraries and the increased capabilities of devices mean that a pretty rich JVM runs on many devices, such as JavaFX 8 on the Raspberry Pi.

    The major theme of the conference was the Internet of Things: a world of things that are smart and connected - devices like sensors, cameras, and equipment, from cars, fridges, and television sets to printers, security gates, and kiosks - that all run Java and are all capable of sending data over local network connections or directly over the internet. The number of devices with these capabilities is rapidly growing. This means that the number of places where Java programs can help program the behavior of devices is growing too. It also means that the volume of data generated is expanding and that we have to find ways to harvest that data, possibly do local pre-processing (filter, aggregate), and channel the data to back-end systems. Terms typically used are edge devices (small, simple, publishing data), gateways (receiving data from many devices, collecting and consolidating, pre-processing, sending onwards to the back end, typically using real-time event processing), and enterprise services, which receive the data-turned-information from the gateways to further consolidate, distribute, and act upon. A cheap device like the Raspberry Pi is a perfect way for a Java developer to get started with what embedded (device) programming means and how interaction with physical input and output takes place.

    Roadmaps: the overall progress on Java is visualized in a roadmap overview in the original post. Read the full article here.

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • RadioButtons and Lambda Expressions

    - by MightyZot
    Radio buttons operate in groups. They are used to present mutually exclusive lists of options. Since I started programming in Windows 20 years ago, I have always been frustrated by how they are implemented. To make them operate as a group, you put your radio buttons in a group box. By contrast, to group radio buttons in HTML, you simply give them all the same name. Radio buttons with the same name in HTML operate as one mutually exclusive group of options. In C#, all your radio buttons must have unique names and you use group boxes to group them. I'm in the process of converting some old code to C# and I'm tasked with creating a user control with groups of radio buttons on it. I started out writing the traditional switch…case statements to check the appropriate radio button based upon value, loops to uncheck them all, etc. Then it occurred to me that I could stick the radio buttons in a Dictionary or List and use lambda expressions to make my code a lot more maintainable. So, here is what I ended up with: a dictionary that contains my radio buttons and their values. I used their values as the keys, so that I can select them by value. Now, instead of using loops and switch…case statements to control the radio buttons, I use the lambda syntax and extension methods. Selecting a Radio Button by Value: this code is inside a property accessor, so “value” represents the value passed into the property accessor. The “First” extension method uses the delegate represented by the lambda expression to select the radio button (actually a KeyValuePair) that represents the passed-in value. Finally, the resulting radio button is checked. Since the radio buttons are in the same group, they function as one: the appropriate radio button is selected while the others are unselected. Reading the Value: this is the get accessor for the property that returns the value of the checked radio button. Now, if you're using binding, this code is likely not necessary; however, I didn't want to use binding in this case, so I think this is a good alternative to the traditional loops and switch…case statements.
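    The original listings are C# WinForms code that isn't reproduced in this excerpt. As a rough, hedged analogue of the same idea (a map keyed by value plus a lambda-style lookup instead of switch…case), here is a sketch in Java Swing; the class name, the example values and the use of streams rather than C# extension methods are all illustrative choices, not the author's code.

        import javax.swing.ButtonGroup;
        import javax.swing.JRadioButton;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class ColorChooser {
            // Value -> radio button, mirroring the article's Dictionary keyed by value.
            private final Map<String, JRadioButton> buttons = new LinkedHashMap<>();
            private final ButtonGroup group = new ButtonGroup();

            public ColorChooser() {
                for (String value : new String[] { "Red", "Green", "Blue" }) {
                    JRadioButton rb = new JRadioButton(value);
                    group.add(rb);          // the group enforces mutual exclusion
                    buttons.put(value, rb);
                }
            }

            // Setter: select the button whose key matches, no switch...case needed.
            public void setSelectedValue(String value) {
                buttons.entrySet().stream()
                       .filter(e -> e.getKey().equals(value))
                       .findFirst()
                       .ifPresent(e -> e.getValue().setSelected(true));
            }

            // Getter: return the value of whichever button is currently selected.
            public String getSelectedValue() {
                return buttons.entrySet().stream()
                              .filter(e -> e.getValue().isSelected())
                              .map(Map.Entry::getKey)
                              .findFirst()
                              .orElse(null);
            }
        }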

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for the confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors are hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level". One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer over the hardware? One small example: in programming I have never come across the need to understand what the L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to the higher levels. On the other hand, I wonder how much influence the lower levels can have, for example in graphics computing. Then there is the theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are practically impossible to solve for large N. What I'm missing is a reference framework for thinking about these things. It seems to me that there are all kinds of different camps which rarely interact. The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur at the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, etc.
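    One concrete place where a "hidden" lower level does leak into everyday high-level code is cache behaviour. The following is a small, hedged Java sketch (not from the question): traversing the same array row by row versus column by column usually shows a noticeable timing gap on typical hardware, purely because of cache-line reuse; the array size and the exact gap are machine-dependent assumptions.

        // Illustrates cache effects from plain Java: same work, different memory access order.
        public class CacheDemo {
            private static final int N = 4096; // illustrative size; adjust for your machine

            public static void main(String[] args) {
                int[][] a = new int[N][N];

                long t0 = System.nanoTime();
                long sumRow = 0;
                for (int i = 0; i < N; i++)        // row-major: walks each row's memory
                    for (int j = 0; j < N; j++)    // sequentially, so cache lines are reused
                        sumRow += a[i][j];
                long rowMs = (System.nanoTime() - t0) / 1_000_000;

                long t1 = System.nanoTime();
                long sumCol = 0;
                for (int j = 0; j < N; j++)        // column-major: touches a different row
                    for (int i = 0; i < N; i++)    // array on every access, defeating locality
                        sumCol += a[i][j];
                long colMs = (System.nanoTime() - t1) / 1_000_000;

                // Sums are printed so the JIT cannot discard the loops as dead code.
                System.out.println("row-major:    " + rowMs + " ms (sum=" + sumRow + ")");
                System.out.println("column-major: " + colMs + " ms (sum=" + sumCol + ")");
            }
        }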

    Read the article

  • MySQL Connector/Net 6.6.4 RC1 has been released

    - by fernando
    MySQL Connector/Net 6.6.4, a new version of the all-managed .NET driver for MySQL, has been released. This is the Release Candidate intended to introduce users to the new features in the release. This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site). The 6.6 version of MySQL Connector/Net brings the following new features:   * Stored routine debugging   * Entity Framework 4.3 Code First support   * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver)   * Full Visual Studio 2012 support: everything from Server Explorer to IntelliSense & the Stored Routine debugger. The following specific fixes are addressed in this version: - Fixed Entity Framework + MySQL Connector/Net in partial trust throws exceptions (MySQL bug #65036, Oracle bug #14668820). - Added support in the parser for Datetime and Time types with precision when using Server 5.6 (no bug number). - Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (MySQL bug #66964, Oracle bug #14740705). The release is available to download at http://dev.mysql.com/downloads/connector/net/6.6.html Documentation: You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html For specific topics: Stored Routine Debugger: http://dev.mysql.com/doc/refman/5.5/en/connector-net-visual-studio-debugger.html Authentication plugin: http://dev.mysql.com/doc/refman/5.5/en/connector-net-programming-authentication-user-plugin.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Pinterest and the Rising Power of Imagery

    - by Mike Stiles
    If images keep you glued to a screen, you're hardly alone. Countless social users are letting their eyes do the walking, waiting for that special photo to grab their attention. And perhaps more than any other social network, Pinterest has been giving those eyes plenty of room to walk. Pinterest came along in 2010. Its play was that users could simply create topic boards and pin pictures to the appropriate boards for sharing. Yes there are some words, captions mostly, but not many. The speed of its growth raised eyebrows. Traffic quadrupled in the last quarter of 2011, with 7.51 million unique visitors in December alone. It now gets 1.9 billion monthly page views. And it was sticky. In the US, the average time a user spends strolling through boards and photos on Pinterest is 15 minutes, 50 seconds. Proving the concept of browsing a catalogue is not dead, it became a top-5 referrer for several apparel retailers like Land's End, Nordstrom, and Bergdorf's. Now a survey of online shoppers by BizRate Insights says that Pinterest is responsible for more purchases online than Facebook. Over 70% of its users go there specifically to keep up with trends and get shopping ideas. And when they buy, the average order value is $179. Pinterest also scores better in terms of user engagement: 66% of pinners regularly follow and repin retailers, whereas 17% of Facebook fans turn to that platform for purchase ideas. (Facebook still wins when it comes to reach and driving traffic to third-party sites, by the way.) Social posting best practices have consistently shown that posts with photos are rewarded with higher engagement levels. You may be downright Shakespearean in your writing, but what makes images in the digital world so much more powerful than prose? 1. They transcend language barriers. 2. They're fun and addictive to look at. 3. They can be consumed in fractions of a second, important considering how fast users move through their social content (admit it, you do too). 4. They're efficient gateways. A good picture might get them to the headline. A good headline might then get them to the written content. 5. The audience for them surpasses demographic limitations. 6. They can effectively communicate and trigger an emotion. 7. With mobile use soaring, photos are created on those devices and easily consumed and shared on them. Pinterest's iPad app hit #1 in the App Store in one day. Even as far back as 2009, over 2.5 billion camera-equipped devices were on the streets, generating in a single year 10% of all the photos ever taken. But let's say you're not a retailer. What if you're a B2B whose products or services aren't visual? Should you worry about your presence on Pinterest? As with all things, you need a keen awareness of who your audience is, where they reside online, and what they want to do there. If it doesn't make sense to put a tent stake in Pinterest, fine. But ignore the power of pictures at your own peril. If not visually, how are you going to grab the attention of social users scrolling down their News Feeds at top speed? You're competing with every other cool image out there from countless content sources. Bore us and we'll fly right past you.

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program and I'd like to write its Fragment Shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so: #version 330 in vec3 WorldPos0; in vec2 TexCoord0; in vec3 Normal0; in vec3 Tangent0; layout(location = 0) out vec3 WorldPos; layout(location = 1) out vec3 Diffuse; layout(location = 2) out vec3 Normal; uniform sampler2D gColorMap; uniform sampler2D gNormalMap; vec3 CalcBumpedNormal() { vec3 Normal = normalize(Normal0); vec3 Tangent = normalize(Tangent0); Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal); vec3 Bitangent = cross(Tangent, Normal); vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz; BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0); vec3 NewNormal; mat3 TBN = mat3(Tangent, Bitangent, Normal); NewNormal = TBN * BumpMapNormal; NewNormal = normalize(NewNormal); return NewNormal; } void main() { WorldPos = WorldPos0; Diffuse = texture(gColorMap, TexCoord0).xyz; Normal = CalcBumpedNormal(); } If my render target textures are configured as: RT1:(GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0) RT2:(GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1) RT3:(GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2) And assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are used as multiple render targets? From some Googling, I think there are two other ways to output to MRTs: 1: Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach. 2: Use a typed output array variable for the expected type. In this case, I think it would be something like this: out vec3 [3] output; void main() { output[0] = WorldPos0; output[1] = texture(gColorMap, TexCoord0).xyz; output[2] = CalcBumpedNormal(); } So which is then the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
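    On the application side, the three layout(location = n) outputs just need matching color attachments selected with glDrawBuffers; the attached textures do not additionally have to be bound to texture units while you render into them (sampling a texture that is also a current attachment would create a feedback loop). A minimal sketch of that setup, written here in Java with LWJGL 2 purely for illustration since the question doesn't say which binding is used, might look like the following; the width/height parameters, class name and filter choices are assumptions.

        import java.nio.FloatBuffer;
        import java.nio.IntBuffer;
        import org.lwjgl.BufferUtils;
        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL20.glDrawBuffers;
        import static org.lwjgl.opengl.GL30.*;

        public class GBuffer {
            public static int createGBuffer(int width, int height) {
                int fbo = glGenFramebuffers();
                glBindFramebuffer(GL_FRAMEBUFFER, fbo);

                IntBuffer drawBuffers = BufferUtils.createIntBuffer(3);
                for (int i = 0; i < 3; i++) {
                    int tex = glGenTextures();
                    glBindTexture(GL_TEXTURE_2D, tex);
                    // GL_RGB32F storage, matching the vec3 outputs in the fragment shader.
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
                                 GL_RGB, GL_FLOAT, (FloatBuffer) null);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                    // Attachment i receives the output declared with layout(location = i).
                    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                                           GL_TEXTURE_2D, tex, 0);
                    drawBuffers.put(GL_COLOR_ATTACHMENT0 + i);
                }
                drawBuffers.flip();
                glDrawBuffers(drawBuffers); // route locations 0/1/2 to attachments 0/1/2

                if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
                    throw new IllegalStateException("G-buffer FBO is incomplete");
                }
                glBindFramebuffer(GL_FRAMEBUFFER, 0);
                return fbo;
            }
        }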

    Read the article

  • StringBuffer behavior in LWJGL

    - by Michael Oberlin
    Okay, I've been programming in Java for about ten years, but am entirely new to LWJGL. I have a specific problem whilst attempting to create a text console. I have built a class meant to abstract input polling to it, which (in theory) captures key presses from the Keyboard object and appends them to a StringBuilder/StringBuffer, then retrieves the completed string after receiving the ENTER key. The problem is, after I trigger the String return (currently with ESCAPE), and attempt to print it to System.out, I consistently get a blank line. I can get an appropriate string length, and I can even sample a single character out of it and get complete accuracy, but it never prints the actual string. I could swear that LWJGL slipped some kind of thread-safety trick in while I wasn't looking. Here's my code: static volatile StringBuffer command = new StringBuffer(); @Override public void chain(InputPoller poller) { this.chain = poller; } @Override public synchronized void poll() { //basic testing for modifier keys, to be used later on boolean shift = false, alt = false, control = false, superkey = false; if(Keyboard.isKeyDown(Keyboard.KEY_LSHIFT) || Keyboard.isKeyDown(Keyboard.KEY_RSHIFT)) shift = true; if(Keyboard.isKeyDown(Keyboard.KEY_LMENU) || Keyboard.isKeyDown(Keyboard.KEY_RMENU)) alt = true; if(Keyboard.isKeyDown(Keyboard.KEY_LCONTROL) || Keyboard.isKeyDown(Keyboard.KEY_RCONTROL)) control = true; if(Keyboard.isKeyDown(Keyboard.KEY_LMETA) || Keyboard.isKeyDown(Keyboard.KEY_RMETA)) superkey = true; while(Keyboard.next()) if(Keyboard.getEventKeyState()) { command.append(Keyboard.getEventCharacter()); } if (Framework.isConsoleEnabled() && Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) { System.out.println("Escape down"); System.out.println(command.length() + " characters polled"); //works System.out.println(command.toString().length()); //works System.out.println(command.toString().charAt(4)); //works System.out.println(command.toString().toCharArray()); //blank line! System.out.println(command.toString()); //blank line! Framework.disableConsole(); } //TODO: Add command construction and console management after that } } Maybe the answer's obvious and I'm just feeling tired, but I need to walk away from this for a while. If anyone sees the issue, please let me know. This machine is running the latest release of Java 7 on Ubuntu 12.04, Mate desktop environment. Many thanks.
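    For what it's worth, a common gotcha with this kind of polling loop (an assumption about this particular case, not a confirmed diagnosis) is that Keyboard.getEventCharacter() also fires for non-printable keys such as ENTER, ESCAPE and the modifiers, so the buffer can end up holding control characters that render as nothing, or as a carriage return that wipes the console line. A hedged sketch of a variation that only appends printable characters, with the class and method names made up for illustration:

        import org.lwjgl.input.Keyboard;

        public class ConsoleInput {
            // Same buffer idea as in the question, but control characters are filtered out.
            static volatile StringBuffer command = new StringBuffer();

            public static void pollPrintable() {
                while (Keyboard.next()) {
                    if (!Keyboard.getEventKeyState()) continue;      // ignore key releases
                    if (Keyboard.getEventKey() == Keyboard.KEY_RETURN) {
                        System.out.println("command: " + command);   // hand off the finished line
                        command.setLength(0);                        // start the next command
                        continue;
                    }
                    char c = Keyboard.getEventCharacter();
                    if (!Character.isISOControl(c)) {                // drop ESC, ENTER, NUL, ...
                        command.append(c);
                    }
                }
            }
        }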

    Read the article

< Previous Page | 641 642 643 644 645 646 647 648 649 650 651 652  | Next Page >