Search Results

Search found 20484 results on 820 pages for 'small projects'.


  • Why CFOs Should Care About Big Data

    - by jmorourke
    The topic of “big data” clearly has reached a tipping point in 2012. With plenty of coverage over the past few years in the IT press, we are now starting to see “big data” covered in the mainstream business press, including a cover story in the October 2012 issue of the Harvard Business Review. To help customers understand the challenges of managing “big data”, as well as the opportunities that can be created by leveraging it, Oracle has recently run and published the results of a customer survey, along with white papers and articles on the topic. Most recently, we commissioned a white paper titled “Mastering Big Data: CFO Strategies to Transform Insight into Opportunity”. The premise is that “big data” is not just a topic for CIOs; it is one that CFOs should understand and take advantage of as well. Whoever masters the art and science of big data will be positioned for competitive advantage in their industries or markets. That’s why smart CFOs are taking control of big data and business analytics projects, not just to uncover new ways to drive growth in a slowing global economy, but also to be a catalyst for change in the enterprise. With an increasing number of CFOs now responsible for overseeing IT investments and providing strategic insight to the board, CFOs will be increasingly called upon to take a leadership role in assessing the value of “big data” initiatives, building on their traditional skills in reporting and helping managers analyze data to support decision making. Here’s the white paper referenced above, which is posted on the Oracle C-Central/CFO web site, along with some other resources that can help CFOs master the topic of “big data”:
    White paper: “Mastering Big Data: CFO Strategies to Transform Insight into Opportunity”
    CFO Market Watch article: “Does Big Data Affect the CFO?”
    Oracle survey report: “From Overload to Impact – An Industry Scorecard on Big Data Industry Challenges”
    Upcoming big data webcast with Andrew McAfee
    Here’s a general link to Oracle C-Central/CFO in case you want to start there: www.oracle.com/c-central/cfo
    Feel free to contact me if you have any questions or need additional information: [email protected]

    Read the article

  • How do I start my career on a 3-year-old degree [on hold]

    - by Gabriel Burns
    I received my bachelor's degree in Com S (second major in math) in December 2011. I didn't have the best GPA (I was excellent at programming projects and had a deep understanding of CS concepts, but school is generally not the best format for displaying my strengths), and my only internship was with a now-defunct startup. After graduation I applied for several jobs and had a fair number of interviews, but never got hired. After a while I got somewhat discouraged, and though I still said I was looking, and occasionally applied for something, my pace slowed down considerably. I remain convinced that software development is the right path for me, and that I could make a real contribution to someone's workforce, but I'm at a loss as to how I can convince anyone of this. My major problems are as follows.
    Lack of professional experience -- a problem for every entry-level programmer, I suppose, but everyone seems to want someone with a couple of years under their belt.
    Rustiness -- I've not really done any programming in about a year, and since school all I've really done is various programming competitions and puzzles (CodeChef, HackerRank, etc.). I need a way to sharpen my skills.
    Long-term unemployment -- while I had a basic fast-food job after I graduated, I've been truly unemployed for about a year now. Furthermore, no one has ever hired me as a programmer, and any potential employer is liable to wonder why.
    Old references -- my references are all college professors and one supervisor from my internship, none of whom I've had any contact with since I graduated.
    Confidence -- I have no doubt that I could be a good professional programmer and make just about any employer glad that they hired me, but I'm aware of my red flags as a candidate, and have a hard time heading confidently into an interview.
    How can I overcome these problems and keep my career from being over before it starts?

    Read the article

  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic" in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated, so nodes are generated automatically for crossings, which themselves are interconnected by edges. When a character/agent spawns into the world it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far. Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI I looked up several papers/articles on steering behavior but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case because the agents follow strictly defined paths) than of situations like one agent leaving a dead end while another one wants to enter exactly the same one, or two agents meeting at a bottleneck which only allows one agent to pass at a time but both need to pass it (according to the optimal route found before) and they need to find a way to let the other one pass first. So basically the main aspect of the problem would be predicting traffic movement to avoid deadlocks. Difficult to describe, but I guess you get what I mean. Do you have any recommendations on where to start looking? Any papers, sample projects or similar things that could get me started? I appreciate your help!
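
    For readers unfamiliar with the setup described above, here is a minimal sketch of A* path-finding over such a node graph. It is written in Java for concreteness; the node ids, edge-cost map and heuristic are hypothetical placeholders rather than the question's actual data structures.

        import java.util.*;
        import java.util.function.ToDoubleBiFunction;

        /** Minimal A* over a node graph keyed by integer node ids. */
        final class AStar {

            static List<Integer> findPath(int start, int goal,
                                          Map<Integer, Map<Integer, Double>> edges,   // node -> (neighbour -> edge cost)
                                          ToDoubleBiFunction<Integer, Integer> heuristic) {
                Map<Integer, Double> g = new HashMap<>();         // cheapest known cost from start
                Map<Integer, Integer> cameFrom = new HashMap<>(); // for path reconstruction
                // open set ordered by f = g + h; entries are {node, f}
                PriorityQueue<double[]> open =
                        new PriorityQueue<>(Comparator.comparingDouble((double[] a) -> a[1]));

                g.put(start, 0.0);
                open.add(new double[] { start, heuristic.applyAsDouble(start, goal) });

                while (!open.isEmpty()) {
                    int current = (int) open.poll()[0];
                    if (current == goal) {
                        return reconstruct(cameFrom, goal);
                    }
                    for (Map.Entry<Integer, Double> e : edges.getOrDefault(current, Map.of()).entrySet()) {
                        double tentative = g.get(current) + e.getValue();
                        if (tentative < g.getOrDefault(e.getKey(), Double.POSITIVE_INFINITY)) {
                            g.put(e.getKey(), tentative);
                            cameFrom.put(e.getKey(), current);
                            open.add(new double[] { e.getKey(), tentative + heuristic.applyAsDouble(e.getKey(), goal) });
                        }
                    }
                }
                return List.of(); // no route found
            }

            private static List<Integer> reconstruct(Map<Integer, Integer> cameFrom, int goal) {
                LinkedList<Integer> path = new LinkedList<>();
                for (Integer n = goal; n != null; n = cameFrom.get(n)) {
                    path.addFirst(n);
                }
                return path;
            }
        }

    The coordination problems the question actually asks about (bottlenecks, dead ends) sit on top of a search like this; a common next step is to attach a time dimension or a reservation scheme to the edges so agents can reason about when a segment will be occupied, but that goes beyond a path-finding sketch.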

    Read the article

  • Any ideas about how to make Programming Techniques Class more interesting.

    - by Eedoh
    Hello. I already found a similar question here on SO, but almost all the answers were more philosophical than practical. I'd like you to share some of your practical ideas about how to make my course more interesting; it doesn't matter how much effort it takes from me. I even thought about trying to motivate the students to pick some topic at the beginning of the course and to work on it as a kind of real, small startup project that they could maybe exploit financially once it's finished. But I'm afraid that most of them will not see the project through to the end, and that it could be boring for them to work on one thing all year long. I also thought about involving them in TORCS, but I'm afraid most of them wouldn't be up to the task. (TORCS is a car racing simulation, but there's an API for developers so they can develop their own AI for the driver, and then race their cars against other programmers' AIs.) I'm not asking here for problem examples, as I asked a separate question about that. I need ideas about making my lectures more interesting and fun.

    Read the article

  • OpenGL profiling with AMD PerfStudio 2

    - by Aurus
    I'm rendering just a really small number of polygons for my UI, but I still tried to increase the FPS. In the end I removed redundant calls, which increased the FPS. I really don't want to lose FPS for nothing, so I keep looking for more improvements. The first thing I noticed is the "huge" time where no calls are made before SwapBuffer (the black one). I know that OpenGL works asynchronously, so SwapBuffer has to wait until everything is done. But shouldn't PerfStudio also mark this time as black? Correct me if I am wrong. The second thing I noticed is that some glUniform2f calls just take longer (the brown ones). They should all upload two floats to the GPU, so how can the time be so different from call to call? The program isn't even changed in between, or anything like that. I also tried to look at other programs like gDebugger or CodeXL, but they often crashed and they show fewer statistics (only the number of calls, redundant calls, etc.). EDIT: I also realized that the draw calls have different durations as well, which was obvious to me, but sometimes drawing more vertices is faster than drawing fewer vertices.

    Read the article

  • How to provide value?

    - by Francisco Garcia
    Before I became a consultant, all I cared about was becoming a highly skilled programmer. Now I believe that what my clients need is not a great hacker, coder, architect... or whatever. I am more and more convinced every day that there is something of greater value. Everywhere I go I discover practices at which I used to roll my eyes in despair. I saw the software industry through rose-tinted glasses and laughed or cried at such practices depending on my mood. I was so convinced everything could be done better. Now I believe that what my clients desperately need is to find a balance between good engineering practices and desperate project execution. Although a great design can make a project cheap to maintain through many years, it is usually more important to produce something quickly and cheaply, just to see if the project can succeed. Before that point, it does not really matter that much whether the design is cheap to maintain; after it, it might be too late to improve things. They need people who get involved, who make some clandestine improvements to the project without their manager's approval/consent/knowledge... because they are never given time for some tasks we all know are important. Not all good things can be done; some of them must come out of free will, and some of them must be discussed in order to educate colleagues, managers, clients and ourselves. Now my big question is: what exactly are the skills and practices, aside from great coding, that can provide real value to the economic success of software projects? (And not just the software architecture.)

    Read the article

  • The year ahead, 2011.

    - by andrewstopford
    When I look back at last year's post looking ahead to 2010, my blogging rate has not changed much (I suspect this is largely down to using Twitter a lot), but my interests this year have developed a lot further. My view for 2010 was that Microsoft would commit more to OSS; I wanted to see more hires from that audience and more projects on the Outercurve Foundation, but instead there has been support for jQuery and Gems (aka NuGet). I would love to see more from Microsoft on the OSS front in 2011; Outercurve could become like the Apache Foundation with enough support. Staying on the Microsoft front, I predict that 2011 will bring the following:
    C# 5.0 will go RTM (still no MOP, though).
    The next release of VS will go alpha or early beta.
    MS MVC 4.0 (I think by Mix time), and maybe this release will get a command line.
    I also suspect that Microsoft will want to target the tablet market with WP7 in 2011 (Mix 2011, maybe...). I also predict the following:
    Java will fork with Apache\Google. Oracle will then take them to court and the whole thing will boil right through 2011 (Java has had enough court cases, come on guys). Java and the JVM will sadly not move forward at all in 2011.
    Android will cause Apple a serious headache; both the smartphone and tablet markets will see figures cut from Apple's share. By the end of 2011 the current 70% Apple market share will be 40-50%. As the features, performance and price of Android devices get ever better, Apple will be left out in the open.
    Lastly, after 7 years I intend to move this blog away from weblogs. In 2011 I will be exploring Java, Ruby\Rails and Android, and such subjects don't make sense to talk about here. See you in 2011.

    Read the article

  • Where to go after having a good grasp of a language?

    - by Alex M.
    I have been programming as a hobby for the past few years now (most of high school and one year of CS in college). I've come to the conclusion that a career in CS isn't for me, so I switched over to math (which pairs what I love about programming with my interest in the physical sciences), but I miss writing code. Recently I've had an interest in low-level programming: understanding how compilers work, learning some basics of assembly language and trying to get out of my comfort zone. The problem is that since I've been out of the CS program, I'm not faced with many opportunities to write code. I do intend to take a few CS classes in college (a lot of CS stuff is open to math majors), but that won't come until next year. So I ask: what are the steps to take in order to keep improving as a programmer once you're past the basics? How do you find projects to keep you going? Besides my newly discovered interest in assembly language, I've been writing code in C and have been interested in FOSS. Thanks!

    Read the article

  • Co-worker uses ridiculous commenting convention, how to cope? [closed]

    - by Jessica Friedman
    A co-worker in the small start-up I work at writes (C++) code like this:

        // some class
        class SomeClass
        {
            // c'tor
            SomeClass();

            // d'tor
            ~SomeClass();

            // some function
            bool someFunction(int x, int y);
        };

        // some function
        bool SomeClass::someFunction(int x, int y)
        {
            // init worker
            m_worker.init();

            // log
            LOG_DEBUG("Worker initialized");

            // find current cache
            auto it = m_currentCache.find();

            // flush
            if (it->flush() == false)
            {
                // return
                return false;
            }

            // return
            return true;
        }

    This is how he writes 100% of his code: a spacer line, a useless comment which says nothing beyond what is plainly stated in the following statement, and then the statement itself. This is absolutely driving me insane. A simple class written by him spans three times as many lines as it needs to; it looks well commented, but the comments contain no new information. In fact the code is completely undocumented by any normal definition of "documentation"; all of the comments are just a repetition of what is written in C++ on the following line. I've confronted him several times about it, and each time he seems to understand what I am saying, but then goes on not changing his coding style and not fixing old code written like this. I've gone on and on, again and again, about the distinct disadvantages of writing code like this, but nothing gets through to him. Other co-workers don't seem to mind it as much, and management doesn't seem to really care. What do I do? (Sorry for the rant.)

    Read the article

  • Making LISPs manageable

    - by Andrea
    I am trying to learn Clojure, which seems a good candidate for a successful LISP. I have no problem with the concepts, but now I would like to start actually doing something. Here comes my problem: as I mainly do web stuff, I have been looking into existing frameworks, database libraries, templating libraries and so on, and often these libraries are heavily based on macros. Now, I very much like the possibility of writing macros to get a simpler syntax than would be possible otherwise, but it definitely adds another layer of complexity. Let me take an example of a migration in Lobos from a blog post:

        (defmigration add-posts-table
          (up [] (create clogdb
                         (table :posts
                                (integer :id :primary-key)
                                (varchar :title 250)
                                (text :content)
                                (boolean :status (default false))
                                (timestamp :created (default (now)))
                                (timestamp :published)
                                (integer :author [:refer :authors :id] :not-null))))
          (down [] (drop (table :posts))))

    It is very readable indeed, but it is hard to recognize what the structure is. What does the function timestamp return? Or is it a macro? Having all this freedom to write my own syntax means that I have to learn other people's syntax for every library I want to use. How can I learn to use these components effectively? Am I supposed to learn each small DSL as a black box?

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from "get most significant set bit" to some more complicated bitwise transformations), and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come in versions for different endianness. A simple example is a getBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped regarding how best to package the library. Should the methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Or should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using "endianness" somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness, just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or anything.
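
    For illustration only (the question concerns a C# library, but the shape of the choice is language-agnostic), here is a rough sketch of the two API styles being weighed, written in Java with hypothetical names (BitOrder, Bits, Lsb, Msb) rather than the library's actual types:

        // Option 1: one entry point, bit order passed as an enum parameter.
        enum BitOrder { LSB_FIRST, MSB_FIRST }

        final class Bits {
            // Index 0 is the least or most significant bit, depending on the order argument.
            static boolean getBit(int value, int index, BitOrder order) {
                int shift = (order == BitOrder.LSB_FIRST) ? index : 31 - index;
                return ((value >>> shift) & 1) != 0;
            }
        }

        // Option 2: identically named methods on two small entry-point classes.
        final class Lsb {
            static boolean getBit(int value, int index) {
                return ((value >>> index) & 1) != 0;
            }
        }

        final class Msb {
            static boolean getBit(int value, int index) {
                return ((value >>> (31 - index)) & 1) != 0;
            }
        }

    The enum form keeps a single documented surface per operation and lets callers pass the bit order around as data; the two-class form reads more clearly at the call site (Msb.getBit(flags, 0)) and avoids an extra parameter on every call. Neither is mandated by any guideline the question cites; it is a readability trade-off.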

    Read the article

  • Ask the Readers: What Do You Have Set as Your Homepage?

    - by Mysticgeek
    When it comes to setting a homepage in your browser, it’s really based on personal preference. Today we want to know what you have set as your homepage in your favorite browser.
    Browser Homepage
    There are a lot of search sites that allow you to customize your homepage, such as iGoogle, MSN, and Yahoo. Some people enjoy having a homepage set up as a dashboard of sorts, while others like simplicity and set it to Google or leave it blank. Not surprisingly, in a small office or corporation you will see a lot of workstations set to MSN or the company SharePoint site. Unfortunately, a lot of free software tries to change your default homepage as well, as in this example when installing Windows Live Essentials. Make sure to avoid this by not rushing through software install wizards, and carefully opt out of such options. What is set as your homepage in your favorite web browser, both for work and at home? Leave us a comment and join in the discussion!

    Read the article

  • XNA texture stretching at extreme coordinates

    - by Shaun Hamman
    I was toying around with infinitely scrolling 2D textures using the XNA framework and came across a rather strange observation. Using the basic draw code:

        spriteBatch.Begin(SpriteSortMode.Deferred, null, SamplerState.PointWrap, null, null);
        spriteBatch.Draw(texture, Vector2.Zero, sourceRect, Color.White, 0.0f, Vector2.Zero, 2.0f, SpriteEffects.None, 1.0f);
        spriteBatch.End();

    with a small 32x32 texture and a sourceRect defined as:

        sourceRect = new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height);

    I was able to scroll the texture across the window infinitely by changing the X and Y coordinates of the sourceRect. Playing with different coordinate locations, I noticed that if I made either of the coordinates too large, the texture no longer drew and was instead replaced by either a flat color or alternating bands of color. Tracing the coordinates back down, I found the following at around (0, -16,777,000): as you can see in the screenshot, the texture in the top half of the image is stretched vertically. My question is: why is this occurring? Certainly I can do things like bind the x/y position to some low multiple of 32 to give the same effect without this occurring, so fixing it isn't an issue, but I'm curious about why this happens. My initial thought was that perhaps it was overflowing the coordinate value or some such thing, but looking at a data type size chart, the next closest limit below is a signed short with a range of about 32,000, and above is a signed int with a range of around 2,000,000,000, so that isn't likely the cause.
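
    A hunch worth checking (an assumption, not something confirmed in the post): the breakdown near ±16,777,000 is suspiciously close to 2^24 = 16,777,216, the point at which single-precision floats can no longer represent consecutive integers, so float-based coordinate and texture-coordinate math starts snapping to a granularity of 2 or more. The granularity is easy to demonstrate; here is a small sketch in Java (the arithmetic is the same for IEEE-754 single precision in any language):

        public class FloatGranularity {
            public static void main(String[] args) {
                // Below 2^24, consecutive integers are exactly representable as floats.
                System.out.println(16_000_000f + 1f);      // 1.6000001E7
                // At 2^24 and beyond, adding 1 is lost to rounding.
                System.out.println(16_777_216f + 1f);      // 1.6777216E7
                // Spacing between adjacent representable floats around 2^24 is 2.0.
                System.out.println(Math.ulp(16_777_216f)); // 2.0
            }
        }

    If that is indeed the cause, keeping the scrolling offset wrapped into a small range (the question already suggests binding it to a low multiple of 32) is exactly the right workaround.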

    Read the article

  • Good design for a simple site that contains a blog

    - by bporter
    What is a good design for a simple web site with mostly static pages and a blog? I am helping a friend build this for their small business. We are looking for a simple approach that can be implemented fairly quickly. (I am a programmer and can help with coding, hosting, etc.) One option is to use a site like virb, which lets you choose from one of their themes and build a site pretty easily. You can also include a blog. They host the site for a pretty low monthly rate. I recommended this option, but my friend wants a design that is unique and custom. So, I took one of the themes and started modifying the HTML and CSS. This might still be a good option, but... ...If we are going to greatly modify it, why not just create the static pages from scratch and use something like Wordpress for the blog. Is this a good option? It looks fairly easy to integrate Wordpress with a site so that the design and behavior are really cohesive. Is this a good idea? Do you recommend any other approaches?

    Read the article

  • Sharing an internet connection through the Ethernet port

    - by Bob Cunningham
    I have a small living room PC (Bohica, running fully updated Ubuntu 10.10/Maverick) connected to my HDTV that I use for web browsing and media streaming. It connects via WiFi (wlan0) to my Fedora server (Snafu), which in turn connects to the internet. I use static addressing, and everything has been working fine. I just got a Blu-ray player, and I'd like to give it wired network access to the internet via Bohica's available wired ethernet port (eth0). So far, I haven't been able to get eth0 and the network configured so that the Blu-ray player can talk to the internet. Here's my wlan0 configuration:

        ip4 addr: 192.168.0.100
        mask:     /24 (255.255.255.0)
        gateway:  192.168.0.4 (Fedora box)

    The Blu-ray player is set to an IP of 192.168.0.98/24, with the same gateway as above. I want eth0 set to an IP of 192.168.0.99/24, but when I do this using nm-connection-editor I lose internet access (the system tries to use eth0 as the default internet access interface). How do I get my Blu-ray player to talk to the internet through Bohica, and do so without disrupting my current (working) network? Thanks.
    Edit: Here's the relevant output from nm-tool with the Blu-ray player connected:

        $ nm-tool

        NetworkManager Tool

        State: connected

        - Device: eth0
          Type:              Wired
          Driver:            forcedeth
          State:             disconnected
          Default:           no
          HW Address:        90:FB:A6:2C:94:32

          Capabilities:
            Carrier Detect:  yes
            Speed:           100 Mb/s

          Wired Properties
            Carrier:         on

        - Device: wlan0  [wlan0]
          Type:              802.11 WiFi
          Driver:            ndiswrapper
          State:             connected
          Default:           yes
          HW Address:        00:26:5A:C0:D0:05

          IPv4 Settings:
            Address:         192.168.0.100
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.0.4

    Read the article

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard, and I have then been mapping these flat file schemas to a "normal" XML schema format. I had not previously had any cause to map flat files, and I ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source it threw an XSL transform error:

        Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>. Data at the root level is invalid. Line 1, position 1.

    Due to the complexity of the map in question, I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem. Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time, I figured out what the problem was: looking at the map properties, I noticed that I had the test map input set to "XML" - for a flat file instance this should be set to "native".

    Read the article

  • Data that has been deleted in P6, how is it updated in Analytics

    - by Jeffrey McDaniel
    In P6 Reporting Database 2.0 the ETL process looked to the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs, or those deletes would be lost; only after the ETL process has run can the refrdel table be cleared out. It is important to keep any purging of the refrdel table in a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle any interaction with the refrdel table, so this concern about timing refrdel cleanup around ETL runs is no longer applicable as of this release. In the Extended Schema tables (e.g. TaskX) there can still be deleted data present; the Extended Schema views join on the primary PMDB tables (e.g. Task) and filter out any deleted data. Any deleted data that remains in the Extended Schema tables can be cleaned out at a designated time by running the clean-up procedure documented in the P6 Extended Schema white paper. This can be run occasionally, but it is not necessary to run it often unless large amounts of data have been deleted.

    Read the article

  • Agile project management, agile development: early integration

    - by Matías Fidemraizer
    I believe that agile works if everything is agile. In the software development area, in my opinion, if team members' code is integrated early, the code will be more in sync, and this has a lot of pros:
    Early integration helps team members avoid painful merges.
    It encourages better coding habits, because everyone makes sure they don't break co-workers' code every day.
    Both developers and architects (code reviewers) can detect bad design decisions, or just wrong development directions, in real time, preventing useless work.
    What I'm actually talking about is getting the latest version of the code base and checking your own code in to source control on a daily basis. When you start your coding day (i.e. you arrive at work), your first action is updating your code base with the latest version from source control. On the other hand, when you're about an hour from leaving work and going home, your last action is checking your code in to source control and making sure your day's work doesn't break the project's build process. Rather than updating and checking in your code only once you have finished an entire task, I believe the best approach is fixing small and flexible personal milestones and checking in the code once you finish one of these. I really believe this approach fits better with the agile project management concept. Do you know of any document, blog post, wiki or article you can suggest that is in sync with my opinion? And do you see any problems with working this way? Thank you in advance.

    Read the article

  • How do I render only part of a texture to a point sprite in OpenGL ES for Android?

    - by nbolton
    Using the libgdx framework, I've figured out how to render a texture to a point sprite. The problem is, it renders the entire texture to the point sprite, where I only want a small part of it (since it's an isometric tile image). Here's a snippet from some demo code I wrote...

        public void create() {
            renderer = new ImmediateModeRenderer();
            tiles = Gdx.graphics.newTexture(
                    Gdx.files.internal("data/tiles2.png"),
                    TextureFilter.MipMap, TextureFilter.Linear,
                    TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);

            Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
            Gdx.gl.glEnable(GL10.GL_TEXTURE_2D);
            Gdx.gl.glEnable(GL11.GL_POINT_SPRITE_OES);
            Gdx.gl11.glTexEnvi(
                    GL11.GL_POINT_SPRITE_OES,
                    GL11.GL_COORD_REPLACE_OES,
                    GL11.GL_TRUE);
            Gdx.gl10.glPointSize(s);
            tiles.bind();
        }

        public void render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            renderer.begin(GL10.GL_POINTS);
            // render 3 point sprites at various 3d points
            renderer.vertex(-.1f, 0, -.1f);
            renderer.vertex(0, 0, 0);
            renderer.vertex(.1f, 0, .1f);
            // ... more vertices here if needed (one for each sprite) ...
            renderer.end();
        }

    Read the article

  • ROI in choosing a CMS solution

    - by Tio
    At the company I work for we need a CMS. The question is what to choose. For me, I think the best solution is to develop one of our own, but we (my boss and I) talked about using Drupal. My boss is completely non-technical, though, and wants to take a lot of shortcuts, which for programming is utterly bad. Too many shortcuts (and that's why just last Friday we had a bug on one of our systems that caused a lot of panic). So I'm trying to investigate the ROI of using an existing CMS solution versus developing our own customized CMS (whether based on an open source library or not), so that I can sell this to my boss. I'm almost sure that developing a customized CMS is best for our small company. After a search on Google I found this: "Choose between a commercial, open source, or customized CMS", but the link is from 2003; it has some truths, but the world has changed a lot since 2003. I can't seem to find anything else about it. I've developed my own CMS before, so I know it's not the easiest thing to do, and that it takes time. Can someone give me any tips? EDIT: By CMS I mean Content Management System, to manage the webpages of our clients.

    Read the article

  • Using Coherence API to get POF bytes

    - by Bruno.Borges
    Someone raised the question of how to use the Coherence API to get the bytes of an object in POF (Portable Object Format) programmatically. So I came up with this small piece of code that shows how simple the API is to use :-)

        SimplePofContext spc = new SimplePofContext();
        spc.registerUserType(0, User.class, new UserSerializer()); // consider UserSerializer an implementation of PofSerializer

        User u = new User();
        u.setId(21);
        u.setName("Some Name");
        u.setEmail("[email protected]");

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutput dataOutput = new DataOutputStream(baos);
        BufferOutput bufferOutput = new WrapperBufferOutput(dataOutput);
        spc.serialize(bufferOutput, u);

        byte[] byteArray = baos.toByteArray();
        System.out.println(Arrays.toString(byteArray));

    Easy, isn't it?

    Read the article

  • Join our webinar: What CFOs Want From IT -- Unlocking Growth with Emerging Technologies

    - by Di Seghposs
    According to the 2012 Gartner-FEI research, big data, analytics, and new mobile, social & cloud computing platforms are increasingly on the CFO's radar screen because of their potential to unlock new growth opportunities. Join Oracle Chair Jeff Henley and Oracle's Reggie Bradford and Rich Clayton as they explore CFO strategies and best practices for driving real value from IT investments in these areas:
    Why CFOs should get involved in big data and business analytics projects, and what best practices they can adopt to ensure their success
    How CFOs are leveraging new mobile and cloud computing platforms to address enterprise demands quickly and cost-effectively
    How CFOs can partner with CMOs to maximize the value of IT investments in social technologies that can help create new growth opportunities
    CFOs have more responsibility over IT than ever before. Learn how Oracle unlocks the transformative power of IT to take your business to the next level of performance.
    Date: Tuesday, November 27, 2012
    Time: 8:00 a.m. PST / 11:00 a.m. EST
    Register now.

    Read the article

  • How to display/add play button on youtube image?

    - by Zakir Sajib
    I have embedded the YouTube thumbnail image from the YouTube server, which is "http://img.youtube.com/vi/0.jpg", but no play button shows up in the middle of the image as we usually see on YouTube videos! I tried to use the following code to get an image on top of the YouTube thumbnail, but it shows a bigger picture (I know why):

        <a class="fancybox" href="#video">
            <img src="/wp-content/themes/mytheme/images/play_button.png"
                 style="background: url(http://img.youtube.com/vi/<?php echo $youtubeid ; ?>/0.jpg) no-repeat transparent"
                 width="180" height="150"/>
        </a>
        <div id="video">
            <!-- here is my usual embedded YouTube code -->
        </div>

    Both images are shown at 180 x 150, but that's not what I want. I want the YouTube thumbnail to be shown at 180 x 150 and the play button image (play_button.png) to be displayed at a small size in the middle of the thumbnail. Any clue in CSS or PHP coding would be a great favour.

    Read the article

  • Secure Coding Practices in .NET

    - by SoftwareSecurity
    Thanks to everyone who helped pack the room at the Fox Valley Day of .NET. This presentation was designed to help developers understand why secure coding is important, what areas to focus on, and additional resources. You can find the slides here. Remember to understand what you are really trying to protect within your application. This needs to be a conversation between the application owner, developer and architect. Understand what data (or asset) needs to be protected. This could be passwords, credit cards, or Social Security numbers. It may also be business-specific information, like business confidential data. Performing a risk and privacy assessment and threat model on your applications, even in a small way, can help you organize this process. These are the areas to pay attention to when coding:
    Authentication & Authorization
    Logging & Auditing
    Event Handling
    Session and State Management
    Encryption
    Links requested:
    Slides
    Books: The Security Development Lifecycle: SDL: A Process for Developing Demonstrably More Secure Software; Threat Modeling; Writing Secure Code; The Web Application Hacker's Handbook; Secure Programming with Static Analysis
    Other resources: OWASP, OWASP Top 10, OWASP WebScarab, OWASP WebGoat, Internet Storm Center, Web Application Security Consortium
    Events: OWASP AppSec 2011 in Minneapolis

    Read the article

  • Generalist Languages: Dying or Alive and Well?

    - by dsimcha
    Around here, it seems like there's somewhat of a consensus that generalist programming languages (ones that try to be good at everything: supporting multiple paradigms, supporting both very high- and very low-level programming, etc.) are a bad idea, and that it's better to pick the right tool for the job and use lots of different languages. I see three major areas where this is flawed:
    Interfacing multiple languages is always at least a source of friction and is sometimes practically impossible. How severe a problem this is depends on how fine-grained the interfacing is. Near the boundary between the two languages, though, you're basically limited to the intersection of their features, and you have to care about things like binary interfaces that you usually wouldn't. Passing complex data structures (i.e. not just primitives and arrays of primitives) between languages is almost always a hassle. Furthermore, shifting between different syntaxes, different conventions, etc. can be confusing and annoying, though this is a fairly minor complaint.
    Requirements are never set in stone. I hate picking a language thinking it's the right tool for the job, then realizing, when some new requirement surfaces, that it's actually a terrible choice for that requirement. This has happened to me several times before, usually when working with languages that are very slow, very domain specific and/or have very poor concurrency/parallelism support.
    When you program in a language for a while, you start to build up a personal toolbox of small utility functions/classes/programs. The value of these goes drastically down if you're forced to use a different language than the one you've accumulated all this code in.
    What am I missing here? Why shouldn't more focus be placed on generalist languages? Are generalist languages as a category dying, or alive and well?

    Read the article
