Search Results

Search found 4012 results on 161 pages for 'concept hierarchy'.


  • Where Facebook Stands Heading Into 2013

    - by Mike Stiles
    In our last blog, we looked at how Twitter is positioned heading into 2013. Now it’s time to take a similar look at Facebook. 2012, for a time at least, seemed to be the era of Facebook-bashing. Between a far-from-smooth IPO, subsequent stock price declines, and anxiety over privacy, the top social network became a target for comedians, politicians, business journalists, and of course those who were prone to Facebook-bash even in the best of times. But amidst the “this is the end of Facebook” headlines, the company kept experimenting, kept testing, kept innovating, and pressing forward, committed as always to the user experience, while concurrently addressing monetization with greater urgency. Facebook enters 2013 with over 1 billion users around the world. Usage grew 41% in Brazil, Russia, Japan, South Korea and India in 2012. In the Middle East and North Africa, an average 21 new signups happen per minute. Engagement and time spent on the site would impress the harshest of critics. Facebook, while not bulletproof, has become such an integrated daily force in users’ lives, it’s getting hard to imagine any future mass rejection. You want to see a company recognizing weaknesses and shoring them up. Mobile was a weakness in 2012 as Facebook was one of many caught by surprise at the speed of user migration to mobile. But new mobile interfaces, better mobile ads, speed upgrades, standalone Messenger and Pages mobile apps, and the big dollar acquisition of Instagram, were a few indicators Facebook won’t play catch-up any more than it has to. As a user, the cool thing about Facebook is, it knows you. The uncool thing about Facebook is, it knows you. The company’s walking a delicate line between the public’s competing desires for customized experiences and privacy. While the company’s working to make privacy options clearer and easier, Facebook’s Paul Adams says data aggregation can move from acting on what a user is engaging with at the moment to a more holistic view of what they’re likely to want at any given time. To help learn about you, there’s Open Graph. Embedded through diverse partnerships, the idea is to surface what you’re doing and what you care about, and help you discover things via your friends’ activities. Facebook’s Director of Engineering, Mike Vernal, says building mobile social apps connected to Facebook in such ways is the next wave of big innovation. Expect to see that fostered in 2013. The Facebook site experience is always evolving. Some users like that about Facebook, others can’t wait to complain about it…on Facebook. The Facebook focal point, the News Feed, is not sacred and is seeing plenty of experimentation with the insertion of modules. From upcoming concerts, events, suggested Pages you might like, to aggregated “most shared” content from social reader apps, plenty could start popping up between those pictures of what your friends had for lunch.  As for which friends’ lunches you see, that’s a function of the mythic EdgeRank…which is also tinkered with. When Facebook changed it in September, Page admins saw reach go down and the high anxiety set in quickly. Engagement, however, held steady. The adjustment was about relevancy over reach. (And oh yeah, reach was something that could be charged for). Facebook wants users to see what they’re most likely to like, based on past usage and interactions. Adding to the “cream must rise to the top” philosophy, they’re now even trying out ordering post comments based on the engagement the comments get. 
    Boy, it’s getting competitive out there for a social engager. Facebook has to make $$$. To do that, they must offer attractive vehicles to marketers. There are a myriad of ad units. But a key Facebook marketing concept is the Sponsored Story. It’s key because it encourages content that’s good, relevant, and performs well organically. If it is, marketing dollars can amplify it and extend its reach. Brands can expect the rollout of a search product and an ad network. That’s a big deal. It takes, as Open Graph does, the power of Facebook’s user data and carries it beyond the Facebook environment into the digital world at large. No one could target like Facebook can, and some analysts think it could double their roughly $5 billion revenue stream. As every potential revenue nook and cranny is explored, there are the users themselves. In addition to Gifts, Facebook thinks users might pay a few bucks to promote their own posts so more of their friends will see them. There’s also word that classifieds could be purchased in News Feeds, though they won’t be called classifieds. And that’s where Facebook stands: a wildly popular destination, a part of our culture, with ever-increasing functionalities, the biggest of big data, revenue strategies that appeal to marketers without souring the user experience, new challenges as a now public company, ongoing privacy concerns, and innovations that carry Facebook far beyond its own borders. Anyone care to write a “this is the end of Facebook” headline? @mikestiles Photo via stock.xchng

    Read the article

  • Say What? Podcasting As Part of Your Content Marketing

    - by Mike Stiles
    What do you usually do in your car on the way to work?  Sing along to radio? Stream Pandora or iHeartRadio? Talk on the phone? Sit in total silence? Whatever it is you do, you could be using that time to make yourself an expert in any range of topics…using podcasts. We invite you to follow or subscribe to the daily Oracle Social Spotlight podcast, a quick roundup of the day’s top stories around social marketing and the social networks. After podcasts arrived in 2004, growth was steady but slow. The concept was strong: anyone with a passion for any subject could make a show for anyone who cared to listen. Enter the smartphone, iTunes, new podcasting platforms, and social, and podcasting became easier than ever and made more sense for both podcasters and listeners. Stats show 1 in 5 smartphone owners are podcast consumers and 29% of Americans have listened to a podcast. The potential audience is also larger than ever. “Baked in” podcast apps on over 200 million devices expose users to volumes of audio content with just a tap. 97 million Americans are driving to work every day by themselves. And 38% of Americans listen to audio on a digital device each week, a number that’s projected to double by 2015. Does that mean your brand should be podcasting? That’s part of a larger discussion about your overall content strategy, provided you have one. But if you do and podcasting is a component of it, here are some things to keep in mind: Don’t podcast just to do it. Podcast because you thought of a show customers and prospects will like that they can’t get anywhere else. Sound quality matters. Good microphones are not expensive. Bad sound is annoying, makes your brand feel cheap, and will turn today’s sophisticated ears off. The host matters. Many think they belong on the radio. Few actually do. Your brand’s host should be comfortable & likeable. A top advantage of a podcast is people can bond with a real person. It’s a trust opportunity, so don’t take it lightly. The content matters. “All killer, no filler” means don’t allow babbling just to fill enough time for an episode. Value the listeners’ time, because that time is hard to get. Put time, effort and creativity into it. Sure you’re a business, but you’re competing with content from professional media and showbiz producers. If you can include music, sound effects, and things that amuse the ears, do it. If you start, be consistent. The #1 flaw in podcasting is when listeners can’t count on another episode or don’t know when it’s coming. Don’t skip doing shows just because you can. Get committed. Get your cover art right. Podcasting is about audio, but people shop for podcasts by glancing through graphics. Yours has to be professional, cool, and informative to get listeners interested. Cross-promote your podcast on all your channels. The competition for listeners is fierce, so if you have existing audiences you can leverage to launch your show, use them. Optimize it for mobile. Assume that’s where most listening will take place. If you’re using one of the podcast platform apps, you should be in good shape. Frankly, the percentage of brands that are podcasting is quite low, and that’s okay. Once you move beyond blogging and start connecting with real voices, poor execution can do damage. But more (32%) marketers want to learn how to use podcasting, and more (23%) were increasing their podcasting throughout this year. 
Bottom line, you want to share your brand’s message and stories wherever your audience might be and in whatever way they prefer to take in content. Many prefer to do that while driving or working out, using the eyes and hands-free medium of audio. @mikestilesPhoto: stock.xchng

    Read the article

  • Getting App.config to be configuration specific in VS2010

    - by MarkPearl
    I recently wanted to have a console application that had configuration-specific settings. For instance, if I had two configurations “Debug” and “Release”, depending on the currently selected configuration I wanted it to use a specific configuration file (either debug or release). If you want to do something similar, here is a potential solution that worked for me. Setting up a demo app to illustrate the point First, let’s set up an application that will demonstrate the most basic concept. using System; using System.Configuration; namespace ConsoleSpecificConfiguration { class Program { static void Main(string[] args) { Console.WriteLine("Config"); Console.WriteLine(ConfigurationManager.AppSettings["Example Config"]); Console.ReadLine(); } } }   This does a really simple thing – it displays a config value when run. To do this, you also need a config file set up. My default looks as follows… <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="Example Config" value="Default"/> </appSettings> </configuration>   Your entire solution will look as follows… Running the project you will get the following amazing output…   Let’s now say that, instead of having one config file, we want different config settings propagated depending on whether the solution configuration is “Debug” or “Release”. You can do the following… Step 1 – Create alternate config files First, add additional config files to your solution. You should have some form of naming convention for these config files; I have decided to follow a similar convention to the one used for web.config, so in my instance I am going to add an App.Debug.config and an App.Release.config file, BUT you can follow any naming convention you want provided you wire up the rest of the approach to use this convention. My files look as follows… App.Debug.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="Example Config" value="Debug"/> </appSettings> </configuration>   App.Release.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="Example Config" value="Release"/> </appSettings> </configuration>   Your solution will now look as follows… Step 2 – Create a bat file that will overwrite files The next step is to create a bat file that will overwrite one file with another. If you right-click on the solution in the solution explorer there will be a menu option to add new items to the solution. Create a text file called “copyifnewer.bat” which will be our copy script. Its contents should look as follows… @echo off echo Comparing two files: %1 with %2 if not exist %1 goto File1NotFound if not exist %2 goto File2NotFound fc %1 %2 /A if %ERRORLEVEL%==0 GOTO NoCopy echo Files are not the same. Copying %1 over %2 copy %1 %2 /y & goto END :NoCopy echo Files are the same. Did nothing goto END :File1NotFound echo %1 not found. goto END :File2NotFound copy %1 %2 /y goto END :END echo Done. Your solution should now look as follows…   Step 3 – Customize the Post Build event command line We now need to wire everything up – which we will do using the post-build event command line in VS2010. Right-click on your project and go to its properties. We are now going to wire up the script so that when we build our project it will overwrite the default App.config with whatever file we want. 
The syntax goes as follows… call "$(SolutionDir)copyifnewer.bat" "$(ProjectDir)App.$(ConfigurationName).config" "$(ProjectDir)$(OutDir)\$(TargetFileName).config" Testing it If I now change my project configuration to Release   And then run my application I get the following output… Toggling between Release and Debug mode will show that the config file is changing each time. And that is it!
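    As a footnote, the post-build event entered in the project properties dialog is simply stored in the project file itself, so it can also be reviewed or copied from there. A rough sketch of what Visual Studio writes into the .csproj for this setup (element placement can vary slightly between project types, so treat this as illustrative rather than exact):

        <!-- Inside the .csproj: the post-build event that runs copyifnewer.bat -->
        <PropertyGroup>
          <PostBuildEvent>call "$(SolutionDir)copyifnewer.bat" "$(ProjectDir)App.$(ConfigurationName).config" "$(ProjectDir)$(OutDir)\$(TargetFileName).config"</PostBuildEvent>
        </PropertyGroup>

    This can be handy if you want to copy the same wiring into several projects without clicking through the properties pages each time.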

    Read the article

  • Unexpected behaviour with glFramebufferTexture1D

    - by Roshan
    I am using render to texture concept with glFramebufferTexture1D. I am drawing a cube on non-default FBO with all the vertices as -1,1 (maximum) in X Y Z direction. Now i am setting viewport to X while rendering on non default FBO. My background is blue with white color of cube. For default FBO, i have created 1-D texture and attached this texture to above FBO with color attachment. I am setting width of texture equal to width*height of above FBO view-port. Now, when i render this texture to on another cube, i can see continuous white color on start or end of each face of the cube. That means part of the face is white and rest is blue. I am not sure whether this behavior is correct or not. I expect all the texels should be white as i am using -1 and 1 coordinates for cube rendered on non-default FBO. code: #define WIDTH 3 #define HEIGHT 3 GLfloat vertices8[]={ 1.0f,1.0f,1.0f, -1.0f,1.0f,1.0f, -1.0f,-1.0f,1.0f, 1.0f,-1.0f,1.0f,//face 1 1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 2 1.0f,1.0f,1.0f, 1.0f,-1.0f,1.0f, 1.0f,-1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 3 -1.0f,1.0f,1.0f, -1.0f,1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,-1.0f,1.0f,//face 4 1.0f,1.0f,1.0f, 1.0f,1.0f,-1.0f, -1.0f,1.0f,-1.0f, -1.0f,1.0f,1.0f,//face 5 -1.0f,-1.0f,1.0f, -1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f, 1.0f,-1.0f,1.0f//face 6 }; GLfloat vertices[]= { 0.5f,0.5f,0.5f, -0.5f,0.5f,0.5f, -0.5f,-0.5f,0.5f, 0.5f,-0.5f,0.5f,//face 1 0.5f,-0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 2 0.5f,0.5f,0.5f, 0.5f,-0.5f,0.5f, 0.5f,-0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 3 -0.5f,0.5f,0.5f, -0.5f,0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,-0.5f,0.5f,//face 4 0.5f,0.5f,0.5f, 0.5f,0.5f,-0.5f, -0.5f,0.5f,-0.5f, -0.5f,0.5f,0.5f,//face 5 -0.5f,-0.5f,0.5f, -0.5f,-0.5f,-0.5f, 0.5f,-0.5f,-0.5f, 0.5f,-0.5f,0.5f//face 6 }; GLuint indices[] = { 0, 2, 1, 0, 3, 2, 4, 5, 6, 4, 6, 7, 8, 9, 10, 8, 10, 11, 12, 15, 14, 12, 14, 13, 16, 17, 18, 16, 18, 19, 20, 23, 22, 20, 22, 21 }; GLfloat texcoord[] = { 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0 }; glGenTextures(1, &id1); glBindTexture(GL_TEXTURE_1D, id1); glGenFramebuffers(1, &Fboid); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, WIDTH*HEIGHT , 0, GL_RGBA, GL_UNSIGNED_BYTE,0); glBindFramebuffer(GL_FRAMEBUFFER, Fboid); glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_1D,id1,0); draw_cube(); glBindFramebuffer(GL_FRAMEBUFFER, 0); draw(); } draw_cube() { glViewport(0, 0, WIDTH, HEIGHT); glClearColor(0.0f, 0.0f, 0.5f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(temp.psId,"position")); glVertexAttribPointer(glGetAttribLocation(temp.psId,"position"), 3, GL_FLOAT, GL_FALSE, 0,vertices8); glDrawArrays (GL_TRIANGLE_FAN, 0, 24); } draw() { glClearColor(1.0f, 0.0f, 0.0f, 1.0f); glClearDepth(1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"tk_position")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"tk_position"), 3, GL_FLOAT, GL_FALSE, 0,vertices); nResult = GL_ERROR_CHECK((GL_NO_ERROR, "glVertexAttribPointer(position, 3, GL_FLOAT, 
GL_FALSE, 0,vertices);")); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"inputtexcoord")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"inputtexcoord"), 2, GL_FLOAT, GL_FALSE, 0,texcoord); glBindTexture(*target11, id1); glDrawElements ( GL_TRIANGLES, 36,GL_UNSIGNED_INT, indices ); when i change WIDTH=HEIGHT=2, and call a glreadpixels with height, width equal to 4 in draw_cube() i can see first 2 pixels with white color, next two with blue(glclearcolor), next two white and then blue and so on.. Now when i change width parameter in glTeximage1D to 16 then ideally i should see alternate patches of white and blue right? But its not the case here. why so?

    Read the article

  • To Bit or Not To Bit

    - by Johnm
    'Twas a long day of troubleshooting and firefighting and now, with most of the office vacant, you face a blank scripting window to create a new table in your database. Many questions circle your mind like dirty water gurgling down the bathtub drain: "How normalized should this table be?", "Should I use an identity column?", "NVarchar or Varchar?", "Should this column be NULLABLE?", "I wonder what apple blue cheese bacon cheesecake tastes like?" Well, there are times when the mind goes its own direction. A Bit About Bit At some point during your table creation efforts you will encounter the decision of whether to use the bit data type for a column. The bit data type is an integer data type that recognizes only the values of 1, 0 and NULL as valid. This data type is often utilized to store yes/no or true/false values. An example of its use would be a column called [IsGasoline] which would be intended to contain the value of 1 if the row's subject (a car) had a gasoline engine and a 0 if the subject did not have a gasoline engine. The bit data type can even be found in some of the system tables of SQL Server. For example, the sysssispackages table in the msdb database, which contains SQL Server Integration Services package information for the packages stored in SQL Server. This table contains a column called [IsEncrypted]. A value of 1 indicates that the package has been encrypted while the value of 0 indicates that it is not. I have learned that the most effective way to disperse the crowd that surrounds the office coffee machine is to engage in SQL Server debates. The bit data type has been one of the most recurring, as well as the most enjoyable, of these topics. It has a practical side and a philosophical side. Practical Consideration This data type certainly has its place and is a valuable option for database design; but it is often used in situations where the answer is really not a pure true/false response. In addition, true/false values are not very informative or scalable. Let's use the previously noted [IsGasoline] column for illustration. On the surface it appears to be a rather simple question when evaluating a car: "Does the car have a gasoline engine?" If the person entering data is entering a row for a Jeep Liberty, the response would be a 1 since it has a gasoline engine. If the person entering data is entering a row for a Chevrolet Volt, the response would be a 0 since it has an electric engine. What happens when a person is entering a row for the gasoline/electric hybrid Toyota Prius? Would one person's conclusion be consistent with another person's conclusion? The argument could be made that the current intent for the database is to be used only for pure gasoline and pure electric engines; but this is where the scalability issue comes into play. With the use of a bit data type a database modification and data conversion would be required if the business decided to take on hybrid engines. Alternatively, if the int data type were used as a foreign key to a reference table containing the engine type options, the change to include the hybrid option would only require an entry into the reference table (a small T-SQL sketch of both designs appears at the end of this post). Philosophical Consideration Since the bit data type is often used for true/false or yes/no data (also called Boolean) it presents a philosophical conundrum of what to do about the allowance of the NULL value. 
    The inclusion of NULL in a true/false or yes/no response simply violates the logical principle of bivalence which states that "every proposition is either true or false". If NULL is not true, then it must be false. The mathematical laws of Boolean logic support this concept by stating that the only valid values of this scenario are 1 and 0. There is another way to look at this conundrum: NULL is also considered to be the absence of a response. In other words, it is the equivalent of "undecided". Anyone who watches the news can tell you that polls always include an "undecided" option. This could be considered a valid option in the world of yes/no/dunno. Throughout all of these considerations I have discovered one absolute certainty: When you have found a person, or group of persons, who are willing to entertain a philosophical debate of the bit data type, you have found some true friends.
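    To make the practical consideration above concrete, here is a small, hypothetical T-SQL sketch (table and column names are invented purely for illustration) contrasting the two designs; with the reference table, adding a hybrid engine later is a single INSERT rather than a schema change and data conversion:

        -- Design 1: bit column; adding a third engine type later forces a schema change
        CREATE TABLE dbo.CarWithBit
        (
            CarId      INT IDENTITY(1,1) PRIMARY KEY,
            ModelName  NVARCHAR(100) NOT NULL,
            IsGasoline BIT NOT NULL
        );

        -- Design 2: reference table; new engine types are just new rows
        CREATE TABLE dbo.EngineType
        (
            EngineTypeId INT IDENTITY(1,1) PRIMARY KEY,
            Description  NVARCHAR(50) NOT NULL
        );

        CREATE TABLE dbo.CarWithReference
        (
            CarId        INT IDENTITY(1,1) PRIMARY KEY,
            ModelName    NVARCHAR(100) NOT NULL,
            EngineTypeId INT NOT NULL REFERENCES dbo.EngineType(EngineTypeId)
        );

        INSERT INTO dbo.EngineType (Description) VALUES ('Gasoline'), ('Electric'), ('Hybrid');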

    Read the article

  • Override an IOCTL Handler in PQOAL

    - by Kate Moss' Big Fan
    When porting or creating a BSP to a new platform, we often need to make change to OEMIoControl or HAL IOCTL handler for more specific. Since Microsoft introduced PQOAL in CE 5.0 and more and more BSP today leverages PQOAL to simplify the OAL, we no longer define the OEMIoControl directly. It is somehow analogous to migrate from pure Windows SDK to MFC; people starts to define those MFC handlers and forgot the WinMain and the big message loop. If you ever take a look at the interface between OAL and Kernel, PUBLIC\COMMON\OAK\INC\oemglobal.h, the pfnOEMIoctl is still there just as the entry point of Windows Program is WinMain since day one. (For those may argue about pfnOEMIoctl is not OEMIoControl, I will encourage you to dig into PRIVATE\WINCEOS\COREOS\NK\OEMMAIN\oemglobal.c which initialized pfnOEMIoctl to OEMIoControl. The interface is just to split OAL and Kernel which no longer linked to one executable file in CE 6, all of the function signature is still identical) So let's trace into PQOAL to realize how it implements OEMIoControl and how can we override an IOCTL handler we interest. First thing to know is the entry point (just as finding the WinMain in MFC), OEMIoControl is defined in PLATFORM\COMMON\SRC\COMMON\IOCTL\ioctl.c. Basically, it does nothing special but scan a pre-defined IOCTL table, g_oalIoCtlTable, and then execute the handler. (The highlight part) Other than that is just for error handling and the use of critical section to serialize the function. BOOL OEMIoControl(     DWORD code, VOID *pInBuffer, DWORD inSize, VOID *pOutBuffer, DWORD outSize,     DWORD *pOutSize ) {     BOOL rc = FALSE;     UINT32 i; ...     // Search the IOCTL table for the requested code.     for (i = 0; g_oalIoCtlTable[i].pfnHandler != NULL; i++) {         if (g_oalIoCtlTable[i].code == code) break;     }     // Indicate unsupported code     if (g_oalIoCtlTable[i].pfnHandler == NULL) {         NKSetLastError(ERROR_NOT_SUPPORTED);         OALMSG(OAL_IOCTL, (             L"OEMIoControl: Unsupported Code 0x%x - device 0x%04x func %d\r\n",             code, code >> 16, (code >> 2)&0x0FFF         ));         goto cleanUp;     }            // Take critical section if required (after postinit & no flag)     if (         g_ioctlState.postInit &&         (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0     ) {         // Take critical section                    EnterCriticalSection(&g_ioctlState.cs);     }     // Execute the handler     rc = g_oalIoCtlTable[i].pfnHandler(         code, pInBuffer, inSize, pOutBuffer, outSize, pOutSize     );     // Release critical section if it was taken above     if (         g_ioctlState.postInit &&         (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0     ) {         // Release critical section                    LeaveCriticalSection(&g_ioctlState.cs);     } cleanUp:     OALMSG(OAL_IOCTL&&OAL_FUNC, (L"-OEMIoControl(rc = %d)\r\n", rc ));     return rc; }   Where is the g_oalIoCtlTable? It is defined in your BSP. Let's use DeviceEmulator BSP as an example. The PLATFORM\DEVICEEMULATOR\SRC\OAL\OALLIB\ioctl.c defines the table as const OAL_IOCTL_HANDLER g_oalIoCtlTable[] = { #include "ioctl_tab.h" }; And that leads to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h which defined some of IOCTL handler but others are defined in oal_ioctl_tab.h which is under PLATFORM\COMMON\SRC\INC\. Finally, we got the full table body! (Just like tracing MFC, always jumping back and forth). 
    The format of the table is very straightforward: IOCTL code, flags and handler function // IOCTL CODE,                          Flags   Handler Function //------------------------------------------------------------------------------ { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlHalInitRegistry     }, { IOCTL_HAL_INIT_RTC,                       0,  OALIoCtlHalInitRTC          }, { IOCTL_HAL_REBOOT,                         0,  OALIoCtlHalReboot           }, PQOAL scans through the table until it finds a matching IOCTL code, then invokes the handler function. Since it scans the table from the top, if we define TWO handlers with the same IOCTL code, the first one is always invoked, without exception. Now back to the PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, with the following table { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlDeviceEmulatorHalInitRegistry     }, ... #include <oal_ioctl_tab.h> Note the IOCTL_HAL_INITREGISTRY handler is defined in both the BSP's local ioctl_tab.h and the common oal_ioctl_tab.h, but because the BSP's local handler comes before "#include <oal_ioctl_tab.h>", we know OALIoCtlDeviceEmulatorHalInitRegistry always gets called. In this example, the DeviceEmulator BSP overrides the IOCTL_HAL_INITREGISTRY handler from OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry by manipulating the g_oalIoCtlTable table. (From some points of view, it is similar to the message map in MFC.) Please be aware, when you override an IOCTL handler in PQOAL, you may want to clone the original implementation to your BSP and change it to meet your needs. This is recommended and saves you redundant work, but remember to rename the handler function (just as the DeviceEmulator changes the name of OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry). If you don't change the name, the linker may not be happy (due to the name conflict), and more importantly, by using a different handler name you can always redirect the handler back to the original one. (It is like the OOP concept of calling a function in the base class; still not clear? I am going to show you soon!) The OALIoCtlDeviceEmulatorHalInitRegistry sets up DeviceEmulator-specific registry settings and in the end, if everything goes well, it calls OALIoCtlHalInitRegistry (PLATFORM\COMMON\SRC\COMMON\IOCTL\reginit.c) to do the rest.     if(fOk) {         fOk = OALIoCtlHalInitRegistry(code, pInpBuffer, inpSize, pOutBuffer,             outSize, pOutSize);     } Now you've got the picture. Whenever you want to override an IOCTL handler that is implemented in PQOAL, just: clone the handler function to your BSP as a template; do a simple name change for the handler function, and a name change in the IOCTL table header file that maps the IOCTL to the function; implement your IOCTL handler, and whenever you need to redirect it back, just call the original handler function (a rough sketch of this pattern follows below). It is the standard way of implementing a custom IOCTL and the one most Microsoft developers prefer. The mapping of IOCTL routine to IOCTL code is platform specific - you control the header file that does that mapping.
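    Here is that rough sketch (the handler and BSP names are invented, and the exact parameter types should be checked against the OAL_IOCTL_HANDLER definition in your PQOAL headers); the override does its BSP-specific work and then falls through to the stock handler:

        // In the BSP: hypothetical override of the IOCTL_HAL_INITREGISTRY handler.
        BOOL OALIoCtlMyBspHalInitRegistry(
            UINT32 code, VOID *pInpBuffer, UINT32 inpSize,
            VOID *pOutBuffer, UINT32 outSize, UINT32 *pOutSize)
        {
            BOOL fOk = TRUE;

            // ... BSP-specific registry fix-ups go here ...

            // Redirect back to the common PQOAL implementation to do the rest.
            if (fOk) {
                fOk = OALIoCtlHalInitRegistry(code, pInpBuffer, inpSize,
                                              pOutBuffer, outSize, pOutSize);
            }
            return fOk;
        }

        // In the BSP's ioctl_tab.h, placed BEFORE #include <oal_ioctl_tab.h> so it is matched first:
        // { IOCTL_HAL_INITREGISTRY,    0,  OALIoCtlMyBspHalInitRegistry },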

    Read the article

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around(probably debugging). 1. Create certificate 1.1 Right-click the cloud project (marked in red) and select “Configure remote desktop”. 1.2 In the drop down list of certificates, choose <create> at the bottom. 1.3. Follow the instructions, you can set it up with default values. 1.4 When done. Choose the certificate and click “Copy to File…” as seen in the left of the picture above. 1.5. Save the file with any name you want. Now we will save it to local storage to be able to import it to our solution through the azure configuration manager in step 3. 2. Save certificate to local storage Now we need to attach it to our local certificate storage to be able to reach it from our confiuguration manager in visual studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137 In order to view the Certificates store on the local computer, perform the following steps: Click Start, and then click Run. Type "MMC.EXE" (without the quotation marks) and click OK. Click Console in the new MMC you created, and then click Add/Remove Snap-in. In the new window, click Add. Highlight the Certificates snap-in, and then click Add. Choose the Computer option and click Next. Select Local Computer on the next screen, and then click OK. Click Close , and then click OK. You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use. Now that you have access to the Certificates snap-in, you can import the server certificate into you computer's certificate store by following these steps: Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: Certificates may not be listed. If it is not, that is because there are no certificates installed. Right-click Certificates (or Personal if that option does not exist.) Choose All Tasks, and then click Import. When the wizard starts, click Next. Browse to the PFX file you created containing your server certificate and private key. Click Next. Enter the password you gave the PFX file when you created it. Be sure the Mark the key as exportable option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key. Click Next, and then choose the Certificate Store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, it will also be added to this store. Click Next. You should see a summary of screen showing what the wizard is about to do. If this information is correct, click Finish. You will now see the server certificate for your Web server in the list of Personal Certificates. It will be denoted by the common name of the server (found in the subject section of the certificate). Now that you have the certificate backup imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps: Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on. 
    Right-click on the site and click Properties. You should now see the properties screen for the Web site. Click the Directory Security tab. Under the Secure Communications section, click Server Certificate. This will start the Web Site Certificate Wizard. Click Next. Choose the Assign an existing certificate option and click Next. You will now see a screen showing the contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next. You will now see a summary screen showing you all the details about the certificate you are installing. Be sure that this information is correct or you may have problems using SSL or TLS in HTTP communications. Click Next, and then click OK to exit the wizard. You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel. Image of a typical MMC.EXE with the certificates up.   3. Import the certificate to your Visual Studio project. 3.1 Now right-click your equivalent to the MvcWebRole1 (as seen in the first picture under the red oval) and choose Properties. 3.2 Choose Certificates. Right-click the ellipsis to the right of the “thumbprint” and you should be able to select your newly created certificate here. After selecting it, save the file.   4. Upload the certificate to your Azure subscription. 4.1 Go to the Azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.     5. Connect to the server. Since I tried to use my account settings (we have to use another name), we have to set up a new name for the connection. No biggie. 5.1 Go to the Azure management portal, select your service and in the bottom menu, choose “REMOTE”. This will display the configuration for the remote connection. It will actually change your ServiceConfiguration.cscfg file. After you change it here it might be good to choose download and replace the one in your project. Set a name that is not your Windows Azure account name and not Administrator. 5.2 Go to Visual Studio, click Server Explorer. Choose as selected in the picture below and click “Connect using Remote Desktop”.   5.3 You will now be able to log in with the name and password set up in step 5.1, and voila! Windows Server 2012, IIS and other nice stuff!   To do this I’ve been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx where you can find some of this information and more.
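    As a side note, if you would rather script step 2 than click through MMC, a minimal C# sketch (the path and password are placeholders for the PFX you exported in step 1.5) that imports the certificate into the Local Computer\Personal store looks roughly like this:

        using System.Security.Cryptography.X509Certificates;

        class ImportPfx
        {
            static void Main()
            {
                // Placeholder path and password - substitute your own PFX details
                var cert = new X509Certificate2(@"C:\certs\remotedesktop.pfx", "pfxPassword",
                    X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);

                var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
                store.Open(OpenFlags.ReadWrite);
                store.Add(cert);   // Adds to Local Computer\Personal
                store.Close();
            }
        }

    Run it elevated, since writing to the Local Computer store requires administrator rights.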

    Read the article

  • Skype support and dead parrot sketches

    - by Greg Low
    We as an industry have a lot to answer for. One thing is the level of support provided for users. Here's the conversation I'm having with Skype right now. It feels like being in a Monty Python skit. I was really hoping that when Microsoft purchased Skype that things might improve. 8:33:44 PM greglowInitial Question/Comment: Subscriptions (Unlimited Country, Unlimited Europe, Unlimited Region, Unlimited World)8:33:44 PM SystemThank you for contacting Skype Customer Support!8:33:49 PM SystemDonna Faye has joined this session!8:33:50 PM SystemConnected with Donna Faye8:33:50 PM SystemThank you for contacting Skype Customer Support!8:33:54 PM SystemPlease hold for the next available Live Support Agent.8:33:59 PM Donna FayeHello! Welcome to Skype Live Support! My name is Donna L. How may I help you?8:34:09 PM greglowI've got a subscription (skypename greglow)8:34:21 PM greglowIt's set to auto-renew8:34:45 PM greglowWhy have I been getting messages about my Skype number expiring, if it's set to renew automatically8:34:54 PM greglowand has recently renewed automatically8:35:09 PM Donna FayeThank you for the information.8:35:29 PM Donna FayeTo better assist you may I know your Skype name, please?8:35:37 PM greglowgreglow8:35:44 PM Donna FayeThank you.8:36:14 PM Donna FayeI understand your concern that you receive an email notification stating that your calling subscription will about to expire. Am I right?8:36:43 PM greglowNo, I got an email telling me that one of my Skype numbers was going to expire8:37:08 PM greglowWhen I went to the website, it said it was part of the subscription and the subscription was set to auto-renew8:37:18 PM greglowThe subscription auto-renewed on Oct XXth8:37:24 PM greglowBut the number still expired8:37:43 PM greglowWhen I logged on today, it said that the number had expired and that I had 52 days left to reactivate it8:37:58 PM greglowI did that but am trying to understand why that happens at all8:38:36 PM greglowWhy do you expire numbers that are part of auto-renewing subscriptions?8:39:49 PM Donna FayeUpon checking your account, your calling subscription link to three Online Numbers.8:39:55 PM greglowcorrect8:40:10 PM greglowbut until an hour or so ago, one of them had expired. why?8:40:54 PM Donna FayeLet me explain to you that now Online Number are detach to a calling subscription unlike before it was attach to the calling subscription so therefore each Online Number will recur individually.8:41:29 PM Donna FayeUpon checking your account it shows that all your Online Number will expire on October XX, 20138:41:44 PM Donna FayeIf you don't want to be charged again for you can cancel it anytime.8:41:59 PM greglowI have set it to charge automatically8:42:06 PM greglowso why do the numbers expire?8:42:32 PM Donna FayeNo they are not expired they are all active.8:42:33 PM greglowif there is an automatic payment set up, why does anything expire?8:42:52 PM greglowyes, but one of them was expired, and I only fixed it a few hours ago8:43:07 PM greglowI'm trying to understand why it expired8:44:54 PM Donna FayeThe email you got just notify you that you should have good enough funding source of your calling subscription and Online Number in order for this to recur.8:45:12 PM greglowAgreed. I did that but the number still expired. Why?8:45:38 PM Donna FayeHowever do not worry on this hence all Online Number are active and not expired.8:45:41 PM greglowWhen I logged on today it said that the number had expired and that I had 52 days left to reactivate it. 
Why did that happen?8:46:14 PM Donna FayeMay I know the Online Number you are pertaining, please?The number that had expired was (07) XXXX XXXX (Australia)8:49:19 PM Donna FayeOkay, do not worry on this Online Number because this will expire on October XX, 2013.8:49:38 PM Donna FayeThank you for all the information.8:49:46 PM greglowI understand that it won't expire now until next year. I'm trying to understand why it stopped working this time8:49:51 PM greglowso I can avoid that next time8:50:39 PM Donna FayeWhat do yo stop working, you mean that Online Number is unable to be reach out?8:50:59 PM Donna Faye*what do you mean by stop working8:51:18 PM greglowThe Skype number +61 7 XXXX XXXX stopped working because it expired8:51:32 PM Donna FayeNo, this into expired.8:51:34 PM greglowI'm trying to understand why it expired8:51:50 PM greglowSorry I don't follow "this into expired"8:52:39 PM Donna FayeLet me inform you that this Online Number is not expired.8:52:54 PM Donna FayePlease disregard the notification you see on your Skype account.8:53:02 PM greglowSorry, this is getting very frustrating. I already told you I fixed it today. I want to know why it happened.8:53:20 PM Donna FayeHence I can rest assured you that this is active until October XX, 2013.8:53:21 PM greglowSo I can avoid it happening again, on this account, or on our other accounts8:53:34 PM greglowCan I speak to someone who really understands this please?Donna FayeSkype send a notification to Skype users for their Skype product in order for them to be aware the expiration of your product however your Online Number still active on your account hence when your calling subscription Unlimited World recur your Online Number also recur. 8:57:59 PM Donna Faye I do apologize for this matter, please allow me to resolve this issue for you. 9:02:48 PM greglow My question is still the same: if it's set to auto-pay, why did it expire? 9:03:50 PM greglow Why did it stop working? Why did I have to "reactivate" it? 9:04:08 PM greglow Why didn't it just keep working if it's set to auto-pay? 9:04:30 PM greglow This isn't a difficult concept 9:04:34 PM Donna Faye Let me clarify to you that this is not the really expired what is emphasizing on this is when you don't have good enough funding source when you calling subscription recur the Online Number will possibly expired but seem that you have a good funding source then this will continue until 2013. 9:05:10 PM greglow When I logged on today, it was not working, and your website said it had expired and that I had 52 days left to reactivate it 9:05:17 PM greglow Why?and so on, and so on, and so on.... 

    Read the article

  • Global Perspective: Oracle AppAdvantage Does its Stage Debut in the UK

    - by Tanu Sood
    Global Perspective is a monthly series that brings experiences, business needs and real-world use cases from regions across the globe. This month’s feature is a follow-up from last month’s Global Perspective note from a well-known ACE Director based in EMEA. My first contribution to this blog was before Oracle Open World and I was quite excited about where this initiative would take me in my understanding of the value of Oracle Fusion Middleware. Rimi Bewtra from the Oracle AppAdvantage team came as promised to the Oracle ACE Director briefings and explained what this initiative was all about and I then asked the directors to take part in the new survey. The story was really well received and then at the SOA advisory board that many of these ACE Directors already take part in there was a further discussion on how this initiative will help customers understand the benefits of adoption. A few days later Rick Beers launched the program at a lunch of invited customer executives which included one from Pella who talked about their projects (a quick recap on that here). I wasn’t able to stay for the whole event but what really interested me was that these executives understood the technology but were looking for how they could use it to drive their businesses. Lots of ideas were bubbling up in my head about how we can use this in user groups to help our members, and the timing was fantastic as just three weeks later we had UKOUG_Apps13, our flagship Applications conference in the UK. We had independently been working with Oracle marketing in the UK on an initiative called Apps Transformation to help our members look beyond just the application they use today. We have had a Fusion community page but felt the options open are now much wider than Fusion Applications: there are acquired applications, social, mobility and of course the underlying technology, Oracle Fusion Middleware. I was really pleased to be allowed to give the Oracle AppAdvantage story as a session in our conference and we are planning a special Apps Transformation event in March where I hope the Oracle AppAdvantage team will take part and we will have the results of the survey to discuss. But, life also came full circle for me. In my first post, I talked about Andrew Sutherland and his original theory that Oracle Fusion Middleware adoption had technical drivers. Well, Andrew was a speaker at our event and he gave a potted, tech-talk free update on Oracle Open World. Andrew talked about the Prevailing Technology Winds, what is driving this today, and how in the past it was the move from simply automating processes (ERP etc), through the altering of those processes (SOA) and onto consolidation. 
The next drivers are around the need to predict, both faster and more accurately; how to better exploit the information that we have available. He went on to talk about The Nexus of Forces: Social, Mobile, Cloud and Information – harnessing these forces of change with Oracle technology. Gartner really likes this concept and if you want to know more you can get their paper here. All this has made me think, and I hope it will make you too. Technology can help us drive our businesses better and understanding your needs can be the first step on your journey, which was the theme of our event in the UK. I spoke to a number of the delegates and I hope to share some of their stories in later posts. If you have a story to share, the survey is at: https://www.surveymonkey.com/s/P335DD3 About the Author: Debra Lilley, Fujitsu Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Debra has 18 years experience with Oracle Applications, with E Business Suite since 9.4.1, moving to Business Intelligence Team Leader and then Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts Editor’s Note: Debra has kindly agreed to share her musings and experience in a monthly column on the Fusion Middleware blog so do stay tuned…

    Read the article

  • Updated Agenda for OTN Architect Day Los Angeles (Oct 25)

    - by Bob Rhubart
    Here's the latest information on the session schedule and content for Oracle Technology Network Architect Day in Los Angeles on October 25, 2012. Registration is open, but seating is limited. When: Thursday October 25 12, 2012 8:30am – 5:00pm Where: Sofitel Los Angeles 8555 Beverly Boulevard Los Angeles, CA 90048 Agenda Time Session Title Room 8:30 am - 9:00 am Registration and Continental Breakfast 9:00 am - 9:15 am Welcome and Opening Comments | Bob Rhubart Beverly Ballroom 9:15 am - 10:00 am Engineered Systems: Oracle's Vision for the Future | Ralf Dossmann Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance with up to a 30x increase over existing legacy systems, with the lowest cost of ownership over a 3 or 5 year basis than any other hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO. Beverly Ballroom 10:00 am - 10:30 am Monitoring and Managing Applications in the Cloud | Basheer Khan Oracle offers a broad portfolio of software and hardware products and services to enable public, private and hybrid clouds to power the enterprise. However, enterprise cloud computing presents new management challenges, that need to be addressed to realize the economic benefits of cloud computing. In this session you will learn about the methods and tools you can use to proactively monitor your end-to-end Oracle Applications environment in the cloud, define service-level objectives, gain insight into your end users, and troubleshoot performance problems from a single console. Beverly Ballroom 10:30 am - 10:45 am Break 10:45 am - 11:30 am Breakout Sessions (pick one) Cloud Computing - Making IT Simple | Dr. James Baty The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds. Beverly Ballroom Innovations in Grid Computing with Oracle Coherence | Ashok Aletty Learn how Oracle Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers had implemented and how Oracle is integrating Coherence into its Enterprise Java stack. Hollywood Room 11:30 am - 12:15 pm Breakout Sessions (pick one) Enterprise Strategy for Cloud Security | Dave Chappelle Security is high on the list of concerns for many organizations as they evaluate their cloud computing options. This session will examine security in the context of the various forms of cloud computing. We'll consider technical and non-technical aspects of security, and discuss several strategies for cloud computing, from both the consumer and producer perspectives. Beverly Ballroom Oracle Enterprise Manager | Perren Walker This session examines new Oracle Enterprise Manager monitoring, administration, and management features for Oracle Exalogic. It focuses on two management themes: cloud management related to virtualization and applications-to-disk management. 
For private cloud management, it discusses virtualization management features providing an enhanced set of application deployment capabilities enabling IaaS as well as PaaS interactions. Then from an end-to-end perspective, it covers the specific capabilities and—where applicable—best practices for machine, cloud, middleware, and application administration. Hollywood Room 12:15 pm - 1:15 pm Lunch Beverly Ballroom Lounge 1:15 pm - 2:00 pm Panel Discussion - Q&A with session speakers Beverly Ballroom 2:00 pm - 2:45 pm Breakout Sessions (pick one) Oracle Cloud Reference Architecture | Anbu Krishnaswamy Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation will answer the important questions: What exactly is a Cloud, why you need it, what changes it will bring to the enterprise, and what are the key capabilities of a Cloud infrastructure are - using Oracle's Cloud Reference Architecture, which is part of the IT Strategies from Oracle (ITSO) Cloud Enterprise Technology Strategy ETS). Beverly Ballroom 21st Century SOA | Jeff Davies Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission critical business and system applications. We'll also take a special look at the convergence of SOA & BPM using Oracle's Unified technology stack. Hollywood Room 2:45 pm - 3:00 pm Break 3:00 pm - 4:00 pm Roundtable Discussion Beverly Ballroom 4:00 pm - 4:15 pm Closing Comments & Readouts from Roundtables Beverly Ballroom 4:15 pm - 5:00 pm Networking / Reception Beverly Ballroom Lounge Note: Session schedule and content subject to change.

    Read the article

  • Detecting Duplicates Using Oracle Business Rules

    - by joeywong-Oracle
    Recently I was involved with a Business Process Management Proof of Concept (BPM PoC) where we wanted to show how customers could use Oracle Business Rules (OBR) to easily define some rules to detect certain conditions, such as duplicate account numbers, duplicate names, high transaction amounts, etc, in a set of transactions. Traditionally you would have to loop through the transactions and compare each transaction with each other to find matching conditions. This is not particularly nice as it relies on more traditional approaches (coding) and is not the most efficient way. OBR is a great place to house these types’ of rules as it allows users/developers to externalise the rules, in a simpler manner, externalising the rules from the message flows and allows users to change them when required. So I went ahead looking for some examples. After quite a bit of time spent Googling, I did not find much out in the blogosphere. In fact the best example was actually from...... wait for it...... Oracle Documentation! (http://docs.oracle.com/cd/E28271_01/user.1111/e10228/rules_start.htm#ASRUG228) However, if you followed the link there was not much explanation provided with the example. So the aim of this article is to provide a little more explanation to the example so that it can be better understood. Note: I won’t be covering the BPM parts in great detail. Use case: Payment instruction file is required to be processed. Before instruction file can be processed it needs to be approved by a business user. Before the approval process, it would be useful to run the payment instruction file through OBR to look for transactions of interest. The output of the OBR can then be used to flag the transactions for the approvers to investigate. Example BPM Process So let’s start defining the Business Rules Dictionary. For the input into our rules, we will be passing in an array of payments which contain some basic information for our demo purposes. Input to Business Rules And for our output we want to have an array of rule output messages. Note that the element I am using for the output is only for one rule message element and not an array. We will configure the Business Rules component later to return an array instead. Output from Business Rules Business Rule – Create Dictionary Fill in all the details and click OK. Open the Business Rules component and select Decision Functions from the side. Modify the Decision Function Configuration Select the decision function and click on the edit button (the pencil), don’t worry that JDeveloper indicates that there is an error with the decision function. Then click the Ouputs tab and make sure the checkbox under the List column is checked, this is to tell the Business Rules component that it should return an array of rule message elements. Updating the Decision Service Next we will define the actual rules. Click on Ruleset1 on the side and then the Create Rule in the IF/THEN Rule section. Creating new rule in ruleset Ok, this is where some detailed explanation is required. Remember that the input to this Business Rules dictionary is a list of payments, each of those payments were of the complex type PaymentType. Each of those payments in the Oracle Business Rules engine is treated as a fact in its working memory. Implemented rule So in the IF/THEN rule, the first task is to grab two PaymentType facts from the working memory and assign them to temporary variable names (payment1 and payment2 in our example). 
Matching facts Once we have them in the temporary variables, we can then start comparing them to each other. For our demonstration we want to find payments where the account numbers were the same but the account name was different. Suspicious payment instruction And to stop the rule from comparing the same facts to each other, over and over again, we have to include the last test. Stop rule from comparing endlessly And that’s it! No for loops, no need to keep track of what you have or have not compared, OBR handles all that for you because everything is done in its working memory. And once all the tests have been satisfied we need to assert a new fact for the output. Assert the output fact Save your Business Rules. Next step is to complete the data association in the BPM process. Pay extra care to use Copy List instead of the default Copy when doing data association at an array level. Input and output data association Deploy and test. Test data Rule matched Parting words: Ideally you would then use the output of the Business Rules component to then display/flag the transactions which triggered the rule so that the approver can investigate. Link: SOA Project Archive [Download]

    Read the article

  • Quicktips 1: Windows 7 Libraries; New website

    - by Michael B. McLaughlin
    I’m working on several large posts right now. So in the interim, I’ve decided to do shorter posts that contain something I find very helpful. This is the first. I’ve been using Windows 7 since April 2010. It’s the first OS I’ve ever worked with that I actually enjoy. I’ve used many over the years (KERNAL; PC DOS; MS-DOS 3.x+; Windows 3.0, 3.11, 95, 98, 98 SE, Me, NT 3.51, NT 4, 2000, XP, Vista, 7; various GNU/Linux distros starting with Debian 1.2 – most recently Ubuntu 10.04; ProDOS, Mac OS 9.X, Mac OS X (through 10.4); SunOS, Solaris; AIX, z/OS; OpenVMS). Some were frustrating. Some tolerable. Some were “nice except for…”. OS X actually started out as seemingly “nice” until every single release contained a breaking change to some major API and they then decided to flip-off everyone who had bought a Mac as little as two years earlier with the release of Snow Leopard without PPC support. Windows 7 is the first one that’s just “nice” without any qualifiers. There are so many little features that add up to make it nice. Today’s Quicktip is one of them. Quicktip 1: Create a Library for your Code One thing I particularly like about Windows 7 is the Libraries feature in Explorer. Specifically the fact that you can create custom ones. I used to spend a lot of time opening new Explorer windows and navigating my various Visual Studio projects folders. Custom libraries allowed me to simplify that whole process. I now simply go to my “Code” library and there it all is. Adding a new library is easy. Open an Explorer window. If you aren’t in your Libraries when it opens, navigate to Libraries. Click the “New library” button. Give it a name. Then right click on the new library you created and go to “Properties”. Click the “Include a folder…” button. Choose the folder you want and press “Include folder”. Voilà! If you wish to add more, simply click “Include a folder…” again and repeat. It’s true that this is just a small time saver. But it’s one of those things that just adds a really nice touch. ------------------------ In a separate note, just before Christmas I finally finished and published my new website: http://www.bobtacoindustries.com/ . I waited to post here about it until I found time to incorporate a few things I hadn’t had the time to do when I pushed it out for its “soft open”. Most of them are now done and so my site is now formally open. I have no plans or intentions of moving my blog ( http://blog.bobtacoindustries.com/ points here). I quite like it here, both in terms of the interface and also in terms of the concept (and realization thereof) of pooling geek bloggers to create a pool of knowledge and helpful tips, tricks, techniques, and advice. I created it simply because I felt that it was time to have a website as I venture further into my return to the land of software development. The “For Devs” section should hopefully be useful to developers, particularly the links section. It’s my curated list of sites that I regularly visit to solve problems, to help answer questions on Twitter and the AppHub forums, and to learn new things. I’ll be adding links to it periodically and will be including topic areas as I become acquainted with them enough to form a proper list. WPF will likely be the first topic area added. If there are any links you think I should add to the existing topics, let me know! I warn in advance that I’m less inclined to add blogs; there are simply too many good blogs and I do not want to have hundreds per topic area. 
So blogs are limited primarily, though not exclusively, to acknowledged experts in the subject area who generally blog regularly about it and who usually are part of the team that develops the product or technology in question. I’m much more amenable to including individual blogs posts in the techniques subcategory in the appropriate topic area. Ultimately, it’s a collection of things I find interesting and helpful. So please no hard feelings if I don’t add a link you think is awesome. I may well think it’s awesome too, but conclude that it doesn’t fit with my goals for the dev links area.

    Read the article

  • SOA 11g Technology Adapters – ECID Propagation

    - by Greg Mally
    Overview Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few. Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective. One of these challenges is how to correlate a logical business transaction across SOA component instances. This correlation is typically accomplished via the execution context ID (ECID), but we lose the ECID correlation when the business transaction spans technologies like FTP, database, and files. A new feature has been introduced in the Oracle adapter JCA framework to allow the propagation of the ECID. This feature is available in the forthcoming SOA Suite 11.1.1.7 (PS6). The basic concept of propagating the ECID is to identify somewhere in the payload of the message where the ECID can be stored. Then two Binding Properties, relating to the location of the ECID in the message, are added to either the Exposed Service (left-hand side of composite) or External Reference (right-hand side of composite). This will give the JCA framework enough information to either extract the ECID from or add the ECID to the message. In the scenario of extracting the ECID from the message, the ECID will be used for the new component instance. Where to Put the ECID When trying to determine where to store the ECID in the message, you basically have two options: Add a new optional element to your message schema. Leverage an existing element that is not used in your schema. The best scenario is that you are able to add the optional element to your message since trying to find an unused element will prove difficult in most situations. The schema will be holding the ECID value which looks something like the following: 11d1def534ea1be0:7ae4cac3:13b4455735c:-8000-00000000000002dc Configuring Composite Services/References Now that you have identified where you want the ECID to be stored in the message, the JCA framework needs to have this information as well. The two pieces of information that the framework needs relates to the message schema: The namespace for the element in the message. The XPath to the element in the message. To better understand this, let's look at an example for the following database table: When an Exposed Service is created via the Database Adapter Wizard in the composite, the following schema is created: For this example, the two Binding Properties we add to the ReadRow service in the composite are: <!-- Properties for the binding to propagate the ECID from the database table --> <property name="jca.ecid.nslist" type="xs:string" many="false">  xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"</property> <property name="jca.ecid.xpath" type="xs:string" many="false">  /ns1:EcidPropagationCollection/ns1:EcidPropagation/ns1:ecid</property> Notice that the property called jca.ecid.nslist contains the targetNamespace defined in the schema and the property called jca.ecid.xpath contains the XPath statement to the element. The XPath statement also contains the appropriate namespace prefix (ns1) which is defined in the jca.ecid.nslist property. When the Database Adapter service reads a row from the database, it will retrieve the ECID value from the payload and remove the element from the payload. When the component instance is created, it will be associated with the retrieved ECID and the payload contains everything except the ECID element/value. 
The only time the ECID is visible is when it is stored safely in the resource technology like the database, a file, or a queue. Simple Database/File/JMS Example This section contains a simplified example of how the ECID can propagate through a database table, a file, and a JMS queue. The composite for the example looks like the following: The flow of this example is as follows: Invoke database insert using the insertwithecidbpelprocess_client_ep Service. The InsertWithECIDBPELProcess adds a row to the database via the Database Adapter. The JCA Framework adds the ECID to the message prior to inserting. The ReadRow Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of ReadRowBPELProcess is created and it is associated with the retrieved ECID. The ReadRowBPELProcess now writes the record to the file system via the File Adapter. The JCA Framework adds the ECID to the message prior to writing the message to file. The ReadFile Service retrieves the record from the file system and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of ReadFileBPELProcess is created and it is associated with the retrieved ECID. The ReadFileBPELProcess now enqueues the message via the JMS Adapter. The JCA Framework adds the ECID to the message prior to enqueuing the message. The DequeueMessage Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of DequeueMessageBPELProcess is created and it is associated with the retrieved ECID. The logical flow ends. When viewing the Flow Trace in the Enterprise Manager, you will now see all the instances correlated via ECID: Please check back here when SOA Suite 11.1.1.7 is released for this example. With the example you can run it yourself and reinforce what has been shared in this blog via a hands-on experience. One final note: the contents of this blog may be included in the official SOA Suite 11.1.1.7 documentation, but you will still need to come here to get the example.
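In the meantime, here is a small conceptual sketch of what the two binding properties describe. This is plain Python using only the standard library, not the JCA framework itself: the namespace map plays the role of jca.ecid.nslist and the element lookup plays the role of jca.ecid.xpath, reusing the ReadRow names from the example above.

    # Conceptual sketch only (Python, not the actual JCA framework): shows how the
    # jca.ecid.nslist and jca.ecid.xpath values pair up to locate, read, and strip
    # an ECID element from a payload. Namespace and element names follow the
    # ReadRow example above.
    import xml.etree.ElementTree as ET

    NS = {"ns1": "http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"}

    payload = """<ns1:EcidPropagationCollection xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow">
      <ns1:EcidPropagation>
        <ns1:ecid>11d1def534ea1be0:7ae4cac3:13b4455735c:-8000-00000000000002dc</ns1:ecid>
      </ns1:EcidPropagation>
    </ns1:EcidPropagationCollection>"""

    root = ET.fromstring(payload)
    parent = root.find("ns1:EcidPropagation", NS)   # parent of the ecid element
    ecid_elem = parent.find("ns1:ecid", NS)         # element named by the XPath property
    ecid = ecid_elem.text                           # value used for the new component instance
    parent.remove(ecid_elem)                        # ECID is stripped before the payload moves on
    print(ecid)
    print(ET.tostring(root, encoding="unicode"))    # payload without the ECID element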

    Read the article

  • TFS API Add Favorites programmatically

    - by Tarun Arora
    01 – What are we trying to achieve? In this blog post I’ll be showing you how to add work item queries as favorites; it is also possible to use the same technique to add build definitions as favorites. Once a shared query or build definition has been added as a favorite it will show up in the team web access. In this blog post I’ll be showing you a workaround for adding queries to team favorites in the absence of a proper API. 02 – Disclaimer There is no official API for adding favorites programmatically. In the workaround below I am using the Identity service to store this data in a property bag which is used during display of favorites on the team web site. This uses an internal data structure that could change over time; there is no guarantee about the key names or content of the values. What is shown below is a workaround for a missing API. 03 – Concept There is no direct API support for favorites, but you could work around it using the identity service in TFS. Favorites are stored in the property bag associated with the TeamFoundationIdentity (either the ‘team’ identity or the user’s identity, depending on whether these are ‘team’ or ‘my’ favorites). The data is stored as JSON in the property bag of the identity, with the key prefixed by ‘Microsoft.TeamFoundation.Framework.Server.IdentityFavorites’. References - Microsoft.TeamFoundation.WorkItemTracking.Client - using Microsoft.TeamFoundation.Client; - using Microsoft.TeamFoundation.Framework.Client; - using Microsoft.TeamFoundation.Framework.Common; - using Microsoft.TeamFoundation.ProcessConfiguration.Client; - using Microsoft.TeamFoundation.Server; - using Microsoft.TeamFoundation.WorkItemTracking.Client; Services - IIdentityManagementService2 - TfsTeamService - WorkItemStore 04 – Solution Let’s start by connecting to TFS programmatically // Create an instance of the services to be used during the program private static TfsTeamProjectCollection _tfs; private static ProjectInfo _selectedTeamProject; private static WorkItemStore _wis; private static TfsTeamService _tts; private static TeamSettingsConfigurationService _teamConfig; private static IIdentityManagementService2 _ids; // Connect to TFS programmatically public static bool ConnectToTfs() { var isSelected = false; var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false); tfsPp.ShowDialog(); _tfs = tfsPp.SelectedTeamProjectCollection; if (tfsPp.SelectedProjects.Any()) { _selectedTeamProject = tfsPp.SelectedProjects[0]; isSelected = true; } return isSelected; } Let’s get all the work item queries from the selected team project static readonly Dictionary<string, string> QueryAndGuid = new Dictionary<string, string>(); // Get all queries and query guids in the selected team project private static void GetQueryGuidList(IEnumerable<QueryItem> query) { foreach (QueryItem subQuery in query) { if (subQuery.GetType() == typeof(QueryFolder)) GetQueryGuidList((QueryFolder)subQuery); else { QueryAndGuid.Add(subQuery.Name, subQuery.Id.ToString()); } } } Pass the name of a valid team in your team project and the name of a valid query in your team project. The team details will be extracted using the team name and the query GUID will be extracted using the query name. These details will be used to construct the key and value that will be passed to the SetProperty method in the Identity service.
Key           “Microsoft.TeamFoundation.Framework.Server.IdentityFavorites..<TeamProjectURI>.<TeamId>.WorkItemTracking.Queries.<newGuid1>”           Value           "{"data":"<QueryGuid>","id":"<NewGuid1>","name":"<QueryKey>","type":"Microsoft.TeamFoundation.WorkItemTracking.QueryItem”}"           // Configure a Work Item Query for the given team private static void ConfigureTeamFavorites(string teamName, string queryName) { _ids = _tfs.GetService<IIdentityManagementService2>(); var g = Guid.NewGuid(); var guid = string.Empty; var teamDetail = _tts.QueryTeams(_selectedTeamProject.Uri).FirstOrDefault(t => t.Name == teamName); foreach (var q in QueryAndGuid.Where(q => q.Key == queryName)) { guid = q.Value; } if(guid == string.Empty) { Console.WriteLine("Query '{0}' - Not found!", queryName); return; } var key = string.Format( "Microsoft.TeamFoundation.Framework.Server.IdentityFavorites..{0}.{1}.WorkItemTracking.Queries{2}", new Uri(_selectedTeamProject.Uri).Segments.LastOrDefault(), teamDetail.Identity.TeamFoundationId, g); var value = string.Format( @"{0}""data"":""{1}"",""id"":""{2}"",""name"":""{3}"",""type"":""Microsoft.TeamFoundation.WorkItemTracking.QueryItem""{4}", "{", guid, g, QueryAndGuid.FirstOrDefault(q => q.Value==guid).Key, "}"); teamDetail.Identity.SetProperty(IdentityPropertyScope.Local, key, value); _ids.UpdateExtendedProperties(teamDetail.Identity); Console.WriteLine("{0}Added Query '{1}' as Favorite", Environment.NewLine, queryName); }   If you have any questions or suggestions leave a comment. Enjoy!

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what MapReduce is. In this article we will take a quick look at one of the four most important buzz words which go around Big Data – HDFS. What is HDFS? HDFS stands for Hadoop Distributed File System and it is the primary storage system used by Hadoop. It provides high performance access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware. In commodity hardware deployments server failures are very common. For this reason HDFS is built to have high fault tolerance. The data transfer rate between compute nodes in HDFS is very high, which leads to a reduced risk of failure. HDFS creates smaller pieces of the big data and distributes them on different nodes. It also copies each smaller piece multiple times onto different nodes. Hence when any node with the data crashes, the system is automatically able to use the data from a different node and continue the process. This is the key feature of the HDFS system. Architecture of HDFS The architecture of HDFS is a master/slave architecture. An HDFS cluster always consists of a single NameNode. This single NameNode is a master server and it manages the file system as well as regulates access to various files. In addition to the NameNode there are multiple DataNodes. There is always one DataNode for each data server. In HDFS a big file is split into one or more blocks and those blocks are stored in a set of DataNodes. The primary task of the NameNode is to open, close or rename files and directories and regulate access to the file system, whereas the primary task of the DataNode is to read from and write to the file system. The DataNode is also responsible for the creation, deletion or replication of the data based on instructions from the NameNode. In reality, the NameNode and DataNode are software designed to run on commodity machines, built in the Java language. Visual Representation of HDFS Architecture Let us understand how HDFS works with the help of the diagram. The Client App or HDFS Client connects to the NameNode as well as the DataNodes. Client App access to a DataNode is regulated by the NameNode, which allows the Client App to connect to the DataNode directly. A big data file is divided into multiple data blocks (let us assume that those data chunks are A, B, C and D). The Client App will later on write data blocks directly to the DataNodes. The Client App does not have to write directly to all the nodes. It just has to write to any one of the nodes and the NameNode will decide on which other DataNodes it will have to replicate the data. In our example the Client App directly writes to DataNode 1 and DataNode 3. However, data chunks are automatically replicated to other nodes. All the information, like which data block is placed on which DataNode, is written back to the NameNode. High Availability During Disaster Now, as multiple DataNodes have the same data blocks, in the case of any DataNode facing a disaster, the entire process will continue as another DataNode will assume the role of serving the specific data block which was on the failed node. This system provides very high tolerance to disaster and provides high availability. If you notice, there is only a single NameNode in our architecture. If that node fails our entire Hadoop application will stop performing, as it is the single node where we store all the metadata. As this node is very critical, it is usually replicated on another cluster as well as on another data rack.
Though that replicated node is not operational in the architecture, it has all the necessary data to perform the tasks of the NameNode in case the NameNode fails. The entire Hadoop architecture is built to function smoothly even when there are node failures or hardware malfunctions. It is built on the simple concept that the data is so big that it is impossible to come up with a single piece of hardware which can manage it properly. We need lots of commodity (cheap) hardware to manage our big data, and hardware failure is part of using commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to work around non-functioning hardware. Tomorrow In tomorrow’s blog post we will discuss the importance of the relational database in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
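To make the block-and-replica idea above concrete, here is a purely illustrative Python sketch, not actual Hadoop code: it splits a string into fixed-size blocks, places each block on several made-up DataNodes, and keeps the block-to-location map that, in HDFS, would live on the NameNode.

    # Purely illustrative sketch (Python, not actual Hadoop code): models splitting
    # data into blocks and replicating each block on several DataNodes, with the
    # NameNode's metadata as a simple block -> locations map.
    import itertools

    BLOCK_SIZE = 4          # tiny block size, just for the demo
    REPLICATION = 3         # HDFS defaults to 3 replicas per block

    datanodes = ["DataNode1", "DataNode2", "DataNode3", "DataNode4"]
    namenode_metadata = {}  # block id -> list of DataNodes holding a replica

    data = "ABCDEFGHIJKLMNOP"
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

    node_cycle = itertools.cycle(datanodes)
    for block_id, block in enumerate(blocks):
        # Each block is placed on REPLICATION different nodes; if one node fails,
        # the block can still be served from the remaining replicas.
        namenode_metadata[block_id] = [next(node_cycle) for _ in range(REPLICATION)]

    for block_id, locations in namenode_metadata.items():
        print(f"block {block_id} ({blocks[block_id]!r}) -> {locations}")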

    Read the article

  • Best way to store a large amount of game objects and update the ones onscreen

    - by user3002473
    Good afternoon guys! I'm a young beginner game developer working on my first large scale game project and I've run into a situation where I'm not quite sure what the best solution may be (if there is a lone solution). The question may be vague (if anyone can think of a better title after having read the question, please edit it) or broad, but I'm not quite sure what to do and I thought it would help just to discuss the problem with people more educated in the field. Before we get started, here are some of the questions I've looked at for help in the past: Best way to keep track of game objects Elegant way to simulate large amounts of entities within a game world What is the most efficient container to store dynamic game objects in? I've also read articles about different data structures commonly used in games to store game objects, such as this one about slot maps, but none of them are really what I'm looking for. Also, if it helps at all, I'm using Python 3 to design the game. It has to be Python 3; if I could, I would use C++ or Unityscript or something else, but I'm restricted to having to use Python 3. My game will be a form of side scroller shooter game. In said game the player will traverse large rooms with large numbers of enemies and other game objects to update (think some of the larger areas in Cave Story or Iji). The player obviously can't see the entire room all at once, so there is a viewport that follows the player around and renders only a selection of the room and the game objects that it contains. This is not a foreign concept. The part that's getting me confused has to do with how certain game objects are updated. Some of them are to be updated constantly, regardless of whether or not they can be seen. Other objects however are only to be updated when they are onscreen (for example, an enemy would only be updated to react to the player when it is onscreen or when it is in a certain range of the screen). Another problem is that game objects have to be easily referable by other game objects; something that happens in the player's update() method may affect another object in the world. Collision detection in games is always a serious problem. I need a way of containing the game objects such that it minimizes the number of cases when testing for collisions against one another. The final problem is that of creating and destroying game objects. I think this problem is pretty self explanatory. To store the game objects, then, I've considered a number of different methods. The original method I had was to simply store all the objects in a hash table by an id. This method was simple, and decently fast as it allows all the objects to be looked up in O(1) complexity, and also allows them to be deleted fairly easily. Hash collisions would not be a major problem; I wasn't originally planning on using computer-generated ids to store the game objects; I was going to rely on them all using ids given to them by the game designer (such names would be strings like 'Player' or 'EnemyWeapon4'), and even if I did use computer-generated ids, if I used a decent hashing algorithm then the chances of collisions would be around 1 in 4 billion. The problem with using a hash table however is that it is inefficient in checking to see what objects are in range of the viewport.
Considering the fact that certain game objects move (as well as the viewport itself), the only solution I could think of in order to only update objects that are in the viewport would be to iterate through every object in the hash table and check if it is in the viewport or not, updating only the ones that are in the valid area. This would be incredibly slow in scenarios where the number of game objects exceeds 500, or even 200. The second solution was to store everything in a 2-d list. The world is partitioned up into cells (a tilemap essentially), where each cell or tile is the same size and is square. Each cell would contain a list of the game objects that are currently occupying it (each game object would be inserted into a cell depending on the center of the object's collision mask). A 2-d list would allow me to take the top-left and bottom-right corners of the viewport and easily grab a rectangular area of the grid containing only the cells containing entities that are in valid range to be updated. This method also solves the problem of collision detection; when I take an entity I can find the cell that it is currently in, then check only against entities in its cell and the 8 cells around it. One problem with this system however is that it prohibits easy lookup of game objects. One solution I had would be to simultaneously keep a hash table that would contain all the positions of the objects in the 2-d list, indexed by the id of said object. The major problem with a 2-d list is that it would need to be rebuilt every single game frame (along with the hash table of object positions), which may be a serious detriment to game speed. Both systems have ups and downs and seem to solve some of each other's problems; however, using them both together doesn't seem like the best solution either. If anyone has any thoughts, ideas, suggestions, comments, opinions or solutions on new data structures or better implementations of the existing data structures I have in mind, please post; any and all criticism and help is welcome. Thanks in advance! EDIT: Please don't close the question because it has a bad title, I'm just bad with names!
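A minimal sketch, in Python 3 since that is the stated constraint, of the combined approach discussed above: a dictionary for O(1) lookup by id plus a uniform grid keyed by cell coordinates for viewport and collision queries, where the grid is updated per object when it moves instead of being rebuilt every frame. All names here are made up for illustration, not taken from the question.

    # Minimal sketch (Python 3): dict for O(1) lookup by id + uniform grid for
    # viewport/collision queries, updated incrementally instead of rebuilt per frame.
    from collections import defaultdict

    CELL_SIZE = 64

    class World:
        def __init__(self):
            self.objects = {}                 # object id -> object
            self.grid = defaultdict(set)      # (cell_x, cell_y) -> set of object ids

        def _cell(self, x, y):
            return (int(x // CELL_SIZE), int(y // CELL_SIZE))

        def add(self, obj_id, obj, x, y):
            self.objects[obj_id] = obj
            obj.pos = (x, y)
            self.grid[self._cell(x, y)].add(obj_id)

        def move(self, obj_id, new_x, new_y):
            obj = self.objects[obj_id]
            old_cell = self._cell(*obj.pos)
            new_cell = self._cell(new_x, new_y)
            obj.pos = (new_x, new_y)
            if old_cell != new_cell:          # only touch the grid when the cell changes
                self.grid[old_cell].discard(obj_id)
                self.grid[new_cell].add(obj_id)

        def ids_in_rect(self, left, top, right, bottom):
            # All object ids whose cells overlap the viewport rectangle.
            x0, y0 = self._cell(left, top)
            x1, y1 = self._cell(right, bottom)
            found = set()
            for cx in range(x0, x1 + 1):
                for cy in range(y0, y1 + 1):
                    found |= self.grid.get((cx, cy), set())
            return found

    # Tiny usage demo with throwaway objects.
    from types import SimpleNamespace
    world = World()
    world.add("Player", SimpleNamespace(), 10, 10)
    world.add("EnemyWeapon4", SimpleNamespace(), 300, 40)
    world.move("Player", 200, 20)
    print(world.ids_in_rect(0, 0, 640, 480))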

    Read the article

  • Some New .NET Downloads and Resources

    - by Kevin Grossnicklaus
    Last week I was fortunate enough to spend time in Redmond on Microsoft’s campus for the 2011 Microsoft MVP Summit. It was great to hang out with a number of old friends and get the opportunity to talk tech with the various product teams up at Microsoft. The weather wasn’t exactly sunny but Microsoft always does a great job with the Summit and everyone had a blast (heck, I even got to run the bases at SafeCo field). While much of what we saw is covered under NDA, there are a ton of great things in the pipeline from Microsoft and many things that are already available (or just became so) that I wasn’t necessarily aware of. The purpose of this post is to share some of the info I learned on resources and tools available to .NET developers today. Please let me know if you have any questions (or if you know of something else cool which might benefit others). Enjoy! Visual Studio 2010 SP1 Microsoft has issued the RTM release of Visual Studio 2010 SP1. You can download the full SP1 on MSDN as of today (March 10th to the general public) and take advantage of such things as: Silverlight 4 is included in the box (as opposed to a separate install) Silverlight 4 Profiling WCF RIA Services SP1 Intellitrace for 64-bit and SharePoint ASP.NET now easily supports IIS Express and SQL CE Want a description of all that’s new beyond the above biased list (which arguably only contains items I think are important)? Check out this KB article. Portable Library Tools CTP Without much fanfare Microsoft has released a CTP of a new add-in to Visual Studio 2010 which simplifies code sharing between projects targeting different runtimes (e.g. Silverlight, WPF, Win7 Phone, XBox). With this Add-In installed you can add a new project of type “Portable Library” and specify which platforms you wish to target. Once that is done, any code added to this library will be limited to using only features which are common to all selected frameworks. Other projects can now reference this portable library and be provided assemblies custom built to their environment. This greatly simplifies the current process of sharing linked files between platforms like WPF and Silverlight. You can find out more about this CTP and how it works on this great blog post. Visual Studio Async CTP Microsoft has also released a CTP of a set of language and framework enhancements to provide a much more powerful asynchronous programming model. Due to the focus on async programming on all types of platforms (and it being the ONLY option in Silverlight and Win7 Phone), a move towards a simpler and more understandable model is always a good thing. This CTP (called the Visual Studio Async CTP) can be downloaded here. You can read more about this CTP on this blog post. MSDN Code Samples Gallery Microsoft has also launched a new code samples gallery on their MSDN site: http://code.msdn.microsoft.com/. This site allows you to easily search for small samples of code related to a particular technology or platform. If a sample of code you are looking for is not found, you can request one via the site and other developers can see your request and provide a sample to the site to suit your needs. You can also peruse requested samples and, if you find a scenario where you can provide value, upload your own sample for the benefit of others. Samples are packaged into the VS .vsix format and include any necessary references/dependencies.
By using .vsix as the deployment mechanism, samples installed from the site are kept in your Visual Studio 2010 Samples Gallery for your future reference. If you get a chance, check out the site and see how it is done. Although a somewhat simple concept, I was very impressed with their implementation and the way they went about trying to suit a need. I’ll definitely be looking there in the future as I need something or want to share something. MSDN Search Capabilities Another item I learned recently and was not aware of (that might seem trivial to some) is the power of the MSDN site’s search capabilities. Between the Code Samples Gallery described above and the search enhancements on MSDN, Microsoft is definitely investing in their platform to help provide developers of all skill levels the tools and resources they need to be successful. What do I mean by the MSDN search capability and why should you care? If you go to the MSDN home page (http://msdn.microsoft.com) and use the “Search MSDN with Bing” box at the very top of the page you will see some very interesting results. First, the search actually doesn’t just search the MSDN library; it searches: MSDN Library All Microsoft Blogs CodePlex StackOverflow Downloads MSDN Magazine Support Knowledgebase (I’m not sure it even ends there but the above are all I know of) Beyond just searching all the above locations, the results are formatted very nicely to give some contextual information based on where the result came from. For example, if a keyword search returned results from CodePlex, each row in the search results screen would include a large amount of information specific to CodePlex such as: Looking at the above results immediately tells you everything from the page views to the CodePlex ratings. All in all, knowing that this much information is indexed and available from a single search location will lead me to utilize this as one of my initial searches for development information.

    Read the article

  • Thought Oracle Usability Advisory Board Was Stuffy? Wrong. Justification for Attending OUAB: ROI

    - by ultan o'broin
    Looking for reasons to tell your boss why your organization needs to join the Oracle Usability Advisory Board or why you need approval to attend one of its meetings (see the requirements)? Try phrases such as "Continued Return on Investment (ROI)", "Increased Productivity" or "Happy Workers". With OUAB your participation is about realizing and sustaining ROI across the entire applications life-cycle, from input to designs to implementation choices and integration, usage and performance, and on to measuring and improving the onboarding and support experience. If you think this is a boring meeting of middle-aged people sitting around moaning about customizing desktop forms and why the BlackBerry is here to stay, think again! How about this for a rich agenda, all designed to engage the audience in a thought-provoking and feedback-eliciting day of swirling interactions, contextual usage, global delivery, mobility, consumerization, gamification and tailoring your implementation to reflect real users doing real work in real environments. Foldable, rollable ereader devices provide a newspaper-like UX for electronic news. Or a way to wrap silicon chips, perhaps. Explored at the OUAB Europe Meeting (photograph from Terrace Restaurant in TVP. Nom.) At the 7 December 2012 OUAB Europe meeting in Oracle Thames Valley Park, UK, Oracle partners and customers stepped up to the mic and PPT decks with a range of facts and examples to astound any UX conference C-level sceptic. Over the course of the day we covered much ground, but it was all related, in a contextual, flexible, simplification and engagement way, to delivering results for business: that means solving problems. This means being about the user and their tasks, and how design and technology transform work into a productive activity that users and bean counters will be excited by. The sessions really gelled for me: 1. Mobile design patterns and the powerful propositions for customers and partners offered by using the design guidance with Oracle ADF Mobile. Customers' and partners' existing ADF developers are now productive, efficient ADF Mobile developers, applying proven UX guidance using ADF Mobile components and other Oracle Fusion Middleware in the development toolkit. You can find the Mobile UX Design Patterns and Guidance on Building Mobile Apps on OTN. 2. Oracle Voice and Apps. How this medium offers so much potential in the enterprise and offers a window into Fusion Apps cloud web services, Oracle RightNow NLP and Nuance technology. Exciting stuff, demoed live on a mobile phone. Stay tuned for more features and modalities and how you can tailor your own apps experience. 3. Oracle RightNow Natural Language Processing (NLP) Virtual Assistant technology (Ella): how contextual intervention and learning from users' sessions delivers a great personalized UX for users interacting with Ella, a fifth-generation VA to solve problems and seek knowledge. 4. BYOD Keynote: A balanced keynote address contrasting Fujitsu's explanation of the concept, challenges, and trends, setting the expectation that BYOD must be embraced in a flexible way, with the resolute, carefully crafted high-security enterprise requirements that nuance the BYOD concept and proposals with the realities of their world of watertight information and device-sharing policies. Fascinating stuff, as well as providing anecdotes to make us think about our own BYOD deployments. One size does not fit all. 5. 
Icon Cultural Surveys Results and Insights Arising: Ever wondered about the cultural appropriateness of icons used in software UIs and how these icons are assessed for global use? Or considered that social media "Like" icons might be unacceptable hand gestures in some cultures or enterprises? Or are old-world icons like the floppy-disk Save icon still found acceptable? Well, the survey results told us. Challenges must be tested over time, and context of use is critical now, including external factors such as internet and social media adoption. Indeed the fears about global rejection of the face and hand icons were not borne out, and some of the more anachronistic icons (checkbooks, microphones, reel-to-reel tape decks, 3.5" floppies for "save") have become accepted metaphors for current actions. More importantly, the findings brought into focus the reason for OUAB - engage with and elicit feedback through working groups before we build anything. 6. EReaders and Oracle iBook: What are the uptake and trends of ereaders? And how about a demo of an iBook with enterprise apps content? Well received by the audience, the session included a live running poll of ereader usage. 7. Gamification Design Jam: Fun, hands-on event for teams of Oracle staff, partners and customers, actually building gamified flows, a practice that can be applied right away by customers and partners. 8. UX Direct: A new offering of usability best practices, coming to an external website for you in 2013. Find a real user, observe their tasks, design and approve, build and measure. Simple stuff to improve apps implementations no end. 9. FUSE (an internal term only, basically Fusion Simplified Experience): demo of the new Face of Fusion Applications: inherently mobile, simple to use, social, personalizable and FAST; three great demos from the HCM, CRM and ICT world on how these UX designs can be used in different ways. So, a powerful breadth and depth of UX solutions and opportunities for customers and partners to engage with and explore how they can make their users happy and benefit their business, reaping continued ROI from those apps investments. Find out more about the OUAB and how to get involved here ... 

    Read the article

  • LDAP object class violation: attribute ou not allowed in suffix?

    - by Paramaeleon
    I am about to set up an LDAP directory. It is used as a tool to communicate user permissions from a web application to WebDav file system access, e.g. adding a user to the web platform shall allow login to the file system with the same credentials. There are no other usages intended. Following this German tutorial which encourages the use of the attributes c, o, ou etc. over dc, I configured the following suffix and root: suffix "ou=webtool,o=myOrg,c=de" rootdn "cn=ldapadmin,ou=webtool,o=myOrg,c=de" The server starts and I can connect to it with LDAP Admin, which reports “LDAP error: Object lacks”. Well, there aren’t any objects yet. I now want to create the root and admin elements from the shell. I created an init.ldif file: dn: ou=webtool,o=myOrg,c=de objectclass: dcObject objectclass: organization dc: webtool o: webtool dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de objectclass: organizationalRole cn: ldapadmin Trying to load the file runs into an error, telling me that ou is not allowed: server:~ # ldapadd -x -D "cn=ldapadmin,ou=webtool,o=myOrg,c=de" -W -f init.ldif Enter LDAP Password: adding new entry "ou=webtool,o=myOrg,c=de" ldap_add: Object class violation (65) additional info: attribute 'ou' not allowed I am not using ou anywhere except in the suffix, so the question: Isn’t it allowed here? What is allowed here? Here is my answer. I am not allowed to post it as an answer for 8 hours, so don’t mind that it is part of the question by now. I will move it outside some day, if I don’t forget to do so. There are numerous dependencies for the creation of elements, and error messages are rather confusing if you don’t know of the concept. The objectclass isn’t necessarily dcObject for the database’s root node, as one is likely to guess when reading several tutorials. Instead, it must correspond to the object’s type: here, for a name starting with ou=, it must be organizationalUnit. I found this piece of information in these tables [Link removed due to restriction: Oops! Your edit couldn't be submitted because: We're sorry, but as a spam prevention mechanism, new users can only post a maximum of two hyperlinks. Earn more than 10 reputation to post more hyperlinks. Link is below]. Further on, the object class dictates which properties must and can be added in the record. Here, organizationalUnit must have an ou: entry and must have neither a dc: nor an o: entry. The healthy init.ldif file looks like this: dn: ou=webtool,o=myOrg,c=de objectclass: organizationalUnit ou: LDAP server for my webtool dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de objectclass: organizationalRole cn: ldapadmin Note: The page also states: “While many objectClasses show no MUST attributes you must (ouch) follow any hierarchy […] to determine if this is the really case.” I thought that would mean my root record would have to provide the must fields for c= and o= (c: and o:, respectively) but this isn’t the case. Link in answer is (1): http :// www (dot) zytrax (dot) com/books/ldap/ape/ "Appendix E: LDAP - Object Classes and Attributes"
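For completeness, the same two entries can also be loaded programmatically. Below is a hedged sketch using the third-party ldap3 Python package (an assumption - the post itself only uses the ldapadd command line); the DNs and object classes mirror the corrected init.ldif above, and the host and password are placeholders.

    # Hedged alternative to ldapadd (assumes the third-party ldap3 package is
    # installed; host and password below are placeholders).
    from ldap3 import Server, Connection

    server = Server("ldap://localhost")
    conn = Connection(server,
                      user="cn=ldapadmin,ou=webtool,o=myOrg,c=de",
                      password="secret",          # placeholder
                      auto_bind=True)

    # Root node: the object class must match the ou= naming attribute.
    conn.add("ou=webtool,o=myOrg,c=de",
             object_class="organizationalUnit",
             attributes={"ou": "webtool"})
    print(conn.result)

    # Admin entry, as in the ldif.
    conn.add("cn=ldapadmin,ou=webtool,o=myOrg,c=de",
             object_class="organizationalRole",
             attributes={"cn": "ldapadmin"})
    print(conn.result)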

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-a-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management. Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media; you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step? Augmenting Social Data with other Public Data for More Advanced Analytics When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPI) to achieve and optimize business value. And in some cases, to predict future performance to make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze for this is very nuanced: It can vary across structured, semi-structured, and unstructured data It can span across content, profile, and communities of profiles data It is increasingly public, curated and user generated The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data enriched with insights to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next generation data and insights service (or DaaS): Some quick points on these requirements: Public Data, which in this context is about Common Business Entities, such as - Customers, Suppliers, Partners, Competitors (all are organizations) Contacts, Consumers, Employees (all are people) Products, Brands This data can be broadly categorized incrementally as - Base Utility data (address, industry classification) Public Master Reference data (trade style, hierarchy) Social/Web data (News, Feeds, Graph) Transactional Data generated by enterprise process, workflows etc. This data has traits of high volume, variety, velocity etc., and the technology needed to efficiently integrate this data for your needs includes - Change management of Public Reference Data across all categories Applied Big Data to extract statistics as well as real-time insights Knowledge Diagnostics and Data Mining As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications. 
In addition, they are requesting a service that is: Agile and Easy to Use: Applications integrated with the service can obtain data on-demand, quickly and simply Cost-effective: Pre-integrated into applications so customers don’t have to Has High Data Quality: Single point access to reference data for data quality and linkages to transactional, curated and social data Supports Data Governance: Becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place Data-as-a-Service (DaaS) Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT from their infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the rate of commoditization of certain data types will occur. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility, master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location based social use cases (marketing, sales etc.). Our fundamental belief is that value-added data achieved through enrichment with specialized algorithms, as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage. Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights. 
The reference data allows for easy sort, export and integration into a set of CRM use cases for analytics, sales and marketing CRM. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to have broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions, or perspectives. In the meantime, we will continue to share insights as we can. Photo: Erik Araujo, stock.xchng
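To make the "single entry point" idea slightly more concrete, here is a purely hypothetical Python sketch: the endpoint, field names and response shape are invented for illustration only and are not an actual Oracle DaaS API.

    # Purely hypothetical sketch: the endpoint, payload fields and response shape
    # are invented for illustration and are not an actual Oracle DaaS API.
    import json
    import urllib.request

    account = {"name": "General Motors", "country": "US"}

    req = urllib.request.Request(
        "https://daas.example.com/v1/enrich",            # placeholder endpoint
        data=json.dumps(account).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        enriched = json.loads(resp.read().decode("utf-8"))

    # Merge the returned reference data (industry codes, hierarchy, social handles, ...)
    # back into the CRM record before it is written to the business application.
    account.update(enriched)
    print(account)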

    Read the article

  • VirtualBox appliance for the Oracle Communications Service Delivery Platform (SDP) Products

    - by chlander
    It's been quite awhile since we last blogged. This blog is written by Leif Lourie, a Curriculum Developer for the Oracle Communications Service Delivery Platform (SDP) products. For the last 8 years, Leif has worked as a Curriculum Developer for many of the telecom-oriented products that Oracle offers. He has been working in the telecom industry for about 25 years and has also worked as a software developer, project manager, and solutions architect. He is currently working on courseware for an upcoming release for one of the Service Delivery Platform products. Thanks to Leif not only for this blog, but for making the VM described in the blog available. Cheryl Lander, Oracle Communications InfoDev Senior Director To be able to download, install and test a product within a day is many times very important for people that are doing the primary evaluation of a software product. If it takes longer, it will require a bigger effort, like a proof-of-concept project with many people involved. Of course, if the product is chosen for a more thorough test, it will probably happen anyway, but then maybe with focus on integration instead of product features. We have a long tradition of creating complex software that is easy to install and test and we have often been praised for the ease of getting our products up and running. One key for this has been that there has always been an installer for Windows, as well as for the production environments that usually are Unix and Linux. And, the windows installer has, in most cases, been released for developing and testing purposes. Lately, this has changed. Our products are very seldom released for the Windows platform, at all. And even the Linux versions are almost always released for 64-bit systems. This is creating problems for many of the people that want to try out our products, since few have access to a 64-bit Linux system of the right platform. Most of us are using a laptop with Windows or Mac OS. Some of us are using Linux or Solaris, but probably a non certified distribution for the product you want to test. My job, among other things, is to develop hands-on practices for our products. For me, it is crucial to have access to environments for installing and using our products. For this reason I have been using virtual machines for many years.I have a ready-made base system, with the necessary tools installed for all the products I create hands-on practices for. Whenever I start working on hands-on practices for a new product or a new version, I just copy the base system and start working with a clean slate. This saves me a lot of time! Now, I would like to start saving time for my favorite student: You! If you are using our products and regularly test new versions you might benefit from the virtual machine that is now available on Oracle Technology Network: The Virtual Machine for the Oracle Communications Service Delivery Platform (SDP) Products. This virtual machine contains an installation of the 64-bit version of Oracle Enterprise Linux, version 6. It also has Oracle Database Express Edition (XE), Oracle Java and Oracle Enterprise Pack for Eclipse installed. By using Oracle VM VirtualBox you may use Windows, OS X, Linux or Solaris on your laptop. VirtualBox can be installed on top of any of these platforms and give you the ability to run virtual machines in your laptop. 
After downloading and starting the virtual machine you will also need to download the installation files for the product you want to test; for example Oracle Communications Services Gatekeeper or Oracle Communications Online Mediation Controller. In some cases there are lessons and practices available for the products. The freely available courses are listed in Oracle Learning Library as a Collection of Oracle Communications Service Delivery Platform Courses. As time goes by, we will make this list collection bigger. Also, the goal is to update the virtual machine about one to two times per year. So you will always be able to get a well maintained virtual machine for the Service Delivery Platform products from us. We Value Your Feedback If you would like to suggest improvements or report issues on any of the product documentation, curriculum, or training produced by the Oracle Communications Information Development team, you can use these channels: Email [email protected]. Post a comment on this blog. Thanks for reading!
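As a hedged convenience, and not something from the post itself: once the appliance file is downloaded, the import and first start can also be scripted with VBoxManage. The .ova file name and VM name below are placeholders, not the actual names used by the downloadable appliance.

    # Hedged convenience sketch (Python): scripting the appliance import with
    # VBoxManage. File name and VM name are placeholders, not the real ones.
    import subprocess

    appliance = "sdp-products-vm.ova"   # placeholder file name
    subprocess.run(["VBoxManage", "import", appliance], check=True)
    # List the registered VMs to find the imported machine's name, then start it.
    subprocess.run(["VBoxManage", "list", "vms"], check=True)
    subprocess.run(["VBoxManage", "startvm", "SDP Products VM"], check=True)  # placeholder VM name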

    Read the article

  • Delivering the Integrated Portal Experience!

    - by Michael Snow
    Guest post by Richard Maldonado, Principal Product Manager, Oracle WebCenter Portal Organizations are still struggling to standardize on a user interaction platform which can meet the needs of all their target audiences. This has not only resulted in inefficient and inconsistent experiences for their users, but it also creates inefficiencies (productivity and costs) for the departments that manage the applications and information systems. Portals have historically been the unifying platform that provides IT with a common interface which can securely surface the most relevant interactions for a given user and/or group of users. However, organizations have found that the technologies available have either not provided the flexibility necessary to address all of their use cases, or they rely too much on IT resources to manage, maintain, and evolve. Empowering the Business Groups The core issue that IT departments face with delivering portal experiences is having enough resources to respond to and address the influx of requirements which come in from the business. Commonly, when a business group wants a new portal site established for their group, they will submit a request to the IT dept; the IT dept then assigns a resource (an administrator and/or developer) to build it. Unfortunately, this approach is not scalable; it can be a time-consuming activity which requires significant interaction between the business owner and the IT resource. A modern user interaction platform should empower the business groups by providing them tools which they can use to build and manage the portal experiences without the need for IT's involvement. And because business groups rarely have technical resources (developers) on staff, the tools must be easy enough that virtually any business user could use them. In addition, the tools must be powerful enough to allow them to build the experience that they need: things such as creating a whole new portal, adding/managing pages and page hierarchy, managing user/group access, adding/modifying components within the page, etc. This balance between ease-of-use and flexibility is key to the successful adoption of tools which will ultimately reduce the burden on IT, respond to the needs of the business, and deliver high-value experiences for the users. Ready or Not, Here They Come: Smartphones and Tablets Recently, several studies have highlighted that smartphone and tablet-style devices have overtaken PCs in both sales and usage. This shift is further driving organizations to reevaluate how they're delivering data, information, and applications to their users. Users are expecting to get the same level of access and interaction, but in ways which are optimized for the capabilities of the device that they are using. Expect More With the ever growing number of new IT projects and flat/shrinking budgets, organizations are looking for comprehensive solutions which can deliver integrated web experiences that are tailored for the users and optimized for mobile devices. Piecing together a number of point solutions is no longer an option. A modern portal technology should not only address the traditional needs of integrating and surfacing back-end applications/information, but it should also enable the business through easy-to-use tools and accelerate the delivery of mobile-optimized experiences.
WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, Featuring Qualcomm & Keste

    Read the article

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work at. At the same time I wanted to stick to building a small software business of my own; I still have some ideas with good potential and some half-done projects frozen in my github. But due to social pressures, I chose a job; the pay is great, but I am only half-passionate about it. A small team of smart folks building a useful product, working out contracts across the world. I've started finding it extremely boring. Boring to the extent that I skip 2-3 days a week altogether, not doing work. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why it is happening. It's as if all the excitement has been drained. What can I do about it? Long version: School - I was in third standard. Only students in 6th grade had access to computer labs. I once peeked into the lab from the little door opening. No hard-disks, MS DOS on 5 1/4 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy! I was so excited, I was jumping from within. Back home, I asked my brother to teach me some programming. We bought a book, "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing, to taking input, file i/o, game programming, machine level support, etc. I was in 6th standard when I wrote my first game - a wheel of fortune, rotating the wheel by manipulating the 16-color palette's definition. Got internet soon, got hooked to the QuickBasic programming community. Made some more games, "007 in Danger" and "Car Crush 2", for submission to the allbasiccode archives. I was extremely excited about all this. My interests now swayed into "hacking" (computer security). Taught myself some perl, found it annoying, learnt PHP and a bit of SQL. Also taught myself Visual Basic one of the winters and wrote a pacman clone with DirectX. By the time I was in 10th standard, I had created some evil tools using visual basic, php and mysql and eventually landed myself an unpaid side-job at a government facility, building evil tools for them. It was a dream come true for crackers of that time. And so was I, still very excited. Things changed soon; the last two years of school were not so great, as I was balancing preps for college, work at the govt. facility and studies for school at the same time. College - College was the opposite of all I had wished it to be. I imagined it to be a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning, attendance, rules, busy schedules, a ban on personal laptops, hardly any hackers surrounding you and shit like that. We had to take permission to even introduce some cultural/creative activities in our annual schedule. The labs wouldn't be open on weekends because the lab employees had to have their leaves. Yes, a horrible place for someone like me. I still managed to pull out a project with a friend over 2 months. Showed it to people high in the academic hierarchy. They were immensely impressed; we proposed to allow personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on; I still managed to teach myself new languages, do new projects of my own, do an internship at the same govt. facility, start a small business for some time, give a talk at a conference I'm passionate about, win game-dev and hacking contests at the most respected colleges, solve a good deal of programming contest problems, etc.
    At the same time, I was not content with all these restrictions, the great emphasis on rote learning, and the sheer wastage of time at college. I never felt I was overdoing it, but now I feel I burnt myself out. During my last days at college, I did an internship at a bigco. While I spent my time building prototypes for a certain LBS, the other interns around me, even a good friend, were just killing time. I thought that maybe in a few weeks he would put some serious effort into the work assigned to him, but all he did was find creative ways to skip work, hide from the manager, and engage people in conversation if they tried to question his progress. I tried a few times to get him on track, but it seems all he wanted was to "not work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very poisonous to one's own motivation and productivity. On top of that, where I come from, HR doesn't give much value to what you have done over the past 4 years. So towards the end of our internship, we were all offered work at the bigco, but the slacker, even after not writing more than 200 lines of code, was made a much better offer. I felt enraged instantly: "Is this how the corporate world treats someone who does fruitful, if not extraordinary, work for them for 6 months?" Yes, I did try to negotiate and debate, but the bigcos seem blind due to the departmentalization of responsibilities and many layers of management. I decided not to stay in touch with any characters from that depressing play.

    Probably the busy time I had at college, ignoring friends, ignoring fun, and squeezing every bit of free time for myself, is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring, yet at the same time I wish to keep it for financial reasons. I feel a bit burnt out and unsatisfied, and at the same time I feel an urge to quit working for someone else and start finishing my frozen side projects (which may be profitable), though I haven't got enough in savings to cover food, office, internet bills, etc. I still have my day job, but I don't find it very interesting; even though the pay is higher than the slacker's, I don't find money to be a great motivator here. I keep comparing myself to my past self. I wonder how to get rid of this and reboot myself back to the way I was in my school days - excited, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me! Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah, I know the 2nd-phone-line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way, if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of POP email addresses and even an Exchange address to my client's server... times have changed.

    What also has changed is the fact that we get SPAM... 'back in the day', when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world we're all much more civilized than that, and as with many things, the criminals seem to have many more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept. And yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets who respond and the junk email would stop, but that hasn't happened yet, any more than you'll find high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their cars... but that's another whole topic and I digress.

    So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes, I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service, just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money?

    Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner at the bottom of his email that said he was protected by SPAMfighter. SPAMfighter, huh... so I took a look at their site and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner, and that doesn't come lightly, so I took a gamble. Here's what I found after installing it:

    1) SPAMfighter stuffed the SPAMfighter folder into my client's Exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter Client's settings for Outlook I told it to put all spam there.

    2) It didn't seem to be doing anything at first. There's a ribbon button labeled "Block", and I used it, wondering if I was 'training' it, but it wasn't picking up duplicates.

    3) I sent email to support and wrote a post on the forum (note to self: reply to that post). By the time the folks from the home office responded it was the next day, and first thing up, SPAMfighter knocked down everything that came through when Outlook opened... two thumbs up! I disabled my 'garbage collection' rule in Outlook and told Outlook not to use the junk folder, thinking it was interfering.

    4) Day 2 seemed to go about like Day 1... but I hung in there.

    5) Day 3 is a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!!

    I'm a new paying customer. After watching SPAMfighter work this morning, I've purchased a 1-year license, and I can now sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally, I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes, this is simply another way of using the delete key, but you know what? ... it feels good :) Here's a screenshot of the stats after just about 48 hours of being on board: note that all the ones blocked by me were during Days 1 and 2... I've blocked none today, and everything is blocked. Stay in the 'Light!
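    For anyone who would rather script the "file junk into its own folder" idea than buy a tool, here is a minimal sketch using Python's standard imaplib. This is my own illustration, not part of SPAMfighter or the setup described above; the host, account, folder name, and blocklist addresses are all hypothetical placeholders, and a real filter would score message content rather than just matching senders.

        import imaplib

        # All of these values are hypothetical placeholders -- swap in your own
        # server, account, destination folder, and blocked senders.
        HOST = "imap.example.com"
        USER = "dave@example.com"
        PASSWORD = "app-password"
        SPAM_FOLDER = "SPAMfighter"
        BLOCKLIST = {"offers@junkmail.example", "winner@phish.example"}

        with imaplib.IMAP4_SSL(HOST) as imap:
            imap.login(USER, PASSWORD)
            imap.select("INBOX")
            for sender in BLOCKLIST:
                # Find every inbox message from a blocked sender...
                status, data = imap.search(None, "FROM", f'"{sender}"')
                if status != "OK" or not data[0]:
                    continue
                for num in data[0].split():
                    imap.copy(num, SPAM_FOLDER)             # ...copy it to the junk folder...
                    imap.store(num, "+FLAGS", "\\Deleted")  # ...and flag the original for removal.
            imap.expunge()

    Scoring, whitelisting, and community-reported spam are exactly the tedium a product like SPAMfighter takes off your hands; this sketch only automates the sender-based rules the post describes replacing.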

    Read the article

  • Is Data Science “Science”?

    - by BuckWoody
    I hold the term “science” in very high esteem. I grew up on the Space Coast in Florida and eventually worked at the Kennedy Space Center, surrounded by very intelligent people who worked in various scientific fields. Recently a new term has entered the computing dialog – “Data Scientist”. Since it's not a standard term, it has a lot of definitions, and in fact it has been disputed as a correct term at all. After all, the reasoning goes, if there's no such thing as “Data Science”, then how can there be a Data Scientist? This argument has been made before, albeit with a different term – “Computer Science”. In Peter Denning's excellent article “Is Computer Science Science?” (Communications of the ACM, April 2005, Vol. 48, No. 4) there are many points that separate “science” from “engineering” and even “art”. I won't repeat the content of that article here (I recommend you read it on your own) but will leverage the points he makes there.

    Definition of Science

    To ask the question “is data science ‘science’?” we need to start with a definition of terms. Various references put the definition into the same basic areas: the study of the physical world, and the systematic and/or disciplined study of a subject area... and then they include the things studied, the bodies of knowledge, and so on. The word itself comes from Latin and means merely “to know” or “to study to know”. Greek divides knowledge further into “truth” (episteme) and practical use or effects (tekhne). Normally computing falls into the second realm.

    Definition of Data Science

    And now a more controversial definition: Data Science. This term is so new, and perhaps so niche, that the major dictionaries haven't yet picked it up (my OED reference is older – can't afford to pop for the online registration at present). Researching the term's general use, I created an amalgam of the definitions this way: “Studying and applying mathematical and other techniques to derive information from complex data sets.” Using this definition, data science certainly seems to be science - it's learning about and studying some object or area using systematic methods. But implicit within the definition is the word “application”, which makes the process more akin to engineering, or even technology, than science. In fact, I find that using these techniques – and data itself – is part of science, not a science in itself. I leave out the concept of studying data patterns or algorithms as part of this discipline; that is actually a domain I see within research, mathematics, or computer science. That, of course, is a type of science, but it does not seek practical applications.

    As part of the argument against calling it “Data Science”, some point to the scientific method of creating a hypothesis, testing with controls, testing results against the hypothesis, and documenting for repeatability. These are not steps that we often take in working with data. We normally start with a question, and fit patterns and algorithms to predict outcomes and find correlations. In this way Data Science is more akin to statistics (and in fact it makes heavy use of statistics) than to starting with an assumption and following on from it.

    So, is Data Science “Science”?

    I'm uncertain – and I'm uncertain it matters. Even if we are facing rampant “title inflation” these days (does anyone introduce themselves as a secretary or supervisor anymore?), I can tolerate the term, at least with the intent that we use data to study problems across a wide spectrum rather than restricting it to a single domain. And I also understand those who have worked hard to achieve the very honorable title of “scientist” and who take issue with those who borrow the term without asking. What do you think? Science, or not? Does it matter?
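    To make the contrast concrete, here is a tiny sketch of the "start with a question, look for a correlation, fit a model to predict outcomes" workflow the post describes. It is my own illustration with made-up numbers, not taken from the article, written in Python with NumPy:

        import numpy as np

        # Hypothetical data set: monthly ad spend vs. revenue (made-up numbers).
        ad_spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
        revenue = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])

        # Start with a question ("does spend track revenue?") and look for a correlation...
        r = np.corrcoef(ad_spend, revenue)[0, 1]

        # ...then fit a simple model to predict outcomes (a least-squares line).
        slope, intercept = np.polyfit(ad_spend, revenue, 1)

        print(f"correlation r = {r:.3f}")
        print(f"predicted revenue at spend 7.0: {slope * 7.0 + intercept:.2f}")

    No hypothesis, no controls, no repeatability protocol: just a question, a pattern, and a prediction, which is the distinction the post draws between this kind of work and the classical scientific method.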

    Read the article

< Previous Page | 139 140 141 142 143 144 145 146 147 148 149 150  | Next Page >