Search Results

Search found 333 results on 14 pages for 'historical'.

Page 12 of 14

  • Why does Ruby have Rails while Python has no central framework?

    - by yar
    This is a historical question, not a comparison-between-languages question: This article from 2005 talks about the lack of a single, central framework for Python. For Ruby, this framework is clearly Rails. Why, historically speaking, did this happen for Ruby but not for Python? (Or did it happen, and that framework is Django?) Also, the hypothetical questions: would Python be more popular if it had one good framework? Would Ruby be less popular if it had no central framework? [Please avoid discussions of whether Ruby or Python is better, which is just too open-ended to answer.] Edit: Though I thought this was obvious, I'm not saying that other frameworks do not exist for Ruby, but rather that the big one in terms of popularity is Rails. Also, I should mention that I'm not saying that frameworks for Python are not as good as (or better than) Rails. Every framework has its pros and cons, but Rails seems to, as Ben Blank says in one of the comments below, have surpassed Ruby in terms of popularity. There are no examples of that on the Python side. WHY? That's the question.

    Read the article

  • Is there a programming language with a better approach to switch's break statements?

    - by Vitaly Polonetsky
    It's the same syntax in way too many languages: switch (someValue) { case OPTION_ONE: case OPTION_LIKE_ONE: case OPTION_ONE_SIMILAR: doSomeStuff1(); break; // EXIT the switch case OPTION_TWO_WITH_PRE_ACTION: doPreActionStuff2(); // the default is to CONTINUE to next case case OPTION_TWO: doSomeStuff2(); break; // EXIT the switch case OPTION_THREE: doSomeStuff3(); break; // EXIT the switch } Now, as you all know, break statements are required because the switch will continue to the next case when a break statement is missing. We have an example of that with OPTION_LIKE_ONE, OPTION_ONE_SIMILAR and OPTION_TWO_WITH_PRE_ACTION. The problem is that we only need this "skip to next case" very, very rarely, and yet very often we put break at the end of a case. It is very easy for a beginner to forget it, and one of my C teachers even explained it to us as if it were a bug in the C language (don't want to talk about it :) I would like to ask if there are any other languages that I don't know of (or forgot about) that handle switch/case like this: switch (someValue) { case OPTION_ONE: continue; // CONTINUE to next case case OPTION_LIKE_ONE: continue; // CONTINUE to next case case OPTION_ONE_SIMILAR: doSomeStuff1(); // the default is to EXIT the switch case OPTION_TWO_WITH_PRE_ACTION: doPreActionStuff2(); continue; // CONTINUE to next case case OPTION_TWO: doSomeStuff2(); // the default is to EXIT the switch case OPTION_THREE: doSomeStuff3(); // the default is to EXIT the switch } The second question: is there any historical reason why it is like this in C? Maybe continuing to the next case was used far more often than we use it these days?
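    For reference, several later languages take roughly the approach the poster describes: C#'s switch forbids implicit fall-through (a non-empty case must end in break, return, or goto case), Go's switch breaks by default and has an explicit fallthrough keyword for the rare cases that need it, and Python 3.10's match statement never falls through. A minimal sketch of that style in Python (the option names and handlers are made up, standing in for the question's OPTION_* cases):

        def handle(option: str) -> str:
            match option:
                case "ONE" | "LIKE_ONE" | "ONE_SIMILAR":
                    return "stuff1"        # one body shared by several options
                case "TWO_WITH_PRE_ACTION":
                    do_pre_action()        # shared work is called explicitly...
                    return "stuff2"        # ...instead of falling through
                case "TWO":
                    return "stuff2"
                case "THREE":
                    return "stuff3"
                case _:
                    return "unknown"       # no implicit fall-through anywhere

        def do_pre_action() -> None:
            print("pre-action")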

    Read the article

  • Data Warehouse: Modelling a future schedule

    - by Pat
    I'm creating a DW that will contain data on financial securities such as bonds and loans. These securities are associated with payment schedules. For example, a bond could pay quarterly, while a mortgage would usually pay monthly (sometimes biweekly). The payment schedule is created when the security is traded and, in the majority of cases, will remain unchanged. However, the design would need to accommodate those cases where it does change. I'm currently attempting to model this data and I'm having difficulty coming up with a workable design. One of the most commonly queried fields is "next payment date". Users often want to know when a security will pay next. Therefore, I want to make it as easy as possible for them to get the next payment date and amount for each security. Also, users often run historical queries, in which case they'd want the next payment date and amount as of a specific point in time. For example, they may want to look back at 1/31/09 and query the next payment dates (which would usually be in February 2009 for mortgages). It's also common that they want to query a security's entire payment schedule, which might consist of 360 records (30-year mortgage x 12 payments/year). Since the next payment date and amount would be changing each month or even biweekly, these fields wouldn't seem to fit into a slowly changing dimension very well. It would probably make more sense to use a fact table, but I'm unsure of how to model it. Any ideas would be greatly appreciated.
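    One commonly suggested shape (a sketch, not from the question) is a payment-schedule fact table with one row per scheduled payment per security; both "next payment now" and "next payment as of 1/31/09" then reduce to the same filter-and-minimum lookup. A small Python illustration of that as-of logic, with hypothetical field names:

        from datetime import date
        from typing import List, NamedTuple, Optional

        class ScheduledPayment(NamedTuple):
            security_id: int
            payment_date: date
            amount: float

        def next_payment_as_of(schedule: List[ScheduledPayment],
                               security_id: int,
                               as_of: date) -> Optional[ScheduledPayment]:
            # First scheduled payment strictly after the as-of date.
            future = [p for p in schedule
                      if p.security_id == security_id and p.payment_date > as_of]
            return min(future, key=lambda p: p.payment_date) if future else None

        schedule = [ScheduledPayment(1, date(2009, 2, 1), 1200.0),
                    ScheduledPayment(1, date(2009, 3, 1), 1200.0)]
        print(next_payment_as_of(schedule, 1, date(2009, 1, 31)))  # the Feb 2009 row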

    Read the article

  • Why is T() = T() allowed?

    - by Rimo
    I believe the expression T() creates an rvalue (by the Standard). However, the following code compiles (at least on gcc 4.0): class T {}; int main() { T() = T(); } I know technically this is possible because member functions can be invoked on temporaries, and the above is just invoking operator= on the rvalue temporary created from the first T(). But conceptually this is like assigning a new value to an rvalue. Is there a good reason why this is allowed? Edit: The reason I find this odd is that it's strictly forbidden on built-in types yet allowed on user-defined types. For example, int(2) = int(3) won't compile because that is an "invalid lvalue in assignment". So I guess the real question is, was this somewhat inconsistent behavior built into the language for a reason? Or is it there for some historical reason? (E.g., it would be conceptually more sound to allow only const member functions to be invoked on rvalue expressions, but that cannot be done because it might break some existing code.)

    Read the article

  • Using map() on a _set in a template?

    - by Stuart Grimshaw
    I have two models like this: class KPI(models.Model): """KPI model to hold the basic info on a Key Performance Indicator""" title = models.CharField(blank=False, max_length=100) description = models.TextField(blank=True) target = models.FloatField(blank=False, null=False) group = models.ForeignKey(KpiGroup) subGroup = models.ForeignKey(KpiSubGroup, null=True) unit = models.TextField(blank=True) owner = models.ForeignKey(User) bt_measure = models.BooleanField(default=False) class KpiHistory(models.Model): """A historical log of previous KPI values.""" kpi = models.ForeignKey(KPI) measure = models.FloatField(blank=False, null=False) kpi_date = models.DateField() and I'm using RGraph to display the stats on internal wallboards. The handy thing is that Python lists get output in a format that JavaScript sees as an array, so by mapping all the values into a list like this: def f(x): return float(x.measure) stats = map(f, KpiHistory.objects.filter(kpi=1)) then in the template I can simply use {{ stats }} and the RGraph code sees it as an array, which is exactly what I want: [87.0, 87.5, 88.5, 90] So my question is this: is there any way I can achieve the same effect using Django's _set functionality, to keep down the amount of data I'm passing into the template? Up until now I've been passing in a single KPI object to be graphed, but now I want to pass in a whole bunch, so is there anything I can do with _set? {{ kpi.kpihistory_set }} dumps the whole model out, but I just want the measure field. I can't see any built-in template methods that will let me pull out just the single field I want. How have other people handled this situation?
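    One way to keep the flattening out of the template (a sketch based on the models in the question, not the poster's code; the view and template names are made up) is to build the lists in the view with values_list, which pulls just the one field from the related manager:

        # views.py
        from django.shortcuts import render
        from .models import KPI

        def wallboard(request):
            kpis = KPI.objects.all()
            # values_list('measure', flat=True) returns only the measure column;
            # list() turns the queryset into plain floats, which render as a
            # JavaScript-style array, e.g. [87.0, 87.5, 88.5, 90.0].
            series = {kpi.pk: list(kpi.kpihistory_set.values_list('measure', flat=True))
                      for kpi in kpis}
            return render(request, 'wallboard.html', {'kpis': kpis, 'series': series})

    The same values_list call could equally live in a model method or a custom template filter if the lookup should stay usable directly from {{ kpi }} in the template.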

    Read the article

  • Connecting a django application to a drupal database?

    - by Hans
    I have 3-4000 nodes in a Drupal 6 installation on MySQL and want to access this data through my Django application. I have used manage.py inspectdb to get a skeleton of a model structure. I guess that there are good/historical reasons for Drupal's database schema, but I find that there are some hard-to-understand structures and some challenges in applying Django models to the database. Some experiences so far: node and node revision are intertwined, and I solved this by using a OneToOneField (I don't need the versions). This means that the node's body is accessible through node.vid.body, but it works. Foreign keys need to define the proper db_column to sort out the primary keys. Terms need to use an intermediary table with ManyToManyField.through. Drupal stores both the original and the thumbnailed/resized versions of any image as files in the files table. Does anyone have experience accessing Drupal data in Django? Are there better solutions to, for example, the node <- node revision relationship? Drupal stores times/dates as Unix-style timestamps in integer fields. Any recommendations? How about time zones?
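    For what it's worth, the usual pattern for this kind of legacy mapping (a sketch in modern Django/Python 3 syntax, not the poster's code; the column names follow the stock Drupal 6 node table and should be verified against the inspectdb output) is an unmanaged model plus a small property for the Unix timestamps:

        from datetime import datetime, timezone
        from django.db import models

        class Node(models.Model):
            nid = models.IntegerField(primary_key=True)
            vid = models.IntegerField()          # current revision id
            title = models.CharField(max_length=255)
            created = models.IntegerField()      # Drupal stores a Unix timestamp

            class Meta:
                db_table = 'node'    # reuse the existing Drupal table
                managed = False      # never let Django create/alter it

            @property
            def created_at(self):
                # Convert Drupal's integer timestamp to an aware datetime.
                # UTC here; adjust to the site's configured timezone if needed.
                return datetime.fromtimestamp(self.created, tz=timezone.utc)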

    Read the article

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is, I'm getting to the point where there are so many records that my insert/update is taking too long and my session is getting killed by the governor. Here's pseudocode of my update process: sqlsel := 'SELECT col1, col2, col3, sysdate FROM table2@remote_location dpi WHERE (col1, col2, col3) IN ( SELECT col1, col2, col3 FROM table2@remote_location MINUS SELECT DISTINCT col1, col2, col3 FROM table1 mpc WHERE facility = '''||load_facility||''' )'; EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1; I've tried the MERGE statement: MERGE INTO table1 t1 USING ( SELECT col1, col2, col3 FROM table2@remote_location ) t2 ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 ) WHEN NOT MATCHED THEN INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm ) VALUES (t2.col1, t2.col2, t2.col3, sysdate ) But there seems to be a confirmed bug in the MERGE statement on versions prior to Oracle 10.2.0.4 when merging against a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or to write it in another way, so that it performs as well as possible? Thanks.

    Read the article

  • Return the column name for the last non-null value in a row (MS Access 2007)

    - by bri1969
    I have a table which contains a list of league players. Each season, we record their Points per Dart (PPD). Their total PPD for that season is stored in other tables and extracted through other queries, which in turn are imported into the master table "Player History" at the end of the season for use as historical data. The current query retrieves each player's PPD for each season they played, when they played last, and how many seasons they played. The code for Last Season Played has become too long and unstable to use. It was originally created and split into two separate columns, (LSP1) and (LSP2), because a single SQL expression was too long. They work, but as I add seasons, Access does not like the length of the code. In short, I need simpler code that will look at each row, find the last non-null cell in that row, and report which column that last non-null value is in. So if a player played seasons 30 & 31, did not play 32, but did play 33, the column with the code should be titled Last Season Played, and for that player it would state "33" in that cell, indicating that this player last played season 33. I will provide both tables and the query. Please help.

    Read the article

  • Strange Puzzle - Invalid memory access of location

    - by Rob Graeber
    The error message I'm getting consistently is: Invalid memory access of location 0x8 rip=0x10cf4ab28 What I'm doing is making a basic stock backtesting system that iterates huge arrays of stocks/historical data across various algorithms, using Java + Eclipse on the latest Mac OS X. I tracked down the code that seems to be causing it: a method that is used to get the massive arrays of data and is called thousands of times. Nothing is retained, so I don't think there is a memory leak. However, there seems to be a set limit of around 7000 times I can iterate over it before I get the memory error. The weird thing is that it works perfectly in debug mode. Does anyone know what debug mode does differently in Eclipse? Giving the JVM more memory doesn't help, and it appears to work fine using -Xint. And again, it works perfectly in debug mode. public static List<Stock> getStockArray(ExchangeType e){ List<Stock> stockArray = new ArrayList<Stock>(); if(e == ExchangeType.ALL){ stockArray.addAll(getStockArray(ExchangeType.NYSE)); stockArray.addAll(getStockArray(ExchangeType.NASDAQ)); }else if(e == ExchangeType.ETF){ stockArray.addAll(etfStockArray); }else if(e == ExchangeType.NYSE){ stockArray.addAll(nyseStockArray); }else if(e == ExchangeType.NASDAQ){ stockArray.addAll(nasdaqStockArray); } return stockArray; } A simple loop like this, iterated over thousands of times, will cause the memory error, but not in debug mode. for (Stock stock : StockDatabase.getStockArray(ExchangeType.ETF)) { System.out.println(stock.symbol); }

    Read the article

  • DB strategy for inserting into a high-read table (SQL Server)

    - by Tom
    Looking for strategies for a very large table with data maintained for reporting and historical purposes, where a very small subset of that data is used in daily operations. Background: We have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our visitor and visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. PS: We do not currently have the budget or expertise for a data warehouse. The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query is against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance a Visitor_Archive and a Visitor_Small table), where the larger visitor and visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table would exist for managing leads: sending a lead an email, getting a lead's phone number, getting my list of leads, etc. The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and the Visitor_Small tables in sync... Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?

    Read the article

  • Enterprise IPv6 Migration - End of proxy.pac? Start of point-to-point? 10K+ users

    - by Yohann
    Let's start with a diagram: we can see a "typical" IPv4 company network with: an Internet access through a proxy; an "other companies" access through a dedicated proxy; a direct access to local resources. All computers have a proxy.pac file that indicates which proxy to use or whether to connect directly. Computers have access to just a local DNS (no name resolution for google.com, for example). By the way... the company does not respect RFC1918 internally and uses public addresses! (Historical reason.) Using the Internet proxy explicitly makes it possible to avoid problems. What if we migrate to IPv6?
    Step 1: IPv6 Internet access. Internet access over IPv6 is easy. Indeed, just connect the proxy to the Internet over IPv4 and IPv6. There is nothing to do in the internal network.
    Step 2: IPv6 AND IPv4 in the internal network. And why not a full IPv6 network directly? Because there are always old servers that are not IPv6 compatible.
    Option 1: Same architecture as in IPv4, with a proxy.pac. This is probably the easiest solution. But is it the best? I think the transition to IPv6 is an opportunity not to bother with this proxy.pac!
    Option 2: New architecture with transparent proxies, without proxy.pac, and a recursive DNS. Oh yes! In this new architecture, we have: the explicit Internet proxy becomes a transparent Internet proxy; the local DNS becomes a normal recursive DNS, plus authoritative for local domains; no proxy.pac; the explicit company proxy becomes a transparent company proxy; routing: internal routers redirect the IP of appx.ext.example.com to the company proxy, and the default gateway is the transparent Internet proxy.
    Questions: What do you think of this IPv6 architecture? This architecture will reveal the IP addresses of our internal network, but it is protected by firewalls. Is this a really big problem? Should we keep the explicit use of a proxy? How would you approach this migration scenario? And you, how do you do it in your company? Thanks! Feel free to edit my post to make it better.

    Read the article

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of the Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected in the data integration category for their innovative use of Oracle's data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award-winning projects. Raymond James is a diversified financial services company, which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement due to Oracle Data Integrator's integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems, including Oracle Database, HP NonStop and Microsoft SQL Server, into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI's integration with GoldenGate, Raymond James now sees a 9-second median latency (from source commit to ODS target commit). The ODS solution delivers high-quality, accurate data for consuming applications such as Raymond James' next-generation client and portfolio management systems as well as real-time operational reporting. It enables timely information for making better decisions. There are more benefits Raymond James achieved with this implementation of Oracle's data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to utilize CDC to generate alerts for business users and for applications (for example, for cache hydration mechanisms). One cool innovation example among many in this project is that, using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes. And these processes have hardly ever failed; the integration processes fix errors themselves. Pretty amazing, and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan's photo with the Innovation Award above.) The other winner this year in the data integration category, Morrison Supermarkets, is the UK's 4th largest grocery retailer. The company has been migrating all their legacy applications on to a new-world application set based on Oracle and consolidating all BI on to a single Oracle platform.
    The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal with deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that also supports operational decision-making requirements from a wide range of Oracle-based ERP applications such as E-Business Suite, PeopleSoft, and the Oracle Retail Suite. They use GoldenGate's log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 mins). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis. As I mentioned in yesterday's blog, the right data integration platform can transform the business. Here is another example: the point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award: Morrisons won the Grocer 33 award in 2012 - beating all other major UK supermarkets in product availability. Congratulations, Morrisons, on another award! Celebrating the innovation and the success of our customers with Oracle's data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld, on how they are creating more value for their organizations.

    Read the article

  • Developing Schema Compare for Oracle (Part 3): Ghost Objects

    - by Simon Cooper
    In the previous blog post, I covered how we solved the problem of dependencies between objects and between schemas. However, that isn’t the end of the issue. The dependencies algorithm I described works when you’re querying live databases and you can get dependencies for a particular schema direct from the server, and that’s all well and good. To throw a (rather large) spanner in the works, Schema Compare also has the concept of a snapshot, which is a read-only compressed XML representation of a selection of schemas that can be compared in the same way as a live database. This can be useful for keeping historical records or a baseline of a database schema, or comparing a schema on a computer that doesn’t have direct access to the database. So, how do snapshots interact with dependencies? Inter-database dependencies don't pose an issue as we store the dependencies in the snapshot. However, comparing a snapshot to a live database with cross-schema dependencies does cause a problem; what if the live database has a dependency to an object that does not exist in the snapshot? Take a basic example schema, where you’re only populating SchemaA: SOURCE   TARGET (using snapshot) CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100)); In this case, we want to generate a sync script to synchronize SchemaA.Table1 on the database represented by the snapshot. When taking a snapshot, database dependencies are followed, but because you’re not comparing it to anything at the time, the comparison dependencies algorithm described in my last post cannot be used. So, as you only take a snapshot of SchemaA on the target database, SchemaB.Table1 will not be in the snapshot. If this snapshot is then used to compare against the above source schema, SchemaB.Table1 will be included in the source, but the object will not be found in the target snapshot. This is the same problem that was solved with comparison dependencies, but here we cannot use the comparison dependencies algorithm as the snapshot has not got any information on SchemaB! We've now hit quite a big problem - we’re trying to include SchemaB.Table1 in the target, but we simply do not know the status of this object on the database the snapshot was taken from; whether it exists in the database at all, whether it’s the same as the target, whether it’s different... What can we do about this sorry state of affairs? Well, not a lot, it would seem. We can’t query the original database, as it may not be accessible, and we cannot assume any default state as it could be wrong and break the script (and we currently do not have a roll-back mechanism for failed synchronizes). The only way to fix this properly is for the user to go right back to the start and re-create the snapshot, explicitly including the schemas of these 'ghost' objects. So, the only thing we can do is flag up dependent ghost objects in the UI, and ask the user what we should do with it – assume it doesn’t exist, assume it’s the same as the target, or specify a definition for it. Unfortunately, such functionality didn’t make the cut for v1 of Schema Compare (as this is very much an edge case for a non-critical piece of functionality), so we simply flag the ghost objects up in the sync wizard as unsyncable, and let the user sort out what’s going on and edit the sync script as appropriate. 
    There are some things that we do to alleviate this rather unhappy situation somewhat; if a user creates a snapshot from the source or target of a database comparison, we include all the objects registered from the database, not just the ones in the schemas originally selected for comparison. This includes any extra dependent objects registered through the comparison dependencies algorithm. If the user then compares the resulting snapshot against the same database they were comparing against when it was created, the extra dependencies will be included in the snapshot as required and everything will be good. Fortunately, this problem will come up quite rarely, and only when the user uses snapshots and tries to sync objects with unknown cross-schema dependencies. However, the solution is not an easy one, and it led to some difficult architecture and design decisions within the product. And all this pain follows from the simple decision to allow schema pre-filtering! Next: why adding a column to a table isn't as easy as you would think...

    Read the article

  • Microsoft TechEd 2010 - Day 2 @ Bangalore

    - by sathya
    Today is day 2 @ Microsoft TechEd 2010. We had a lot of technical sessions; as usual there were many tracks going on side by side, and I was attending the Web Simplified track, which comprised the following sessions: Developing a Scalable Media Application using ASP.NET MVC - This was a little advanced. Anyway, I couldn't understand much because this is not my cup of tea and I haven't worked on it before. ASP.NET MVC Unplugged - This was really great because the session covered MVC from the basics, showing what Model, View and Controller are and how they work, and the speaker went into the details. Building RESTful Applications with the Open Data Protocol - Some concepts were explained from the basics of how to build RESTful services, going on to some advanced configurations. Developing Scalable Web Applications with AppFabric Caching - This session showed the integration of AppFabric with .NET web applications. Instead of using in-proc sessions, we can use AppFabric as a substitute for caching and out-of-proc session storage without writing code and with a little bit of configuration, which brings high scalability and performance to our applications. (But unfortunately there were no demos for this session.) Deep Dive: WCF RIA Services - This session was also an interactive one; the speaker presented from the basics of WCF, took a book store application as a sample, and explained in detail the concepts of linking with RIA Services.
    Apart from these sessions, some small events happened in the breaks: discussions about technology and innovations, music, jokes, mimicry, etc. And on doing all these things, the developers were given some cool gifts/goodies like USB drives, T-shirts, etc. Today I also got a chance to do the following certification: (70-562) Microsoft Certified Technology Specialist in .NET 3.5 Web Applications. Since I already have an MCTS in .NET 2.0 and wanted to do an MCPD, I was required to update my MCTS to the .NET 3.5 framework. I did that, cleared it, and am now an MCTS in .NET 3.5 Web Apps. On doing this I got a T-shirt, and they gave me something called Learning $, worth 30$. At various stalls, for attending each quiz, game, or referral we got some Learning $, which we could redeem later based on our total Learning $. I got 105$, which I was able to redeem for a Microsoft Learning backpack, 1 free Microsoft certification offer, a laptop light and e-learning content.
    After all these sessions and small events, we had something called the Demo Extravaganza, like I mentioned yesterday. This was a great fun-filled event with lots of goodies for the attendees. There was a lucky draw which enabled 2 attendees to get netbooks (sponsored by Intel) and 1 attendee to get an Xbox (sponsored by Citrix). After choosing the raffle in the lucky draw, they placed it on a device called Microsoft Surface, which is a kind of big touch-screen device; on putting the raffle on it, it detected the attendee's code and intelligently said how many sessions that person had attended, and if he had attended more than 5 he got a netbook. This was coded by a guy called Imran.
    Apart from that, they showed demos on: research by 2 Tamil Nadu students from Krishna Arts and Science College, who took 1200 photographs of their college from different angles, put them up in Bing Maps using Silverlight and linked them with Photosynth, which showed a 3D view of their college based on the photos they uploaded; research by Microsoft on panoramic HD views of images, where a young guy from Microsoft Research showed a demo on the Srivilliputhur Andal Temple in Tamil Nadu and its history, with a panoramic view of the temple and the nearby places, narration of the historical information, embedded videos, and high-definition images which we could zoom to a very detailed level; a demo of a business app with Silverlight, Business Intelligence (BI) and maps integrated, which showed the sales of a particular product across locations; and some cool demos by 2 geeks who used robots to show their development talents: 2 robots fought with each other, and 2 robots danced in sync to the A.R. Rahman song Humma Humma...
    There was also a dream home project by Raman. He is currently using it in his home too; robots are controlling his home, and they showed a video on this. Here is the list of activities that the robot does for him: when he reads a book, the robot automatically scans it and shows the image of that person on the screen (TV or computer) in front of him; it shows a Wikipedia page about that person; it says that the person is not on LinkedIn and asks whether he wants to add him; if he sees IPL match news in the book and smiles, it understands he is interested in that, opens a related website, and shows the current game and the scorecard; it cooks for him; it cleans the room for him whenever he leaves the house; when he is doing something and an intruder comes inside his house, his computer automatically switches his screen to show video of the person coming inside; and when he wakes up, it automatically opens up the system and loads his mail with the news by the side, etc.
    There were also some demos of Microsoft Pivot. This was in Live Labs but is now available at getpivot.com; it's a pivoting of pictorial data based on categories and filters on the searches that we do. And finally, on filling up some feedback forms we got T-shirts and Microsoft Visual Studio 2010 Training Kit CDs. What's more at TechEd??? Stay tuned!!! Will update you soon on the other happenings!! PS: I typed content for more than an hour, but I pressed backspace and it went to the previous page; all my content was lost and I was not able to retrieve it, so I typed everything again.

    Read the article

  • Tuning Red Gate: #2 of Many

    - by Grant Fritchey
    In the last installment, I used the SQL Monitor tool to get a snapshot view of the current state of the servers at Red Gate that are giving us trouble. That snapshot suggested some areas where I should focus some time, primarily which queries were being called most frequently or were running the longest. But you don't want to just run off & start tuning queries. Remember, the foundation for query tuning is the server itself. So, I want to be sure I'm not looking at some major hardware or configuration issues that I need to address first. Rather than look at the current status of the server, I'm going to look at historical data. Clicking on the Analysis tab of SQL Monitor, I get a whole list of counters that I can look at. More importantly, I can look at them over a period of time. Even more importantly, I can compare past periods with current periods to see if we're looking at a progressive issue or not. There are counters here that will give me an indication of load, and there are counters here that will tell me specifics about that load. First, I want to just look at the load to understand where the pain points might be. Trying to drill down before you have detailed information is just bad planning. The first thing I'm going to check is the CPU, just to see what's up there. I have two servers I'm interested in, so I'll show you both: Looking at the last 30 days for both servers, well, let's just say that the first server is about what I would expect. It has an average baseline behavior with occasional, regular peaks. This looks like a system with a fairly steady & predictable load that probably has a nightly batch process that spikes the processor. In short, normal stuff. The points where the CPU drops radically might be worth investigating further, because something changed the processing on this system a lot. But the other server? It's all over the place. There's no steady CPU behavior at all. It spikes high for long periods of time. It's up, it's down. I'm really going to have to spend time looking at CPU issues on this server to try to figure out what's up. It might be other processes being shared on the server; it might be something else. Either way, I'm going to have to spend time evaluating this CPU, especially those peaks about a week ago. Looking at the Pages/sec, again just a measure of load, I see that there are some peaks on the rg-sql02 server, but overall it looks like a fairly standard load. Plus, the peaks are only up to 550 pages/sec. Remember, this isn't a performance measure, just a load measurement, but from this I don't think we're looking at major memory issues, though I may want to correlate these counters with the CPU counters. Again, the other server looks like there's stuff going on. The load is not at all consistent. In fact, there was a point earlier in the year that looks pretty severe. Plus, the spikes here are twice the size of the other system. We've got a lot more load going on here and I will probably need to drill down on memory usage on this server. Taking a look at the disk transfers/sec, the load on both systems seems to roughly correspond to the other load indicators. Notice that drop right in the middle of the graph for rg-sql02. I wonder if the office was closed over that period or a system was down for maintenance.
    If I saw spikes in memory or disk that corresponded to the drop in CPU, you could assume something was using those other resources and causing the drop, but when everything goes down, it just means that the system isn't getting used. The disk on the rg-sql01 system isn't spiking exactly the same way as the memory & CPU, so there's a good chance (chance, mind you) that any performance issues might not be disk related. However, notice that huge jump at the beginning of the month. Several disks were used more than they were for the rest of the month. That's the load on the server. What about the load on SQL Server itself? Next time.

    Read the article

  • The Customer Experience Imperative: A Game Changer for Brands

    - by Jeri Kelley
    By Anthony Lye, SVP, Cloud Applications Strategy, Oracle We know that customer experience has emerged as a primary differentiator for businesses today. I've talked a lot about the new age of the empowered consumer. At Oracle we've spent a lot of time developing technologies and practices that our customers can implement to greatly improve their customer experience strategies. Of course I'm biased, but I think that we have created a portfolio of the best solutions on the planet to help organizations deal with the challenges of providing great customer experiences. We've done this because we started to witness some trends over the last few years. As the average person began to utilize social and mobile technologies more frequently and products commoditized, customer experience truly remained the only sustainable differentiator for businesses. In fact, we have seen that customer experience is often driving the success or the failure of a product or a brand. And as end customers have become more vocal about their experiences with companies on social and mobile channels, they now have the power to decide which brands will win and which brands will lose. To address this customer experience imperative, I believe that business today must do three things really well:
    Connect with your customers. You have to connect with customers whenever, wherever and however they want. Organizations must provide a great experience on their existing channels— the call center, the brick and mortar store, the field sales organizations, the websites and social properties. Businesses must also be great at managing and delivering journeys on these channels, while quickly adapting to embrace the new channels that emerge. You have to understand mobile. You have to understand social. You have to understand kiosks. These are all new routes to market, new channels where your customers may or may not show up. You have to interact with them where they are. You have to present information in a way that's meaningful to them. As well as providing what we would call a multichannel experience. We have to recognize that customers may start their experience on one channel, but end it on a different channel. It's important that an organization's technology solutions enable, not just a multichannel strategy, but a strategy that can power new channels and create customer journeys that cross these channels.
    Get to know your customers. Next, companies need to get to know the customer as intimately as the customer will allow. Today most customer interactions are anonymous, but it's important for brands to know which customers drive value. Customers want to provide feedback. They want to share their opinions, but they want to know that those opinions are being heard and acted upon. For this to occur, we need to know much more about the customer and then reward them for their loyalty and for their advocacy.
    Enable connections. The last thing is to enable people to connect or transact with your brand. We've got to make it really, really simple for customers to do business with us. We can't make them repeat the steps; we can't make them tell us their identity for the fifth time as they move between organizations. These silos can no longer sustain or deliver a good customer experience. It's extremely important that companies be where customers want them to be—that we create profitable journeys for us and for them. Organizations have to make sure that there is a single source of truth that defines the customer.
    We have to make sure that the technology applications that we rely on understand not just the dimensions of multichannel, but of cross-channel too. We have to enable social at the very core of the overall architecture. We have to use historical analytics, real-time decisioning as well as predictive analytics to help personalize and drive an experience. And these are all technologies that IT needs, that IT is familiar with, but needs to enable for the line of business that in turn can enable for the end customer. This means that we've got to make our solutions available to the customers in the cloud. In this new age of the empowered consumer, businesses have to focus on delivery mechanisms that reduce the overall TCO, while driving a rapid rate of innovation and a more rapid rate of deployment. At the Oracle Customer Experience Summit @ OpenWorld, I'll discuss these issues and more. I hope that you can join us for what promises to be an unforgettable experience.

    Read the article

  • Agile Executives

    - by Robert May
    Over the years, I have experienced many different styles of software development. In the early days, most of the development was Waterfall development. In the last few years, I've become an advocate of Scrum. As I talked about last month, many people have misconceptions about what Scrum really is. The reason we do Scrum at Veracity is the difference it makes in the life of the team doing Scrum. Software is for people, and happy, motivated people will build better software. However, not all executives understand Scrum and how to get the information from development teams that use Scrum. I think that these executives need a support system for managing Agile teams.
    Historical Software Management: When Henry Ford pioneered the assembly line, I doubt he realized the impact he'd have on management through the ages. Historically, management was about managing the process of building things. The people were just cogs in that process. Like all cogs, they were replaceable. Unfortunately, most of the software industry followed this same style of management. Many of today's senior managers learned how to manage companies before software was a significant influence on how the company did business. Software development is a very creative process, but too many managers have treated it like an assembly line. Ideas go in, working software comes out, and we just have to figure out how to make sure that the ideas going in are perfect, and then the software will be perfect.
    Lean Manufacturing: In the manufacturing industry, Lean manufacturing has revolutionized Henry Ford's assembly line. Derived from the Toyota process, Lean places emphasis on always providing value for the customer. Anything the customer wouldn't be willing to pay for is wasteful. Agile is based on similar principles. We're building software for people, and anything that isn't useful to them doesn't add value. Waterfall development would have teams build reams and reams of documentation about how the software should work. Agile development dispenses with this work because excessive documentation doesn't add value. Instead, teams focus on building documentation only when it truly adds value to the customer. Many other Agile principles are similar.
    Playing Catch-up: Just like in the manufacturing industry, many managers in the software industry have yet to understand the value of the principles of Lean and Agile. They think they can wrap the uncertainties of software development up in a nice little package and then just execute, usually followed by failure. They spend a great deal of time and money trying to exactly predict the future. That expenditure of time and money doesn't add value to the customer. Managers that understand Agile know that there is a better way. They will instead focus on the priorities of the near term in detail, and leave the future to take care of itself. They have very detailed two-week plans with less detailed quarterly plans. These plans are guided by a general corporate strategy that doesn't focus on the exact implementation details. These managers also think in smaller features rather than large functionality. This adds a great deal of value to customers, since the features that matter most are the ones that the team focuses on in the near term and is then able to deliver to the customers that are paying for them. Agile managers also realize that stale software is very costly.
    They know that keeping the technology in their software current is much less expensive and risky than infrequent large rewrites, and they schedule time in each release for refactoring the existing software.
    Agile Executives: Even though Agile is a better way, I've still seen failures using the Agile process. While some of these failures can be attributed to the team, most of them are caused by managers, not the team. Managers fail to understand what Agile is, how it works, and how to get the information that they need to make good business decisions. I think this is a shame. I'm very pleased that Veracity understands this problem and is trying to do something about it. Veracity is a key sponsor of Agile Executives. In fact, Galen is this year's acting president of Agile Executives. The purpose of Agile Executives is to help managers better manage Agile teams and see better success. Agile Executives is trying to build a community of executives that range from managers interested in Agile to managers that have successfully adopted Agile. Together, these managers can form a community of support and ideas that will help make Agile teams more successful.
    Helping Your Team: You can help too! Talk with your manager and get them involved in Agile Executives. Help Veracity build the community. If your manager understands Agile better, he'll understand how to help his teams, which will result in software that adds more value for customers. If you have any questions about how you can be involved, please let me know. Technorati Tags: Agile, Agile Executives

    Read the article

  • Real Time BI in the Real World

    - by tobin.gilman(at)oracle.com
    One of my favorite BI offerings from Oracle is a solution called Oracle Real Time Decisions. Whenever I mention this product in customer meetings, eyes light up. There are some fascinating examples of customers using it to up-sell, cross-sell, increase customer retention, and reduce risk in real time, with off-the-charts return on investment. I plan to share some of those stories in a future blog. In this post, however, I want to share some far more common real-time analytics use case scenarios that are being addressed with widely deployed Oracle BI and data integration technologies. Not all real-time BI applications require continuous learning, predictive modeling, and data mining. Many simply require the ability to integrate, aggregate, and access information that is current (typically within a few minutes or a few seconds). The use cases are infinite. A few I've seen: purchasing agents need to match demand against available inventory; manufacturing planners need to monitor current parts and material against scheduled build plans; airline agents need to match ticket demand against flight schedules; human resources managers need to track the status of global hiring requisitions against current headcount authorizations... you get the idea. One way of doing this is to run reports or federated queries directly against transactional systems. That approach can be viable if you only need to access simple data sets on rare occasions. High-volume and complex queries can quickly bog down the performance of mission-critical transactional systems. There is an architecturally simple way of solving the problem, and it's being applied by real companies around the world to solve real needs in real time. Cbeyond is an Atlanta, GA-based provider of voice, data and mobile business applications. It delivers real-time information to its call center agents as they are interacting with their customers. The data they need resides in production CRM and other transactional systems, but instead of reporting directly off those systems, data is first moved to an operational data store (ODS). Rather than running data-intensive, time-consuming, and performance-degrading batch ETL routines to populate the ODS, Cbeyond uses Oracle GoldenGate software to incrementally capture and move only the changed records from the log files of the transactional systems every few minutes.
    There is no impact on transactional system performance, and the information needed by call center representatives is up to date. Oracle Business Intelligence software presents the information to service reps in a rich, visual, and highly interactive format. Avea is similar to Cbeyond. It is a telecommunications company that integrates billing and customer information in an ODS that is accessed by their call center agents in real time using Oracle GoldenGate and Oracle Business Intelligence. They've taken it a step further by using the ODS to feed a data warehouse. The operational data store provides the current information needed by call center agents during "in flight" customer interactions. The data warehouse is used for more sophisticated analysis of historical data. For maximum performance, both the ODS and the data warehouse run on the Oracle Exadata Database Machine. These are practical illustrations of companies addressing real-time reporting and analysis needs using established business intelligence/data warehousing methodologies and tools common to many IT departments. If real-time BI could benefit your organization, you may already be closer than you thought to having the pieces in place to solve the problem. Give us a shout if you are interested in learning more or if you have an interesting use or approach to real-time BI.

    Read the article

  • EM CLI, diving in and beyond!

    - by Maureen Byrne
    Doing more in less time... Isn't that what we all strive to do? With this in mind, I put together two screen watches on the Oracle Enterprise Manager 12c command line interface, or EM CLI as it is also known. There is a wealth of information on any topic that you choose to read about, from manual pages to coding documents... might I even say blog posts? In our busy lives it is so nice to just sit back with a short video, watch, and learn enough to dive in. Doing more in less time is the essence of EM CLI. It enables you to script fundamental and complex administrative tasks in an elegant way, thanks to the Jython scripting language. Repetitive tasks can be scripted and reused again and again. Sure, a graphical user interface provides a more intuitive step-by-step approach to tasks, and it provides a way of quickly becoming familiar with a product and its many features, and it is definitely the way to go when viewing performance data and historical trending... but for repetitive and complex tasks, scripting is the way to go! Let us take the everyday task of creating an administrator. Using EM CLI in interactive mode, the command could look like this: emcli> create_user(name='jan.doe', type='EXTERNAL_USER') This command creates an administrator called jan.doe, which is an externally authenticated user, possibly LDAP or SSO, defined by the EXTERNAL_USER tag. The create_user procedure takes many arguments; see the documentation for more information. Now, where EM CLI really shines and shows power is in creating multiple users. Regardless of the number, tens or thousands, the effort is the same. With the use of a standard programming construct, a loop, you can place your create_user() procedure within it. Using a loop allows you to iterate through a previously created list, creating new users until the list is complete. Using EM CLI in script mode, your Jython loop would look something like this: for user in list_of_users: create_user(name=user, expire='true', password='welcome123') This Jython code snippet iterates through a previously defined list of names, list_of_users, taking each name (user in this case), creates an administrator, and sets the password to welcome123, but forces the user to reset it when they first log in. This is only one of over four hundred procedures created to expose Oracle Enterprise Manager 12c functionality in a powerful and programmatic way. It has been a few months since we released EM CLI with the scripting option, and we are seeing many users adopt this fun and powerful way of using Oracle Enterprise Manager 12c. What are the first steps? Watch these screen watches, and dive in. The first screen watch steps you through where and how to download and install, and how to run your first few commands. The second screen watch steps you through a few scripts. Next time, I am going to show you the basic building blocks of writing a Jython script to perform Oracle Enterprise Manager 12c administrative tasks. Join this growing group of EM CLI users... Dive in!
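    Putting the pieces of the post together, a complete script-mode file might look like the sketch below. Only create_user and its name/password/expire arguments come from the post itself; the user names are made up, and the exact way to launch a script (and whether a connection or login step is needed first) should be checked against the EM CLI documentation for your release.

        # create_admins.py -- a minimal EM CLI script-mode sketch (Jython).
        list_of_users = ['jan.doe', 'john.smith', 'priya.k']

        for user in list_of_users:
            # Each administrator gets a temporary password and must reset it
            # at first login because expire='true'.
            create_user(name=user, password='welcome123', expire='true')
            print('created administrator: ' + user)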

    Read the article

  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not just a single interaction, and we should not treat it like one. A customer's relationship with a vendor is like a journey which starts way before the customer makes a purchase and lasts long after that. The journey may start with the customer researching a product, which may lead to the eventual purchase, and may continue with support or service needs for the product. A typical customer journey can be represented as shown below: As you may notice, customers tend to use multiple channels to interact with a company throughout their journey. They also expect that they should get a consistent experience, no matter what interaction channel they may choose. Customers do not like to repeat the information they have already provided, and expect companies to remember their preferences and offer them relevant products and services. If the company fails to meet this expectation, customers not only will abandon the purchase and go to the competitor but may also influence others' purchase decisions. Gone are the days when word of mouth was the only medium and the customer could influence "six" others. This is the age of social media, and a customer's good or bad experience, especially a bad one, gets highly amplified and may influence hundreds of others. Challenges that face B2C companies today include:
    Delivering a consistent experience: The reason that delivering a consistent experience is challenging is fragmented data, disjointed systems and siloed multichannel interactions. Customers tend to get different service quality if they use web vs. phone vs. store. They get different responses from different service agents, or get inconsistent answers if they call the sales vs. the service group in the company. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers.
    Increasing revenue: To stay competitive, companies frequently introduce new products and services. Delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that impact the bottom line. If companies are not able to identify such opportunities, bring a product to market quickly, or offer the right product to the right customer at the right time, significant loss of revenue may occur.
    Ensuring compliance: Companies must be compliant with ever-changing regulations; these could be about Know Your Customer (KYC), export/import regulations, or taxation policies. In addition to government agencies, companies also need to comply with the SLAs that they have committed to their customers. A lapse in meeting any of these requirements may lead to serious fines, penalties and loss in business. Companies have to make sure that they are in compliance with all such regulations and SLA commitments, at any given time.
    With the advent of social networks and mobile technology, companies not only need to focus on process efficiency but also on customer engagement. Improving engagement means delivering the customer experience the customer is expecting and interacting with the customer at the right time using the right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media) and purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits.
To deliver such an integrated, consistent, and contextual experience, power of BPM in must. Your company may be organized in departments like Marketing, Sales, Service. You may hold prospect data in SFA, order information in ERP, customer issues in CRM. However, the experience delivered to the customer must not be constrained by your system legacy. BPM helps in designing the right experience for the right customer and integrates all the underlining channels, systems, applications to make sure right information will be delivered to the right knowledge worker or to the customer every single time.     Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing service) and channels (Email, phone, web, social)  is the key, and that’s what BPM delivers. In addition to orchestrating systems and channels for consistency, BPM also provides an ability for analysis and decision management. By using data from historical transactions, social media and from other systems, users can determine the customer preferences, customer value, and churn propensity. This information, in the context, is then used while making a decision at a process step. Working with real-time decision management system can also suggest right up-sell or cross-sell offers, discounts or next-best-action steps for a particular customer. Timely action on customer issues or request is also a key tenet of a good customer experience. BPM’s complex event processing capabilities help companies to take proactive actions before issues get escalated. BPM system can be designed to listen to a certain event patters then deduce from those customer situations (credit card stolen, baggage lost, change of address) and do a triage before situation goes out of control. If such a situation arises you can send alerts to right people or immediately invoke corrective actions. Last but not least one of BPM’s key values is to drive continuous improvement. Learning about customers past experiences, interactions and social conversations, provide valuable insight. Such insight can be used to improve products, customer facing processes, and customer experience. You may take these insights as an input to design better more efficient and customer friendly sales, contact center or self-service processes. If customer experience is important for your business, make sure you have incorporated BPM as a part of your strategy to design, orchestrate and improve your customer facing processes.

    Read the article

  • CS, SE, HCI, Information Science, Please recommendation for further education of the former performing art manager seeking career in IT industries? [on hold]

    - by Baek Seungjoo
    IT specialists, thank you very much for your collective efforts here; I have gotten huge help from reading your professional comments and advice on every question I have searched so far! This time, I would like to ask for your practical advice or recommendations on something I am struggling with at the moment. I am currently seeking further education for my career transition from performing arts manager and director to "IT software and/or service development and management specialist". However, as this field is quite new to me and there are lots of different work positions, I have no idea which graduate major I should pursue in order to become qualified. Of course, I know this question may sound odd, as it is in the end a personal choice. But given my lack of understanding of how IT software companies work in general, your practical, experience-based advice will be a great help to me after more than two months of self-research on the net. OK. Before my questions, here are my plan and history, which are quite different from those of most people currently in the IT industry, I think…
    1) Target: First, make a career transition into an IT service or product company and gain experience. Eventually, pursue IT entrepreneurship in combination with my arts and cultural production and business expertise.
    2) Background: Career as a performing arts director and manager in theatre-scale opera and musical productions. Arts education in my youth. BA in Literature and Chinese Studies (Arts & Humanities). MA in Cultural & Creative Industries (Arts & Humanities) – dissertation focused on digital prosumption and the lived experience of the prosumer (a qualitative study of the agents in the digital world).
    3) Personally: Huge interest in IT hardware and software and their trends. Able to build, repair and tune PCs – of course this is no more than a personal hobby, but it shows my interest in the field.
    4) Problem: I keep encountering the question "So, what do you think you can contribute practically in this position?". This question turns me down every time I go through job interviews, so I have decided to pursue more education in the relevant area.
    Here are my questions.
    1) In terms of work positions in IT software companies, I wonder whether the relationship of "Artist" to "Arts Manager or Director" is comparable to that of "Developer" to "Product Manager". (Of course, this stereotypical Artist vs. Arts Manager division does not quite hold, because the domains overlap to some extent and the line is blurring, at least in my field, and the contexts are different – I am just speaking loosely.) Normally, artists come with a special arts education; they live in their own world of artistic inspiration and creation, and they feel alive in practice and on stage. Meanwhile, from the point of view of staging and managing productions, the role of the arts manager is just as critical. Our role cares about how the production appeals to the audience in an effective way, how to make a profit and sustain the organization through that, how to set future strategy in light of external conditions such as political and social circumstances, audience trends and levels, and other production trends from ongoing and historical perspectives, and how and with what voice the production speaks to society from political, economic and humanitarian stances. So we need keen eyes on the economic, political and societal environment, have to understand human beings and their desires, must know how to give presentations and attract investors, and must have a sense for managing and fighting over limited financial resources, for extending our networks, and so on.
    It is common for the two parties to create productions in collaboration (normally not in that ideal way, but in conflict and fights). So we need to know each other's expertise to some extent, for the sake of a better production. Which work positions in the IT software industry are equivalent to the role of the "arts manager" in performing arts? From my point of view – considering that developers come with a special education in computer science, software engineering or other fields (sometimes self-education), express themselves through the art of coding in computer languages on the black screen, and put a sort of artistic production online for their audience – I guess there might be someone who collaborates with developers in creating, managing and launching IT services or products.
    2) Which education among CS, SE, HCI and Information Science is needed for such a work position, especially for a person like me? (At the moment, Information Science gives me the highest chance of admission, since I lack calculus and math from my undergraduate education. But please advise irrespective of this concern; I think there are ways to make that up if a CS or SE education is needed in my case.)
    3) Which field, Information Science or HCI, is the more practical background for job hunting, and which of them is in greater demand in the job market? As far as I have checked, HCI is closer to CS than IS is in its focus of study.
    Thank you very much for your patience in reading such a long inquiry, and I appreciate your efforts in advance. Have a nice day in this beautiful summer.

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There's a growing gap between 20th century marketing and a modern marketing way of doing business. I can't think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13: customer obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In the weeks prior, I heard about the mystique but didn't know what to expect. What I've come to understand with more clarity is that everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind:
    1. Targeting: Really Know Your Buyer
    2. Engagement: Create a 1:1 Relationship
    3. Conversion: Visualize Guided Thinking
    4. Analysis: Learn What's Working
    5. Marketing Technology: Enable and Extend the Cloud
    Product News from Eloqua Experience 2013
    We made some announcements that John Stetic, VP of Products, Oracle Eloqua, covers in this brief 'Modern Marketing Minute' video recorded after Wednesday's keynote; they are summarized below, too:
    Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers' wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only the key accounts or prospects they want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities.
    Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers can now take data that's not even stored in Eloqua to help target and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one detailed view of the prospect, extending views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua.
    Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management.
    Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions.
    Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts; they are easily accessible in the Topliners Community.
    Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) delivers a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile, and engaging Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks.
    Biggest and Best Eloqua Experience. There's a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission to deliver the most advanced and integrated modern marketing technology on the planet. It's not just a concept but a reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year's Eloqua Experience exceptional, including Steve Woods' presentation about the journey of modern marketers and Andrea Ward's conversation with Vince Gilligan, creator of the Breaking Bad television series.
    The 2013 Markie Awards: the Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember, with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience.
    More on Erick: 20 years of experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • Cannot pull correct data from a Javascript array into an HTML form

    - by Isaac
    I am trying to return the description value of the corresponding author name and book title (that are typed in the text boxes). The problem is that the first description displays in the text area no matter what.

    <h1>Bookland</h1>
    <div id="bookinfo">
      Author name: <input type="text" id="authorname" name="authorname"></input><br />
      Book Title: <input type="text" id="booktitle" name="booktitle"></input><br />
      <input type="button" value="Find book" id="find"></input>
      <input type="button" value="Clear Info" id="clear"></input><br />
      <textarea rows="15" cols="30" id="destin"></textarea>
    </div>

    JavaScript:

    var bookarray = [
      {Author: "Thomas Mann", Title: "Death in Venice",
       Description: "One of the most famous literary works of the twentieth century, this novella embodies " +
         "themes that preoccupied Thomas Mann in much of his work: " +
         "the duality of art and life, the presence of death and disintegration in the midst of existence, " +
         "the connection between love and suffering and the conflict between the artist and his inner self."},
      {Author: "James Joyce", Title: "A portrait of the artist as a young man",
       Description: "This work displays an unusually perceptive view of British society in the early 20th century. " +
         "It is a social comedy set in Florence, Italy, and Surrey, England. " +
         "Its heroine, Lucy Honeychurch, struggling against straitlaced Victorian attitudes of arrogance, narrow-mindedness and snobbery, falls in love - while on holiday in Italy - with the socially unsuitable George Emerson."},
      {Author: "E. M. Forster", Title: "A room with a view",
       Description: "This book is a fictional re-creation of the Irish writer's own life and early environment. " +
         "The experiences of the novel's young hero unfold in astonishingly vivid scenes that seem freshly recalled from life " +
         "and provide a powerful portrait of the coming of age of a young man of unusual intelligence, sensitivity and character."},
      {Author: "Isabel Allende", Title: "The house of spirits",
       Description: "Allende describes the life of three generations of a prominent family in Chile and skillfully combines with this all the main historical events of the time, up until Pinochet's dictatorship."},
      {Author: "Isabel Allende", Title: "Of love and shadows",
       Description: "The whole world of Irene Beltran, a young reporter in Chile at the time of the dictatorship, is destroyed when " +
         "she discovers a series of killings carried out by government soldiers. " +
         "With the help of a photographer, Francisco Leal, and risking her life, she tries to come up with evidence against the dictatorship."}
    ]

    function searchbook(){
        for(i=0; i < bookarray.length; i++){
            if ((document.getElementById("authorname").value & document.getElementById("booktitle").value) == (bookarray[i].Author & bookarray[i].Title)){
                document.getElementById("destin").value = bookarray[i].Description
                return bookarray[i].Description
            } else {
                return "Not Found!"
            }
        }
    }
    document.getElementById("find").addEventListener("click", searchbook, false)
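
    A likely culprit, offered as a sketch rather than a definitive fix: & is the bitwise AND operator, so both sides of the comparison collapse to 0 and the test passes on the very first iteration, which is why the first description always appears; the else branch also returns "Not Found!" before the rest of the array has been checked. A minimal corrected version of searchbook (same element IDs and bookarray as above, only the comparison logic changed) could look like this:

    function searchbook() {
        var author = document.getElementById("authorname").value;
        var title = document.getElementById("booktitle").value;
        for (var i = 0; i < bookarray.length; i++) {
            // Compare author and title separately with &&, not the bitwise & operator
            if (bookarray[i].Author === author && bookarray[i].Title === title) {
                document.getElementById("destin").value = bookarray[i].Description;
                return bookarray[i].Description;
            }
        }
        // Report "Not Found!" only after the whole array has been searched
        document.getElementById("destin").value = "Not Found!";
        return "Not Found!";
    }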

    Read the article

  • VBA/SQL recordsets

    - by intruesiive
    The project I'm asking about is for sending an email to teachers asking what books they're using for the classes they're teaching next semester, so that the books can be ordered. I have a query that compares the course number of this upcoming semester's classes to the course numbers of historical textbook orders, pulling out only those classes that are being taught this semester. That's where I get lost. I have a table that contains the following: Professor, Course Number, Year, Book, Title. The data looks like this:

    professor   year   course number   title
    smith       13     1111            Pride and Prejudice
    smith       13     1111            The Fountainhead
    smith       13     1222            The Alchemist
    smith       12     1111            Pride and Prejudice
    smith       11     1222            Infinite Jest
    smith       10     1333            The Bible
    smith       13     1333            The Bible
    smith       12     1222            The Alchemist
    smith       10     1111            Moby Dick
    johnson     12     1222            The Tipping Point
    johnson     11     1333            Anna Kerenina
    johnson     10     1333            Everything is Illuminated
    johnson     12     1222            The Savage Detectives
    johnson     11     1333            In Search of Lost Time
    johnson     10     1333            Great Expectations
    johnson     9      1222            Proust on the Shore

    Here's what I need the code to do "on paper": Group the records by professor. Determine every unique course number in that group, and group records by course number. For each unique course number, determine the highest year associated. Then spit out every record with that professor + course number + year combination. With the sample data, the results would be:

    professor   year   course number   title
    smith       13     1111            Pride and Prejudice
    smith       13     1111            The Fountainhead
    smith       13     1222            The Alchemist
    smith       13     1333            The Bible
    johnson     12     1222            The Tipping Point
    johnson     11     1333            Anna Kerenina
    johnson     12     1222            The Savage Detectives
    johnson     11     1333            In Search of Lost Time

    I'm thinking I should make a recordset for each teacher, and within that, another recordset for each course number. Within the course number recordset, I need the system to determine what the highest year number is - maybe store that in a variable? Then pull out every associated record so that if the teacher ordered 3 books the last time they taught that class (whether it was in 2013 or 2012 and so on), all three books display. I'm not sure I'm thinking of recordsets in the right way, though. My SQL so far is basic and clearly doesn't work:

    SELECT [All].Professor, [All].Course, Max([All].Year)
    FROM [All]
    GROUP BY [All].Professor, [All].Course;

    Thanks for your help.
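
    Not an authoritative answer, but a common way to get "every record matching the latest year per professor and course" in Access SQL, without looping over recordsets at all, is to join the table to a grouped subquery. A sketch, assuming the table is called [All] and the columns match those used in the query above (Professor, Course, Year, Title):

    SELECT a.Professor, a.Course, a.[Year], a.Title
    FROM [All] AS a
    INNER JOIN (
        SELECT Professor, Course, MAX([Year]) AS MaxYear
        FROM [All]
        GROUP BY Professor, Course
    ) AS latest
    ON (a.Professor = latest.Professor
        AND a.Course = latest.Course
        AND a.[Year] = latest.MaxYear);

    If your Access version complains about the inline subquery, the GROUP BY/MAX part can be saved as a separate named query and joined to [All] instead; VBA recordsets would then only be needed to build the email itself, not to pick the rows.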

    Read the article

  • Structure of NAnt build scripts and solution structure on build server

    - by llykke
    We're in the process of streamlining/automating build, integration and unit testing as well as deployment. Our software is developed in Visual Studio, where we use both C# and VB.NET in our projects. A single project can be contained within multiple solutions (i.e. a Utils project is used in both the ProductA and ProductB solutions). For historical reasons our code repository isn't as well structured as one could have hoped for. E.g. the Utils project might be located under the ProductA solution (because that's where it was first used) but was later deemed useful for ProductB development and merely included into the ProductB solution (while still located in a subdirectory of ProductA). I would like to use continuous integration testing and have set up a CC.NET build server where I intend to use NAnt for creating the actual builds.

    Question 1: How should I structure my builds on the build server? Should I instruct CC.NET to retrieve all the projects for ProductB into a single folder, e.g. a file structure similar to

    ProductB
        Utils
        BetterUtils
        Data

    or should I opt for a file structure similar to this

    ProductA
        Utils
    ProductB
        BetterUtils
        Data

    and then just have the NAnt build scripts handle the references? Our references in VS don't match the actual locations in the code repository, so it's not possible today to just check out the ProductB solution and build it straight away (unfortunately). I hope this question makes sense. (A rough sketch of such a build script follows below.)

    Question 2: Is it better to check out all the source code located in different projects into a single file folder (whilst retaining some kind of structure) and then build everything at once, or to have multiple projects in CC.NET and let the CC.NET server handle dependencies? Example: Should I have a separate project in CC.NET for monitoring the automated build/test of the Utils project when it's never released on its own? Or should I just build/test it whilst building it as part of ProductB?

    I hope the above makes sense and that you can provide me with some arguments for using either option. We're nowhere near an ideal source code repository structure, and I would prefer to resolve the lack of repository structure on the build server instead of having to clean up the structure of our repository. Switching away from VSS is (unfortunately) not an option. Right now our build consists of either deploying via VS ClickOnce or pressing F5, so just getting the build automated would be a huge step up for us. Thanks
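
    For what it's worth, here is a minimal, purely illustrative NAnt sketch for the second layout (ProductA and ProductB checked out side by side under one working directory). Every name and path in it is an assumption, and it simply shells out to msbuild.exe and nunit-console.exe via NAnt's <exec> task rather than relying on any product-specific tasks:

    <?xml version="1.0"?>
    <project name="ProductB" default="test" basedir=".">
        <!-- Hypothetical CC.NET working directory; ProductA and ProductB sit side by side under it -->
        <property name="checkout.dir" value="C:\ccnet\work" />

        <target name="build">
            <!-- Build the solution and point the compiler at Utils under ProductA,
                 so the VS reference paths don't have to match the repository layout -->
            <exec program="msbuild.exe">
                <arg value="${checkout.dir}\ProductB\ProductB.sln" />
                <arg value="/p:Configuration=Release" />
                <arg value="/p:ReferencePath=${checkout.dir}\ProductA\Utils\bin\Release" />
            </exec>
        </target>

        <target name="test" depends="build">
            <!-- Run the tests as part of the ProductB build rather than as a separate CC.NET project -->
            <exec program="nunit-console.exe">
                <arg value="${checkout.dir}\ProductB\BetterUtils\bin\Release\BetterUtils.Tests.dll" />
            </exec>
        </target>
    </project>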

    Read the article

< Previous Page | 8 9 10 11 12 13 14  | Next Page >