Search Results

Search found 2399 results on 96 pages for 'wall street journal'.


  • Javascript - Canvas image never appears on first function run

    - by Matt
    I'm getting a bit of a weird issue: the image never shows the first time you run the game in your browser; after that you see it every time. If you close your browser and reopen it and run the game again, the same issue occurs - you don't see the image the first time you run it. Here's the issue in action: just hit a wall and there's no image the first time on the end game screen. Any help would be appreciated. Regards, Matt function showGameOver() { ctx.clearRect(0, 0, canvas.width, canvas.height); ctx.fillStyle = "black"; ctx.font = "16px sans-serif"; ctx.fillText("Game Over!", ((canvas.width / 2) - (ctx.measureText("Game Over!").width / 2)), 50); ctx.font = "12px sans-serif"; ctx.fillText("Your Score Was: " + score, ((canvas.width / 2) - (ctx.measureText("Your Score Was: " + score).width / 2)), 70); myimage = new Image(); myimage.src = "xcLDp.gif"; var size = [119, 26], //set up size coord = [443, 200]; ctx.font = "12px sans-serif"; ctx.fillText("Restart", ((canvas.width / 2) - (ctx.measureText("Restart").width / 2)), 197); ctx.drawImage( //draw it on canvas myimage, coord[0], coord[1], size[0], size[1] ); $("canvas").click(function(e) { //when click.. if ( testIfOver(this, e, size, coord) ) { startGame(); //reload } }); $("canvas").mousemove(function(e) { //when mouse moving if ( testIfOver(this, e, size, coord) ) { $(this).css("cursor", "pointer"); //change the cursor } else { $(this).css("cursor", "default"); //change it back } }); function testIfOver(ele,ev,size,coord){ if ( ev.pageX > coord[0] + ele.offsetLeft && ev.pageX < coord[0] + size[0] + ele.offsetLeft && ev.pageY > coord[1] + ele.offsetTop && ev.pageY < coord[1] + size[1] + ele.offsetTop ) { return true; } return false; } }
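
    The symptom (works only after the first run) usually means drawImage() fires before the GIF has finished downloading; on later runs the browser serves it from cache, so it "works". A minimal sketch of the usual fix - do the draw inside the image's onload handler and assign src afterwards:

        myimage = new Image();
        myimage.onload = function () {
            // runs only once the GIF is fully loaded, cached or not
            ctx.drawImage(myimage, coord[0], coord[1], size[0], size[1]);
        };
        myimage.src = "xcLDp.gif"; // set src after wiring onload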

    Read the article

  • Is a many-to-many relationship with extra fields the right tool for my job?

    - by whichhand
    Previously had a go at asking a more specific version of this question, but had trouble articulating what my question was. On reflection that made me doubt if my chosen solution was correct for the problem, so this time I will explain the problem and ask if a) I am on the right track and b) if there is a way around my current brick wall. I am currently building a web interface to enable an existing database to be interrogated by (a small number of) users. Sticking with the analogy from the docs, I have models that look something like this: class Musician(models.Model): first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) dob = models.DateField() class Album(models.Model): artist = models.ForeignKey(Musician) name = models.CharField(max_length=100) class Instrument(models.Model): artist = models.ForeignKey(Musician) name = models.CharField(max_length=100) Where I have one central table (Musician) and several tables of associated data that are related by either ForeignKey or OneToOneFields. Users interact with the database by creating filtering criteria to select a subset of Musicians based on the data on the main or related tables. Likewise, the users can then select what piece of data is used to rank results that are presented to them. The results are then viewed initially as a 2 dimensional table with a single row per Musician with selected data fields (or aggregates) in each column. To give you some idea of scale, the database has ~5,000 Musicians with around 20 fields of related data. Up to here is fine and I have a working implementation. However, it is important that I have the ability for a given user to upload their own annotation data sets (more than one) and then filter and order on these in the same way they can with the existing data. The way I had tried to do this was to add the models: class UserDataSets(models.Model): user = models.ForeignKey(User) name = models.CharField(max_length=100) description = models.CharField(max_length=64) results = models.ManyToManyField(Musician, through='UserData') class UserData(models.Model): artist = models.ForeignKey(Musician) dataset = models.ForeignKey(UserDataSets) score = models.IntegerField() class Meta: unique_together = (("artist", "dataset"),) I have a simple upload mechanism enabling users to upload a data set file that consists of a 1 to 1 relationship between a Musician and their "score". Within a given user dataset each artist will be unique, but different datasets are independent from each other and will often contain entries for the same musician. This worked fine for displaying the data; starting from a given artist I can do something like this: artist = Musician.objects.get(pk=1) dataset = UserDataSets.objects.get(pk=5) print artist.userdata_set.get(dataset=dataset.pk) However, this approach fell over when I came to implement the filtering and ordering of the query set of musicians based on the data contained in a single user data set. For example, I could easily order the query set based on all of the data in the UserData table like this: artists = Musician.objects.all().order_by('userdata__score') But that does not help me order by the results of a given single user dataset. Likewise I need to be able to filter the query set based on the "scores" from different user data sets (e.g. find all musicians with a score > 5 in dataset1 and < 2 in dataset2). Is there a way of doing this, or am I going about the whole thing wrong?
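
    For what it's worth, the Django ORM can express both of these; the key (documented under "spanning multi-valued relationships") is that conditions inside a single filter() call must match the same UserData row, while each chained filter() call gets its own join. A hedged sketch, assuming the models above (ds2 is just a hypothetical second dataset):

        # Rank musicians by their score within one particular dataset:
        ds1 = UserDataSets.objects.get(pk=5)
        ranked = (Musician.objects
                  .filter(userdata__dataset=ds1)
                  .order_by('userdata__score'))

        # "score > 5 in dataset1 AND score < 2 in dataset2":
        ds2 = UserDataSets.objects.get(pk=6)  # hypothetical second dataset
        hits = (Musician.objects
                .filter(userdata__dataset=ds1, userdata__score__gt=5)   # same UserData row
                .filter(userdata__dataset=ds2, userdata__score__lt=2))  # separate join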

    Read the article

  • Opacity in IE using absolutely positioned divs not working

    - by camomileCase
    I've been banging my head against the wall for a few hours now trying to sort this out. I'm trying to position one div on top of another for the purpose of fading one in on top of the other. The divs will have an image and some other html in them. I cannot get opacity to work in IE8. I've simplified my html as much as possible: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <style> * { margin: 0; padding: 0; } .carousel-container { position: relative; } .carousel-overlay { position: absolute; } #carousel-container-a { opacity: 1; -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=100)"; filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=100); } #carousel-container-b { opacity: 0; -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=0)"; filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=0); } h1 { font-size: 100px; } </style> </head> <body> <div id="carousel-container-a" class="carousel-container"> <div class="carousel-overlay" style="left: 10px; top: 10px;"> <h1 style="color: black;">Showcase</h1> </div> <!-- other elements removed for simplicity --> </div> <div id="carousel-container-b" class="carousel-container"> <div class="carousel-overlay" style="left: 20px; top: 20px;"> <h1 style="color: red;">Welcome</h1> </div> <!-- other elements removed for simplicity --> </div> </body> </html> Why doesn't the opacity work? How can I make it work?
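
    One thing worth trying (an educated guess, not verified against this exact markup): IE8 only applies the proprietary filter property to elements that "have layout", and a bare position: relative div doesn't qualify. Giving the containers layout with zoom: 1 or an explicit dimension often makes the Alpha filter take effect:

        .carousel-container {
            position: relative;
            zoom: 1;          /* proprietary, but forces hasLayout in IE so filters apply */
            /* width: 960px;     an explicit width is an alternative layout trigger */
        }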

    Read the article

  • Filter Facebook Stream by Post privacy?

    - by fabian
    Hi there, I query some wall data within my Facebook tab. I was wondering how to filter the data (query) to show only posts which are visible to a certain country. $query = " SELECT post_id, created_time, attachment,action_links, privacy FROM stream WHERE source_id = ".$page_id." AND viewer_id = ".$user_id." AND actor_id = ".$actor_id." LIMIT 50"; The output already shows Austria, but how do I filter for Austria only? Array ( [posts] => Array ( [0] => Array ( [post_id] => 123 [viewer_id] => 123 [source_id] => 123 [type] => 46 [app_id] => [attribution] => [actor_id] => 123 [target_id] => [message] => Only for Austria [attachment] => Array ( [description] => ) [app_data] => [action_links] => [comments] => Array ( [can_remove] => 1 [can_post] => 1 [count] => 0 [comment_list] => ) [likes] => Array ( [href] => http://www.facebook.com/social_graph.php?node_id=118229678189906&class=LikeManager [count] => 0 [sample] => [friends] => [user_likes] => 0 [can_like] => 1 ) [privacy] => Array ( [description] => Austria [value] => CUSTOM [friends] => [networks] => [allow] => [deny] => ) [updated_time] => 1271520716 [created_time] => 1271520716 [tagged_ids] => [is_hidden] => 0 [filter_key] => [permalink] => http://www.facebook.com/pages/ )
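
    As far as I know, FQL only allows WHERE clauses on indexed columns, and the sub-fields of the privacy structure aren't indexed, so a pragmatic workaround is to filter the decoded result in PHP. A sketch, assuming the response has been decoded into an array shaped like the dump above:

        $austria_posts = array();
        foreach ($result['posts'] as $post) {
            // keep only posts whose privacy description names the target country
            if (isset($post['privacy']['description'])
                    && $post['privacy']['description'] == 'Austria') {
                $austria_posts[] = $post;
            }
        }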

    Read the article

  • "Use of uninitialised value" despite of memset

    - by Framester
    Hi there, I allocate a 2d array and use memset to fill it with zeros. #include<stdio.h> #include<string.h> #include<stdlib.h> void main() { int m=10; int n =10; int **array_2d; array_2d = (int**) malloc(m*sizeof(int*)); if(array_2d==NULL) { printf("\n Could not malloc 2d array \n"); exit(1); } for(int i=0;i<m;i++) { ((array_2d)[i])=malloc(n*sizeof(int)); memset(((array_2d)[i]),0,sizeof(n*sizeof(int))); } for(int i=0; i<10;i++){ for(int j=0; j<10;j++){ printf("(%i,%i)=",i,j); fflush(stdout); printf("%i ", array_2d[i][j]); } printf("\n"); } } Afterwards I use valgrind [1] to check for memory errors. I get following error: Conditional jump or move depends on uninitialised value(s) for line 24 (printf("%i ", array_2d[i][j]);). I always thought memset is the function to initialize arrays. How can I get rid of this error? Thanks! Valgrind output: ==3485== Memcheck, a memory error detector ==3485== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. ==3485== Using Valgrind-3.5.0-Debian and LibVEX; rerun with -h for copyright info ==3485== Command: ./a.out ==3485== (0,0)=0 (0,1)===3485== Use of uninitialised value of size 4 ==3485== at 0x409E186: _itoa_word (_itoa.c:195) ==3485== by 0x40A1AD1: vfprintf (vfprintf.c:1613) ==3485== by 0x40A8FFF: printf (printf.c:35) ==3485== by 0x8048724: main (playing_with_valgrind.c:39) ==3485== ==3485== ==3485== ---- Attach to debugger ? --- [Return/N/n/Y/y/C/c] ---- ==3485== Conditional jump or move depends on uninitialised value(s) ==3485== at 0x409E18E: _itoa_word (_itoa.c:195) ==3485== by 0x40A1AD1: vfprintf (vfprintf.c:1613) ==3485== by 0x40A8FFF: printf (printf.c:35) ==3485== by 0x8048724: main (playing_with_valgrind.c:39) [1] valgrind --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes --db-attach=yes ./a.out [gcc-cmd] gcc -std=c99 -lm -Wall -g3 playing_with_valgrind.c
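
    The culprit is the third argument to memset: sizeof(n*sizeof(int)) is the size of the multiplication's result (a size_t, 4 bytes on this 32-bit build), not n*sizeof(int) bytes, so only the first int of each row is zeroed - which matches valgrind flagging (0,1) as the first uninitialised read. A sketch of the fix:

        /* zero all n ints in the row, not sizeof(size_t) bytes */
        memset(array_2d[i], 0, n * sizeof(int));

        /* or skip memset entirely and get zeroed memory from the allocator */
        array_2d[i] = calloc(n, sizeof(int));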

    Read the article

  • Safely escaping and reading back a file path in ruby

    - by user336851
    I need to save a few pieces of information about some files. Nothing too fancy, so I thought I would go with a simple one-line-per-item text file. Something like this: # write io.print "%i %s %s\n" % [File.mtime(fname), fname, Digest::SHA1.file(fname).hexdigest] # read io.each do |line| mtime, name, hash = line.scanf "%i %s %s" end Of course this doesn't work because a file name can contain spaces (breaking scanf) and line breaks (breaking IO#each). The line break problem can be avoided by dropping the use of each and going with a bunch of gets(' ') while not io.eof? mtime = Time.at(io.gets(" ").to_i) name = io.gets " " hash = io.gets "\n" end Dealing with spaces in the names is another matter. Now we need to do some escaping. Note: I like space as a record delimiter, but I'd have no issue changing it for one easier to use. In the case of filenames though, the only one that could help is ASCII NUL "\0", but a NUL-delimited file isn't really a text file anymore... I initially had a wall of text detailing the iterations of my struggle to make a correct escaping function and its reciprocal, but it was just boring and not really useful. I'll just give you the final result: def write_name(io, val) io << val.gsub(/([\\ ])/, "\\\\\\1") # yes, that's 6 backslashes! end def read_name(io) name, continued = "", true while continued continued = false name += io.gets(' ').gsub(/\\(.)/) do |c| if c=="\\\\" "\\" elsif c=="\\ " continued=true " " else raise "unexpected backslash escape : %p (%s %i)" % [c, io.path, io.pos] end end end return name.chomp(' ') end I'm not happy at all with read_name. Way too long and awkward; I feel it shouldn't be that hard. While trying to make this work I tried to come up with other ways: the bittorrent-encode / PHP serialize way: prefix the file name with the length of the name, then just io.read(name_len.to_i). It works, but it's a real pain to edit the file by hand. At this point we're halfway to a binary format. String#inspect: This one looks expressly made for that purpose! Except it seems like the only way to get the value back is through eval. I hate the idea of eval-ing a string I didn't generate from trusted data. So. Opinions? Isn't there some lib which can do all this? Am I missing something obvious? How would you do that?
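
    One eval-free alternative (a sketch, Ruby 1.9+, not battle-tested): percent-encode exactly the bytes that break the format - '%' itself plus whitespace - so every field becomes a space-free token. Reading is then a plain split, and decoding is a single stateless gsub instead of the backslash bookkeeping above:

        require 'digest/sha1'

        def encode_name(name)
          # escape '%' plus anything that collides with the delimiters
          name.gsub(/[%\s]/) { |c| "%%%02X" % c.ord }
        end

        def decode_name(token)
          token.gsub(/%([0-9A-Fa-f]{2})/) { $1.hex.chr }
        end

        # write
        io.print "%i %s %s\n" % [File.mtime(fname).to_i, encode_name(fname),
                                 Digest::SHA1.file(fname).hexdigest]

        # read: three space-free tokens per line, so a plain split works
        io.each_line do |line|
          mtime, name, hash = line.chomp.split(' ', 3)
          name = decode_name(name)
        end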

    Read the article

  • Maps with a nested vector

    - by wawiti
    For some reason the compiler won't let me retrieve the vector of integers from the map that I've created, I want to be able to overwrite this vector with a new vector. The error the compiler gives me is ridiculous. Thanks for your help!! The compiler didn't like this part of my code: line_num = miss_words[word_1]; Error: [Wawiti@localhost Lab2]$ g++ -g -Wall *.cpp -o lab2 main.cpp: In function ‘int main(int, char**)’: main.cpp:156:49: error: no match for ‘operator=’ in ‘miss_words.std::map<_Key, _Tp, _Compare, _Alloc>::operator[]<std::basic_string<char>, std::vector<int>, std::less<std::basic_string<char> >, std::allocator<std::pair<const std::basic_string<char>, std::vector<int> > > >((*(const key_type*)(& word_1))) = line_num.std::vector<_Tp, _Alloc>::push_back<int, std::allocator<int> >((*(const value_type*)(& line)))’ main.cpp:156:49: note: candidate is: In file included from /usr/lib/gcc/x86_64-redhat->linux/4.7.2/../../../../include/c++/4.7.2vector:70:0, from header.h:19, from main.cpp:15: /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/bits/vector.tcc:161:5: note: std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(const std::vector<_Tp, _Alloc>&) [with _Tp = int; _Alloc = std::allocator<int>] /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/bits/vector.tcc:161:5: note: no known conversion for argument 1 from ‘void’ to ‘const std::vector<int>&’ CODE: map<string, vector<int> > miss_words; // Creates a map for misspelled words string word_1; // String for word; string sentence; // To store each line; vector<int> line_num; // To store line numbers ifstream file; // Opens file to be spell checked file.open(argv[2]); int line = 1; while(getline(file, sentence)) // Reads in file sentence by sentence { sentence=remove_punct(sentence); // Removes punctuation from sentence stringstream pars_sentence; // Creates stringstream pars_sentence << sentence; // Places sentence in a stringstream while(pars_sentence >> word_1) // Picks apart sentence word by word { if(dictionary.find(word_1)==dictionary.end()) { line_num = miss_words[word_1]; //Compiler doesn't like this miss_words[word_1] = line_num.push_back(line); } } line++; // Increments line marker }
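
    The error is long-winded but precise: vector::push_back returns void, and the code tries to assign that void result back into the map. Since map::operator[] already returns a reference to the stored vector (default-constructing an empty one on first access), the two offending lines collapse into one:

        // operator[] default-constructs an empty vector<int> for a new word
        // and returns a reference, so push the line number into it directly:
        miss_words[word_1].push_back(line);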

    Read the article

  • KeyNotFound Exception in Dictionary(of T)

    - by C Patton
    I'm about ready to bang my head against the wall. I have a class called Map which has a dictionary called tiles. class Map { public Dictionary<Location, Tile> tiles = new Dictionary<Location, Tile>(); public Size mapSize; public Map(Size size) { this.mapSize = size; } //etc... I fill this dictionary temporarily to test some things.. public void FillTemp(Dictionary<int, Item> itemInfo) { Random r = new Random(); for(int i =0; i < mapSize.Width; i++) { for(int j=0; j<mapSize.Height; j++) { Location temp = new Location(i, j, 0); int rint = r.Next(0, (itemInfo.Count - 1)); Tile t = new Tile(new Item(rint, rint)); tiles[temp] = t; } } } and in my main program code Map m = new Map(10, 10); m.FillTemp(iInfo); Tile t = m.GetTile(new Location(2, 2, 0)); //The problem line now, if I add a breakpoint in my code, I can clearly see that my instance (m) of the map class is filled with pairs via the function above, but when I try to access a value with the GetTile function: public Tile GetTile(Location location) { if(this.tiles.ContainsKey(location)) { return this.tiles[location]; } else { return null; } } it ALWAYS returns null. Again, if I view inside the Map object and find the Location key where x=2,y=2,z=0 , I clearly see the value being a Tile that FillTemp generated.. Why is it doing this? I've had no problems with a Dictionary such as this so far. I have no idea why it's returning null. and again, when debugging, I can CLEARLY see that the Map instance contains the Location key it says it does not... very frustrating. Any clues? Need any more info? Help would be greatly appreciated :)
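
    Dictionary<TKey, TValue> finds keys via GetHashCode() and Equals(). If Location is a class that doesn't override them, two Location objects with identical coordinates are still "different" keys (reference equality), so a freshly constructed new Location(2, 2, 0) never matches the stored key - exactly the symptom described. A sketch, assuming Location wraps three ints (the field names here are guesses):

        public class Location
        {
            public int X, Y, Z;   // assumed fields

            public Location(int x, int y, int z) { X = x; Y = y; Z = z; }

            public override bool Equals(object obj)
            {
                Location other = obj as Location;
                return other != null && other.X == X && other.Y == Y && other.Z == Z;
            }

            public override int GetHashCode()
            {
                // cheap coordinate mix; any stable combination works
                return X ^ (Y << 10) ^ (Z << 20);
            }
        }

    Declaring Location as a struct with those three fields would also work, since value types get member-wise equality by default.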

    Read the article

  • Vacuum spread in a tile-based space game (like in Faster Than Light game)

    - by Reeze
    I have a space game with a tilemap that looks like this (simplified): Map view - from top (like in SimCity 1) 0 - room space, 1 - some kind of wall, 8 - "lock" between rooms public int[,] _layer = new int[,] { { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 1, 1, 8, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 1, 0, 8, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, }; Each tile contains an Air property (100 = air, 0 = vacuum). I made a little helper method to take the tiles near a tile with vacuum (to "suck air"): Point[] GetNearCells(Point cell, int distance = 1, bool diag = false) { Point topCell = new Point(cell.X, cell.Y - distance); Point botCell = new Point(cell.X, cell.Y + distance); Point leftCell = new Point(cell.X - distance, cell.Y); Point rightCell = new Point(cell.X + distance, cell.Y); if (diag) { return new[] { topCell, botCell, leftCell, rightCell }; } Point topLeftCell = new Point(cell.X - distance, cell.Y - distance); Point topRightCell = new Point(cell.X + distance, cell.Y + distance); Point botLeftCell = new Point(cell.X - distance, cell.Y - distance); Point botRightCell = new Point(cell.X - distance, cell.Y - distance); return new[] { topCell, botCell, leftCell, rightCell, topLeftCell, topRightCell, botLeftCell, botRightCell }; } What is the best practice to fill rooms with vacuum (decrease air) from some point? Should I use some kind of water flow? Thank you for any help!
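
    A breadth-first flood fill is the standard tool here: start at the breached tile and expand outward through room tiles (0), stopping at walls (1) and closed locks (8); draining everything instantly is the simplest model, and decrementing Air per game tick gives a gradual effect instead. (Side note: the four diagonal Points in GetNearCells all reuse the cell.X - distance, cell.Y - distance offsets, which looks like a copy-paste slip.) A sketch, assuming hypothetical IsWall/IsLock/SetAir helpers over the tile data:

        // BFS flood fill: drains air outward from the breach at 'start'.
        void SpreadVacuum(Point start)
        {
            var queue = new Queue<Point>();
            var seen = new HashSet<Point> { start };
            queue.Enqueue(start);

            while (queue.Count > 0)
            {
                Point cell = queue.Dequeue();
                SetAir(cell, 0); // instantaneous; decrement per tick for gradual loss

                // diag: true returns just the 4 orthogonal neighbours above
                foreach (Point n in GetNearCells(cell, 1, true))
                {
                    if (!seen.Contains(n) && !IsWall(n) && !IsLock(n))
                    {
                        seen.Add(n);
                        queue.Enqueue(n);
                    }
                }
            }
        }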

    Read the article

  • 'Scanner' does not name a type error in g++

    - by Max
    Hi. I'm trying to compile code in g++ and I get the following errors: In file included from scanner.hpp:8, from scanner.cpp:5: parser.hpp:14: error: ‘Scanner’ does not name a type parser.hpp:15: error: ‘Token’ does not name a type Here's my g++ command: g++ parser.cpp scanner.cpp -Wall Here's parser.hpp: #ifndef PARSER_HPP #define PARSER_HPP #include <string> #include <map> #include "scanner.hpp" using std::string; class Parser { // Member Variables private: Scanner lex; // Lexical analyzer Token look; // tracks the current lookahead token // Member Functions <some function declarations> }; #endif and here's scanner.hpp: #ifndef SCANNER_HPP #define SCANNER_HPP #include <iostream> #include <cctype> #include <string> #include <map> #include "parser.hpp" using std::string; using std::map; enum { // reserved words BOOL, ELSE, IF, TRUE, WHILE, DO, FALSE, INT, VOID, // punctuation and operators LPAREN, RPAREN, LBRACK, RBRACK, LBRACE, RBRACE, SEMI, COMMA, PLUS, MINUS, TIMES, DIV, MOD, AND, OR, NOT, IS, ADDR, EQ, NE, LT, GT, LE, GE, // symbolic constants NUM, ID, ENDFILE, ERROR }; class Token { public: int tag; int value; string lexeme; Token() {tag = 0;} Token(int t) {tag = t;} }; class Num : public Token { public: Num(int v) {tag = NUM; value = v;} }; class Word : public Token { public: Word() {tag = 0; lexeme = "default";} Word(int t, string l) {tag = t; lexeme = l;} }; class Scanner { private: int line; // which line the compiler is currently on int depth; // how deep in the parse tree the compiler is map<string,Word> words; // list of reserved words and used identifiers // Member Functions public: Scanner(); Token scan(); string printTag(int); friend class Parser; }; #endif anyone see the problem? I feel like I'm missing something incredibly obvious.
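
    This is a circular include: scanner.hpp includes parser.hpp, which includes scanner.hpp back; the include guards stop the recursion, so parser.hpp ends up being compiled before Scanner and Token are defined. parser.hpp genuinely needs the full definitions, but scanner.hpp only mentions Parser in a friend declaration, which needs no header at all - a forward declaration breaks the cycle:

        // scanner.hpp - remove '#include "parser.hpp"' entirely
        class Parser;            // forward declaration is enough for 'friend'

        class Scanner {
            // ... as before ...
            friend class Parser; // fine: a friend may be an incomplete type here
        };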

    Read the article

  • No warning from gcc when function definition in linked source different from function prototype in h

    - by c_c
    Hi, I had a problem with a part of my code, which after some iterations seemed to read NaN as the value of an int in a struct. I think I found the error, but am still wondering why gcc (version 3.2.3 on an embedded Linux with busybox) did not warn me. Here are the important parts of the code: A .c file and its header for functions to acquire data over USB: // usb_control.h typedef struct{ double mean; short *values; } DATA_POINTS; typedef struct{ int size; DATA_POINTS *channel1; //....7 more channels } DATA_STRUCT; DATA_STRUCT *create_data_struct(int N); // N values per channel int free_data_struct(DATA_STRUCT *data); int aqcu_data(DATA_STRUCT *data, int N); A .c and header file with helper functions (math, bit shifts, etc.): // helper.h int mean(DATA_STRUCT *data); // helper.c (this is where the error is, obviously) double mean(DATA_STRUCT *data) { // sum in for loop data->channel1->mean = sum/data->N; // ...7 more channels // a printf here displayed the mean values correctly } The main file // main.c #include "helper.h" #include "usb_control.h" // Allocate space for data struct DATA_STRUCT *data = create_data_struct(N); // get data for different delays for (delay = 0; delay < 500; delay += pw){ acqu_data(data, N); mean(data); // printf of the mean values first is correct. Then after 5 iterations // it is always NaN for channel1. The other channels are displayed correctly; } There were no segfaults nor any other misbehavior, just the NaN for channel1 in the main file. After finding the error, which was not easy, it was of course easy to fix. The return type of mean(){} was wrong in the definition. Instead of double mean() it has to be int mean() as the prototype defines. When all the functions are put into one file, gcc warns me that there is a redefinition of the function mean(). But as I compile each .c file separately and link them afterwards, gcc seems to miss that. So my questions would be: Why didn't I get any warnings, even none with gcc -Wall? Or is there still another error hidden which is just not causing problems now? Regards, christian
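
    The short answer: type checking happens per translation unit, and the C linker matches symbols by name only, so a definition that contradicts a prototype in a header it never includes sails straight through. The usual safeguard is to make every .c file include its own header, which turns the mismatch into a compile-time error:

        /* helper.c */
        #include "helper.h"  /* definition is now checked against the prototype */

        double mean(DATA_STRUCT *data)  /* gcc: "error: conflicting types for 'mean'" */
        {
            /* ... */
        }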

    Read the article

  • Terminating a long-executing thread and then starting a new one in response to user changing parameters via UI in an applet

    - by user1817170
    I have an applet which creates music using the JFugue API and plays it for the user. It allows the user to input a music phrase which the piece will be based on, or lets them choose to have a phrase generated randomly. I had been using the following method (successfully) to simply stop and start the music, which runs in a thread using the Player class from JFugue. I generate the music using my classes and user input from the applet GUI...then... private playerThread pthread; private Thread threadPlyr; private Player player; (from variables declaration) public void startMusic(Pattern p) // pattern is a JFugue object which holds the generated music { if (pthread == null) { pthread = new playerThread(); } else { pthread = null; pthread = new playerThread(); } if (threadPlyr == null) { threadPlyr = new Thread(pthread); } else { threadPlyr = null; threadPlyr = new Thread(pthread); } pthread.setPattern(p); threadPlyr.start(); } class playerThread implements Runnable // plays midi using jfugue Player { private Pattern pt; public void setPattern(Pattern p) { pt = p; } @Override public void run() { try { player.play(pt); // takes a couple mins or more to execute resetGUI(); } catch (Exception exception) { } } } And the following to stop music when user presses the stop/start button while Player.isPlaying() is true: public void stopMusic() { threadPlyr.interrupt(); threadPlyr = null; pthread = null; player.stop(); } Now I want to implement a feature which will allow the user to change parameters while the music is playing, create an updated music pattern, and then play THAT pattern. Basically, the idea is to make it simulate "real time" adjustments to the generated music for the user. Well, I have been beating my head against the wall on this for a couple of weeks. I've read all the standard java documentation, researched, read, and searched forums, and I have tried many different ideas, none of which have succeeded. The problem I've run into with all approaches I've tried is that when I start the new thread with the new, updated musical pattern, all the old threads ALSO start, and there is a cacophony of unintelligible noise instead of my desired output. From what I've gathered, the issue seems to be that all the methods I've come across require that the thread is able to periodically check the value of a "flag" variable and then shut itself down from within its "run" block in response to that variable. However, since my thread makes a call that takes several minutes minimum to execute (playing the music), and I need to terminate it WHILE it is executing this, there is really no safe way to do so. So, I'm wondering if there is something I'm missing when it comes to threads, or if perhaps I can accomplish my goal using a totally different approach. Any ideas or guidance is greatly appreciated! Thank you!
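
    Thread.interrupt() won't stop Player.play() (it isn't blocked in an interruptible wait you control), and keeping stale Runnables around that can still reach the player invites exactly this kind of chaos. One pattern worth sketching (hedged - it relies on the behaviour already used in stopMusic() above, where player.stop() makes the blocking play() return): keep exactly one "current" thread reference, stop the player, then hand the new pattern to a fresh thread; superseded threads check whether they are still current before touching the GUI.

        private volatile Thread currentPlayerThread; // only this thread may reset the GUI

        public synchronized void restartMusic(final Pattern p) {
            player.stop(); // unblocks player.play(...) in the old thread, which then exits

            Thread t = new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        player.play(p); // blocks until finished or stop() is called
                    } catch (Exception ignored) {
                    }
                    if (Thread.currentThread() == currentPlayerThread) {
                        resetGUI(); // a superseded thread skips the GUI reset
                    }
                }
            });
            currentPlayerThread = t;
            t.start();
        }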

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate here's a simple example. If you have a Web API method like this:[HttpGet] public HttpResponseMessage Authenticate(string username, string password) { …} and then hit with a URL like this: http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead like this:[HttpPost] public HttpResponseMessage Authenticate(string username, string password) { …} and hit it with a POST HTTP Request like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Content-type: application/x-www-form-urlencoded Content-Length: 30 Username=ricks&Password=sekrit you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back of why I'd want to do this in the first place - after all HTML Form submissions are the domain of MVC and not WebAPI which is a valid point. However, the more common use case is using POST Variables with AJAX calls. The following is quite common for passing simple values:$.post(url,{ Username: "Rick", Password: "sekrit" },function(result) {…}); but alas that doesn't work. How ASP.NET Web API handles Content Bodies Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API Action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from async nature of Web API where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data: Model Binding - object property mapping to bind POST values FormDataCollection - collection of POST keys/values ModelBinding POST Values - Binding POST data to Object Properties The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter. 
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: You have to provide a strongly typed object that can receive the data and this object has to map the inbound data. To rewrite the example above to use ModelBinding I have to create a class maps the properties that I need as parameters:public class LoginData { public string Username { get; set; } public string Password { get; set; } } and then accept the data like this in the API method:[HttpPost] public HttpResponseMessage Authenticate(LoginData login) { string username = login.Username; string password = login.Password; … } This works fine mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: application/jsonContent-type: application/json Content-Length: 40 {"Username":"ricks","Password":"sekrit"} it works as well and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model binding and in fact it's a common practice to use (view) model object for inputs coming back from the client and mapping them into these models. But it can be  kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. Not always do you have to pass structured data, but sometimes there are just a couple of simple response values that need to be sent back. If all you need is to pass a couple operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no further  purpose than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding so the following would still work:[HttpPost] public HttpResponseMessage Authenticate(LoginData login, string loginDomain) but only the object will be bound to POST data. As long as loginDomain comes from the querystring or route data this will work. Collecting POST values with FormDataCollection Another more dynamic approach to handle POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general) and then read the values out individually by querying each. [HttpPost] public HttpResponseMessage Authenticate(FormDataCollection form) { var username = form.Get("Username"); var password = form.Get("Password"); …} The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such as setup as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use and especially with string parameters is easy to deal with. 
It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functional and a viable choice for collecting POST values. What about [FromBody]? Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info on how FromBody works and several related issues to how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding
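
To make the "=content" quirk concrete, here's a minimal sketch (my example, not from the original post) of what a [FromBody] binding looks like and the body it expects:

    [HttpPost]
    public HttpResponseMessage PostRawString([FromBody] string content)
    {
        // the urlencoded body must literally be:  =some%20raw%20value
        // (an empty key); otherwise the parameter arrives as null
        return Request.CreateResponse(HttpStatusCode.OK, content);
    }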
I hope for V.next of WebAPI Microsoft will consider this a feature that's worth adding. Related Articles Passing multiple POST parameters to Web API Controller Methods Mike Stall's post: How Web API does Parameter Binding Where does ASP.NET Web API Fit?© Rick Strahl, West Wind Technologies, 2005-2012Posted in Web Api   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Issue 15: Oracle PartnerNetwork Exchange @ Oracle OpenWorld

    - by rituchhibber
ORACLE FOCUS: Oracle PartnerNetwork Exchange @ Oracle OpenWorld. Sylvie Michou, Senior Director, Partner Marketing & Communications and Strategic Programs. Resources: Oracle OpenWorld 2012 | Oracle PartnerNetwork Exchange @ OpenWorld | Oracle PartnerNetwork Exchange @ OpenWorld Registration | Oracle PartnerNetwork Exchange Specialization Test Fest | Oracle OpenWorld Schedule Builder | Oracle OpenWorld Promotional Toolkit for Partners | Oracle Partner Events | Oracle Partner Webcasts | Oracle EMEA Partner News. If you are attending our forthcoming Oracle OpenWorld 2012 conference in San Francisco from 30 September to 4 October, you will discover a new dedicated programme of keynotes and sessions tailored especially for you, our valued partners. Oracle PartnerNetwork Exchange @ OpenWorld has been created to enhance the opportunities for you to learn from and network with Oracle executives and experts. The programme also provides more informal opportunities than ever throughout the week to meet up with the people who are most important to your business: customers, prospects, colleagues and the Oracle EMEA Alliances & Channels management team. Oracle remains fully focused on building the industry's most admired partner ecosystem—which today spans over 25,000 partners. This new OPN Exchange programme offers an exciting change of pace for partners throughout the conference. Now it will be possible to enjoy a fully-integrated, partner-dedicated session schedule throughout the week, as well as key social events such as the Sunday night Welcome Reception, networking lunches from Monday to Thursday at the Howard Street Tent, and a fantastic closing event on the last Thursday afternoon. In addition to the regular Oracle OpenWorld conference schedule, if you have registered for the Oracle PartnerNetwork Exchange @ OpenWorld programme, you will be invited to attend a much anticipated global partner keynote presentation, plus more than 40 conference sessions aimed squarely at what's most important to you, as partners. Prominent topics for discussion will include: Oracle technologies and roadmaps and how they fit with partners' business plans; business development; regional distinctions in business practices; and much more. Each session will provide plenty of food for thought ahead of the numerous networking opportunities throughout the week, encouraging the knowledge exchange with Oracle executives, customers, prospects, and colleagues that will make this conference of even greater value for you. At Oracle we always work closely with our partners to deliver solution offerings that improve business value, simplify the IT experience and drive innovation and efficiencies for joint customers. The most important element of our new OPN Exchange is content that helps you get more from technology investments, more from your peer-to-peer connections, and more from your interactions with customers. To this end we've created some partner-specific tools which can be used by OPN members ahead of the conference itself. Crucially, a comprehensive Content Catalog already lists and organises details of every OPN Exchange session, speaker, exhibitor, demonstration and related materials. This Content Catalog can be used by all our partners to identify interesting content that you can add to your own personalised Oracle OpenWorld Schedule Builder, allowing more effective planning and pre-enrolment for vital sessions.
There are numerous highlights that you will definitely want to include in those personal schedules. On Sunday morning, 30 September we will start the week with partner dedicated OPN Exchange sessions, following our Global Partner Keynote at 13:00 with Judson Althoff, SVP, Worldwide Alliances & Channels and Embedded Sales and senior executives, giving insight into Oracle's partner vision, strategy, and resources—all designed to help build and strengthen market opportunities for you. This will be followed by a number of OPN Exchange general sessions, the Oracle OpenWorld Opening Keynote with Larry Ellison, CEO, Oracle and concluded with the OPN Exchange AfterDark Welcome Reception, starting at 19:30 at the Metreon. From Monday 1 to Thursday 4 October, you can attend the OPN Exchange sessions that are most relevant to your business today and over the coming year. Oracle's top product and sales leaders will be on hand to discuss Oracle's strategic direction in 40+ targeted and in-depth sessions focussing on critical success factors to develop your business. Oracle's dedication to innovation, specialization, enablement and engineering provides Oracle partners with a huge opportunity to create new services and solutions, differentiate themselves and deliver extreme value to joint customers across the globe. Oracle will even be helping over 1000 partners to earn OPN Specialization certification during the Oracle OpenWorld OPN Exchange Test Fest, which will be providing all the study materials and exams required to drive Specialization for free at the conference. You simply need to check the list of current certification tracks available, and make sure you pre-register to reserve a seat in one of the ten sessions being offered free to OPN Exchange registered attendees. And finally, let's not forget those all-important networking opportunities, which can so often provide partners with valuable long-term alliances as well as exciting new business leads. The Oracle PartnerNetwork Lounge, located at Moscone South, exhibition hall, room 100 is the place where partners can meet formally or informally with colleagues, customers, prospects, and other industry professionals. OPN Specialized partners with OPN Exchange passes can also visit the OPN Video Blogging room to record and share ideas, and at the OPN Information Station you will find consultants available to answer your questions. "For the first time ever we will have a full partner conference within OpenWorld. OPN Exchange @ OpenWorld will kick-off on the first Sunday and run the entire week. We'll have over 40 sessions throughout that time and partners will hear from our top development executives, with special sessions dedicated to partnering throughout. It's going to be a phenomenal event, and we look forward to seeing our partners there." Judson Althoff, SVP, Oracle Worldwide Alliances & Channels and Embedded Sales So if you haven't done so already, please register for Oracle PartnerNetwork Exchange @ OpenWorld today or add OPN Exchange to your existing registration for just $100 through My Account. 
And if you have any further questions regarding partner activities at Oracle OpenWorld, please don't hesitate to contact the Oracle PartnerNetwork team at [email protected]. Oracle will be on hand to share the very latest information about: Oracle's SPARC Superclusters: the latest Engineered Systems from Oracle, delivering radically improved performance, faster deployment and greatly reduced operational costs for mixed database and enterprise application consolidation Oracle's SPARC T4 servers: with the newly developed T4 processor and Oracle Solaris providing up to five times the single threaded performance and better overall system throughput for expanded application versatility Oracle Database Appliance: a new way to take advantage of the world's most popular database, Oracle Database 11g, in a single, easy-to-deploy and manage system. It's a complete package engineered to deliver simple, reliable and affordable database services to small and medium size businesses and departmental systems. All hardware and software components are supported together and offer customers unique pay-as-you-grow software licensing to quickly scale from two to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades Oracle Exalogic: the world's only integrated cloud machine, featuring server hardware and middleware software engineered together for maximum performance with minimum set-up and operational cost Oracle Exadata Database Machine: the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. It is a complete package of servers, storage, networking and software that is massively scalable, secure and redundant Oracle Sun ZFS Storage Appliances: providing enterprise-class NAS performance, price-performance, manageability and TCO by combining third-generation software with high-performance controllers, flash-based caches and disks Oracle Pillar Axiom Quality-of-Service: confidently consolidate storage for multiple applications into a single datacentre storage solution Oracle Solaris 11: delivering secure enterprise cloud deployments with the ability to run hundreds of virtual application with no overhead and co-engineered with other Oracle software products to provide the highest levels of security, manageability and performance Oracle Enterprise Manager 12c: Oracle's integrated enterprise IT management product, providing the industry's only complete, integrated and business-driven enterprise cloud management solution Oracle VM 3.0: the latest release of Oracle's server virtualisation and management solution, helping to move datacentres beyond server consolidation to improve application deployment and management. Register today and ensure your place at the Extreme Performance Tour! Extreme Performance Tour events are free to attend, but places are limited. To make sure that you don't miss out, please visit Oracle's Extreme Performance Tour website, select the city that you'd be interested in attending an event in, and then click on the 'Register Now' button for that city to secure your interest. Each individual city page also contains more in-depth information about your local event, including logistics, agenda and maybe even a preview of VIP guest speakers.
The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 Sponsorship Opportunities at Oracle OpenWorld 2010 Advertising Opportunities at Oracle OpenWorld 2010 -- Back to the welcome page

    Read the article

  • Entity Association Mapping with Code First Part 1: Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First was released by the data team at Microsoft. Entity Framework Code-First provides a pretty powerful code-centric way to work with databases. When it comes to associations, it brings ultimate flexibility. I'm a big fan of the EF Code First approach and am planning to explain association mapping with code first in a series of blog posts, and this one is dedicated to Complex Types. If you are new to the Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around relationship mapping.   What is Mapping? Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases. What is Relationship mapping? A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects. Types of Relationships There are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types: One-to-one relationships: This is a relationship where the maximum of each of its multiplicities is one. One-to-many relationships: Also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one. Many-to-many relationships: This is a relationship where the maximum of both multiplicities is greater than one. The second category is based on directionality and it contains two types: Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, when a navigation property exists only on one of the association ends and not on both. Bi-directional relationships: When the objects on both ends of the relationship know of each other (i.e. a navigation property defined on both ends). How Object Relationships Are Implemented in POCO domain models? When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that references the other object (e.g. an Address property on User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of other object. How Relational Database Relationships Are Implemented? Relationships in relational databases are maintained through the use of Foreign Keys. A foreign key is a data attribute(s) that appears in one table and must be the primary key or other candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the "one table" to the "many table". We could also choose to implement a one-to-many relationship via an associative table (aka Join table), effectively making it a many-to-many relationship. Introducing the Model Now, let's review the model that we are going to use in order to implement Complex Type with Code First. It's a simple object model which consists of two classes: User and Address. Each user could have one billing address. The Address information of a User is modeled as a separate class as you can see in the UML model below: In object-modeling terms, this association is a kind of aggregation—a part-of relationship.
Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole. Fine-grained domain models The motivation behind this design was to achieve fine-grained domain models. In crude terms, fine-grained means "more classes than tables". For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it's much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse and is more understandable. Complex Types: Splitting a Table Across Multiple Types Back to our model, there is no difference between this composition and other weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: A composed class is often a candidate Complex Type. But C# has no concept of composition—a class or property can't be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on Address class), which makes sense because when it comes to the database everything is going to be saved into one single table. How to implement Complex Types with Code First Code First has a concept of Complex Type Discovery that works based on a set of Conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation: public class User{    public int UserId { get; set; }    public string FirstName { get; set; }    public string LastName { get; set; }    public string Username { get; set; }    public Address Address { get; set; }} public class Address {     public string Street { get; set; }     public string City { get; set; }            public string PostalCode { get; set; }        }public class EntityMappingContext : DbContext {     public DbSet<User> Users { get; set; }        } With code first, this is all of the code we need to write to create a complex type; we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API. Database Schema The mapping result for this object model is as follows: Limitations of this mapping There are two important limitations to classes mapped as Complex Types: Shared references are not possible: The Address Complex Type doesn't have its own database identity (primary key) and so can't be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address).
Database Schema
The mapping result for this object model is a single Users table that contains the Address columns alongside the User columns.

Limitations of This Mapping
There are two important limitations to classes mapped as Complex Types:
- Shared references are not possible: The Address complex type doesn't have its own database identity (primary key), so it can't be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address).
- No elegant way to represent a null reference: There is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object, even if the values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database.

Summary
In this post we learned about fine-grained domain models, of which complex types are just one example. Fine-grained modeling is fully supported by EF Code First and is known as the most important requirement for a rich domain model. A complex type is usually the simplest way to represent a one-to-one relationship, and because the lifecycle of the part is almost always dependent in such a case, it's either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and learn about other ways to map a one-to-one association that do not have the limitations of complex types.

References
- ADO.NET team blog
- Mapping Objects to Relational Databases
- Java Persistence with Hibernate

    Read the article

  • Cutting edge technology, a lone Movember ranger and a 5-a-side football club ...meet the team at Oracle’s Belfast Offices.

    - by user10729410
By Olivia O'Connell

To see what's in store at Oracle's next Open Day, which comes to Belfast this week, I visited the offices with some colleagues to meet the team and get a feel for what's in store on November 29th. After being warmly greeted by Frances and Francesca, who make sure Front of House and Facilities run smoothly, and after a quick tour of the two floors Oracle occupies, led by VP Bo, it was time to seek out some willing volunteers to be interviewed and photographed - what a shy bunch! A bit of coaxing from the social media team was needed here! In a male-dominated environment, the few women on the team caught my eye immediately. I got chatting to Susan, a business analyst, and Bronagh, a tech writer. It became clear during our chat that the male/female divide is not an issue. "Everyone here just gets on with the job," says Susan. "We're all around the same age and have similar priorities, and luckily everyone is really friendly, so there are no problems." A graduate of Queen's University Belfast majoring in maths and computer science, Susan works closely with product management and the development teams to ensure that the final project delivered to clients meets and exceeds their expectations. Bronagh, who joined after working for a tech company in Montreal and gaining her postgrad degree at the University of Ulster, agrees that the work is challenging but "the environment is so relaxed and friendly".
Software developer David is taking the Movember challenge for the first time to raise vital funds and awareness for men's health. Like other colleagues in the office, he is a University of Ulster graduate; he works on reference applications and merchandising tools which enable customers to establish e-shops using Oracle technologies. The social activities are headed up by Gordon, a software engineer on the commerce team who joined the team four years ago after graduating from the University of Strathclyde in Glasgow with a degree in Computer Science. Everyone is unanimous that the best things about working at Oracle's Belfast offices are the casual, friendly environment and the opportunity to be at the cutting edge of technology. We're looking forward to our next trip to Belfast for some cool demos and to meet candidates. And as for the camera-shyness? Look who came out to have their picture taken at the end of the day!

The Oracle offices in Belfast are located on the 6th floor, Victoria House, Gloucester Street, Belfast BT1 4LS, UK.

Open Day takes place on Thursday, 29th November, 4pm – 8pm. Visit the five demo stations to find out more about each team's activities and projects to date.
See live demos including the "Engaging the Customer", "Managing Your Store", "Helping the Customer", "Shopping On-line", and "The Commerce Experience" processes. The "Working @Oracle" stand will give you the chance to connect with our recruitment team and get information about the recruitment process and building your career path at Oracle. Register here.

    Read the article

  • How do I install PyYAML into local install of Python?

    - by Daryl Spitzer
    I've installed Python 2.6.4 into (a subdirectory in) my home directory on a Linux machine with Python 2.3.4 pre-installed, because I need to run some code that I've decided would require too much work to make it run on 2.3.4. (I'm not on the sudoers list for that machine.) I was hoping I could run ~/Python-2.6.4/python setup.py install (from the PyYAML directory in my home directory, where I untarred the PyYAML sources) and it would be smart enough to install it into my local Python 2.6.4 install. But it's not. (See the P.S.) Is it possible to install PyYAML into my local Python install, so that "import yaml" will work when I invoke that Python? If so, how do I do that? P.S. Here's the output when I ran ~/Python-2.6.4/python setup.py install: running install running build running build_py creating build/lib.linux-ppc64-2.6 creating build/lib.linux-ppc64-2.6/yaml copying lib/yaml/composer.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/nodes.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/dumper.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/resolver.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/events.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/emitter.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/error.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/loader.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/cyaml.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/scanner.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/__init__.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/serializer.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/reader.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/representer.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/constructor.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/tokens.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/parser.py -> build/lib.linux-ppc64-2.6/yaml running build_ext creating build/temp.linux-ppc64-2.6 checking if libyaml is compilable gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/dspitzer/Python-2.6.4/Include -I/home/dspitzer/Python-2.6.4 -c build/temp.linux-ppc64-2.6/check_libyaml.c -o build/temp.linux-ppc64-2.6/check_libyaml.o build/temp.linux-ppc64-2.6/check_libyaml.c:2:18: yaml.h: No such file or directory build/temp.linux-ppc64-2.6/check_libyaml.c: In function `main': build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: `yaml_parser_t' undeclared (first use in this function) build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: (Each undeclared identifier is reported only once build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: for each function it appears in.) 
build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: syntax error before "parser" build/temp.linux-ppc64-2.6/check_libyaml.c:6: error: `yaml_emitter_t' undeclared (first use in this function) build/temp.linux-ppc64-2.6/check_libyaml.c:8: warning: implicit declaration of function `yaml_parser_initialize' build/temp.linux-ppc64-2.6/check_libyaml.c:8: error: `parser' undeclared (first use in this function) build/temp.linux-ppc64-2.6/check_libyaml.c:9: warning: implicit declaration of function `yaml_parser_delete' build/temp.linux-ppc64-2.6/check_libyaml.c:11: warning: implicit declaration of function `yaml_emitter_initialize' build/temp.linux-ppc64-2.6/check_libyaml.c:11: error: `emitter' undeclared (first use in this function) build/temp.linux-ppc64-2.6/check_libyaml.c:12: warning: implicit declaration of function `yaml_emitter_delete' libyaml is not found or a compiler error: forcing --without-libyaml (if libyaml is installed correctly, you may need to specify the option --include-dirs or uncomment and modify the parameter include_dirs in setup.cfg) running install_lib creating /usr/local/lib/python2.6 error: could not create '/usr/local/lib/python2.6': Permission denied

    Read the article

  • eBooks on iPad vs. Kindle: More Debate than Smackdown

    - by andrewbrust
When the iPad was presented at its San Francisco launch event on January 28th, Steve Jobs spent a significant amount of time explaining how well the device would serve as an eBook reader. He showed the iBooks reader application and iBookstore and laid down the gauntlet before Amazon and its beloved Kindle device. Almost immediately afterwards, criticism came rushing forth that the iPad could never beat the Kindle for book reading. The curious part of that criticism is that virtually no one offering it had actually used the iPad yet. A few weeks later, on April 3rd, the iPad was released for sale in the United States. I bought one on that day, and in the few additional weeks that have elapsed, I've given quite a workout to most of its capabilities, including its eBook features. I've also spent some time with the Kindle, albeit a first-generation model, to see how it actually compares to the iPad. I had some expectations going in, but I came away with conclusions about each device that were more scenario-based than absolute. I present my findings to you here.

Vital Statistics
Let's start with an inventory of each device's underlying technology. The iPad has a color, backlit LCD screen and an on-screen keyboard. It has a battery which, on a full charge, lasts anywhere from 6-10 hours. The Kindle offers a monochrome, reflective E Ink display, a physical keyboard and a battery that on my first-gen loaner unit can go up to a week between charges (Amazon claims the battery on the Kindle 2 can last up to 2 weeks on a single charge). The Kindle connects to Amazon's Kindle Store using a 3G modem (the technology and network vary depending on the model) that incurs no airtime service charges whatsoever. The iPad units that are on sale today work over WiFi only. 3G-equipped models will be on sale shortly and will command a $130 premium over their WiFi-only counterparts. 3G service on the iPad, in the U.S. from AT&T, will be fee-based, with a 250MB plan at $14.99 per month and an unlimited plan at $29.99. No contract is required for 3G service. All these tech specs aside, I think a more useful observation is that the iPad is a multi-purpose Internet-connected entertainment device, while the Kindle is a dedicated reading device. The question is whether those differences in design and intended use create a clear-cut winner for reading electronic publications. Let's take a look at each device, in isolation, now.

Kindle
To me, what's most innovative about the Kindle is its E Ink display. E Ink really looks like ink on a sheet of paper. It requires no backlight, it's fully visible in direct sunlight, and it causes almost none of the eyestrain that LCD-based computer display technology (like that used on the iPad) does. It's really versatile in an all-around way. Forgive me if this sounds precious, but reading on it is really a joy. In fact, it's a genuinely relaxing experience. Through the Kindle Store, Amazon allows users to download books (including audio books), magazines, newspapers and blog feeds. Magazines and newspapers can be purchased either on a single-issue basis or as an annual subscription. Books, of course, are purchased singly. Oddly, blogs are not free, but instead carry a monthly subscription fee, typically $1.99. To me this is ludicrous, but I suppose the free 3G service is partially to blame. Books and magazine issues download quickly. Magazine and blog subscriptions cause new issues or posts to be pushed to your device on an automated basis.
Available blogs include 9000-odd feeds that Amazon offers on the Kindle Store; unless I missed something, arbitrary RSS feeds are not supported (though there are third-party workarounds to this limitation). The shopping experience is well integrated, has a huge selection, and offers certain graphical perks. For example, magazine and newspaper logos are displayed in menus, and book cover thumbnails appear as well. A simple search mechanism is provided and text entry through the physical keyboard is relatively painless. It's very easy and straightforward to enter the store, find something you like and start reading it quickly. If you know what you're looking for, it's even faster. Given the Kindle's high portability, very reliable battery, instant-on capability and highly integrated content acquisition, it makes reading on a whim, and in random spurts of downtime, very attractive. The Kindle's home screen lists all of your publications, and easily lets you select one, then start reading it. Once opened, publications display in crisp, attractive text that is adjustable in size. "Turning" pages is achieved through buttons dedicated to the task. Notes can be recorded, bookmarks can be saved and pages can be saved as clippings. I am not an avid book reader, and yet I found the Kindle made it really fun, convenient and soothing to read. There's something about the easy access to the material and the simplicity of the display that makes the Kindle seduce you into chilling out and reading page after page. On the other hand, the Kindle has an awkward navigation interface. While menus are displayed clearly on the screen, the method of selecting menu items is tricky: alongside the right-hand edge of the main display is a thin column that acts as a second display. It has a white background, and a scrollable silver cursor that is moved up or down through the use of the device's scrollwheel. Picking a menu item on the main display involves scrolling the silver cursor to a position parallel to that menu item and pushing the scrollwheel in. This navigation technique creates a disconnect, literally. You don't really click on a selection so much as you gesture toward it. I got used to this technique quickly, but I didn't love it. It definitely created a kind of anxiety in me, making me feel the need to speed through menus and get to my destination document quickly. Once there, I could calm down and relax. Books are great on the Kindle. Magazines and newspapers much less so. I found the rendering of photographs, and even illustrations, to be unacceptably crude. For this reason, I expect that reading textbooks on the Kindle may leave students wanting. I found that the original flow and layout of any publication was sacrificed on the Kindle. In effect, browsing a magazine or newspaper was almost impossible. Reading the text of individual articles was enjoyable, but having to read this way made the whole experience much more "a la carte" than cohesive and thematic between articles. I imagine that for academic journals this is ideal, but for consumer publications it imposes a stripped-down, low-fidelity experience that evokes a sense of deprivation. In general, the Kindle is great for reading text. For just about anything else, especially activity that involves exploratory browsing, meandering and short-attention-span reading, it presents a real barrier to entry and adoption. Avid book readers will enjoy the Kindle (if they're not already). It's a great device for losing oneself in a book over long sittings.
Multitaskers who are more interested in periodicals, be they online or off, will like it much less, as they will find compromise, and even sacrifice, to be palpable.

iPad
The iPad is a very different device from the Kindle. While the Kindle is oriented to pages of text, the iPad orbits around applications and their interfaces. Be it the pinch-and-zoom experience in the browser, the rich media features that augment content on news and weather sites, or the ability to interact with social networking services like Twitter, the iPad is versatile. While it shares a slate-like form factor with the Kindle, it's effectively an elegant personal computer. One of its many features is the iBooks application and integration of the iBookstore. But it's a multi-purpose device. That turns out to be good and bad, depending on what you're reading. The iBookstore is great for browsing. Its color, rich, animation-laden user interface makes it possible to shop for books, rather than merely search for and acquire them. Unfortunately, its selection is rather sparse at the moment. If you're looking for a New York Times bestseller, or other popular titles, you should be OK. If you want to read something more specialized, it's much harder. Unlike the awkward navigation interface of the Kindle, the iPad offers a nearly flawless touch-screen interface that seduces the user into tinkering and kibitzing every bit as much as the Kindle lulls you into a deep, concentrated read. It's a dynamic and interactive device, whereas the Kindle is static and passive. The iBooks reader is slick and fun. Use the iPad in landscape mode and you can read the book in 2-up (left/right 2-page) display; use it in portrait mode and you can read one page at a time. Rather than clicking a hardware button to turn pages, you simply drag and wipe from right to left to flip the single or right-hand page. The page actually travels through an animated path as it would in a physical book. The intuitiveness of the interface is uncanny. The reader also accommodates saving of bookmarks, searching of the text, and the ability to highlight a word and look it up in a dictionary. Pages display brightly and clearly. They're easy to read. But the backlight and the glare made me less comfortable than I was with the Kindle. The knowledge that completely different applications (including the Web and email and Twitter) were just a few taps away made me antsy and very tempted to task-switch. The knowledge that battery life is an issue created subtle discomfort. If the Kindle makes you feel like you're in a library reading room, then the iPad makes you feel, at best, like you're under fluorescent lights at a Barnes and Noble or Borders store. If you're lucky, you'd be on a couch or at a reading table in the store, but you might also be standing up, in the aisles. Clearly, I didn't find this conducive to focused and sustained reading. But that may have more to do with my own tendency to read periodicals far more than books. And, truth be known, the book reading experience, when not explicitly compared to the Kindle's, was still pleasant. It is also important to point out that Kindle Store-sourced books can be read on the iPad through a Kindle reader application, from Amazon, specific to the device. This offered a less rich experience than the iBooks reader, but it was completely adequate. Despite the Kindle brand of the reader, however, it offered little in terms of simulating the reading experience on its namesake device.
When it comes to periodicals, the iPad wins hands down. Magazines, even if merely scanned images of their print editions, read on the iPad in a way that felt similar to reading hard copy. The full-color display, touch navigation, and even the ability to render advertisements in their full glory make the iPad a great way to read through any piece of work that is measured in pages, rather than chapters. There are many ways to get magazines and newspapers onto the iPad, including the Zinio reader, and publication-specific applications like the Wall Street Journal's and Popular Science's. The New York Times' free Editors' Choice application offers a Times Reader-like interface to a subset of the Gray Lady's daily content. The completely Web-based but iPad-optimized Times Skimmer site (at www.nytimes.com/timesskimmer) works well too. Even conventional Web sites themselves can be read much like magazines, given the iPad's ability to zoom in on the text and crop out advertisements on the margins. While the Kindle does have an experimental Web browser, it reminded me a lot of early mobile phone browsers, only in a larger size. For text-heavy sites with simple layouts, it works fine. For just about anything else, it becomes more trouble than it's worth. And given the way magazine articles make me think of things I want to look up online, I think that's a real liability for the Kindle.

Summing Up
What I came to realize is that the Kindle isn't so much a computer or even an Internet device as it is a printer. While it doesn't use physical paper, it still renders its content a page at a time, just like a laser printer does, and its output appears strikingly similar. You can read the rendered text, but you can't interact with it in any way. That's why the navigation requires a separate cursor display area. And because of the page-oriented rendering behavior, turning pages causes a flash on the display and requires a sometimes long pause before the next page is rendered. The good side of this is that once the page is generated, no battery power is required to display it. That makes for great battery life, optimal viewing under most lighting conditions (as long as there is some light) and low-eyestrain, text-centric display of content. The Kindle is highly portable, has an excellent selection in its store and is refreshingly distraction-free. All of this is ideal for reading books. And the iPad doesn't offer any of it. What the iPad does offer is versatility, variety, richness and luxury. It's flush with accoutrements even if it's low on focused, sustained text display. That makes it inferior to the Kindle for book reading. But that also makes it better than the Kindle for almost everything else. As such, and given that its book reading experience is still decent (even if not superior), I think the iPad will give the Kindle a run for its money. True book lovers, and people on a budget, will want the Kindle. People with a robust amount of discretionary income may want both devices. Everyone else who is interested in a slate form factor e-reading device, especially if they also wish to have leisure-friendly Internet access, will likely choose the iPad exclusively. One thing is for sure: the iPad has reduced the Kindle's market, and may have shifted its mass-market potential to a mere niche play. If Amazon is smart, it will improve its iPad-based Kindle reader app significantly. It can then leverage the iPad channel as a significant market for the Kindle Store.
After all, selling the eBooks themselves is what Amazon should care most about.

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
This year's Oracle OpenWorld conference is nearly here, and we're all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the "Product Lifecycle Management and Product Value Chain" sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California.

Monday, October 1
There are great networking activities on Sunday, September 30, but PLM-specific sessions start after the general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions. This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled "Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions", featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, which will cover our complete enterprise PLM strategy, product updates, and roadmaps. It's our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director of IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components. After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental's Ballroom C, and an Agile PLM for Process session located back in the InterContinental's Telegraph Room. Both sessions will have strong content around each product line's latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D'Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it's an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it's sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently.
On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the 5-day Oracle OpenWorld Music Festival!

Tuesday, October 2
Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You'll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com. After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions. Back in the InterContinental Hotel's Telegraph Room, the session on "Ideation and Requirements Management: Capturing the Voice of the Customer" begins at 11:45 a.m. – 12:45 p.m. This may be the session for you if you're struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or an inability to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to their targeted markets and fulfill customer desires. Next, from 1:15 – 2:15 p.m., join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-on Lab, have more questions, or simply want to be inspired by the product's forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application. After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you're interested - here's the map. There's always lots to see and do around the exhibit area. But don't forget the last session of the day from 5:00 p.m. – 6:00 p.m. in Telegraph Hill on Managing Product Innovation and Compliance in Life Science Companies, a "must-see" if you're in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don't perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they've leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment. Tuesday evening, unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.
Wednesday, October 3
We have another full line-up on Wednesday, so be ready for an action-packed day. We start at 10:15 – 11:15 a.m. in the Telegraph Room with a session on "PLM for Consumer Products: Building an Engine for Quality and Innovation", with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner. If you're not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: "Eco-Enterprise Innovation Awards and the Business Case for Sustainability", featuring Jeff Henley, Oracle's Chairman of the Board, and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle's Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role. Next, at 11:45 a.m. – 12:45 p.m. in Telegraph Hill, attend our session devoted to exploring Product Lifecycle Management's role in Software Lifecycle Management. This is a thought leadership session with Oracle experts in the field on the importance of change management, and we'll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn't apply to your business, or you'd rather engage in some lively one-on-one discussions, we also have a "Supply Chain Meet the Experts" session in Moscone West Room 2001A. Product experts, thought leaders, and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast, so try to get in early. At 1:15 – 2:15 p.m., join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they've leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization. Again in Telegraph Hill, from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products. Lastly, we'll wrap up in Telegraph Hill from 5:00 – 6:00 p.m.
with a session on "Ensuring New Product Success by Achieving Excellence in New Product Introduction." This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and one we're very excited about. According to IDC Manufacturing Insights analyst Joe Barkai, "Product Failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions." We'll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what's in store here.

Thursday, October 4
PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, it's a customer- and partner-driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance. Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we're glad you're considering joining the Agile PLM team at Oracle OpenWorld! I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • JMaghreb 2012 Trip Report

    - by arungupta
JMaghreb is the inaugural Java conference organized by the Morocco JUG. It is the biggest Java conference in the Maghreb (five countries in North West Africa). Oracle was the exclusive platinum sponsor, joined by several other sponsors. Registration for the free conference had to be closed at 1,412, and several folks were already on the waiting list. Rabat with 531 registrations and Casablanca with 426 were the top cities. Some statistics:
- 850+ attendees over 2 days, 500+ every day
- 30 sessions delivered by 18 speakers from 10 different countries
- 10 sessions in French and 20 in English
- 6 of the speakers spoke at JavaOne 2012; 8 will be at Devoxx
- Attendees from 5 different countries and 57 cities in Morocco
- 40.9% qualified themselves as professionals, the rest as students
- Topics ranged from HTML5, Java EE 7, ADF, JavaFX, MySQL, JCP, Vaadin, and Android to community topics
- The Java EE 6 hands-on lab was sold out within 7 minutes and the JavaFX lab in 12 minutes

I gave the keynote along with Simon Ritter, which was basically a recap of the Strategy and Technical keynotes presented at JavaOne 2012. An informal survey during the keynote showed the following numbers:
- 25% using NetBeans, 90% on Eclipse, 3 on JDeveloper, 1 on IntelliJ
- About 10 subscribers to the free online Java Magazine. This digital magazine is a comprehensive source of information for everything Java - subscribe for free!!
- About 10-15% using Java SE 7. Download JDK 7 and get started today! Even JDK 8 builds have been available for a while now.

My second talk explained the core concepts of WebSocket and how JSR 356 is providing a standard API to build WebSocket-driven applications in Java EE 7. TOTD #183 explains how you can easily get started with WebSocket in GlassFish 4. The complete slide deck is available. Next day started with a community keynote by Sonya Barry. Some of us live the life of JCP, JSR, EG, EC, RI, etc. every day, but not everybody does. To address that, Sonya prepared an excellent introductory presentation providing an explanation of these terms and how the java.net infrastructure supports Java development. The registration for the lab showed there is a definite demand for these technologies in this part of the world. I delivered the Java EE 6 hands-on lab to a packed room of about 120 attendees. Most of the attendees were able to progress and follow the lab instructions. Some of the attendees did not have a laptop but were taking extensive notes on paper notepads. Several attendees were already using Java EE 6 in their projects, and typically they are the ones asking deep-dive questions. I also gave out three copies of my recently released Java EE 6 Pocket Guide and new GlassFish t-shirts. It definitely feels good to coach ~120 more Java developers in standards-based enterprise Java programming. I also participated in a JCP BoF along with Werner, Sonya, and Badr. Adopt-a-JSR, the java.net infrastructure, how to file a JSR, what an RI is, and other similar topics were discussed in a candid manner. You can follow @JMaghrebConf or check out their Facebook page. java.net published a timely conversation with Badr El Houari - the fearless leader of the Morocco JUG team. Did you know that the Morocco JUG stood for the JCP EC elections? Even though they did not get elected, they did fairly well. Now some sample tweets from #JMaghreb:
- #JMaghreb is over. Impressive for a first edition! Thanks @badrelhouari and all the @MoroccoJUG team !
- Since you @speakjava : System.out.println("Thank you so much dear Tech Evangelist ! The JavaFX was pretty amazing !!! "); #JMaghreb
"); #JMaghreb @YounesVendetta @arungupta @JMaghrebConf Right ! hope he will be back to morocco again and again .. :) @Alji_ @arungupta @JMaghrebConf That dude is a genius ;) Put it on your wall :p @arungupta rocking Java EE 6 at @JMaghrebConf #Java #JavaEE #JMaghreb http://t.co/isl0Iq5p @sonyabarry you are an awesome speaker ;-) #JMaghreb rich more than 550 attendees in day one. Expecting more tomorrow! ongratulations @badrelhouari the organisation was great! The talks were pretty interesting, and the turnout was surprising at #JMaghreb! #JMaghreb is truly awesome... The speakers are unbelievable ! #JavaFX... Just amazing #JMaghreb Charmed by the talk about #javaFX ( nodes architecture, MVC, Lazy loading, binding... ) gotta start using it intead of SWT. #JMaghreb JavaFX is killing JFreeChart. It supports Charts a lot of kind of them ... #JMaghreb The british man is back #JMaghreb I do like him!! #JMaghreb @arungupta rocking @JMaghrebConf. pic.twitter.com/CNohA3PE @arungupta Great talk about the future of Java EE (JEE 7 & JEE 8) Thank you. #JMaghreb JEE7 more mooore power , leeess less code !! #JMaghreb They are simplifying the existing API for Java Message Service 2.0 #JMaghreb good to know , the more the code is simplified the better ! The Glassdoor guy #arungupta is doing it RIGHT ! #JMaghreb Great presentation of The Future of the Java Platform: Java EE 7, Java SE 8 & Beyond #jMaghreb @arungupta is a great Guy apparently #JMaghreb On a personal front, the hotel (Soiftel Jardin des Roses) was pretty nice and the location was perfect. There was a 1.8 mile loop dirt trail right next to it so I managed to squeeze some runs before my upcoming marathon. Also enjoyed some great Moroccan cuisine - Couscous, Tajine, mint tea, and moroccan salad. Visit to Kasbah of the Udayas, Hassan II (one of the tallest mosque in the world), and eating in a restaurant in a kasbah are some of the exciting local experiences. Now some pictures from the event (and around the city) ... And the complete album: Many thanks to Badr, Faisal, and rest of the team for organizing a great conference. They are already thinking about how to improve the content, logisitics, and flow for the next year. I'm certainly looking forward to JMaghreb 2.0 :-)

    Read the article

  • Command line mode only -- successful login only brings me back to login screen

    - by seth
Whenever I log in, the screen goes black, I see a glimpse of terminal-esque text, and then it brings me back to the login screen (Ubuntu 12.04). I can log in via the command line, and the guest account works fine. I think this happened because I edited some Xorg-related file trying to make an external monitor work with my laptop. I copy-pasted from a forum post, so I don't recall the file or what I put in it. I can't find the forum post again, and my bash history wasn't recorded from that session. I tried reinstalling Xorg and ubuntu-desktop, nvidia, resetting any configs I could find... I'm really at a loss of what to do. Here's my ~/.xsession-errors: /usr/sbin/lightdm-session: 11: /home/seth/.profile: -s: not found Backend : gconf Integration : true Profile : unity Adding plugins Initializing core options...done Initializing composite options...done Initializing opengl options...done Initializing decor options...done Initializing vpswitch options...done Initializing snap options...done Initializing mousepoll options...done Initializing resize options...done Initializing place options...done Initializing move options...done Initializing wall options...done Initializing grid options...done I/O warning : failed to load external entity "/home/seth/.compiz/session/108fa6ea48f8a973b9133850948930576700000017740033" Initializing session options...done Initializing gnomecompat options...done ** Message: applet now removed from the notification area Initializing animation options...done Initializing fade options...done Initializing unitymtgrabhandles options...done Initializing workarounds options...done Initializing scale options...done compiz (expo) - Warn: failed to bind image to texture Initializing expo options...done Initializing ezoom options...done ** Message: using fallback from indicator to GtkStatusIcon (compiz:1846): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed Initializing unityshell options...done Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory Please ask your system administrator to enable user sharing. Setting Update "main_menu_key" Setting Update "run_key" Setting Update "launcher_hide_mode" Setting Update "edge_responsiveness" Setting Update "launcher_capture_mouse" ** Message: moving back from GtkStatusIcon to indicator compiz (decor) - Warn: failed to bind pixmap to texture ** (zeitgeist-datahub:2128): WARNING **: zeitgeist-datahub.vala:227: Unable to get name "org.gnome.zeitgeist.datahub" on the bus! failed to create drawable compiz (core) - Warn: glXCreatePixmap failed compiz (core) - Warn: Couldn't bind background pixmap 0x1e00001 to texture compiz (decor) - Warn: failed to bind pixmap to texture ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. compiz (decor) - Warn: failed to bind pixmap to texture compiz (decor) - Warn: failed to bind pixmap to texture ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user.
** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. [2348:2352:12678840568:ERROR:gpu_watchdog_thread.cc(231)] The GPU process hung. Terminating after 10000 ms. [2256:2283:14450711755:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14450726175:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14450746028:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14464521342:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14464541249:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14690775186:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14690795231:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14704543843:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14704566717:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14766138587:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14857232694:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14930901403:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14930965542:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14944566814:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14944592215:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15170929788:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15170947382:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15184585015:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15184605475:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error 
code -5938, net_error -107 [2256:2283:15366189036:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15410983381:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15411569689:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15431632431:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15431674438:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15457304356:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15656020938:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15656042383:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15674651268:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15674671786:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16052544301:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16057387653:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16157122849:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16157123851:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16157125473:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16157126544:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16157129682:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 If anyone can help me out, I'd be forever grateful

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it's always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle's Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip.

Great IT Plans Get Executed When You Respect the Users
Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle's product and technology strategy, including a discussion of how we at Oracle invest in each layer of the "technology stack" to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line's investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2. What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end-user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a "central command center" to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools, which demonstrate a respect for the user, would finally help IT pole-vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of dragging the business along, to unlock the value of their investment in PeopleSoft. This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office "The Mill" in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, "Mark, the new features you've shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits.
    In fact, you’ve highlighted that we need to start talking to the business now to keep up the momentum and begin reviewing the 9.2 upgrade once we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offer, what you’ve shown us for 9.2 is what we envisioned was ultimately possible with our investment in PeopleSoft applications.”

    We also received valuable feedback about our investment for the Staffing industry when we visited Hans Wanders, CIO of Randstad (the second largest Staffing company in the world), in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort!

    Co-existence of PeopleSoft and Fusion Applications Just Makes Sense

    As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes they want to discuss a product area that we need to enhance; other times they are interested in learning how to extract more value from their applications; and still others want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan, in the works at three of the organizations, is to upgrade to PeopleSoft 9.1 or 9.2 and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers.

    Next Generation Business Intelligence Is the Key to the Future

    The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations to transform business in radical ways. A number of the organizations we visited have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures. What’s most exciting to them, however, is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need for perfectly structured data while still generating analytics that improve order fulfillment decision-making.
    To them, the path to future growth lies in the ability to analyze unstructured data rapidly and intuitively, leveraging technology’s ability to detect patterns that a human cannot reasonably be expected to see.

    For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, you are likely to need the organization to have profiles set up, people ranked against profiles, margin targets set up, margins measured, distances set up, distances measured, and more. (A minimal sketch of this kind of weighted scoring appears at the end of this article.) As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions.

    In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results.

    Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.
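    As a footnote to the resourcing example above, here is a minimal, illustrative sketch of that kind of weighted multi-criteria scoring. It is plain Python with entirely hypothetical field names, weights, and candidates; it is not part of any Oracle product, just a way to make the traditional structured-data comparison concrete.

        # A minimal sketch of the traditional, structured-data approach:
        # score each candidate against the four criteria from the example.
        # All names, weights, and numbers here are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Candidate:
            name: str
            skill_match: float       # 0..1, fit against the assignment profile
            preference_match: float  # 0..1, the individual's interest in the work
            travel_km: float         # distance to the client site
            margin: float            # 0..1, expected margin on the assignment

        def score(c: Candidate, max_travel_km: float = 500.0) -> float:
            """Weighted sum of the four criteria; better-fitting, happier,
            nearer, higher-margin candidates score higher."""
            travel_score = max(0.0, 1.0 - c.travel_km / max_travel_km)
            return (0.35 * c.skill_match
                    + 0.25 * c.preference_match
                    + 0.15 * travel_score
                    + 0.25 * c.margin)

        candidates = [
            Candidate("Alice", skill_match=0.9, preference_match=0.7, travel_km=120, margin=0.55),
            Candidate("Bob",   skill_match=0.8, preference_match=0.9, travel_km=400, margin=0.70),
        ]
        best = max(candidates, key=score)
        print(f"Best fit: {best.name} (score {score(best):.2f})")

    The arithmetic is the easy part; as the article notes, the real cost lies in keeping the underlying profile, margin, and distance data accurate, which is exactly why analyzing unstructured data instead is so appealing.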

    Read the article

  • Visiting the Emtel Data Centre

    Back in February, at the first event of the Emtel Knowledge Series (EKS), I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting there for a selected group of our community members. Well, let’s say it like this... my first approach wasn’t that promising and was far from successful, but during the following months there were more and more occasions to get in touch with the “right” contact persons at Emtel to make it happen.

    Setting up an appointment and pre-requisites

    The major improvement came during a Boot Camp for Windows Phone 8.1 app development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at the Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps, I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre, you are requested to provide the full name and National ID of anyone going to visit. It should also be noted that only a limited number of seats was available. Anyway, armed with this information, I announced the visit through the usual social media channels. Responses came in very quickly, and on a first-come, first-served (FCFS) basis I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course it was!

    Arriving at the Emtel Data Centre

    As I was coming from Flic En Flac towards the north, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean apparently lives in another time zone compared to the rest of us. Anyway, after an extended stop we were complete and arrived just in time in Arsenal to meet and greet Ish and Veer. Emtel takes access procedures to its data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared against the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards, it was a straightforward process. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations for our visit: no photography, no touching the buttons, and following his instructions throughout the whole visit. Of course!

    Inside the data centre

    Next, he explained the multi-factor authentication system, which combines biometric data (a fingerprint reader) with a “classic” PIN panel. The Emtel data centre provides multiple services; in addition to co-location for your own hardware, they also offer storage options for your backup and archive data in their massive, fire-resistant vault. It was very impressive to learn about the considerations that went into choosing the right location and setting up the whole premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings.

    Strengths of the Emtel TIER 3 Data Centre, Mauritius

    Finally, we were guided into the first server room.
    And wow, the whole setup is cleverly planned and laid out in the architecture: from the false floors and ceilings that provide optimum air flow, to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions that track changes in temperature, smoke, and other parameters. Not surprisingly, everything has been implemented in two independent circuits. There is a standardised world-wide classification for the construction and operation of data centres, and Emtel’s has been designed as a TIER 4 building; however, due to the lack of an alternative power supplier on the island, it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there will be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE and LION/LION2 out of Mauritius, as well as through several other providers for onwards connectivity. Meanwhile, Arvin managed to join our little group of geeks, and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights: seeing their wall of screens monitoring activity on the data lines, on the managed servers, and in and around the building was great. Even though I have been using a multi-head setup for years, I cannot keep up with that arrangement... ;-) But I got a couple of ideas on how to improve my work spaces here at the office.

    Clear advantages of hosting your e-commerce and mobile backends locally

    After the completely isolated NOC area, we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. First of all, it should be noted that there is lots of space for full-sized racks and therefore for co-location of your own hardware. Actually, given the hint that price changes are coming, the facilities at the Emtel data centre are becoming more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I am already considering moving one of my root servers from abroad to be co-located here on the island. This would improve the user experience in terms of site performance and latency, especially for upcoming e-commerce solutions for two of my local clients. (A quick sketch of how one might measure that latency difference appears at the end of this article.) Later on, we started a conversation about additional services that could be a catalyst for the local market and attract more small and medium companies to include the data centre in their evaluations of online activities. To date, Emtel does not provide virtualised server environments, but there might be plans to cover this field in the future. Emtel is first and foremost a mobile operator and internet connectivity provider; entering the market of managed and virtualised server infrastructures, including cloud storage and computing capacity, is rather new territory, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out...
    And I appreciate Emtel’s approach of building a solid foundation first and then adding new services on top of it: Emtel as a future one-stop shop for all your internet and telecommunications needs. Emtel’s promotional video about their TIER 3 data centre in Arsenal, Mauritius, is worth watching, and more details are thoroughly described in Emtel’s data centre brochure. Check out their PDF document here.

    Thanks for this opportunity

    Visiting and walking through the Emtel data centre for more than two hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC), I would like to thank everyone at Emtel involved in making it happen, and especially Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank you!
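    As mentioned above, here is a minimal sketch of how one could compare site latency between a root server hosted abroad and one co-located at the Emtel data centre. It is plain Python using only the standard library; the hostnames are placeholders I made up for illustration, not real endpoints.

        # A rough way to compare hosting locations: average the TCP connect
        # time to a server in each location. Hostnames are placeholders.
        import socket
        import time

        def connect_latency_ms(host, port=443, samples=5):
            """Average TCP connect time to host:port, in milliseconds."""
            times = []
            for _ in range(samples):
                start = time.perf_counter()
                try:
                    sock = socket.create_connection((host, port), timeout=3)
                    sock.close()
                    times.append((time.perf_counter() - start) * 1000)
                except OSError:
                    pass  # ignore failed attempts
            return sum(times) / len(times) if times else None

        for host in ("myserver-in-europe.example.com", "myserver-in-mauritius.example.com"):
            latency = connect_latency_ms(host)
            print(host, "unreachable" if latency is None else "%.1f ms" % latency)

    Run from a machine on the island, the averages give a rough, first-order idea of how much local co-location would improve responsiveness for local visitors.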

    Read the article

  • Experience your music in a whole new way with Zune for PC

    - by Matthew Guay
    Tired of the standard Media Player look and feel, and want something new and innovative? Zune offers a fresh, new way to enjoy your music, videos, pictures, and podcasts, whether or not you own a Zune device. Microsoft started out on a new multimedia experience for PCs and mobile devices with the launch of the Zune several years ago. The Zune devices have been well received and noted for their innovative UI, and the Zune HD’s fluid interface is the foundation for the widely anticipated Windows Phone 7. But regardless of whether or not you have a Zune device, you can still use the exciting new UI and services directly from your PC. Zune for Windows is a very nice media player that offers a music and video store and wide support for multimedia formats, including those used in Apple products. And if you enjoy listening to a wide variety of music, it also offers the Zune Pass, which lets you stream an unlimited number of songs to your computer and download 10 songs to keep per month for $14.99/month; alternatively, you can use a pre-paid music card. It does all this using the new Metro UI, which beautifully shows information using text in a whole new way. Here’s a quick look at setting up and using Zune on your PC.

    Getting Started

    Download the installer (link below), and run it to begin setup. Please note that Zune offers a separate version for computers running the 64-bit version of Windows Vista or 7, so choose it if your computer is running these. Once your download is finished, run the installer to set up Zune on your computer. Accept the EULA when prompted. If there are any updates available, they will automatically download and install during the setup. So, if you’re installing Zune from a disk (for example, one packaged with a Zune device), you don’t have to worry whether you have the latest version. Zune will proceed to install on your computer. It may prompt you to restart your computer after installation; click Restart Now so you can proceed with your Zune setup. The reboot appears to be for Zune device support, and the program otherwise ran fine without rebooting, so you could possibly skip this step if you’re not using a Zune device. However, to be on the safe side, go ahead and reboot.

    After rebooting, launch Zune. It will play a cute introduction video on first launch; press Skip if you don’t want to watch it. Zune will now ask you if you want to keep the default settings or change them. Choose Start to keep the defaults, or Settings to customize them to your wishes. Do note that the default settings will set Zune as your default media player, so click Settings if you wish to change this. If you choose to change the default settings, you can change how Zune finds and stores media on your computer. In Windows 7, Zune will by default use your Windows 7 Libraries to manage your media, and will in fact add a new Podcasts library to Windows 7. If your media is stored in another location, such as on a server, you can add it to the Library. Please note that this adds the location to your system-wide library, not just the Zune player.

    There’s one last step. Enter three of your favorite artists, and Zune will add Smart DJ mixes to your Quickplay list based on these. Some less famous or popular artists may not be recognized, so you may have to try another if your choice isn’t available. Or, you can click Skip if you don’t want to do this right now. Welcome to Zune! This is the default first page, QuickPlay, where you can easily access your pinned and new items.
    If you have a Zune account, or would like to create a new one, click Sign In at the top. Creating a new account is quick and simple, and if you’re new to Zune, you can try out a 14-day free trial of Zune Pass. Zune allows you to share your listening habits and favorites with friends or the world, but you can turn this off or change it if you like.

    Using Zune for Windows

    To access your media, click the Collection link at the top left. Zune will show all the media you already have stored on your computer, organized by artist and album. Right-click on any album, and you can have Zune find album art or perform a variety of other tasks with the media. When playing media, you can view it in several unique ways. First, the default Mix view will show tracks related to the music you’re playing, courtesy of Smart DJ. You can play these in full if you’re a Zune Pass subscriber; otherwise, you can play 30-second previews. Then, for many popular artists, Zune will change the player background to show pictures and information in a unique way while the music is playing. The information may range from the artist’s history to the popularity of the song being played. Zune also works as a nice viewer for the pictures on your computer. Start a slideshow, and Zune will play your pictures with nice transition effects and music from your library.

    Zune Store

    The Zune Store offers a wide variety of music, TV shows, and videos for purchase. If you’re a Zune Pass subscriber, you can listen to or download any song without purchasing it; otherwise, you can preview a 30-second clip first. Zune also offers a wide selection of podcasts you can subscribe to for free.

    Using Zune for PC with a Zune Device

    If you have a Zune device attached to your computer, you can easily add media files to it by simply dragging them to the Zune device icon in the left corner. In the future, this will also work with Windows Phone 7 devices. If you have a Zune HD, you can also download and add apps to your device. Here’s the detailed information window for the weather app. Click Download to add it to your device.

    Mini Mode

    The Zune player generally takes up a large portion of your screen, and is actually most impressive when run maximized. However, if you simply want to enjoy your tunes while using your computer, you can use Mini mode to view music info and control Zune in a smaller window. Click the Mini Player button near the window control buttons in the top right to activate it. Now Zune will take up much less of your desktop. This window stays on top of other windows so you can still easily view and control it. Zune will display an image of the artist if one is available, and this shows up in Mini mode more often than it does in the full mode. And, in Windows 7, you can simply minimize Zune, as you can control it directly from the taskbar thumbnail preview. Even more controls are available from Zune’s jumplist in Windows 7. You can directly access your Quickplay links or choose to shuffle all music without leaving the taskbar.

    Settings

    Although Zune is designed to be used without confusing menus and settings, you can tweak the program to your liking from the settings panel. Click Settings near the top left of the window. Here you can change file storage, file types, burn, metadata, and many more settings. You can also set up Zune to stream media to your Xbox 360 if you have one. And you can customize Zune’s look with a variety of modern backgrounds and gradients.
    Conclusion

    If you’re ready for a fresh way to enjoy your media, Zune is designed for you. Its innovative UI definitely sets it apart from standard media players and is very pleasing to use. Zune is especially nice if your computer is running XP, Vista Home Basic, or 7 Starter, as these versions of Windows don’t include Media Center. Additionally, the Mini Player mode is a nice touch that brings a feature of Windows 7’s Media Player to XP and Vista. Zune is definitely one of our favorite music apps. Try it out, and get a fresh view of your music today!

    Link: Download Zune for Windows

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96  | Next Page >