Search Results

Search found 16059 results on 643 pages for 'global temp tables'.


  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We’ll be looking at a typical database ‘migration’ script which uses an unusual technique to migrate existing ‘de-normalised’ data into a more correct form.

    So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into ‘de-normalisation’ habits. What’s this? A new column with a list of tags or ‘types’ assigned to books. Because books aren’t really in just one category, someone has ‘cured’ the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the ‘Type’ field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can’t be persuaded to do reports based on the tags, and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn’t exist, so we’ve added it and put in 10,000 titles using SQL Data Generator.

    /* So how do we refactor this database? Firstly, we create a table of all the tags. */
    IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
    BEGIN
    CREATE TABLE TagName
      (TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
       Tag VARCHAR(20) NOT NULL UNIQUE)

    /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
    INSERT INTO TagName (Tag)
    SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
    FROM (SELECT Title_ID,
                 CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
          FROM dbo.titles) g
    CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )

    /* we can then use this table to provide a table that relates tags to titles */
    CREATE TABLE TagTitle
      (TagTitle_ID INT IDENTITY(1, 1),
       [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
       TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)
       CONSTRAINT [PK_TagTitle]
         PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
         ON [PRIMARY])

    CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID)
      INCLUDE (TagTitle_ID, title_id)

    /* ...and it is easy to fill this with the tags for each title ... */
    INSERT INTO TagTitle (Title_ID, TagName_ID)
    SELECT DISTINCT Title_ID, TagName_ID
    FROM (SELECT Title_ID,
                 CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
          FROM dbo.titles) g
    CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
    INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
    END

    /* That's all there was to it. Now we can select all titles that have the military tag, just to try things out */
    SELECT Title FROM titles
      INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
      INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
      WHERE tagname.tag = 'Military'

    /* and see the top ten most popular tags for titles */
    SELECT Tag, COUNT(*) FROM titles
      INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
      INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
      GROUP BY Tag ORDER BY COUNT(*) DESC

    /* and if you still want your list of tags for each title, then here they are */
    SELECT title_ID, title,
      STUFF(
        (SELECT ',' + tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
         WHERE ThisTitle.title_id = titles.title_ID
         FOR XML PATH(''), TYPE).value('.', 'varchar(max)'),
        1, 1, '')
    FROM titles
    ORDER BY title_ID

    So we’ve refactored our PUBS database without pain. We’ve even put in a check to prevent it being re-run once the new tables are created. Here is the diagram of the new tag relationship. We’ve done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I’d do a whole lot of tests first before doing that. Yes, I’ve left out the assertion tests too, which should check the edge cases and make sure the result is what you’d expect.

    One thing I can’t quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary ‘type’ to be first in the list, but what if the entire order is significant? Thank goodness it isn’t in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence (a sketch of that follows at the end).

    You’ll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of over-riding this part of the data synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I’ve taken with the script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run.

    The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control just in case of the rollback of a deployment after it has been in place for any length of time. I’ve shown you how to do this with this part of the script...

    /* and if you still want your list of tags for each title, then here they are */
    SELECT title_ID, title,
      STUFF(
        (SELECT ',' + tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
         WHERE ThisTitle.title_id = titles.title_ID
         FOR XML PATH(''), TYPE).value('.', 'varchar(max)'),
        1, 1, '')
    FROM titles
    ORDER BY title_ID

    …which would be turned into an UPDATE … FROM script:

    UPDATE titles SET typelist = ThisTaglist
    FROM (SELECT title_ID, title,
            STUFF(
              (SELECT ',' + tagname.tag FROM titles thisTitle
                 INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
                 INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
               WHERE ThisTitle.title_id = titles.title_ID
               ORDER BY CASE WHEN tagname.tag = titles.[type] THEN 1 ELSE 0 END DESC
               FOR XML PATH(''), TYPE).value('.', 'varchar(max)'),
              1, 1, '') AS ThisTagList
          FROM titles) f
    INNER JOIN Titles ON f.title_ID = Titles.title_ID

    You’ll notice that it isn’t quite a round trip, because the tags come back in a different order, though we’ve managed to make sure that the primary tag is the first one, as it was originally. So, we’ve improved the database for the poor book distributors using PUBS. It is not a major deal, but you’ve got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronization scripts can do this pretty easily, but no data synchronisation script can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.
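    If tag order ever did matter, a sketch of that ordinal-position splitter might look like the following. This is my illustration, not part of the original script; it reuses the same XML trick and assumes SQL Server 2005 or later, where the ‘<<’ node-order comparison is available in XQuery:

    SELECT Title_ID,
           LTRIM(x.y.value('.', 'Varchar(80)')) AS Tag,
           -- ordinal position = number of <i> siblings preceding this node, plus one
           x.y.value('for $i in . return count(../i[. << $i]) + 1', 'int') AS TagOrder
    FROM (SELECT Title_ID,
                 CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
          FROM dbo.titles) g
    CROSS APPLY XMLkeywords.nodes('/list/i') AS x ( y )

    The result set could then feed a TagTitle table extended with a TagOrder column, which would make the round trip exact.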

    Read the article

  • Multiplayer online game engine/pipeline

    - by Slav
    I am implementing an online multiplayer game whose client must be written in AS3 (Flash), so the game can be embedded in a browser, and whose server is in C++ (an abstract part of which is already written and used with other games). Networking models may differ from each other, but currently I'm looking toward running the game's logic on both the client and the server, even though they're written in different languages; that is not the main problem, though. My previous game (a pretty big one, implemented by the efforts of ~5 programmers over 1.5 years) was mainly "written" within spreadsheets, as structured objects with inheritance: a standalone tool was written which generated AS3 and C++ (the languages of the platforms the game was published to) from a specified spreadsheet file (.xls or .ods). That file contained ~50 sheets with ~50 rows and ~50 columns each, and was mainly written by game designers who do not know any programming languages. But that game was single-player. Given the problem declared above for the MMO I'm currently implementing, I'm looking toward some broad pipeline that resolves problems like these:

    game object descriptions (which starships exist within the game, how much HP they have, how fast they move, what damage they deal...)
    action descriptions (what players or NPCs can do: attack each other, collect resources, build structures, move, teleport, cast spells) - actions are transmitted through the server between clients
    influences (what happens when a specified action is applied to a specified object, e.g. "Ship A attacked Ship B: field 'HP' of Ship B is reduced by the amount of field 'damage' of Ship A"). Influences can be much more complicated, of course, e.g. "damage is doubled when the Ship has >= 5 allies around it in a 200-unit range during night", and so on.

    If such logic can be written within some "design document", it becomes easy to: let designers do their job without programmer intervention or any bug-prone programming; validate the described logic; and transfer (transform, convert) it to any programming language where it will be executed. Has anybody worked on something like this? Are there tools/engines/pipelines concerned with it? How do I handle all of these problems simultaneously in the best way, and am I framing my tasks and problems properly?

    Read the article

  • Structure of a .NET Assembly

    - by Om Talsania
    An assembly is the smallest unit of deployment in the .NET Framework. When you compile your C# code, it gets converted into a managed module. A managed module is a standard EXE or DLL. This managed module will have the IL (Microsoft Intermediate Language) code and the metadata. Apart from this it will also have header information. The following parts make up a managed module:

    PE Header (PE32 header for 32-bit, PE32+ header for 64-bit) - This is a standard Windows PE header which indicates the type of the file, i.e. whether it is an EXE or a DLL. It also contains the timestamp of the file creation date and time, and some other fields which might be needed for an unmanaged PE (Portable Executable) but are not important for a managed one. For a managed PE, the next header, the CLR header, is more important.

    CLR Header - Contains the version of the CLR required, some flags, the token of the entry point method (Main), the size and location of the metadata, resources, strong name, etc.

    Metadata - There can be many metadata tables. They can be categorized into 2 major categories: 1. tables that describe the types and members defined in your code, and 2. tables that describe the types and members referenced by your code.

    IL Code - The MSIL representation of the C# code. At runtime, the CLR converts it into native instructions.
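    Much of this structure is visible from managed code itself. As a quick illustration (my example, not part of the original post), System.Reflection exposes the CLR header's required runtime version and both categories of metadata tables:

    using System;
    using System.Reflection;

    class AssemblyInfoDemo
    {
        static void Main()
        {
            Assembly asm = Assembly.GetExecutingAssembly();
            // Version of the CLR recorded for this module (from the CLR header)
            Console.WriteLine("Image runtime version: " + asm.ImageRuntimeVersion);
            // Metadata category 1: types defined in this assembly
            foreach (Type t in asm.GetTypes())
                Console.WriteLine("Defined: " + t.FullName);
            // Metadata category 2: assemblies referenced by this one
            foreach (AssemblyName r in asm.GetReferencedAssemblies())
                Console.WriteLine("Referenced: " + r.FullName);
        }
    }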

    Read the article

  • Problems with LibreOffice Impress?

    - by kkp
    I am a big fan of Ubuntu. I have recently switched from LaTeX to LibreOffice Impress for presentations, because it seems that with Impress you can create slides quickly. I am using LibreOffice 3.4.4 OOO340m1 (Build:402) on Ubuntu 11.04, and I am saving my presentations in .ppt format. However, I am facing the following problems, which is really frustrating and is forcing me to use Windows/Mac just for making slides.

    The font changes from Arial to Times New Roman, specifically in tables, when I reopen an Impress presentation.
    Tables are stretched if I reopen an Impress presentation. Impress does not allow me to compress the stretched tables, and I have to make a fresh table.
    Several other modifications revert when I reopen an Impress presentation, for example the "transparency" change to borders (e.g., for a rectangular shape).
    Impress crashes arbitrarily.

    It would be great if someone could help me resolve these issues. Thanks.

    Read the article

  • MVC 4 Authentication

    - by Aligned
    First: after searching for a while to figure out what's new/different with MVC 4 and forms authentication, this is the best article I've found on the subject: http://weblogs.asp.net/jgalloway/archive/2012/08/29/simplemembership-membership-providers-universal-providers-and-the-new-asp-net-4-5-web-forms-and-asp-net-mvc-4-templates.aspx

    Some quotes from the article:

    "The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership"

    "WSAT is built to work with ASP.NET Membership, and is not compatible with Simple Membership. There are two main options there: Use the WebSecurity and OAuthWebSecurity API to manage the users and roles; Create a web admin using the above APIs. Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even in direct database edits (in development, of course)"

    "If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:"

    "Universal Providers do not work with Simple Membership." ~ this post (look for Bob.at.SBS's answer) says Universal Providers are not needed for MVC 4 to work in Azure.

    I've been trying to figure out Forms Authentication in MVC 4. It's different from the past approach (aspnet_regsql). If you do File > New Project > MVC 4 > Internet Application, you get a really nice template with the controller and model set up for you. However, the tables are different than those created by aspnet_regsql, and the ASP.NET Configuration tool (WSAT) wasn't connecting to the data I had (it was creating an App_Data/aspnet.mdf file, which I didn't see right away).

    Points of note: The database tables are created in the SimpleMembershipInitializer class when you first run your app, using Entity Framework 5 migration functionality. The tables created are webpages_Membership, webpages_OAuthMembership, webpages_Roles, webpages_UsersInRoles and UserProfile. Web.config settings don't seem to be needed. (A sketch of the user-management side is below.)

    Scott Hanselman on Universal Providers was also useful, if somewhat outdated. Universal Providers and SimpleMembership are not compatible. http://www.asp.net/web-pages/tutorials/security/16-adding-security-and-membership - walk-through
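    Since the quoted article recommends the WebSecurity API for managing users and roles, here is a short, hedged sketch of what that can look like (my example; the connection string name and table/column names are the Internet template defaults, so adjust them to your project). WebSecurity lives in WebMatrix.WebData, Roles in System.Web.Security:

    // typically called once at startup (the template does this in SimpleMembershipInitializer)
    WebSecurity.InitializeDatabaseConnection(
        "DefaultConnection",  // connection string name
        "UserProfile",        // user table
        "UserId",             // user id column
        "UserName",           // user name column
        autoCreateTables: true);

    // manage users and roles in code instead of WSAT
    if (!WebSecurity.UserExists("admin"))
        WebSecurity.CreateUserAndAccount("admin", "ChangeMe123!");
    if (!Roles.RoleExists("Administrators"))
        Roles.CreateRole("Administrators");
    if (!Roles.IsUserInRole("admin", "Administrators"))
        Roles.AddUserToRole("admin", "Administrators");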

    Read the article

  • file doesn't open, running outside of debugger results in seg fault (c++)

    - by misterich
    Hello (and thanks in advance). I'm in a bit of a quandary: I can't seem to figure out why I'm seg faulting. A couple of notes: It's for a course -- and sadly I am required to use C-strings instead of std::string. Please don't fix my code (I won't learn that way and I will keep bugging you); please just point out the flaws in my logic and suggest a different function/way. Platform: gcc version 4.4.1 on Suse Linux 11.2 (2.6.31 kernel). Here's the code.

    main.cpp:

    // INCLUDES (C/C++ Std Library)
    #include <cstdlib>   /// EXIT_SUCCESS, EXIT_FAILURE
    #include <iostream>  /// cin, cout, ifstream
    #include <cassert>   /// assert

    // DEPENDENCIES (custom header files)
    #include "dict.h"    /// Header for the dictionary class

    // PRE-PROCESSOR CONSTANTS
    #define ENTER '\n'   /// Used to accept new lines, quit program.
    #define SPACE ' '    /// One way to end the program

    // CUSTOM DATA TYPES
    /// File Namespace -- keep it local
    namespace
    {
        /// Possible program prompts to display for the user.
        enum FNS_Prompts
        {
            fileName_,   /// prints out the name of the file
            noFile_,     /// no file was passed to the program
            tooMany_,    /// more than one file was passed to the program
            noMemory_,   /// Not enough memory to use the program
            usage_,      /// how to use the program
            word_,       /// ask the user to define a word.
            notFound_,   /// the word is not in the dictionary
            done_,       /// the program is closing normally
        };
    }

    using namespace std;  /// Nothing special in the way of namespaces

    // FUNCTIONS
    /** prompt() prompts the user to do something, uses enum Prompts for parameter. */
    void prompt(FNS_Prompts msg /** determines the prompt to use*/)
    {
        switch(msg)
        {
            case fileName_ :
            {
                cout << ENTER << ENTER << "The file name is: ";
                break;
            }
            case noFile_ :
            {
                cout << ENTER << ENTER << "...Sorry, a dictionary file is needed. Try again." << endl;
                break;
            }
            case tooMany_ :
            {
                cout << ENTER << ENTER << "...Sorry, you can only specify one dictionary file. Try again." << endl;
                break;
            }
            case noMemory_ :
            {
                cout << ENTER << ENTER << "...Sorry, there isn't enough memory available to run this program." << endl;
                break;
            }
            case usage_ :
            {
                cout << "USAGE:" << endl
                     << "   lookup.exe [dictionary file name]" << endl << endl;
                break;
            }
            case done_ :
            {
                cout << ENTER << ENTER << "like Master P says, \"Word.\"" << ENTER << endl;
                break;
            }
            case word_ :
            {
                cout << ENTER << ENTER << "Enter a word in the dictionary to get it's definition." << ENTER
                     << "Enter \"?\" to get a sorted list of all words in the dictionary." << ENTER
                     << "... Press the Enter key to quit the program: ";
                break;
            }
            case notFound_ :
            {
                cout << ENTER << ENTER << "...Sorry, that word is not in the dictionary." << endl;
                break;
            }
            default :
            {
                cout << ENTER << ENTER << "something passed an invalid enum to prompt()." << endl;
                assert(false);  /// something passed in an invalid enum
            }
        }
    }

    /** useDictionary() uses the dictionary created by createDictionary
     *  - prompts user to lookup a word
     *  - ends when the user enters an empty word
     */
    void useDictionary(Dictionary &d)
    {
        char *userEntry = new char;  /// user's input on the command line
        if( !userEntry )  // check the pointer to the heap
        {
            cout << ENTER << MEM_ERR_MSG << endl;
            exit(EXIT_FAILURE);
        }
        do
        {
            prompt(word_);
            // test code
            cout << endl << "----------------------------------------" << endl << "Enter something: ";
            cin.getline(userEntry, INPUT_LINE_MAX_LEN, ENTER);
            cout << ENTER << userEntry << endl;
        } while ( userEntry[0] != NIL && userEntry[0] != SPACE );

        // GARBAGE COLLECTION
        delete[] userEntry;
    }

    /** Program Entry
     *  Reads in the required, single file from the command prompt.
     *  - If there is no file, state such and error out.
     *  - If there is more than one file, state such and error out.
     *  - If there is a single file:
     *    - Create the database object
     *    - Populate the database object
     *    - Prompt the user for entry
     *  main() will return EXIT_SUCCESS upon termination.
     */
    int main(int argc,     /// the number of files being passed into the program
             char *argv[]  /// pointer to the filename being passed into the program
            )
    {
        // EXECUTE
        /* Testing code * /
        char tempFile[INPUT_LINE_MAX_LEN] = {NIL};
        cout << "enter filename: ";
        cin.getline(tempFile, INPUT_LINE_MAX_LEN, '\n');
        */

        // uncomment after successful debugging
        if(argc <= 1)
        {
            prompt(noFile_);
            prompt(usage_);
            return EXIT_FAILURE;  /// no file was passed to the program
        }
        else if(argc > 2)
        {
            prompt(tooMany_);
            prompt(usage_);
            return EXIT_FAILURE;  /// more than one file was passed to the program
        }
        else
        {
            prompt(fileName_);
            cout << argv[1];  // print out name of dictionary file
            if( !argv[1] )
            {
                prompt(noFile_);
                prompt(usage_);
                return EXIT_FAILURE;  /// file does not exist
            }
            /*
            file.open( argv[1] );              // open file
            numEntries >> in.getline(file);    // determine number of dictionary objects to create
            file.close();                      // close file
            Dictionary[ numEntries ](argv[1]); // create the dictionary object
            */
            // TEMPORARY FILE FOR TESTING!!!!
            //Dictionary scrabble(tempFile);
            Dictionary scrabble(argv[1]);  // create the dictionary object
            //*/
            useDictionary(scrabble);  // prompt the user, use the dictionary
        }
        // exit
        return EXIT_SUCCESS;  /// terminate program.
    }

    Dict.h/.cpp:

    #ifndef DICT_H
    #define DICT_H

    // DEPENDENCIES (Custom header files)
    #include "entry.h"  /// class for dictionary entries

    // PRE-PROCESSOR MACROS
    #define INPUT_LINE_MAX_LEN 256  /// Maximum length of each line in the dictionary file

    class Dictionary
    {
        public :
            //
            // Do NOT modify the public section of this class
            //
            typedef void (*WordDefFunc)(const char *word, const char *definition);

            Dictionary( const char *filename );
            ~Dictionary();

            const char *lookupDefinition( const char *word );
            void forEach( WordDefFunc func );

        private :
            //
            // You get to provide the private members
            //

            // VARIABLES
            int m_numEntries;        /// stores the number of entries in the dictionary
            Entry *m_DictEntry_ptr;  /// points to an array of class Entry

            // Private Functions
    };
    #endif

    -----------------------------------

    // INCLUDES (C/C++ Std Library)
    #include <iostream>  /// cout, getline
    #include <fstream>   // ifstream
    #include <cstring>   /// strchr

    // DEPENDENCIES (custom header files)
    #include "dict.h"    /// Header file required by assignment
    //#include "entry.h" /// Dictionary Entry Class

    // PRE-PROCESSOR MACROS
    #define COMMA ','   /// Delimiter for file
    #define ENTER '\n'  /// Carriage return character
    #define FILE_ERR_MSG "The data file could not be opened. Program will now terminate."
    #pragma warning(disable : 4996)  /// turn off MS compiler warning about strcpy()

    using namespace std;

    // PRIVATE MEMBER FUNCTIONS
    /**
     * Sorts the dictionary entries.
     */
    /*
    static void sortDictionary(?)
    {
        // sort through the words using qsort
    }
    */

    /** NO LONGER NEEDED??
     * parses out the length of the first cell in a delimited cell
     * /
    int getWordLength(char *str /// string of data to parse
                     )
    {
        return strcspn(str, COMMA);
    }
    */

    // PUBLIC MEMBER FUNCTIONS
    /** constructor for the class
     * - opens/reads in file
     * - creates initializes the array of member vars
     * - creates pointers to entry objects
     * - stores pointers to entry objects in member var
     * - ? sort now or later?
     */
    Dictionary::Dictionary( const char *filename )
    {
        // Create a filestream, open the file to be read in
        ifstream dataFile(filename, ios::in );
        /*
        if( dataFile.fail() )
        {
            cout << FILE_ERR_MSG << endl;
            exit(EXIT_FAILURE);
        }
        */
        if( dataFile.is_open() )
        {
            // read first line of data
            // TEST CODE in.getline(dataFile, INPUT_LINE_MAX_LEN) >> m_numEntries;
            // TEST CODE char temp[INPUT_LINE_MAX_LEN] = {NIL};
            // TEST CODE dataFile.getline(temp,INPUT_LINE_MAX_LEN,'\n');
            dataFile >> m_numEntries;  /** Number of terms in the dictionary file
                                        * \todo find out how many lines in the file, subtract one, ignore first line
                                        */

            //create the array of entries
            m_DictEntry_ptr = new Entry[m_numEntries];
            // check for valid memory allocation
            if( !m_DictEntry_ptr )
            {
                cout << MEM_ERR_MSG << endl;
                exit(EXIT_FAILURE);
            }

            // loop thru each line of the file, parsing words/defs and populating entry objects
            for(int EntryIdx = 0; EntryIdx < m_numEntries; ++EntryIdx)
            {
                // VARIABLES
                char *tempW_ptr;  /// points to a temporary word
                char *tempD_ptr;  /// points to a temporary def
                char *w_ptr;      /// points to the word in the Entry object
                char *d_ptr;      /// points to the definition in the Entry
                int tempWLen;     /// length of the temp word string
                int tempDLen;     /// length of the temp def string
                char tempLine[INPUT_LINE_MAX_LEN] = {NIL};  /// stores a single line from the file

                // EXECUTE
                // getline(dataFile, tempLine) // get a "word,def" line from the file
                dataFile.getline(tempLine, INPUT_LINE_MAX_LEN);  // get a "word,def" line from the file

                // Parse the string
                tempW_ptr = tempLine;                 // point the temp word pointer at the first char in the line
                tempD_ptr = strchr(tempLine, COMMA);  // point the def pointer at the comma
                *tempD_ptr = NIL;                     // replace the comma with a NIL
                ++tempD_ptr;                          // increment the temp def pointer

                // find the string lengths... +1 to account for terminator
                tempWLen = strlen(tempW_ptr) + 1;
                tempDLen = strlen(tempD_ptr) + 1;

                // Allocate heap memory for the term and definition
                w_ptr = new char[ tempWLen ];
                d_ptr = new char[ tempDLen ];
                // check memory allocation
                if( !w_ptr && !d_ptr )
                {
                    cout << MEM_ERR_MSG << endl;
                    exit(EXIT_FAILURE);
                }

                // copy the temp word, def into the newly allocated memory and terminate the strings
                strcpy(w_ptr,tempW_ptr);
                w_ptr[tempWLen] = NIL;
                strcpy(d_ptr,tempD_ptr);
                d_ptr[tempDLen] = NIL;

                // set the pointers for the entry objects
                m_DictEntry_ptr[ EntryIdx ].setWordPtr(w_ptr);
                m_DictEntry_ptr[ EntryIdx ].setDefPtr(d_ptr);
            }
            // close the file
            dataFile.close();
        }
        else
        {
            cout << ENTER << FILE_ERR_MSG << endl;
            exit(EXIT_FAILURE);
        }
    }

    /**
     * cleans up dynamic memory
     */
    Dictionary::~Dictionary()
    {
        delete[] m_DictEntry_ptr;  /// thou shalt not have memory leaks.
    }

    /**
     * Looks up definition
     */
    /*
    const char *lookupDefinition( const char *word )
    {
        // print out the word ---- definition
    }
    */

    /**
     * prints out the entire dictionary in sorted order
     */
    /*
    void forEach( WordDefFunc func )
    {
        // to sort before or now.... that is the question
    }
    */

    Entry.h/cpp:

    #ifndef ENTRY_H
    #define ENTRY_H

    // INCLUDES (C++ Std lib)
    #include <cstdlib>  /// EXIT_SUCCESS, NULL

    // PRE-PROCESSOR MACROS
    #define NIL '\0'  /// C-String terminator
    #define MEM_ERR_MSG "Memory allocation has failed. Program will now terminate."

    // CLASS DEFINITION
    class Entry
    {
        public:
            Entry(void) : m_word_ptr(NULL), m_def_ptr(NULL) { /* default constructor */ };

            void setWordPtr(char *w_ptr);  /// sets the pointer to the word - only if the pointer is empty
            void setDefPtr(char *d_ptr);   /// sets the pointer to the definition - only if the pointer is empty

            /// returns what is pointed to by the word pointer
            char getWord(void) const { return *m_word_ptr; }
            /// returns what is pointed to by the definition pointer
            char getDef(void) const { return *m_def_ptr; }

        private:
            char *m_word_ptr;  /** points to a dictionary word */
            char *m_def_ptr;   /** points to a dictionary definition */
    };
    #endif

    --------------------------------------------------

    // DEPENDENCIES (custom header files)
    #include "entry.h"  /// class header file

    // PUBLIC FUNCTIONS
    /*
     * only change the word member var if it is in its initial state
     */
    void Entry::setWordPtr(char *w_ptr)
    {
        if(m_word_ptr == NULL)
        {
            m_word_ptr = w_ptr;
        }
    }

    /*
     * only change the def member var if it is in its initial state
     */
    void Entry::setDefPtr(char *d_ptr)
    {
        if(m_def_ptr == NULL)
        {
            m_word_ptr = d_ptr;
        }
    }

    Read the article

  • ASP.NET Web API Exception Handling

    - by Fredrik N
    When I talk about exceptions with my product team I often talk about two kinds of exceptions: business and critical exceptions. Business exceptions are exceptions thrown based on "business rules", for example if you aren't allowed to make a purchase. In most cases business exceptions aren't important to log to a log file; they can be shown directly to the user. An example of a business exception could be "DeniedToPurchaseException", or some validation exception such as "FirstNameIsMissingException" etc. Critical exceptions are all other kinds of exceptions, such as the SQL Server being down. Those kinds of exception messages need to be logged and should not reach the user, because they can contain information that can be harmful if it reaches the wrong kind of users. I often distinguish business exceptions from critical exceptions by creating a base class called BusinessException; in my error handling code I then catch on the type BusinessException, and all other exceptions are handled as critical exceptions. This blog post will be about different ways to handle exceptions and how business and critical exceptions can be handled.

    Web API and Exceptions: the basics

    When an exception is thrown in an ApiController, a response message will be returned with the status code set to 500 and a response formatted by the formatters based on the "Accept" or "Content-Type" HTTP header, for example JSON or XML. Here is an example:

    public IEnumerable<string> Get()
    {
        throw new ApplicationException("Error!!!!!");
        return new string[] { "value1", "value2" };
    }

    The response message will be:

    HTTP/1.1 500 Internal Server Error
    Content-Length: 860
    Content-Type: application/json; charset=utf-8

    {"ExceptionType":"System.ApplicationException","Message":"Error!!!!!","StackTrace":" at ..."}

    The stack trace is returned to the client to make debugging easier. Be careful that you don't leak sensitive information to the client. As long as you are developing your API this is not harmful, but in a production environment it can be better to log exceptions and return a user friendly exception instead of the original exception.
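    (The BusinessException type referred to above, and caught in the filter code further down, is never shown in the post; as a minimal sketch of my own it could be as simple as this:)

    public class BusinessException : Exception
    {
        public BusinessException(string message) : base(message) { }
    }

    // a concrete business-rule violation, as named in the post's opening
    public class DeniedToPurchaseException : BusinessException
    {
        public DeniedToPurchaseException()
            : base("You are not allowed to make this purchase.") { }
    }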
    There is a specific exception shipped with ASP.NET Web API that will not use the formatters based on the "Accept" or "Content-Type" HTTP header: the HttpResponseException class. Here is an example where the HttpResponseException is used:

    // GET api/values
    [ExceptionHandling]
    public IEnumerable<string> Get()
    {
        throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError));
        return new string[] { "value1", "value2" };
    }

    The response will not contain any content, only header information and the status code based on the HttpStatusCode passed as an argument to the HttpResponseMessage. Because the HttpResponseException takes a HttpResponseMessage as an argument, we can give the response a content:

    public IEnumerable<string> Get()
    {
        throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError)
        {
            Content = new StringContent("My Error Message"),
            ReasonPhrase = "Critical Exception"
        });
        return new string[] { "value1", "value2" };
    }

    The code above will have the following response:

    HTTP/1.1 500 Critical Exception
    Content-Length: 5
    Content-Type: text/plain; charset=utf-8

    My Error Message

    The Content property of the HttpResponseMessage doesn't need to be just plain text; it can also be other formats, for example JSON, XML etc.
    By using the HttpResponseException we can, for example, catch an exception and throw a user friendly exception instead:

    public IEnumerable<string> Get()
    {
        try
        {
            DoSomething();
            return new string[] { "value1", "value2" };
        }
        catch (Exception e)
        {
            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError)
            {
                Content = new StringContent("An error occurred, please try again or contact the administrator."),
                ReasonPhrase = "Critical Exception"
            });
        }
    }

    Adding a try catch to every ApiController method will only end in duplicated code. By using a custom ExceptionFilterAttribute, or our own custom ApiController base class, we can reduce the duplication and also have a more general exception handler for our ApiControllers. By creating a custom ApiController and overriding the ExecuteAsync method, we could add a try catch around the base.ExecuteAsync call, but I prefer to skip creating a custom ApiController; it is better to use a solution that requires few files to be modified. The ExceptionFilterAttribute has an OnException method that we can override and add our exception handling to. Here is an example:

    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;
    using System.Web.Http.Filters;

    public class ExceptionHandlingAttribute : ExceptionFilterAttribute
    {
        public override void OnException(HttpActionExecutedContext context)
        {
            if (context.Exception is BusinessException)
            {
                throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError)
                {
                    Content = new StringContent(context.Exception.Message),
                    ReasonPhrase = "Exception"
                });
            }

            //Log Critical errors
            Debug.WriteLine(context.Exception);

            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError)
            {
                Content = new StringContent("An error occurred, please try again or contact the administrator."),
                ReasonPhrase = "Critical Exception"
            });
        }
    }

    Note: Something to keep in mind is that the ExceptionFilterAttribute will be ignored if the ApiController action method throws a HttpResponseException. The code above will always make sure a HttpResponseException is returned; it will also make sure critical exceptions show a more user friendly message.
    The OnException method can also be used to log exceptions. By using an ExceptionFilterAttribute, the Get() method in the previous example can now look like this:

    public IEnumerable<string> Get()
    {
        DoSomething();
        return new string[] { "value1", "value2" };
    }

    To use an ExceptionFilterAttribute, we can for example add the ExceptionFilterAttribute to our ApiController methods or to the ApiController class definition, or register it globally for all ApiControllers. You can read more about this here. Note: If something goes wrong in the ExceptionFilterAttribute and an exception is thrown that is not of type HttpResponseException, a formatted exception with stack trace etc. will be returned to the client.

    How about using a custom IHttpActionInvoker?

    We can create our own IHttpActionInvoker and add exception handling to the invoker. The IHttpActionInvoker is used to invoke the ApiController's ExecuteAsync method. Here is an example where the default IHttpActionInvoker, ApiControllerActionInvoker, is used to add exception handling:

    public class MyApiControllerActionInvoker : ApiControllerActionInvoker
    {
        public override Task<HttpResponseMessage> InvokeActionAsync(HttpActionContext actionContext, System.Threading.CancellationToken cancellationToken)
        {
            var result = base.InvokeActionAsync(actionContext, cancellationToken);

            if (result.Exception != null && result.Exception.GetBaseException() != null)
            {
                var baseException = result.Exception.GetBaseException();

                if (baseException is BusinessException)
                {
                    return Task.Run<HttpResponseMessage>(() => new HttpResponseMessage(HttpStatusCode.InternalServerError)
                    {
                        Content = new StringContent(baseException.Message),
                        ReasonPhrase = "Error"
                    });
                }
                else
                {
                    //Log critical error
                    Debug.WriteLine(baseException);

                    return Task.Run<HttpResponseMessage>(() => new HttpResponseMessage(HttpStatusCode.InternalServerError)
                    {
                        Content = new StringContent(baseException.Message),
                        ReasonPhrase = "Critical Error"
                    });
                }
            }
            return result;
        }
    }

    You can register the IHttpActionInvoker with your own IoC to resolve the MyApiControllerActionInvoker, or add it in the Global.asax:

    GlobalConfiguration.Configuration.Services.Remove(typeof(IHttpActionInvoker),
        GlobalConfiguration.Configuration.Services.GetActionInvoker());
    GlobalConfiguration.Configuration.Services.Add(typeof(IHttpActionInvoker),
        new MyApiControllerActionInvoker());

    How about using a Message Handler for exception handling?

    By creating a custom Message Handler, we can handle errors after the ApiController and the ExceptionFilterAttribute are invoked and in that way create a global exception handler, BUT the only thing we can look at is the HttpResponseMessage; we can't add a try catch around the Message Handler's SendAsync method. The last Message Handler used in the Web API pipe-line is the HttpControllerDispatcher, and this Message Handler is added to the HttpServer at an early stage. The HttpControllerDispatcher will use the IHttpActionInvoker to invoke the ApiController method. The HttpControllerDispatcher has a try catch that will turn ALL exceptions into a HttpResponseMessage, and that is the reason why a try catch around SendAsync in a custom Message Handler won't help us. If we create our own host for the Web API, we could create our own custom HttpControllerDispatcher and add our exception handler to that class, but that would be a little tricky, though possible. We can, in a Message Handler, take a look at the HttpResponseMessage's IsSuccessStatusCode property to see if the request has failed, and if we throw HttpResponseException in our ApiControllers, we could give it a Reason Phrase and use that to identify business exceptions or critical exceptions. I wouldn't add an exception handler into a Message Handler; instead I would use the ExceptionFilterAttribute and register it globally for all ApiControllers. BUT, now to another interesting issue: what will happen if we have a Message Handler that throws an exception? Those exceptions will not be caught and handled by the ExceptionFilterAttribute. I found a bug in my previous blog post about "Log message Request and Response in ASP.NET WebAPI", in the MessageHandler I use to log incoming and outgoing messages.
    Here is the code from my blog before I fixed the bug:

    public abstract class MessageHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
        {
            var corrId = string.Format("{0}{1}", DateTime.Now.Ticks, Thread.CurrentThread.ManagedThreadId);
            var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

            var requestMessage = await request.Content.ReadAsByteArrayAsync();

            await IncommingMessageAsync(corrId, requestInfo, requestMessage);

            var response = await base.SendAsync(request, cancellationToken);

            var responseMessage = await response.Content.ReadAsByteArrayAsync();

            await OutgoingMessageAsync(corrId, requestInfo, responseMessage);

            return response;
        }

        protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
        protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
    }

    If an ApiController throws a HttpResponseException, the Content property of the HttpResponseMessage returned from SendAsync will be NULL, so a null reference exception is thrown within the MessageHandler. The yellow screen of death is returned to the client: the content is HTML and the Http status code is 500.
    The bug in the MessageHandler was solved by adding a check against the HttpResponseMessage's IsSuccessStatusCode property:

    public abstract class MessageHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
        {
            var corrId = string.Format("{0}{1}", DateTime.Now.Ticks, Thread.CurrentThread.ManagedThreadId);
            var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

            var requestMessage = await request.Content.ReadAsByteArrayAsync();

            await IncommingMessageAsync(corrId, requestInfo, requestMessage);

            var response = await base.SendAsync(request, cancellationToken);

            byte[] responseMessage;
            if (response.IsSuccessStatusCode)
                responseMessage = await response.Content.ReadAsByteArrayAsync();
            else
                responseMessage = Encoding.UTF8.GetBytes(response.ReasonPhrase);

            await OutgoingMessageAsync(corrId, requestInfo, responseMessage);

            return response;
        }

        protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
        protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
    }

    If we don't handle the exceptions that can occur in a custom Message Handler, we can have a hard time finding the problem causing the exception. The savior in this case is the Global.asax's Application_Error:

    protected void Application_Error()
    {
        var exception = Server.GetLastError();
        Debug.WriteLine(exception);
    }

    I would recommend that you add Application_Error to the Global.asax and log all exceptions, to make sure every kind of exception is handled.

    Summary

    There are different ways we can add exception handling to the Web API: a custom ApiController, an ExceptionFilterAttribute, an IHttpActionInvoker or a Message Handler. The ExceptionFilterAttribute is a good place to add global exception handling; it requires very few modifications, just register it globally for all ApiControllers. Even the IHttpActionInvoker can be used to minimize the modification of files. Adding Application_Error to the Global.asax is a good way to catch all unhandled exceptions that can occur, for example exceptions thrown in a Message Handler.
If you want to know when I have posted a blog post, you can follow me on twitter @fredrikn

    Read the article

  • a php script to get lat/long from google maps using CellID

    - by user296516
    Hi guys, I have found this script that supposedly connects to Google Maps and gets lat/long coordinates based on CID/LAC/MNC/MCC info. But I can't really make it work... where do I input CID/LAC/MNC/MCC?

    <?php
    $data = "\x00\x0e".                         // Function Code?
            "\x00\x00\x00\x00\x00\x00\x00\x00". // Session ID?
            "\x00\x00".                         // Country Code
            "\x00\x00".                         // Client descriptor
            "\x00\x00".                         // Version
            "\x1b".                             // Op Code?
            "\x00\x00\x00\x00".                 // MNC
            "\x00\x00\x00\x00".                 // MCC
            "\x00\x00\x00\x03".
            "\x00\x00".
            "\x00\x00\x00\x00".                 // CID
            "\x00\x00\x00\x00".                 // LAC
            "\x00\x00\x00\x00".                 // MNC
            "\x00\x00\x00\x00".                 // MCC
            "\xff\xff\xff\xff".                 // ??
            "\x00\x00\x00\x00"                  // Rx Level?
    ;
    if ($_REQUEST["myl"] != "") {
        $temp = split(":", $_REQUEST["myl"]);
        $mcc = substr("00000000".dechex($temp[0]),-8);
        $mnc = substr("00000000".dechex($temp[1]),-8);
        $lac = substr("00000000".dechex($temp[2]),-8);
        $cid = substr("00000000".dechex($temp[3]),-8);
    } else {
        $mcc = substr("00000000".$_REQUEST["mcc"],-8);
        $mnc = substr("00000000".$_REQUEST["mnc"],-8);
        $lac = substr("00000000".$_REQUEST["lac"],-8);
        $cid = substr("00000000".$_REQUEST["cid"],-8);
    }
    $init_pos = strlen($data);
    $data[$init_pos - 38] = pack("H*",substr($mnc,0,2));
    $data[$init_pos - 37] = pack("H*",substr($mnc,2,2));
    $data[$init_pos - 36] = pack("H*",substr($mnc,4,2));
    $data[$init_pos - 35] = pack("H*",substr($mnc,6,2));
    $data[$init_pos - 34] = pack("H*",substr($mcc,0,2));
    $data[$init_pos - 33] = pack("H*",substr($mcc,2,2));
    $data[$init_pos - 32] = pack("H*",substr($mcc,4,2));
    $data[$init_pos - 31] = pack("H*",substr($mcc,6,2));
    $data[$init_pos - 24] = pack("H*",substr($cid,0,2));
    $data[$init_pos - 23] = pack("H*",substr($cid,2,2));
    $data[$init_pos - 22] = pack("H*",substr($cid,4,2));
    $data[$init_pos - 21] = pack("H*",substr($cid,6,2));
    $data[$init_pos - 20] = pack("H*",substr($lac,0,2));
    $data[$init_pos - 19] = pack("H*",substr($lac,2,2));
    $data[$init_pos - 18] = pack("H*",substr($lac,4,2));
    $data[$init_pos - 17] = pack("H*",substr($lac,6,2));
    $data[$init_pos - 16] = pack("H*",substr($mnc,0,2));
    $data[$init_pos - 15] = pack("H*",substr($mnc,2,2));
    $data[$init_pos - 14] = pack("H*",substr($mnc,4,2));
    $data[$init_pos - 13] = pack("H*",substr($mnc,6,2));
    $data[$init_pos - 12] = pack("H*",substr($mcc,0,2));
    $data[$init_pos - 11] = pack("H*",substr($mcc,2,2));
    $data[$init_pos - 10] = pack("H*",substr($mcc,4,2));
    $data[$init_pos - 9]  = pack("H*",substr($mcc,6,2));
    if ((hexdec($cid) > 0xffff) && ($mcc != "00000000") && ($mnc != "00000000")) {
        $data[$init_pos - 27] = chr(5);
    } else {
        $data[$init_pos - 24] = chr(0);
        $data[$init_pos - 23] = chr(0);
    }
    $context = array (
        'http' => array (
            'method' => 'POST',
            'header' => "Content-type: application/binary\r\n" .
                        "Content-Length: " . strlen($data) . "\r\n",
            'content' => $data
        )
    );
    $xcontext = stream_context_create($context);
    $str = file_get_contents("http://www.google.com/glm/mmap", FALSE, $xcontext);
    if (strlen($str) > 10) {
        $lat_tmp = unpack("l", $str[10].$str[9].$str[8].$str[7]);
        $lat = $lat_tmp[1]/1000000;
        $lon_tmp = unpack("l", $str[14].$str[13].$str[12].$str[11]);
        $lon = $lon_tmp[1]/1000000;
        echo "Lat=$lat <br> Lon=$lon";
    } else {
        echo "Not found!";
    }
    ?>

    Also I found this one, http://code.google.com/intl/ru/apis/gears/geolocation_network_protocol.html , but it isn't PHP.
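    (A note on usage, from reading the script itself rather than the original post: the values are supplied as HTTP request parameters, since the script reads them from $_REQUEST. So you would call it either as script.php?mcc=...&mnc=...&lac=...&cid=... with decimal values, or with a single colon-separated parameter, script.php?myl=mcc:mnc:lac:cid. The Gears page linked above documents a JSON-based alternative to this undocumented binary protocol.)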

    Read the article

  • Creating Descriptive Flex Field (DFF) Bean in OAF

    - by Manoj Madhusoodanan
    In this blog I will explain how to add a custom DFF in a custom OAF page.I am using XXCUST_DFF_DEMO table to store the DFF values.Also I am using custom DFF named XXCUST_PERSON_DFF.  Following steps needs to be performed to create this solution. 1) Register the custom table in Oracle Application2) Register the DFF3) Define the segments of DFF4) Create BC4J components for OAF and OA Page which holds the DFF I will explain the steps in detail below. Register the custom table in Oracle Application I am using custom DFF here so I have to register the custom table which I am going to capture the values.Please click here to see the table script. I am using the AD_DD package to register the custom table.Please click here to see the table registration script. Please verify the table has registered successfully. Navigation: Application Developer > Application > Database > Table Table has registered successfully. Register the DFF Next step is to register the DFF. Navigate to Application Developer > Flex Field > Descriptive > Register. Give details as below. Click on Reference Fields and set the Reference Field as ATTRIBUTE_CATEGORY. Click on the Columns button to verify that the columns ATTRIBUTE_CATEGORY,ATTRIBUTE1 .... ATTRIBUTE30 are enabled. DFF has registered successfully. Define the segments of DFF Here I am going to define the segments of the DFF.Navigate to Application Developer > Flex Field > Descriptive > Segments.Query for "XXCUST - Person DFF". Uncheck "Freeze Flexfield Definition". In my DFF the reference field I want to display a value set which has values "Permanent" and "Contractor". So define a value set  XXCUST_EMPLOYMENT_TYPE. Navigation: Application Developer > Flex Field > Descriptive > Validation > Sets After that assign the values to above created value sets. Navigation: Application Developer > Flex Field > Descriptive > Validation > Values Assign XXCUST_EMPLOYMENT_TYPE to Context Field Valueset. Setup the Context Field Values based on below table. Context Code Segments Global Data Elements Phone Number Email Fax Contractor Manager Extension Number CSP Name Permanent Extension Number Access Card Number Phone Number,Email and Fax displays always.When user choose Context Value as "Contractor" Manager Extension Number and CSP Name will show.In case of "Permanent" Extension Number and Access Card Number will show.  Assign value set also as follows. For Global Data Elements following are the segments. For "Contractor" following are the segments. For "Permanent" following are the segments. Check the "Freeze Flexfield Definition" check box and save.Standard concurrent program "Flexfield View Generator" will generate XXCUST_DFF_DEMO_DFV view which we mentioned in the DFF registration step.  Now the DFF has created successfully and ready to use. Create BC4J components for OAF and OA Page which holds the DFF Create the BC4J components ( EO,VO and AM) appropriately.Create the page based on the created VO.For DFF create an item of type "flex" with following property.  Note: You cannot create a flex item directly under a messageComponentLayout region, but you can create a messageLayout region under the messageComponentLayout region and add the flex item under the messageLayout region. In the Segment List property give the segment names which you want to display.The syntax of this is Global Data Elements|SEGMENT 1|...|SEGMENT N||[Context Code1]|SEGMENT 1|...|SEGMENT N||[Context Code2]|SEGMENT 1|...|SEGMENT N||... 
Eg: Global Data Elements|Phone Number|Email|Fax||Contractor|Manager Extension Number|CSP Name||Permanent|Extension Number|Access Card Number. When you change the context value, the corresponding segments are displayed automatically via PPR in the page. You can attach a partial action to the DFF bean programmatically so that you can identify actions related to the DFF; pageContext.getParameter(EVENT_PARAM) will return "FLEX_CONTEXT_CHANGEDPersonDFF" when you change the DFF context. The page is now ready to test. When you choose "Contractor" you can see the following output, and when you choose "Permanent" you can see the corresponding output. Give proper values and press Apply; you can see the values populated in the table.

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side-by-side scenarios, there is no way around MSI. You can create MSI files with a Visual Studio Setup project, which is severely limited, or you can use the Windows Installer XML (WiX) toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning there. It is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you author MSI files, but I am not sure how good they are. Given enough pain with existing solutions you can also learn the MSI APIs and create your own packaging solution. If I were you, I would use either a commercial visual tool for easy deployments or the free Windows Installer XML toolset. Once you know the WiX schema you can create well-formed WiX XML files easily with any editor, and then "compile" your MSI package from the wxs files. Recently I had the "pleasure" to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic, but after several months of digging into arcane MSI issues I can safely say that there should be an easier way to install and update files than there is today. I am not alone with this statement, as John Robbins (creator of the cool tool Paraffin) states: ".. It's a brittle and scary API in Windows …". To help other people struggling with installation issues, I present the advice I (and others) found useful and what will happen if you ignore it. What is an MSI file? An MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install, and MSI controls how to get your stuff onto or off your machine. Your "stuff" usually consists of files, registry keys, shortcuts and environment variables. Therefore the most important tables are the File, Registry, Environment and Shortcut tables, which define what will be un/installed. The key to mastering MSI is that every resource (file, registry key, …) is associated with an MSI component. The actual payload consists of compressed files in the CAB format, which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca, a free MSI editor provided by Microsoft. There is also another free editor called Super Orca which supports diffs between MSIs and does not lock the MSI files. But since Orca comes with a shell extension, I tend to use only Orca because it is so easy to right-click on an MSI file and open it with this tool. How Do I Install It? Double-click it. This works for fresh installations as well as major upgrades. Updates need to be installed via the command line: msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus   This tells the installer to reinstall all already-installed features (new features will NOT be installed). The REINSTALLMODE letters force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things go really wrong and you want to overwrite everything unconditionally, use REINSTALLMODE=vamus. How To Enable MSI Logs? You can download an MSI from Microsoft which installs some registry keys to enable full MSI logging.
The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add to your msiexec command line the option msiexec …. /l*vx <LogFileName> Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x, which is documented in the msiexec help, but I still find this behavior unintuitive. What are MSI components? The whole MSI logic is bound to the concept of MSI components. Nearly every MSI table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component tables of an example MSI. The Feature table basically defines the feature hierarchy. To find out what belongs to a feature you need to look at the FeatureComponents table, where for each feature the components are listed which will be installed when that feature is installed. The MSI components are defined in the Component table. This table has the component name as its first column and the component id, which is a GUID, as its second column. All resources you want to install belong to an MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look e.g. at the File table, you see that every file belongs to a component, which is true for all other tables which install resources. The Component table is the glue between all other tables which contain the resources you want to install. So far, so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you violated an MSI component rule in one way or another. When you install a feature, the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature, the refcount of its components will drop by one. Interesting things happen when the component reference count reaches zero: then all associated resources will be deleted. That looks like a reasonable thing, and it is. What makes it complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer … Rule 16: Follow Component Rules. Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package: Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies. Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components may, however, share a key path folder. Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies. Do not create components containing resources that will need to be installed into more than one directory on the user’s system.
The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories. Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component. Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut. … And these rules do not even talk about component ids, update packages and upgrades, which you need to understand as well. Let's suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both install the same file. What will happen when you uninstall MSI2? Hm, the file should stay there. But the component names are different. Yes and yes. But MSI does not use the component name as the key for the refcount. Instead, the ComponentId column of the Component table, which contains a GUID, is used as the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs, the component with the Id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one, which drops to zero, just as expected, when we uninstall the last MSI. Then the file, which was the same for both MSIs, is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI manages components, not the resources you installed. The resources associated with a component are deleted if and only if the refcount of the component reaches zero. The dependencies between features, components and resources can be described as relations (m and k are numbers >= 1, n can be 0). Inside an MSI the following relations are valid: Feature 1 –> n Components; Component 1 –> m Features; Component 1 –> k Resources. These relations express that one feature can install several components and that features can share components between them. Every (meaningful) component will install at least one resource, which means that its name (primary key, to stay in database speak) occurs as a value in the Component column of some other table which installs a resource. Let's make it clear with an example. We want to install, with the feature MainFeature, some files, a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables: MainFeature installs Comp1 (registry key RegistryKey1), Comp2 (File.txt) and Comp3 (File2.txt plus a shortcut to File2.txt). It is illegal for the same resource to be part of more than one component, since this would break the refcount mechanism. Let's illustrate this: Feature1 has ComponentId {1000-…} with resource File1.txt (reference count 1); Feature2 has ComponentId {2000-…} with the same resource File1.txt (reference count 1). The installation part works well, but what happens when you uninstall Feature2? Component {2000-…} gets a refcount of zero, whereupon MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component {1000-…} with a refcount of one, which means that the file was deleted too early. You have just ruined your installation. To fix it you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts. The vigilant reader might have noticed that there is more in the Component table.
Besides its name and GUID, it also has an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect whether the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed, MSI checks whether the registry key or file referenced by the KeyPath property exists. If it does, MSI assumes the component is installed and all is fine; if it does not, MSI assumes it was already uninstalled (which can lead to problems during uninstall). Why is this detail so important? Let's put all files into one component. The KeyPath would then be one of the files of your component, used to check whether it is installed or not. When your installation becomes corrupt because a file was deleted, you cannot repair it with the Repair button under Add/Remove Programs, because MSI checks the component integrity via the resource referenced by its KeyPath. As long as you did not delete the KeyPath file, MSI thinks all resources of your component are installed and never executes any repair action. You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super-component which contains all files. The only way out, and therefore best practice, is to assign every resource you want to install its own component. This ensures painless updates and repairs, and you have much less effort when removing specific files during an upgrade. In effect you get this best-practice relation: Feature 1 –> n Components; Component 1 –> 1 Resource. MSI Component Rules. Rule 1 – One component per resource. Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component, which never changes between versions as long as the install location is the same. Penalty: If you add more than one resource to a component, you will break the repair capability of MSI, because the KeyPath is used to check whether the component needs repair. Example: MSI 1.0 has ComponentId {1000} with File1-5; MSI 2.0 has ComponentId {2000} with File2-5. You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files, you create a new component and add them there. MSI will delete all files when the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. OK, that works if your upgrade uninstalls the old MSI first. This will cause the refcount of all previously installed components to reach zero, which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade: first install your new MSI and then remove the old one. If you choose this upgrade path, then you will lose File1-5 after your upgrade, and not only File1 as intended by your new component design. Rule 2 – Only add, never remove resources from a component. If you followed Rule 1 you will not need Rule 2. You can add more resources to one component in a patch. That is OK. But you can never remove anything from it. There are tricky ways around that, but I do not want to encourage bad component design. Penalty: Let's assume you have two MSI files which each install one file under the same component: MSI1 has ComponentId {1000} with File1.txt; MSI2 has ComponentId {1000} with File2.txt. When you install and uninstall both MSIs, you will end up with an installation where either File1 or File2 is left behind. Why?
It seems that MSI does not store the resources associated with each component in its internal database. Instead, Windows will simply query the MSI that is currently being uninstalled for all resources belonging to this component. Since it will find only one file and not two, it will only uninstall one file. That is the main reason why you can never remove resources from a component! Rule 3 – Never remove a component from an update MSI. This is the same as accidentally changing the GUID of a component in your new update package. The resulting update package will not contain all components from the previously installed package. Penalty: When you remove a component from a feature, MSI will set the feature state during the update to Advertised and log a warning message into its log file if you have enabled MSI logging: SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx, but is not present in the Component table.  Removal of components from a feature is not supported! MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported Advertised means that MSI treats all components of this feature as not installed. As a consequence, during uninstall nothing will be removed, since it is not installed! This is not only bad because uninstall no longer works; this feature will also not get the required patches. All other features which have followed the component versioning rules for update packages will be updated, but the one faulty feature will not. This results in very hard-to-find bugs where an update was only partially successful. Things got better with Windows Installer 4.5, but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your update msiexec call, which will abort the installation if you violate this rule.
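As a closing illustration, the Component table that these rules revolve around can also be inspected programmatically rather than in Orca. Below is a minimal C# sketch, under the assumptions that the DTF assembly Microsoft.Deployment.WindowsInstaller shipping with the WiX toolset is referenced and that "Setup.msi" stands in for your package:

using System;
using Microsoft.Deployment.WindowsInstaller;

class ComponentDump
{
    static void Main()
    {
        //open the package read-only and walk the Component table
        using (Database db = new Database("Setup.msi", DatabaseOpenMode.ReadOnly))
        using (View view = db.OpenView("SELECT `Component`, `ComponentId`, `KeyPath` FROM `Component`"))
        {
            view.Execute();

            Record record;

            while ((record = view.Fetch()) != null)
            {
                using (record)
                {
                    //name, GUID and key path: enough to spot duplicated
                    //ComponentIds or shared key paths across your packages
                    Console.WriteLine("{0} {1} {2}",
                        record.GetString(1), record.GetString(2), record.GetString(3));
                }
            }
        }
    }
}

Dumping this table for two packages side by side makes violations of the rules above, such as two different components sharing a ComponentId, easy to spot before msiexec does.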

    Read the article

  • Skechers Leverages Oracle Applications, Business Intelligence and On Demand Offerings to Drive Long-Term Growth

    - by user801960
    This month Oracle Retail in the USA announced that Skechers - a world leading lifestyle footwear retailer - would be adopting several Oracle Retail products as part of their global growth strategy and to maximise business efficiency.  While based primarily in the USA, Skechers is a respected retailer across the world and has been an Oracle customer since 1997.  The key information about the announcement is below.  To find out more about Skechers visit their website: http://www.skechers.com/  Skechers U.S.A. Inc., an award-winning global leader in the lifestyle footwear industry, has upgraded and expanded its Oracle® Applications investment, implemented Oracle Database and moved to Oracle On Demand, Oracle’s premier cloud service to support rapid growth across its retail and wholesale channels. The new business information systems are part of a larger initiative for the billion-dollar-plus footwear company to fuel growth, reduce total cost of ownership and enable the business to respond faster to market opportunities. With more than 3,000 styles of shoes to design, develop and market, Skechers upgraded to Oracle’s PeopleSoft Enterprise Financial Management and PeopleSoft Supply Chain Management to increase operational efficiencies and improve controls by establishing an integrated, industry-specific platform. An Oracle customer since 1997, Skechers implemented PeopleSoft Enterprise Real Estate Management to meet the rapid growth of its retail stores worldwide. The company is the first customer to go live on the Real Estate Management module and worked closely with Oracle to provide development insight. Skechers also implemented Oracle Fusion Governance, Risk, and Compliance applications. This deployment enabled the company to leverage its existing corporate governance and compliance efforts throughout the global enterprise and more effectively manage the audit processes across multiple business units, processes and systems while reducing audit costs. Next, Skechers leveraged Oracle Financial Analytics, a pre-built Oracle Business Intelligence Application and PeopleSoft Enterprise Project Costing and PeopleSoft Enterprise Contracts to develop a custom Royalty Management dashboard, providing managers with better financial visibility to the company’s licensing contracts. The company switched to Oracle Database and moved database hosting and management to Oracle On Demand to reduce maintenance, implementation and system administration costs. As a result, Skechers is also achieving a better response time and is delivering a higher level of 24x7 support. OSI Consulting, a Platinum partner in Oracle PartnerNetwork (OPN), provided implementation and integration services to Skechers.   To view the full announcement please click here

    Read the article

  • Best way of storing voxels

    - by opiop65
This should be a pretty easy question to answer, and I'm not looking for how to store the voxels data-wise; I'm looking for the theory. Currently, my voxel engine has no global list of tiles. Each chunk has its own list, and it's hard to do things like collision detection or anything else that may use tiles outside of the chunk itself. I recently worked on procedural terrain with a non-voxel engine. Now I want to start using heightmaps in my voxel engine. But I have a problem: if each chunk has its own list, then it will be pretty difficult to implement the heightmaps. My real question is: should I store a global list of voxels independent of the chunks? Then I could easily perform collision detection and terrain generation. However, it would probably be harder to do things such as frustum culling. So how should I store them?
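One common middle ground, sketched below in C# (the class names and chunk size are assumptions for illustration, not taken from the question): keep each chunk as the unit of storage and of frustum culling, but add a World facade that maps global voxel coordinates to the owning chunk and its local coordinates. Collision detection and heightmap generation then address voxels by world coordinates and never care about chunk borders.

using System;
using System.Collections.Generic;

public class Chunk
{
    public const int Size = 16; //hypothetical chunk edge length
    private readonly byte[,,] voxels = new byte[Size, Size, Size];

    public byte Get(int x, int y, int z) { return voxels[x, y, z]; }
    public void Set(int x, int y, int z, byte v) { voxels[x, y, z] = v; }
}

public class World
{
    //chunks stay the unit of storage and rendering; they are only indexed here
    private readonly Dictionary<Tuple<int, int, int>, Chunk> chunks =
        new Dictionary<Tuple<int, int, int>, Chunk>();

    //floor division, so negative world coordinates map to the correct chunk
    private static int FloorDiv(int a, int size)
    {
        return (a >= 0) ? (a / size) : (((a + 1) / size) - 1);
    }

    public byte GetVoxel(int x, int y, int z)
    {
        int cx = FloorDiv(x, Chunk.Size);
        int cy = FloorDiv(y, Chunk.Size);
        int cz = FloorDiv(z, Chunk.Size);

        Chunk chunk;

        if (!chunks.TryGetValue(Tuple.Create(cx, cy, cz), out chunk))
        {
            return 0; //treat unloaded space as air
        }

        return chunk.Get(x - cx * Chunk.Size, y - cy * Chunk.Size, z - cz * Chunk.Size);
    }
}

With this shape, a heightmap generator can call World.GetVoxel (or a matching SetVoxel) with raw world coordinates, while the renderer keeps culling whole chunks against the frustum.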

    Read the article

  • How to install older ruby version on ubuntu Precise Pangolin?

    - by Codestudent
I got a present: older Rails tutorials that need an old Ruby version. I tried to install ruby 1.8 with the package manager, but I still got problems with the tutorial example code. Next I tried rvm to install the old Ruby version. Unfortunately I got an error and I do not know what to do. I searched the internet; many people have no problems with rvm. rvm use *ERROR: Branch origin/ruby_1_8_4 not found.* and *ERROR: Error running 'GEM_PATH="/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4:/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4@global:/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4:/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4@global" GEM_HOME="/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4" "/usr/share/ruby-rvm/rubies/ruby-1.8.4-tv1_8_4/bin/ruby" "/usr/share/ruby-rvm/src/rubygems-1.3.7/setup.rb"', please read /usr/share/ruby-rvm/log/ruby-1.8.4-tv1_8_4/rubygems.install.log* Please give me a hint.

    Read the article

  • How can unity-panel-service be disabled?

    - by Amos Annoy
    From the unity-panel-service manpages: DESCRIPTION The unity-panel-service program is normally started automatically by the Unity shell (which gets started as a compiz module) and is used to draw panels which can then be used for the global menu, or to hold indicators. How can the unity-panel-service be non-automatically started abnormally? In other words, how is it arbitrarily manually started and/or stopped? The manpage implication is that this can be done without stopping the Unity shell. This answer seems promising: Is it possible to restart the unity panel without restarting compiz? but ... not. The process can be killed from System Monitor but it restarts automatically. references: How can menu bars that require a right click be activated like Ubuntu versions <10.10? How do I disable the global application menu?

    Read the article

  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
If the Digital Agenda was a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with astringent, dry and puckering mouth feel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now". The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope the EU's outdated standardization system is finally headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC in the Information and Communications Technology (ICT) sector. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to those of the European Standards Organizations. The Digital Agenda rightly states: "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, have a poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness. Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010 published on Monday 17 May. It is ok to single out the ICT sector. It simply is the most important sector right now as it fuels growth in all other sectors. Let's not wait for the entire standardization package, which may take another few years. Europe does not have time. The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting" by 2011, in the Horizontal Guidelines now out for public consultation by DG COMP, and to some extent in DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs, and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already, and I have commented on the Commission's own interoperability elsewhere, with mixed luck. :( Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I mean in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased".
The missed opportunity (in this case not following up the push from Member States to better define open standards based interoperability) is a casualty of the chaos that ensued in the European Wild West (and I mean that in the most endearing sense, and my excuses beforehand to actors who possibly justifiably cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that, to the EU, excludes most standards used by the IT industry worldwide. So, while it, for a moment, meant "weapon down", it will not lead to lasting peace. The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations, which called upon the Commission to help them with this implementation by recognizing and further defining open standards based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short. Generally, I think the EU focus now should be "from policy to practice", and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is a need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe re-take global leadership on openness, public sector reform, and economic growth: A strong European software strategy centred around open standards based interoperability by 2011. An ambitious new eCommission strategy for 2011-15 focused on migration to open standards by 2015. Aligning the IT portfolio across the Commission into one Digital Agenda DG by 2012. Focusing all best practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running). Prioritizing public sector needs in global standardization over European standardization by 2014.

    Read the article

  • Assembly Resources Expression Builder

    - by João Angelo
In ASP.NET you can tackle the internationalization requirement by taking advantage of the native support for local and global resources used with the ResourceExpressionBuilder. But with this approach you cannot access public resources defined in external assemblies referenced by your Web application. However, since you can extend the .NET resource provider mechanism and create new expression builders, you can work around this limitation and use external resources in your ASPX pages much like you use local or global resources. Finally, if you are thinking, okay, this is all very nice, but where's the code I can use? Well, it was too much to publish directly here, so download it from Helpers.Web@Codeplex, where you will also find a sample on how to configure and use it.
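For illustration, here is a minimal sketch of such an expression builder in C#. It is an assumption-laden example, not the actual Helpers.Web implementation: the prefix AsmRes, the expression format "AssemblyName|ResourceBaseName, Key" and all class names are hypothetical.

using System;
using System.CodeDom;
using System.Reflection;
using System.Resources;
using System.Web.Compilation;
using System.Web.UI;

//usage in a page, once registered under the "AsmRes" prefix (both hypothetical):
//<asp:Label runat="server" Text="<%$ AsmRes: MyCompany.Resources|Strings, WelcomeKey %>" />
public class AssemblyResourceExpressionBuilder : ExpressionBuilder
{
    public override Boolean SupportsEvaluate
    {
        get { return (true); }
    }

    //called for no-compile pages
    public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
    {
        return (GetResource(entry.Expression));
    }

    //called for compiled pages: emit code that calls back into GetResource
    public override CodeExpression GetCodeExpression(BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
    {
        return (new CodeMethodInvokeExpression(
            new CodeTypeReferenceExpression(typeof(AssemblyResourceExpressionBuilder)),
            "GetResource",
            new CodePrimitiveExpression(entry.Expression)));
    }

    public static Object GetResource(String expression)
    {
        //assumed expression format: "AssemblyName|ResourceBaseName, Key"
        String[] parts = expression.Split(',');
        String[] source = parts[0].Split('|');
        Assembly assembly = Assembly.Load(source[0].Trim());
        ResourceManager manager = new ResourceManager(source[1].Trim(), assembly);

        return (manager.GetString(parts[1].Trim()));
    }
}

The builder would then be registered in the expressionBuilders element of the compilation section in Web.config with expressionPrefix="AsmRes", so that <%$ AsmRes: ... %> expressions resolve through it.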

    Read the article

  • April 18: Learn about Oracle Hyperion Data Relationship Management

    - by Theresa Hickman
    Do you have multiple charts of accounts on different application instances? Would you like an easy way to synchronize your charts of accounts across instances? If you answered yes, then please join us in an informal reference call with Johnson Controls who were able to synchronize their charts of accounts across 5 HFM (Hyperion Financial Management) instances using Hyperion Data Relationship Management (DRM). Johnson Controls is a global technology and industrial leader with 162,000 employees, serving customers in more than 150 countries. This call will include a brief overview of Johnson Controls and their solution followed by a candid discussion and an open question and answer session. When: April 18, 2012 Time: 8:00 am PST Duration: 1 Hour Speaker: Raymond Chontos, HFM Application Manager Global Financial Systems Click here to register.

    Read the article

  • Oracle VADs (Value Added Distributors) in Portugal

    - by Paulo Folgado
    With the recent incorporation of Sun into Oracle, and the consequent inclusion in its reseller channel of the hardware distributors (until then designated by Sun as CDPs - Channel Development Providers), Oracle took the opportunity to restructure its distribution channel at a global level. This restructuring aimed to achieve several objectives: standardize commercial terms and processes between the newly incorporated Sun CDPs and the already existing Oracle VADs; reduce the total number of VADs globally; give preference to VADs with international operations over purely local, single-country operations; and grant each of the selected VADs distribution of all Oracle product lines, including software and hardware. As a result of this restructuring, we are pleased to announce that Oracle Portugal now operates with the following two VADs. Each of these VADs now distributes, as mentioned above, both the software and hardware product lines. For more details about the two companies and their respective contacts, please see: http://blogs.oracle.com/opnportugal/vad/vad.html. We are confident that this restructuring will contribute to an even more dynamic Oracle Portugal partner ecosystem.

    Read the article

  • ASP.NET Web Forms Extensibility: Handler Factories

    - by Ricardo Peres
    A handler factory is the class that implements IHttpHandlerFactory and is responsible for instantiating a handler (IHttpHandler) that will process the current request. This is true for all kinds of web requests, whether they are for ASPX pages, ASMX/SVC web services, ASHX/AXD handlers, or any other kind of file. Handler factories are also used to restrict access to certain file types, such as Config, Csproj, etc. Handler factories are registered in the global Web.config file, normally located at %WINDIR%\Microsoft.NET\Framework<x64>\vXXXX\Config, for a given path and request type (GET, POST, HEAD, etc). This goes in the <httpHandlers> section. You would create a custom handler factory for a number of reasons; let me list just two: a centralized place for using dependency injection, and a centralized place for invoking custom methods or performing some kind of validation on all pages. Let's see an example using Unity for injecting dependencies into a page. Suppose we have this in Global.asax.cs:

public class Global : HttpApplication
{
    internal static readonly IUnityContainer Unity = new UnityContainer();

    void Application_Start(Object sender, EventArgs e)
    {
        Unity.RegisterType<IFunctionality, ConcreteFunctionality>();
    }
}

We instantiate Unity and register a concrete implementation for an interface; this could/should probably go in the Web.config file. Forget about its actual definition, it's not important. Then, we create a custom handler factory:

public class UnityPageHandlerFactory : PageHandlerFactory
{
    public override IHttpHandler GetHandler(HttpContext context, String requestType, String virtualPath, String path)
    {
        IHttpHandler handler = base.GetHandler(context, requestType, virtualPath, path);

        //one scenario: inject dependencies
        Global.Unity.BuildUp(handler.GetType(), handler, String.Empty);

        return (handler);
    }
}

It inherits from PageHandlerFactory, which is .NET's included factory for building regular ASPX pages. We override the GetHandler method and issue a call to the BuildUp method, which will inject required dependencies, if any exist. An example page with dependencies might be:

public class SomePage : Page
{
    [Dependency]
    public IFunctionality Functionality
    {
        get;
        set;
    }
}

Notice the DependencyAttribute: it is used by Unity to identify properties that require dependency injection. When BuildUp is called, the Functionality property (or any other properties with the DependencyAttribute attribute) will receive the concrete implementation associated with its type, as registered with Unity. Another example: checking a page for authorization.
Let's define an interface first:

public interface IRestricted
{
    Boolean Check(HttpContext ctx);
}

And a page implementing that interface:

public class RestrictedPage : Page, IRestricted
{
    public Boolean Check(HttpContext ctx)
    {
        //check the context and return a value
        return ...;
    }
}

For this, we would use a handler factory such as this:

public class RestrictedPageHandlerFactory : PageHandlerFactory
{
    private static readonly IHttpHandler forbidden = new UnauthorizedHandler();

    public override IHttpHandler GetHandler(HttpContext context, String requestType, String virtualPath, String path)
    {
        IHttpHandler handler = base.GetHandler(context, requestType, virtualPath, path);

        if (handler is IRestricted)
        {
            if ((handler as IRestricted).Check(context) == false)
            {
                return (forbidden);
            }
        }

        return (handler);
    }
}

public class UnauthorizedHandler : IHttpHandler
{
    #region IHttpHandler Members

    public Boolean IsReusable
    {
        get { return (true); }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.StatusCode = (Int32) HttpStatusCode.Unauthorized;
        context.Response.ContentType = "text/plain";
        context.Response.Write(context.Response.Status);
        context.Response.Flush();
        context.Response.Close();
        context.ApplicationInstance.CompleteRequest();
    }

    #endregion
}

The UnauthorizedHandler is an example of an IHttpHandler that merely returns an error code to the client, but does not cause redirection to the login page; it is included merely as an example. One thing we must keep in mind is that there can be only one handler factory registered for a given path/request type (verb) tuple. A typical registration would be:

<httpHandlers>
    <remove path="*.aspx" verb="*"/>
    <add path="*.aspx" verb="*" type="MyNamespace.MyHandlerFactory, MyAssembly"/>
</httpHandlers>

First we remove the previous registration for ASPX files, and then we register our own. And that's it. A very useful mechanism which I use lots of times.

    Read the article

  • Join us! Oracle Manufacturing Industries Forum, Chicago Thurs. Nov.14'13

    - by Stephen Slade
    The 6th Annual Oracle Manufacturing Industries Forum will take place in Chicago on Thursday, Nov. 14, 2013. Executives from successful global manufacturing companies will address key themes including: Value Chain Transformation, Owning the Customer Service Experience, Sales and Operations Planning in a Global Enterprise Supply Chain, and Modernizing the Manufacturing Enterprise. Join us for what we expect to be one of the industry's most informative and provocative executive events of the year. Event Objectives: Create an environment where executives can interact with their peers to discuss current issues; Brainstorm and discuss how attendees can use technology to transform their organizations; Share best practices and learn through the experiences of industry peers. Where: Westin Chicago River North, 320 North Dearborn St, Chicago, IL 60654. Evite: http://www.oracle.com/us/dm/229048-nafm13049989mpp074-se-2021171.html Register: http://eventreg.oracle.com/profile/web/index.cfm?PKWebId=0x25005591a&source=evite Partner Sponsors Include: CSS, Fujitsu, Inspirage, Hitachi Consulting, Lucidity and Rolta. Register Now!

    Read the article

  • Kauffman Foundation Selects Stackify to Present at Startup@Kauffman Demo Day

    - by Matt Watson
    Stackify will join fellow Kansas City startups to kick off Global Entrepreneurship Week. On Monday, November 12, Stackify, a provider of tools that improve developers’ ability to support, manage and monitor their enterprise applications, will pitch its technology at the Startup@Kauffman Demo Day in Kansas City, Mo. Hosted by the Ewing Marion Kauffman Foundation, the event will mark the start of Global Entrepreneurship Week, the world’s largest celebration of innovators and job creators who launch startups. Stackify was selected through a competitive process for a six-minute opportunity to pitch its new technology to investors at Demo Day. In his pitch, Stackify’s founder, Matt Watson, will discuss the current challenges DevOps teams face and reveal how Stackify is reinventing the way software developers provide application support. In October, Stackify had successful appearances at two similar startup events. At Tech Cocktail’s Kansas City Mixer, the company was named “Hottest Kansas City Startup,” and it won free hosting service after pitching its solution at St. Louis, Mo.’s Startup Connection. “With less than a month until our public launch, events like Demo Day are giving Stackify the support and positioning we need to change the development community,” said Watson. “As a serial technology entrepreneur, I appreciate the Kauffman Foundation’s support of startup companies like Stackify. We’re thrilled to participate in Demo Day and Global Entrepreneurship Week activities.” Scheduled to publicly launch in early December 2012, Stackify’s platform gives developers insights into their production applications, servers and databases. Stackify finally provides agile developers safe and secure remote access to look at log files, config files, server health and databases. This solution removes the bottleneck from managers and system administrators who, until now, are the only team members with access. Essentially, Stackify enables development teams to spend less time fixing bugs and more time creating products. Currently in beta, Stackify has already been named a “Company to Watch” by Software Development Times, which called the startup “the next big thing.” Developers can register for a free Stackify account on Stackify.com. ### Stackify: Founded in 2012, Stackify is a Kansas City-based software service provider that helps development teams troubleshoot application problems. Currently in beta, Stackify will be publicly available in December 2012, when agile developers will finally be able to provide agile support. The startup has already been recognized by Tech Cocktail as “Hottest Kansas City Startup” and was named a “Company to Watch” by Software Development Times. To learn more, visit http://www.stackify.com and follow @stackify on Twitter.

    Read the article

  • Luxottica Delivers an Elevated Customer Experience

    - by user801960
    Luxottica Group is a global leader in premium, luxury and sports eyewear with nearly 6,250 stores worldwide. The Group’s strong brand portfolio comprises ten house brands including Oakley, Ray-Ban, Persol and Arnette, and 20 licensed brands such as Bulgari, Chanel and Versace. In January at the Oracle Retail Exchange in New York, Luca Del Din, Luxottica Group’s IT Manager – Global Retail Demand and Integration, and Irven Cassio, Digital Experience Director for Luxottica Retail, introduced our REx delegates to their flagship Sunglass Hut store on Fifth Avenue. This store showcase provided the opportunity to explore this fantastic retail space, incorporating the store’s interactive retail concept, the Sunglass Hut Social Sun station. I invite you to hear from Luca and Irven as we explore some of the innovative technologies and concepts that Luxottica deployed in this store and how these deliver an elevated customer experience.

    Read the article

< Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >