Search Results

Search found 752 results on 31 pages for 'specifications'.


  • Start a software company offshore

    - by Mascarpone
    Hello everybody, I own a small, very young, EU-based (Italy) company, and among other things, we sell IT solutions. I have a degree in applied mathematics, and I mainly deal with user interfaces, embedded systems, automation and web applications. You could say that I'm an enlightened entrepreneur because I work only with open source software (OS, IDE, I release under BSD, ... everything is free as in freedom), I give high importance to post-sales services and customer satisfaction, plus I think I'm the best boss someone could desire (LOL), as I have Google in mind when I think about IT workers' rights. But the most beautiful thing is that, although everybody advised us not to use open source, we are quite profitable!!! (for the sixth trimester in a row). Now I offshore most of the work to an Indian company. I divide the work into modules and I outsource the longer or more trivial ones. I spend a lot of time defining the specifications and I leave the hard work to them. Using productivity bonuses, a lot of prototypes and third-party audits, I think that my software has reached a very good quality level. I would like to start my own software development company, in order to improve control over the process and cut costs. Obviously I can't afford the cost of labor in the EU, so I thought about opening a company in Asia. What I need is: 1) Cheap labor - I can afford to give productivity bonuses and higher-than-average wages and stay profitable just because labor is cheap. 2) Many talents - I need a good level of tertiary education, and a good number of graduates, so I can hire junior developers and train and teach them according to my needs and philosophies (e.g.: an open source mindset). 3) Good infrastructure - buildings, transport, internet, ... everything that a company might need. I thought about 3 possible candidates: 1) India - I already work with Indian people, I know that they are reliable and speak good English. Big cities are too expensive, but maybe a small city like Lucknow (http://en.wikipedia.org/wiki/Lucknow) could suit my needs. 2) China - They say it's cheaper than India, but every time I worked with a Chinese company the language was a big barrier. They work hard, are somewhat skilled and cheap, but maybe it's a risky path. Plus I feel a little uncomfortable with their lack of human rights. 3) Philippines - Same as China: cheaper than India, but maybe less educated. Where do you think is the best place to start a software company? Any reading or book to advise? Thank you very much

    Read the article

  • Windows 7 startup MUCH slower after reinstall (on an SSD)

    - by user326639
    I installed Windows 7 Prof 64-bit OEM (Spanish) on my new machine. As I wanted my Windows to be in English, the web shop where I bought the DVD recommended that I download an ISO file with the same Windows version (but in English), burn it to a DVD and install it, and that I should be able to use my registration code. Location of the ISO: http://msft-dnl.digitalrivercontent.net/msvista/pub/X15-65805/X15-65805.iso I've done this and everything works (I have not activated my Windows yet but I expect no problem there). Just one thing: its startup is MUCH slower now! Have a look at my PC specs (bottom). On my first install (Spanish), it was like: - motherboard splash screen -- shows for a second or two - list of found drives -- a few seconds - the text "Windows starting" -- about a second before the dots appear - four colored dots form the Windows logo -- a few seconds after the logo is fully formed it moves on to the login screen. On my second install (English): - motherboard splash screen -- shows for 15 seconds - list of found drives -- a few seconds - the text "Windows starting" -- shows for 40 seconds before the dots appear - four colored dots form the Windows logo -- now it moves on to the login screen about equally fast as before. Once it's up and running it seems to be as responsive as before, although it's possible that I'm not noticing the difference. I did the first install on the virgin SSD drive straight from the box. The second time I let the Windows installation program format the drive first to get rid of the old installation. I noticed that there were two partitions on my SSD: partition 1, 100 MB, "reserved for the system" and partition 2, 111.7 GB. I only formatted the big partition, and I left the system partition untouched. Between the two installs, I didn't open the computer, so everything is connected to the same port. I did not change anything in the BIOS. Has Windows not recognized my SSD as an SSD but as a normal HDD? I suspect that Windows has not done the necessary automatic configuration settings that it should do for SSDs (but that's just a hunch). How do I get my SSD back into its virgin state, as if it came right from the box, so I can go for a 3rd attempt to install Windows? Should I use DISKPART? Other ideas are welcome. Specifications: mobo: Gigabyte GA-Z68X-UD3H-B3 CPU: i7-2600K SSD: OCZ Agility3 2.5" HDD: Samsung Spinpoint F4 mem: Kingston HyperX DIMM 8 GB DDR3-1600
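    For reference, returning the drive to an unpartitioned, factory-like state with DISKPART would look roughly like the session below. This is only a sketch: the disk number shown (1) is hypothetical and must be checked against the output of list disk, and the clean command wipes every partition (including the 100 MB system partition) and all data on the selected disk.

        C:\> diskpart
        DISKPART> list disk      (identify the SSD by its size; do not guess)
        DISKPART> select disk 1  (hypothetical number - select the SSD, not the data drive)
        DISKPART> clean          (removes all partition and boot information from the disk)
        DISKPART> exit

    After that, the Windows installer sees the SSD as a blank drive and recreates the partitions itself.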

    Read the article

  • Confused Why I am getting C1010 error?

    - by bluepixel
    I have three files: Main, slist.h and slist.cpp can be seen at http://forums.devarticles.com/c-c-help-52/confused-why-i-am-getting-c2143-and-c1010-error-259574.html I'm trying to make a program where main reads the list of student names from a file (roster.txt) and inserts all the names in a list in ascending order. This is the full class roster list (notCheckedIN). From here I will read all students who have come to write the exams, each checkin will transfer their name to another list (in ascending order) called present. The final product is notCheckedIN will contain a list of all those students that did not write the exam and present will contain the list of all students who wrote the exam Main File: // Exam.cpp : Defines the entry point for the console application. #include "stdafx.h" #include "iostream" #include "iomanip" #include "fstream" #include "string" #include "slist.h" using namespace std; void OpenFile(ifstream&); void GetClassRoster(SortList&, ifstream&); void InputStuName(SortList&, SortList&); void UpdateList(SortList&, SortList&, string); void Print(SortList&, SortList&); const string END_DATA = "EndData"; int main() { ifstream roster; SortList notCheckedIn; //students present SortList present; //student absent OpenFile(roster); if(!roster) //Make sure file is opened return 1; GetClassRoster(notCheckedIn, roster); //insert the roster list into the notCheckedIn list InputStuName(present, notCheckedIn); Print(present, notCheckedIn); return 0; } void OpenFile(ifstream& roster) //Precondition: roster is pointing to file containing student anmes //Postcondition:IF file does not exist -> exit { string fileName = "roster.txt"; roster.open(fileName.c_str()); if(!roster) cout << "***ERROR CANNOT OPEN FILE :"<< fileName << "***" << endl; } void GetClassRoster(SortList& notCheckedIN, ifstream& roster) //Precondition:roster points to file containing list of student last name // && notCheckedIN is empty //Postcondition:notCheckedIN is filled with the names taken from roster.txt in ascending order { string name; roster >> name; while(roster) { notCheckedIN.Insert(name); roster >> name; } } void InputStuName(SortList& present, SortList& notCheckedIN) //Precondition: present list is empty initially and notCheckedIN list is full //Postcondition: repeated prompting to enter stuName // && notCheckedIN will delete all names found in present // && present will contain names present // && names not found in notCheckedIN will report Error { string stuName; cout << "Enter last name (Enter EndData if none to Enter): "; cin >> stuName; while(stuName!=END_DATA) { UpdateList(present, notCheckedIN, stuName); } } void UpdateList(SortList& present, SortList& notCheckedIN, string stuName) //Precondition:stuName is assigned //Postcondition:IF stuName is present, stuName is inserted in present list // && stuName is removed from the notCheckedIN list // ELSE stuName does not exist { if(notCheckedIN.isPresent(stuName)) { present.Insert(stuName); notCheckedIN.Delete(stuName); } else cout << "NAME IS NOT PRESENT" << endl; } void Print(SortList& present, SortList& notCheckedIN) //Precondition: present and notCheckedIN contains a list of student Names present/not present //Postcondition: content of present and notCheckedIN is printed { cout << "Candidates Present" << endl; present.Print(); cout << "Candidates Absent" << endl; notCheckedIN.Print(); } Header File: //Specification File: slist.h //This file gives the specifications of a list abstract data type //List items inserted will be in order //Class 
SortList, structured type used to represent an ADT using namespace std; const int MAX_LENGTH = 200; typedef string ItemType; //Class Object (class instance) SortList. Variable of class type. class SortList { //Class Member - components of a class, can be either data or functions public: //Constructor //Post-condition: Empty list is created SortList(); //Const member function. Compiler error occurs if any statement within tries to modify a private data bool isEmpty() const; //Post-condition: == true if list is empty // == false if list is not empty bool isFull() const; //Post-condition: == true if list is full // == false if list is full int Length() const; //Post-condition: size of list void Insert(ItemType item); //Precondition: NOT isFull() && item is assigned //Postcondition: item is in list && Length() = Length()@entry + 1 void Delete(ItemType item); //Precondition: NOT isEmpty() && item is assigned //Postcondition: // IF items is in list at entry // first occurance of item in list is removed // && Length() = Length()@entry -1; // ELSE // list is not changed bool isPresent(ItemType item) const; //Precondition: item is assigned //Postcondition: == true if item is present in list // == false if item is not present in list void Print() const; //Postcondition: All component of list have been output private: int length; ItemType data[MAX_LENGTH]; void BinSearch(ItemType, bool&, int&) const; }; Source File: //Implementation File: slist.cpp //This file gives the specifications of a list abstract data type //List items inserted will be in order //Class SortList, structured type used to represent an ADT #include "iostream" #include "slist.h" using namespace std; // int length; // ItemType data[MAX_SIZE]; //Class Object (class instance) SortList. Variable of class type. SortList::SortList() //Constructor //Post-condition: Empty list is created { length=0; } //Const member function. 
Compiler error occurs if any statement within tries to modify a private data bool SortList::isEmpty() const //Post-condition: == true if list is empty // == false if list is not empty { return(length==0); } bool SortList::isFull() const //Post-condition: == true if list is full // == false if list is full { return (length==(MAX_LENGTH-1)); } int SortList::Length() const //Post-condition: size of list { return length; } void SortList::Insert(ItemType item) //Precondition: NOT isFull() && item is assigned //Postcondition: item is in list && Length() = Length()@entry + 1 // && list componenet are in ascending order of value { int index; index = length -1; while(index >=0 && item<data[index]) { data[index+1]=data[index]; index--; } data[index+1]=item; length++; } void SortList::Delete(ItemType item) //Precondition: NOT isEmpty() && item is assigned //Postcondition: // IF items is in list at entry // first occurance of item in list is removed // && Length() = Length()@entry -1; // && list components are in ascending order // ELSE data array is unchanged { bool found; int position; BinSearch(item,found,position); if (found) { for(int index = position; index < length; index++) data[index]=data[index+1]; length--; } } bool SortList::isPresent(ItemType item) const //Precondition: item is assigned && length <= MAX_LENGTH && items are in ascending order //Postcondition: true if item is found in the list // false if item is not found in the list { bool found; int position; BinSearch(item,found,position); return (found); } void SortList::Print() const //Postcondition: All component of list have been output { for(int x= 0; x<length; x++) cout << data[x] << endl; } void SortList::BinSearch(ItemType item, bool found, int position) const //Precondition: item contains item to be found // && item in the list is an ascending order //Postcondition: IF item is in list, position is returned // ELSE item does not exist in the list { int first = 0; int last = length -1; int middle; found = false; while(!found) { middle = (first+last)/2; if(data[middle]<item) first = middle+1; else if (data[middle] > item) last = middle -1; else found = true; } if(found) position = middle; } I cannot get rid of the C1010 error: fatal error C1010: unexpected end of file while looking for precompiled header. Did you forget to add '#include "stdafx.h"' to your source? Is there a way to get rid of this error? When I included "stdafx.h" I received the following 32 errors (which does not make sense to me, because I referred back to my manual on how to use class methods - everything looks A-OK.) Error 1 error C2871: 'std' : a namespace with this name does not exist c:\..\slist.h 6 Error 2 error C2146: syntax error : missing ';' before identifier 'ItemType' c:\..\slist.h 8 Error 3 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\..\slist.h 8 Error 4 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\..\slist.h 8 Error 5 error C2061: syntax error : identifier 'ItemType' c:\..\slist.h 30 Error 6 error C2061: syntax error : identifier 'ItemType' c:\..\slist.h 34 Error 7 error C2061: syntax error : identifier 'ItemType' c:\..\slist.h 43 Error 8 error C2146: syntax error : missing ';' before identifier 'data' c:\..\slist.h 52 Error 9 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\..\slist.h 52 Error 10 error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int c:\..\slist.h 52 Error 11 error C2061: syntax error : identifier 'ItemType' c:\..\slist.h 53 Error 12 error C2146: syntax error : missing ')' before identifier 'item' c:\..\slist.cpp 41 Error 13 error C2761: 'void SortList::Insert(void)' : member function redeclaration not allowed c:\..\slist.cpp 41 Error 14 error C2059: syntax error : ')' c:\..\slist.cpp 41 Error 15 error C2143: syntax error : missing ';' before '{' c:\..\slist.cpp 45 Error 16 error C2447: '{' : missing function header (old-style formal list?) c:\..\slist.cpp 45 Error 17 error C2146: syntax error : missing ')' before identifier 'item' c:\..\slist.cpp 57 Error 18 error C2761: 'void SortList::Delete(void)' : member function redeclaration not allowed c:\..\slist.cpp 57 Error 19 error C2059: syntax error : ')' c:\..\slist.cpp 57 Error 20 error C2143: syntax error : missing ';' before '{' c:\..\slist.cpp 65 Error 21 error C2447: '{' : missing function header (old-style formal list?) c:\..\slist.cpp 65 Error 22 error C2146: syntax error : missing ')' before identifier 'item' c:\..\slist.cpp 79 Error 23 error C2761: 'bool SortList::isPresent(void) const' : member function redeclaration not allowed c:\..\slist.cpp 79 Error 24 error C2059: syntax error : ')' c:\..\slist.cpp 79 Error 25 error C2143: syntax error : missing ';' before '{' c:\..\slist.cpp 83 Error 26 error C2447: '{' : missing function header (old-style formal list?) c:\..\slist.cpp 83 Error 27 error C2065: 'data' : undeclared identifier c:\..\slist.cpp 95 Error 28 error C2146: syntax error : missing ')' before identifier 'item' c:\..\slist.cpp 98 Error 29 error C2761: 'void SortList::BinSearch(void) const' : member function redeclaration not allowed c:\..\slist.cpp 98 Error 30 error C2059: syntax error : ')' c:\..\slist.cpp 98 Error 31 error C2143: syntax error : missing ';' before '{' c:\..\slist.cpp 103 Error 32 error C2447: '{' : missing function header (old-style formal list?) c:\..\slist.cpp 103
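    A plausible fix, assuming a default Visual Studio console project with precompiled headers enabled: either make "stdafx.h" the very first include of slist.cpp, or turn precompiled headers off for that file (Project Properties > C/C++ > Precompiled Headers > Not Using Precompiled Headers). The 32 follow-on errors suggest that no standard header was seen before the using namespace std; line in slist.h - which happens, for example, if #include "stdafx.h" is added after the other includes, because the compiler ignores everything above that include when precompiled headers are on - and that slist.h never includes <string> even though ItemType is std::string. A sketch of the corrected tops of the two files (only the include order and the mangled Delete name change; the rest of the posted code stays as it is):

        // slist.h -- corrected top (sketch)
        #include <string>             // ItemType is std::string, so the header must pull in <string> itself
        using namespace std;
        const int MAX_LENGTH = 200;
        typedef string ItemType;
        // ... SortList declaration unchanged ...

        // slist.cpp -- corrected top (sketch)
        #include "stdafx.h"           // must be the very first include when precompiled headers are on
        #include <iostream>           // angle brackets, not quotes, for standard headers
        #include "slist.h"
        using namespace std;

        void SortList::Delete(ItemType item)   // the posted "SortList:elete" lost its "::D" to the forum's emoticon filter
        {
            // ... body as posted ...
        }

    Note also that the header declares BinSearch(ItemType, bool&, int&) const while the .cpp defines it with by-value bool and int parameters; once the include problems are fixed, the compiler will reject that definition as not matching any declaration (and the found/position results would never be returned anyway), so the definition should take bool& and int& as well.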

    Read the article

  • Java ME SDK 3.2 is now live

    - by SungmoonCho
    Hi everyone, it has been a while since we released the last version. We have been very busy integrating new features and making lots of usability improvements into this new version. The datasheet is available here. Please visit the Java ME SDK 3.2 download page to get the latest and best version yet! Some of the new features in this version are described below. Embedded Application Support: Oracle Java ME SDK 3.2 now supports the new Oracle® Java ME Embedded. This includes support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG). You can test and debug applications either on the built-in device emulators or on your device. Memory Monitor: The Memory Monitor shows memory use as an application runs. It displays a dynamic detailed listing of the memory usage per object in table form, and a graphical representation of the memory use over time. Eclipse IDE support: Oracle Java ME SDK 3.2 now officially supports the Eclipse IDE. Once you install the Java ME SDK plugins in Eclipse, you can start developing, debugging, and profiling your mobile or embedded application. Skin Creator: With the Custom Device Skin Creator, you can create your own skins. The appearance of the custom skins is generic, but the functionality can be tailored to your own specifications. Here are the release highlights. Implementation and support for the new Oracle® Java Wireless Client 3.2 runtime and the Oracle® Java ME Embedded runtime. The AMS in the CLDC emulators has a new look and new functionality (Install Application, Manage Certificate Authorities and Output Console). Support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG). The IMP-NG platform is implemented as a subset of CLDC. Support includes: A new emulator for headless devices. Javadocs for the following Oracle APIs: Device Access API, Logging API, AMS API, and AccessPoint API. New demos for IMP-NG features can be run on the emulator or on a real device running the Oracle® Java ME Embedded runtime. New Custom Device Skin Creator. This tool provides a way to create and manage custom emulator skins. The skin appearance is generic, but the functionality, such as the JSRs supported or the device properties, is up to you. This utility is only supported in NetBeans. Eclipse plugin for CLDC/MIDP. For the first time Oracle Java ME SDK is available as an Eclipse plugin. The Eclipse version does not support CDC, the Memory Monitor, and the Custom Device Skin Creator in this release. All Java ME tools are implemented as NetBeans plugins; the plugin integrates Java ME utilities into the standard NetBeans menus. The Tools > Java ME menu is the place to launch Java ME utilities, including the new Skin Creator. Profile > Java ME is the place to work with the Network Monitor and the Memory Monitor. Use the standard NetBeans tools for debugging. Profiling, network monitoring, and memory monitoring are integrated with the NetBeans profiling tools. New network monitoring protocols are supported in this release: WMA, SIP, Bluetooth and OBEX, SATSA APDU and JCRMI, and server sockets. Java ME SDK Update Center: Oracle Java ME SDK can be updated or extended by new components. The Update Center can download, install, and uninstall plugins specific to the Java ME SDK. A plugin consists of runtime components and skins. Bug fixes and enhancements. This version comes with a few known problems. All of them have workarounds, so I hope you don't get stuck on these issues when you are using the product. 
You cannot watch static variables during an Eclipse debugging session, and sometimes the Variable view cannot show data. In the source code, move the mouse over the required variable to inspect the variable value. A real device that has been deleted from the Device Manager still appears in the Device Selector. Kill the device manager in the system tray, and relaunch it. Then you will see the device removed from the list. On-device profiling does not work. CPU profiling, network monitoring, and memory monitoring do not work on the device, since the device runtime does not yet support it. Please do the profiling with your emulator first, and then test your application on the device. In the Device Selector, using Clean Database on a real external device causes a null pointer exception. External devices do not have a database recognized by the SDK, so you can disregard this exception message. Suspending the emulator during a Memory Monitor session hangs the emulator. Do not use the Suspend option (F5) while the Memory Monitor is running. If the emulator is hung, open the Windows task manager and stop the emulator process (javaw). To switch to another application while the Memory Monitor is running, choose Application > AMS Home (F4), and select a different application. Please let us know how we can make it even better by sending us your feedback. -Java ME SDK Team

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c: Contributing to emerging Cloud standards

    - by Anand Akela
    Contributed by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. As one would expect of an industry leader, Oracle's participation in industry standards bodies is extensive. We participate in dozens of organizations that produce open standards which apply to our products, and our commitment to the success of these organizations is manifest in several ways - we support them financially through our memberships; our senior engineers are active participants, often serving in leadership positions on boards, technical working groups and committees; and when it makes good business sense we contribute our intellectual property. We believe supporting the development of open standards is fundamental to Oracle meeting customer demands for product choice, seamless interoperability, and lowering the cost of ownership. Nowhere is this truer than in the area of cloud standards, and for the most recent release of our flagship management product, Oracle Enterprise Manager Cloud Control 12c (EM Cloud Control 12c). There is a fundamental rule that standards follow architecture. This was true of distributed computing, it was true of service-oriented architecture (SOA), and it's true of cloud. If you are familiar with Enterprise Manager it is likely to be no surprise that EM Cloud Control 12c is a source of technology that can be considered for adoption within cloud management standards. The reason, quite simply, is that the Oracle integrated stack architecture aligns with the cloud architecture models being adopted by the industry, and EM Cloud Control 12c has been developed to manage this architecture. EM Cloud Control 12c has facilities for managing the various underlying capabilities of the integrated stack in IaaS, PaaS, and SaaS clouds, and enables essential characteristics such as on-demand self-service provisioning, centralized policy-based resource management, integrated chargeback, capacity planning, and complete visibility of the physical and virtual environment from applications to disk. Our most recent contribution in support of cloud management standards to come out of the EM Cloud Control 12c work was the Oracle Cloud Elemental Resource Model API. Oracle contributed the Elemental Resource Model API to the Distributed Management Task Force (DMTF) in 2011, where it was assigned to DMTF's Cloud Management Working Group (CMWG). The CMWG is considering the Oracle specification and those of several other vendors in their effort to produce a best practices specification for managing IaaS clouds. 
DMTF's Cloud Infrastructure Management Interface specification, called CIMI for short, is currently out for public review and expected to be released by DMTF later this year. We are proud to be playing an important role in the development of what is expected to become a major cloud standard. You can find more information on DMTF CIMI at http://dmtf.org/standards/cloud. You can find the work-in-progress release of CIMI at http://dmtf.org/content/cimi-work-progress-specifications-now-available-public-comment. The Oracle Cloud API specification is available on the Oracle Technology Network. You can find more information about the Oracle Cloud Elemental Resource Model API on the Oracle Technology Network (OTN), including a webcast featuring the API engineering manager Jack Yu (see TechCast Live: Inside the Oracle Cloud Resource Model API). If you have not seen this video we recommend you take the time to view it. Simply hover your cursor over the webcast title and control+click to follow the embedded link. If you have a question about the Oracle Cloud API or want to learn more about Oracle's participation in cloud management standards efforts, drop us a line. We'd love to hear from you. The Enterprise Manager Standards Blogs are written by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. They can be reached at Tony.DiCenzo at Oracle.com and Mark.Carlson at Oracle.com respectively. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • HTG Explains: Are You Using IPv6 Yet? Should You Even Care?

    - by Chris Hoffman
    IPv6 is extremely important for the long-term health of the Internet. But is your Internet service provider providing IPv6 connectivity yet? Does your home network support it? Should you even care if you’re using IPv6 yet? Switching from IPv4 to IPv6 will give the Internet a much larger pool of IP addresses. It should also allow every device to have its own public IP address, rather than be hidden behind a NAT router. IPv6 is Important Long-Term: IPv6 is very important for the long-term health of the Internet. There are only about 3.7 billion public IPv4 addresses. This may sound like a lot, but it isn’t even one IP address for each person on the planet. Considering people have more and more Internet-connected devices — everything from light bulbs to thermostats are starting to become network-connected — the lack of IP addresses is already proving to be a serious problem. This may not affect those of us in well-off developed countries just yet, but developing countries are already running out of IPv4 addresses. So, if you work at an Internet service provider, manage Internet-connected servers, or develop software or hardware — yes, you should care about IPv6! You should be deploying it and ensuring your software and hardware works properly with it. It’s important to prepare for the future before the current IPv4 situation becomes completely unworkable. But, if you’re just a typical user or even a typical geek with a home Internet connection and a home network, should you really care about your home network just yet? Probably not. What You Need to Use IPv6: To use IPv6, you’ll need three things: An IPv6-Compatible Operating System: Your operating system’s software must be capable of using IPv6. All modern desktop operating systems should be compatible — Windows Vista and newer versions of Windows, as well as modern versions of Mac OS X and Linux. Windows XP doesn’t have IPv6 support installed by default, but you shouldn’t be using Windows XP anymore, anyway. A Router With IPv6 Support: Many — maybe even most — consumer routers in the wild don’t support IPv6. Check your router’s specification details to see if it supports IPv6 if you’re curious. If you’re going to buy a new router, you’ll probably want to get one with IPv6 support to future-proof yourself. If you don’t have an IPv6-enabled router yet, you don’t need to buy a new one just to get it. An ISP With IPv6 Enabled: Your Internet service provider must also have IPv6 set up on their end. Even if you have modern software and hardware on your end, your ISP has to provide an IPv6 connection for you to use it. IPv6 is rolling out steadily, but slowly — there’s a good chance your ISP hasn’t enabled it for you yet. How to Tell If You’re Using IPv6: The easiest way to tell if you have IPv6 connectivity is to visit a website like testmyipv6.com. This website allows you to connect to it in different ways — click the links near the top to see if you can connect to the website via different types of connections. If you can’t connect via IPv6, it’s either because your operating system is too old (unlikely), your router doesn’t support IPv6 (very possible), or because your ISP hasn’t enabled it for you yet (very likely). Now What? If you can connect to the test website above via IPv6, congratulations! Everything is working as it should. Your ISP is doing a good job of rolling out IPv6 rather than dragging its feet. There’s a good chance you won’t have IPv6 working properly, however. 
So what should you do about this — should you head to Amazon and buy a new IPv6-enabled router or switch to an ISP that offers IPv6? Should you use a “tunnel broker,” as the test site recommends, to tunnel into IPv6 via your IPv4 connection? Well, probably not. Typical users shouldn’t have to worry about this yet. Connecting to the Internet via IPv6 shouldn’t be perceptibly faster, for example. It’s important for operating system vendors, hardware companies, and Internet service providers to prepare for the future and get IPv6 working, but you don’t need to worry about this on your home network. IPv6 is all about future-proofing. You shouldn’t be racing to implement this at home yet or worrying about it too much — but, when you need to buy a new router, try to buy one that supports IPv6. Image Credit: Adobe of Chaos on Flickr, hisperati on Flickr, Vox Efx on Flickr     
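    For readers who would rather check from code than from a web page, a minimal sketch along these lines asks the resolver for an IPv6 (AAAA) address and tries to open a connection to it. The host name ipv6.google.com is just an example of a host known to publish an AAAA record, and the snippet assumes a POSIX system (Linux or Mac OS X):

        // ipv6check.cpp -- rough check for working IPv6 connectivity (build: g++ ipv6check.cpp -o ipv6check)
        #include <cstdio>
        #include <cstring>
        #include <netdb.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main() {
            const char* host = "ipv6.google.com";     // example host with an AAAA record
            addrinfo hints;
            std::memset(&hints, 0, sizeof(hints));
            hints.ai_family   = AF_INET6;             // ask for IPv6 addresses only
            hints.ai_socktype = SOCK_STREAM;

            addrinfo* res = NULL;
            if (getaddrinfo(host, "80", &hints, &res) != 0 || res == NULL) {
                std::printf("No IPv6 address found for %s\n", host);
                return 1;
            }
            int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            bool ok = (fd >= 0) && (connect(fd, res->ai_addr, res->ai_addrlen) == 0);
            std::printf("IPv6 connectivity to %s: %s\n", host, ok ? "yes" : "no");
            if (fd >= 0) close(fd);
            freeaddrinfo(res);
            return ok ? 0 : 1;
        }

    If it prints "yes", then your operating system, router, and ISP together are providing a working IPv6 path.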

    Read the article

  • Meet our 2009 Oracle Graduates in South Africa

    - by anca.rosu
    Focusing on the broader Oracle community, Oracle South Africa initiated its first skills development programme in May 1988. Since its inception the programme has developed and improved and every year more graduates are taken on board. The Oracle Graduate Programme is made up of specific learning paths designed around customer, partner and Oracle specifications and is structured to meet the urgent skills requirements in the Oracle “economy”. The training programmes have a specific duration and incorporate both theoretical and practical application of Oracle product sets. It is aimed at creating: Meaningful employment for graduates; Learning opportunities for individuals within the organization so that career growth opportunities are exploited to the fullest; Capacity building for small enterprises which is aligned to Oracle’s Enterprise Development Programme Meet our five graduates who joined us in December 2008 and have spent over a year with us! Let’s get their initial feedback on the graduate programme and on their assignment to Jordan. Lector   On the Oracle Graduate Programme: “The Oracle Graduate Programme is an experience of a life time. I would not trade it for anything. It’s challenging and rewarding. I am proud and happy to be in an organization like Oracle” On the assignment in Jordan: "Friendly, welcoming people, world class instructors always willing to go the extra mile. What more can you ask for?"   Lungile On the Oracle Graduate Programme: “I joined Oracle as part of the graduate intake for pre-sales in order to develop my skills and knowledge. Working at Oracle has been an overwhelmingly positive experience as it has encouraged me to progress with my personal development. I am hugely grateful. It has been a great challenge and an awesome opportunity.” On the assignment to Jordan: “Going to Jordan was a great opportunity and the experience of a lifetime. The people were very welcoming and friendly. The culture was totally different from ours - the food, the clothes and the weather. It was an amazingly different experience to work from Sunday to Thursday with Friday and Saturday as the weekend.” Thabo On the Oracle Graduate Programme: “Life is an infinite learning path. I truly value growth. I believe for one to grow, one needs to be challenged to your full potential. The Oracle Graduate Programme offers real growth – and so much more.” On the assignment to Jordan: “I was amazed by the cultural differences. I now understood that to be part of the global community, I must embrace our similarities and understand our differences.”   Albeauty On the Oracle Graduate Programme: “Responsibility, dedication, focus and taking initiative … these are the key points I learned from Oracle. It is such an honour to finally be part of the Oracle family. The graduate programme itself was a great experience as I managed to learn how Oracle operates – it has been the highlight of my year. I believe that my hard work will assist in the growth of the company.” On the Jordan assignment: “A memory worth embracing. Going to Jordan was a great opportunity as I learned a lot with respect to integration between different cultures and getting to adapt to all things different. I, along with almost every other graduate, discovered that Oracle is far more than a database company. 
Now I know there is far more to the ‘Big Red’ name.” Emmanuel On the Oracle Graduate Programme: “The programme gave me invaluable exposure to the ICT sector and also provided an opportunity to travel, network and exchange ideas with others. The formal training helped me to improve my presentation skills and gave me a better understanding of business etiquette and communication.” On the assignment to Jordan: “It was my first trip abroad. It was a great chance to get to know each other. I had the opportunity to share ideas, share personal stuff as a team. We met experts who gave us superb training in Oracle Technologies. It was great.”   If you have any questions related to this article feel free to contact  [email protected].  You can find our job opportunities via http://campus.oracle.com.   Technorati Tags: Oracle community,South Africa,Graduate Programme,Jordan,Technologies

    Read the article

  • Oracle ATG Web Commerce 10 Implementation Developer Boot Camp - Reading (UK) - October 1-12, 2012

    - by Richard Lefebvre
    REGISTER NOW: Oracle ATG Web Commerce 10 Implementation Developer Boot Camp, Reading, UK, October 1-12, 2012! OPN invites you to join us for a 10-day implementation boot camp on Oracle ATG Web Commerce in Reading, UK from October 1-12, 2012. This 10-day boot camp is designed to provide partners with hands-on experience and technical training to successfully build and deploy Oracle ATG Web Commerce 10 applications. This particular boot camp is focused on helping partners develop the essential skills needed to implement every aspect of an ATG Commerce application from scratch (not CRS-based), with the specific goal of giving experienced Java/J2EE developers a path towards becoming functional, effective, and contributing members of an ATG implementation team. Built for new and experienced ATG developers alike, the collaborative nature of this program and its exercises has proven to be highly effective and extremely valuable in learning the best practices for implementing ATG solutions. Though not required, this boot camp provides a structured path to earning a Certified Oracle ATG Web Commerce 10 Specialization! What Is Covered: This boot camp is for Application Developers and Software Architects wanting to gain valuable insight into ATG application development best practices, as well as relevant and applicable implementation experience on projects modeled after four of the most common types of applications built on the ATG platform. The following learning objectives are all critical, and are of equal priority in enabling this role to succeed. This learning boot camp will help with: Building a basic functional transaction-ready ATG Web Commerce 10 application. Utilizing ATG’s platform features such as scenarios, slots, targeters, user profiles and segments, to create a personalized user experience. Building Nucleus components to support and/or extend application functionality. Understanding the intricacies of ATG order checkout and fulfillment. Specifying, designing and implementing new commerce features in ATG 10. Building a functional commerce application modeled after four of the most common types of applications built on the ATG platform, within an agile-based project team environment and under simulated real-world project conditions. Duration: The Oracle ATG Web Commerce 10 Implementation Developer Boot Camp is an instructor-led workshop spanning 10 days. Audience: Application Developers, Software Architects. Prerequisite Training and Environment Requirements: Programming and markup experience with Java/J2EE, JavaScript, XML, HTML and CSS; completion of the Oracle ATG Web Commerce 10 Implementation Specialist Development Guided Learning Path modules. Participants will be required to bring their own laptop that meets the minimum specifications: 64-bit PC and OS (e.g. Windows 7 64-bit), 4GB RAM or more, 40GB hard disk space. Laptops will require access to the Internet through Remote Desktop via Windows. Agenda Topics: Week 1 – Days 1 through 5: Build a Basic Commerce Application. In week one of the boot camp training, we will apply knowledge learned from the ATG Web Commerce 10 Implementation Developer Guided Learning Path modules towards building a basic transaction-ready commerce application. There will be little to no lecturing delivered in this boot camp, as developers will be fully engaged in ATG application development activities and best practices. 
Developers will work independently on the following lab assignments from days 1 through 5: Lab Assignments: 1 Environment Setup 2 Build a dynamic Home Page 3 Site Authentication 4 Build Customer Registration 5 Display Top Level Categories 6 Display Product Sub-Categories 7 Display Product List Page 8 Display Product Detail Page 9 ATG Inventory 10 Build “Add to Cart” Functionality 11 Build Shopping Cart 12 Build Checkout Page 13 Build Checkout Review Page 14 Create an Order and Build Order Confirmation Page 15 Implement Slots and Targeters for Personalization 16 Implement Pricing and Promotions 17 Order Fulfillment. Week 2 – Days 6 through 10: Team-based Case Project. In the second week of the boot camp training, participants will be asked to join a project team that will select a case project for the team to implement. Teams will be able to choose from four of the most common application types developed and deployed on the ATG platform. They are as follows: Hard goods with physical fulfillment, Soft goods with electronic fulfillment, a Service or subscription case example, a Course/Event registration case example. Team projects will have approximately 160 hours of use cases/stories for each team to build (40 hours per developer). Each day's use cases/stories will build upon the prior day's work, and therefore must be fully completed at the end of each day. Please note that this boot camp intends to simulate real-world project conditions, and as such may require project teams to work beyond normal business hours. To promote further collaboration and group learning, each team will be asked to present their work and share the methodologies and solutions that they've applied to their cases at the end of each day. Location: Oracle Reading CVC TPC510 Room: Wraysbury Reading, UK 9:00 AM – 5:00 PM Registration Fee (10 Days): US $3,375 Please click on the following link to REGISTER or visit the Oracle ATG Web Commerce 10 Implementation Developer Boot Camp page for more information. Questions: Patrick Ty Partner Enablement, Oracle Commerce Phone: 310.343.7687 Mobile: 310.633.1013 Email: [email protected]

    Read the article

  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda were a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with astringent, dry and puckering mouth feel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now". The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope the EU's outdated standardization system finally is headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC in the Information and Communications Technology (ICT) sector. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to those of the European Standards Organizations. The Digital Agenda rightly states: "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, have a poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness. Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010 published on Monday 17 May. It is OK to single out the ICT sector. It simply is the most important sector right now as it fuels growth in all other sectors. Let's not wait for the entire standardization package, which may take another few years. Europe does not have time. The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting" by 2011 in the Horizontal Guidelines now out for public consultation by DG COMP and to some extent by DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs, and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already and I have commented on the Commission's own interoperability elsewhere, with mixed luck. :( Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I mean in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased". 
The missed opportunity (in this case not following up the push from Member States to better define open standards based interoperability) is a casualty of the chaos that ensued in the European Wild West (and I mean that in the most endearing sense, with my apologies in advance to actors who possibly justifiably cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that -- to the EU -- excludes most standards used by the IT industry worldwide. So, while it, for a moment, meant "weapon down", it will not lead to lasting peace. The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations, which called upon the Commission to help them with this implementation by recognizing and further defining open standards based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short. Generally, I think the EU focus now should be "from policy to practice", and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is a need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe re-take global leadership on openness, public sector reform, and economic growth: A strong European software strategy centred around open standards based interoperability by 2011. An ambitious new eCommission strategy for 2011-15 focused on migration to open standards by 2015. Aligning the IT portfolio across the Commission into one Digital Agenda DG by 2012. Focusing all best practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running). Prioritizing public sector needs in global standardization over European standardization by 2014.

    Read the article

  • Inside Red Gate - Be Reasonable!

    - by Simon Cooper
    As I discussed in my previous posts, divisions and project teams within Red Gate are allowed a lot of autonomy to manage themselves. It's not just the teams though; there's an awful lot of freedom given to individual employees within the company as well. Reasonableness: How Red Gate treats its employees is embodied in the phrase 'You will be reasonable with us, and we will be reasonable with you'. As an employee, you are trusted to do your job to the best of your ability. There's no one looking over your shoulder, no one clocking you in and out each day. Everyone is working at the company because they want to, and one of the core ideas of Red Gate is that the company exists to 'let people do the best work of their lives'. Everything is geared towards that. To help you do your job, office services and the IT department are there. If you need something to help you work better (a third or fourth monitor, footrests, or a new keyboard) then ask people in Information Systems (IS) or Office Services and you will be given it, no questions asked. Everyone has administrator access to their own machines, and you can install whatever you want on them. If there's a particular bit of software you need, then ask IS and they will buy it. As an example, last year I wanted to replace my main hard drive with an SSD; I had a summer job at school working in a computer repair shop, so I knew what to do. I went to IS and asked for 'an SSD, a SATA cable, and a screwdriver'. And I got it there and then, even the screwdriver. Awesome. I screwed it in myself, copied all my main drive files across, and I was good to go. Of course, if you're not happy doing that yourself, then IS will sort it all out for you, no problems. If you need something that the company doesn't have (say, a book off Amazon, or you need some specifications printed off & bound), then everyone has an expense limit of £100 that you can use without any sign-off needed from your managers. If you need a company credit card for whatever reason, then you can get it. This freedom extends to working hours and holiday; you're expected to be in the office 11am-3pm each day, but outside those times you can work whenever you want. If you need a half-day holiday on a day's notice, or even the same day, then you'll get it, unless there's a good reason you're needed that day. If you need to work from home for a day or so for whatever reason, then you can. If it's reasonable, then it's allowed. Trust issues? A lot of trust, and a lot of leeway, is given to all the people in Red Gate. Everyone is expected to work hard, do their jobs to the best of their ability, and there will be a minimum of bureaucratic obstacles that stop you doing your work. What happens if you abuse this trust? Well, an example is company trip expenses. You're free to expense what you like: food, drink, transport, etc., but if your expenses are not reasonable, then you will never travel with the company again. Simple as that. Everyone knows when they're abusing the system, so simply don't do it. Along with reasonableness, another phrase used is 'Don't be a ***'. If you act like a ***, and abuse any of the trust placed in you, even if you're the best tester, salesperson, dev, or manager in the company, then you won't be a part of the company any more. From what I know about other companies, employee trust is highly variable between companies, all the way up to CCTV trained on employees' monitors. As a dev, I want to produce well-written & useful code that solves people's problems. 
Being able to get whatever I need - install whatever tools I need, get time off when I need to, obtain reference books within a day - all let me do my job, and so let Red Gate help other people do their own jobs through the tools we produce. Plus, I don't think I would like working for a company that doesn't allow admin access to your own machine and blocks Facebook!

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
    One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system (product, category, subcategory), to a fundamentally two-level scheme in our customized JIRA instance (component, subcomponent). In the JDK JIRA system, there is technically a third project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA. The 120 subcomponents are distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained. Below is a listing of the component / subcomponent classification of the JDK JIRA project along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme. The preponderance of bugs and subcomponents for the JDK is in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect. client-libs (Predominantly discussed on 2d-dev and awt-dev and swing-dev.) 2d demo java.awt java.awt:i18n java.beans (See beans-dev.) javax.accessibility javax.imageio javax.sound (See sound-dev.) javax.swing core-libs (See core-libs-dev.) java.io java.io:serialization java.lang java.lang.invoke java.lang:class_loading java.lang:reflect java.math java.net java.nio (Discussed on nio-dev.) java.nio.charsets java.rmi java.sql java.sql:bridge java.text java.util java.util.concurrent java.util.jar java.util.logging java.util.regex java.util:collections java.util:i18n javax.annotation.processing javax.lang.model javax.naming (JNDI) javax.script javax.script:javascript javax.sql org.openjdk.jigsaw (See jigsaw-dev.) security-libs (See security-dev.) java.security javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC) javax.crypto:pkcs11 (JCE: PKCS11 only) javax.net.ssl (JSSE, includes javax.security.cert) javax.security javax.smartcardio javax.xml.crypto org.ietf.jgss org.ietf.jgss:krb5 other-libs corba corba:idl corba:orb corba:rmi-iiop javadb other (When no other subcomponent is more appropriate; use judiciously.) Most of the subcomponents in the xml component are related to jaxp. xml jax-ws jaxb javax.xml.parsers (JAXP) javax.xml.stream (JAXP) javax.xml.transform (JAXP) javax.xml.validation (JAXP) javax.xml.xpath (JAXP) jaxp (JAXP) org.w3c.dom (JAXP) org.xml.sax (JAXP) For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine. hotspot (See hotspot-dev.) build compiler (See hotspot-compiler-dev.) gc (garbage collection, see hotspot-gc-dev.) jfr (Java Flight Recorder) jni (Java Native Interface) jvmti (JVM Tool Interface) mvm (Multi-Tasking Virtual Machine) runtime (See hotspot-runtime-dev.) svc (Serviceability) test core-svc (See serviceability-dev.) 
debugger java.lang.instrument java.lang.management javax.management tools The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot as well as retired APIs. vm-legacy jit (Sun Exact VM) jit_symantec (Symantec VM, before Exact VM) jvmdi (JVM Debug Interface) jvmpi (JVM Profiler Interface) runtime (Exact VM Runtime) Notable command line tools in the $JDK/bin directory have corresponding subcomponents. tools appletviewer apt (See compiler-dev.) hprof jar javac (See compiler-dev.) javadoc(tool) (See compiler-dev.) javah (See compiler-dev.) javap (See compiler-dev.) jconsole launcher updaters (Timezone updaters, etc.) visualvm Some aspects of JDK infrastructure directly affect JDK Hg repositories, but others do not. infrastructure build (See build-dev and build-infra-dev.) licensing (Covers updates to the third party readme, licenses, and similar files.) release_eng (Release engineering) staging (Staging of web pages related to JDK releases.) The specification component encompasses the formal language and virtual machine specifications. specification language (The Java Language Specification) vm (The Java Virtual Machine Specification) The code for the deploy and install areas is not currently included in OpenJDK. deploy deployment_toolkit plugin webstart install auto_update install servicetags In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology. docs doclet guides hotspot release_notes tools tutorial embedded build hotspot libraries globalization locale-data translation performance hotspot libraries The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.

    Read the article

  • OpenSSL Versions in Solaris

    - by darrenm
    Those of you who have installed Solaris 11 or have read some of the blogs by my colleagues will have noticed that Solaris 11 includes OpenSSL 1.0.0, which is a different version from what we have in Solaris 10.  I hope the following explains why that is and how it fits with the expectations on binary compatibility between Solaris releases. Solaris 10 was the first release where we included OpenSSL libraries and headers (part of it was actually statically linked into the SSH client/server in Solaris 9).  At the time we were building and releasing Solaris 10 the current train of OpenSSL was 0.9.7.  The OpenSSL libraries at that time were known to not always be completely API and ABI (binary) compatible between releases (sometimes even in the lettered patch releases), though mostly if you stuck with the documented high level APIs you would be fine.   For this reason OpenSSL was classified as a 'Volatile' interface, and in Solaris 10 Volatile interfaces were not part of the default library search path, which is why the OpenSSL libraries live in /usr/sfw/lib on Solaris 10.  Okay, but what does Volatile mean? Quoting from the attributes(5) man page description of Volatile (which was called External in older taxonomy): Volatile interfaces can change at any time and for any reason. The Volatile interface stability level allows Sun products to quickly track a fluid, rapidly evolving specification. In many cases, this is preferred to providing additional stability to the interface, as it may better meet the expectations of the consumer. The most common application of this taxonomy level is to interfaces that are controlled by a body other than Sun, but unlike specifications controlled by standards bodies or Free or Open Source Software (FOSS) communities which value interface compatibility, it can not be asserted that an incompatible change to the interface specification would be exceedingly rare. It may also be applied to FOSS controlled software where it is deemed more important to track the community with minimal latency than to provide stability to our customers. It is also common to apply the Volatile classification level to interfaces in the process of being defined by trusted or widely accepted organizations. These are generically referred to as draft standards. An "IETF Internet draft" is a well understood example of a specification under development. Volatile can also be applied to experimental interfaces. No assertion is made regarding either source or binary compatibility of Volatile interfaces between any two releases, including patches. Applications containing these interfaces might fail to function properly in any future release. Note that last paragraph!  OpenSSL is only one example of the many interfaces in Solaris that are classified as Volatile.  At the other end of the scale we have Committed (Stable in Solaris 10 terminology) interfaces; these include things like the POSIX APIs or Solaris specific APIs that we have no intention of changing in an incompatible way.  There are also Private interfaces and things we declare as Not-an-Interface (e.g. command output not intended for scripting against, only to be read by humans). Even if we had declared OpenSSL as a Committed/Stable interface in Solaris 10 there are allowed exceptions, again quoting from attributes(5): 4. An interface specification which isn't controlled by Sun has been changed incompatibly and the vast majority of interface consumers expect the newer interface. 5. 
Not making the incompatible change would be incomprehensible to our customers. In our opinion and that of our large and small customers keeping up with the OpenSSL community is important, and certainly both of the above cases apply. Our policy for dealing with OpenSSL on Solaris 10 was to stay at 0.9.7 and add fixes for security vulnerabilities (the version string includes the CVE numbers of fixed vulnerabilities relevant to that release train).  The last release of OpenSSL 0.9.7 delivered by the upstream community was more than 4 years ago in Feb 2007. Now lets roll forward to just before the release of Solaris 11 Express in 2010. By that point in time the current OpenSSL release was 0.9.8 with the 1.0.0 release known to be coming soon.  Two significant changes to OpenSSL were made between Solaris 10 and Solaris 11 Express.  First in Solaris 11 Express (and Solaris 11) we removed the requirement that Volatile libraries be placed in /usr/sfw/lib, that means OpenSSL is now in /usr/lib, secondly we upgraded it to the then current version stream of OpenSSL (0.9.8) as was expected by our customers. In between Solaris 11 Express in 2010 and the release of Solaris 11 in 2011 the OpenSSL community released version 1.0.0.  This was a huge milestone for a long standing and highly respected open source project.  It would have been highly negligent of Solaris not to include OpenSSL 1.0.0e in the Solaris 11 release. It is the latest best supported and best performing version.     In fact Solaris 11 isn't 'just' OpenSSL 1.0.0 but we have added our SPARC T4 engine and the AES-NI engine to support the on chip crypto acceleration. This gives us 4.3x better AES performance than OpenSSL 0.9.8 running on AIX on an IBM POWER7. We are now working with the OpenSSL community to determine how best to integrate the SPARC T4 changes into the mainline OpenSSL.  The OpenSSL 'pkcs11' engine we delivered in Solaris 10 to support the CA-6000 card and the SPARC T1/T2/T3 hardware is still included in Solaris 11. When OpenSSL 1.0.1 and 1.1.0 come out we will asses what is best for Solaris customers. It might be upgrade or it might be parallel delivery of more than one version stream.  At this time Solaris 11 still classifies OpenSSL as a Volatile interface, it is our hope that we will be able at some point in a future release to give it a higher interface stability level. Happy crypting! and thank-you OpenSSL community for all the work you have done that helps Solaris.

    Read the article

  • The first day of JavaOne is already over!

    - by delabassee
    In the past, Sunday used to be a more relaxing day with 'just' some JavaOne activities going on, a soft day to prepare yourself for an exhausting week. This is now over as JavaOne is expanding; Sunday is now an integral part of the conference. One of the side effects of this extra day is that some activities related to JavaOne and OpenWorld, such as MySQL Connect, are being pushed to start a day earlier, on Saturday (can you spot the pattern here?).
    On the GlassFish front, Sunday was a very busy day! It started at the Moscone Center with the annual GlassFish Community Event, where the Java EE 7 and GF 4 roadmaps were presented and discussed. During the event, different GlassFish users such as ZeroTurnaround (the JRebel guys), Grupo RBS and IDR Solutions shared their views on GF, why they like GF but also what could be improved. The event was also a forum for the GF community to exchange with some of the key Java EE / GlassFish Oracle executives and the different GF team members.
    The Strategy keynote and the Technical keynote were held in the Masonic Auditorium later in the afternoon. Oracle executives presented the plans for Java SE, JavaFX and Java EE. As on-demand replays will be available soon, I will not summarize several hours of content, but here are some personal takeaways from those keynotes.
    Modularity
    Modularity is a big deal. We know by now that Project Jigsaw will not be ready for Java SE 8, but in any case it is already possible (and encouraged) to test Jigsaw today. In the future, Java EE plans to rely on the modularity features provided by Java SE, so Project Jigsaw is also relevant for Java EE developers. Shorter term, to cover some of the modular requirements, Java SE will adopt the approach that was used for Java EE 6 and the notion of Profiles. This approach does not define a module system per se; Profiles are a way to clearly define different subsets of Java SE to fulfill different needs (e.g. the full JRE is not required for a headless application). The introduction of different Profiles, from the Base Profile (10 MB) to the Full Profile (+50 MB), has been proposed for Java SE 8.
    Embedded
    Embedded is a strong theme going forward for the Java platform; there is now a dedicated program: Java Embedded @ JavaOne. Java by nature (e.g. platform independence, built-in security, the ability to easily talk to any back-end systems, a large set of skills available on the market, etc.) is probably the most suited platform for the Internet of Things. You can quickly be up to speed and develop services and applications for that space just by using your current Java skills. All you need to start developing on ARM is a $35 Raspberry Pi ARM board ($25 if you are cheap and can live without an ethernet connection) and the recently released JDK for Linux/ARM. Obviously, GlassFish runs on Raspberry Pi. If you want to go further in the embedded space, you should take a look at Java SE Embedded, an optimized, low-footprint Java environment that supports the major embedded architectures (ARM, PPC and x86). Finally, Oracle has recently introduced Java Embedded Suite, a new solution that brings modern middleware capabilities to the embedded space. Java Embedded Suite is an optimized solution that leverages Java SE Embedded but also GlassFish, Jersey and JavaDB to deploy advanced value-added capabilities (e.g. sensor data filtering) deeper in the network, closer to the devices.
    JavaFX
    JavaFX is going strong! Starting from Java SE 7u6, JavaFX is bundled with the JDK. JavaFX is now available for all the major desktop platforms (Windows, Linux and Mac OS X). JavaFX is now also available, in developer preview, for low-end devices running Linux/ARM; during the keynote, JavaFX was shown running on a Raspberry Pi! And as announced during the keynote, JavaFX should be fully open-sourced by the end of the year; contributions are welcome! There is strong momentum around JavaFX: it's the ideal client solution for the Java platform, a client layer that works perfectly with GlassFish on the back-end. If you were not convinced by JavaFX, it's time to reconsider it! As an old Chinese proverb says, "One tweet is worth a thousand words!"
    HTML5, Project Avatar and Java EE 7
    HTML5 got a lot of airtime too; it was covered during the Java EE 7 section of the keynote. Some details about Project Avatar, Oracle's incubator project for a TSA (Thin Server Architecture) solution, were revealed during the keynote. On the tooling side, Project Easel running on NetBeans 7.3 beta was demoed, including a cool NetBeans debugging session running in Chrome! HTML5, Project Avatar and Java EE 7 deserve separate posts...
    Feedback
    We need your feedback! There are many projects, JSRs and products cooking: GlassFish 4, Project Jigsaw, Concurrency Utilities for Java EE (JSR 236), OpenJFX, OpenJDK, to name just a few. Those projects and those specifications will have a profound impact on the Java platform for the years to come! So if you have the opportunity, download, install, learn, test them and give feedback! Remember, you can "Make the Future Java!"
    Finally, the traditional GlassFish Party at the Thirsty Bear concluded the first JavaOne day. This party is another place where the community can freely exchange with the GlassFish team in a more relaxed, more friendly (but sometimes noisier) atmosphere. Arun has posted a set of pictures to reflect the atmosphere of the keynotes and the GlassFish party. You can find more details on the other Java EE and GlassFish activities here.

    Read the article

  • How to Tell If Your Computer is Overheating and What to Do About It

    - by Chris Hoffman
    Heat is a computer's enemy. Computers are designed with heat dispersion and ventilation in mind so they don't overheat. If too much heat builds up, your computer may become unstable or suddenly shut down. The CPU and graphics card produce much more heat when running demanding applications. If there's a problem with your computer's cooling system, an excess of heat could even physically damage its components.
    Is Your Computer Overheating?
    When using a typical computer in a typical way, you shouldn't have to worry about overheating at all. However, if you're encountering system instability issues like abrupt shutdowns, blue screens, and freezes — especially while doing something demanding like playing PC games or encoding video — your computer may be overheating. This can happen for several reasons. Your computer's case may be full of dust, a fan may have failed, something may be blocking your computer's vents, or you may have a compact laptop that was never designed to run at maximum performance for hours on end.
    Monitoring Your Computer's Temperature
    First, bear in mind that different CPUs and GPUs (graphics cards) have different optimal temperature ranges. Before getting too worried about a temperature, be sure to check your computer's documentation — or its CPU or graphics card specifications — and ensure you know the temperature ranges your hardware can handle. You can monitor your computer's temperatures in a variety of different ways. First, you may have a way to monitor temperature that is already built into your system. You can often view temperature values in your computer's BIOS or UEFI settings screen. This allows you to quickly see your computer's temperature if Windows freezes or blue screens on you — just boot the computer, enter the BIOS or UEFI screen, and check the temperatures displayed there. Note that not all BIOSes or UEFI screens will display this information, but it is very common. There are also programs that will display your computer's temperature. Such programs just read the sensors inside your computer and show you the temperature value they report, so there are a wide variety of tools you can use for this, from the simple Speccy system information utility to an advanced tool like SpeedFan. HWMonitor also offers this feature, displaying a wide variety of sensor information. Be sure to look at your CPU and graphics card temperatures. You can also find other temperatures, such as the temperature of your hard drive, but these components will generally only overheat if it becomes extremely hot in the computer's case. They shouldn't generate too much heat on their own. If you think your computer may be overheating, don't just glance at these sensors once and ignore them. Do something demanding with your computer, such as running a CPU burn-in test with Prime95, playing a PC game, or running a graphical benchmark. Monitor the computer's temperature while you do this, even checking a few hours later — does any component overheat after you push it hard for a while?
    Preventing Your Computer From Overheating
    If your computer is overheating, here are some things you can do about it: Dust Out Your Computer's Case: Dust accumulates in desktop PC cases and even laptops over time, clogging fans and blocking air flow. This dust can cause ventilation problems, trapping heat and preventing your PC from cooling itself properly. Be sure to clean your computer's case occasionally to prevent dust build-up. Unfortunately, it's often more difficult to dust out overheating laptops. 
Ensure Proper Ventilation: Put the computer in a location where it can properly ventilate itself. If it’s a desktop, don’t push the case up against a wall so that the computer’s vents become blocked or leave it near a radiator or heating vent. If it’s a laptop, be careful to not block its air vents, particularly when doing something demanding. For example, putting a laptop down on a mattress, allowing it to sink in, and leaving it there can lead to overheating — especially if the laptop is doing something demanding and generating heat it can’t get rid of. Check if Fans Are Running: If you’re not sure why your computer started overheating, open its case and check that all the fans are running. It’s possible that a CPU, graphics card, or case fan failed or became unplugged, reducing air flow. Tune Up Heat Sinks: If your CPU is overheating, its heat sink may not be seated correctly or its thermal paste may be old. You may need to remove the heat sink and re-apply new thermal paste before reseating the heat sink properly. This tip applies more to tweakers, overclockers, and people who build their own PCs, especially if they may have made a mistake when originally applying the thermal paste. This is often much more difficult when it comes to laptops, which generally aren’t designed to be user-serviceable. That can lead to trouble if the laptop becomes filled with dust and needs to be cleaned out, especially if the laptop was never designed to be opened by users at all. Consult our guide to diagnosing and fixing an overheating laptop for help with cooling down a hot laptop. Overheating is a definite danger when overclocking your CPU or graphics card. Overclocking will cause your components to run hotter, and the additional heat will cause problems unless you can properly cool your components. If you’ve overclocked your hardware and it has started to overheat — well, throttle back the overclock! Image Credit: Vinni Malek on Flickr     

    Read the article

  • Phones, Nokia, Microsoft and More

    - by Bill Evjen
    The phone revolution that is under way at the moment is insanely interesting and continuously full of buzz about directions, failures, and promises. The movement started with Apple completely reinventing what a smart phone was all about, and now we have the followers. Though – don't dismiss the followers, they are usually the ones that come out with the leapfrog products when most of the world is thinking about jumping on. Remember the often used analogy – the USA invented the TV – but it was Japan that took it to the next level and now all TVs are from somewhere else other than the USA.
    Really there are two camps for the phones – the Cool Kids and the other kids that no one wants to hang out with anymore. When it comes to cool – for some reason, the phone is an important part of that factor. Everyone wants to show their phone and its configuration (apps installed, etc.) to their friends as a sign of (1) "I have money" and (2) I have smarts/tastes/style/etc. when it comes to the applications that are on my phone. For those that don't know – the Cool Kids include:
    Apple – this is quite obvious as everything Apple produces is in the cool camp. Just having an Apple product on your person means you can dance.
    Google – this is one of the more interesting releases as they have created something called Android (which in its own right is a major brand in itself).
    Microsoft – you might be saying "Really, Microsoft is cool?". I would argue that they are indeed cool as the name is now associated with XBOX 360, Kinect, and Windows 7. Gone are the days of Bob and that silly paperclip.
    Well – that's it. There is nobody else I would stick in that camp. The other kids that weren't picked for that dodgeball team include: Nokia, Motorola, Palm, Blackberry and many many more. The sad part of all this is that no matter what this second camp does now, it won't be able to get out of this bucket easily. They will always be associated as yesterday's technology, and that association will drive the sales of the phone purchasers of the world. For those in that group, the only possible way out is to get invited to the cool club by one of the cool club members in the hope that their coolness somehow rubs off.
    To me, this is the move that Nokia is making. They are at the point where they have realized that they don't have the full scope of the required end-to-end solution to make this all work. They have the plants to build the phones and the reach of the retailers that sell what they have. What they are missing is the proper operating system for the new world of multi-touch form factor phones. Even when companies come up with some sort of new operating system for this type of new device, they are still associated with yesterday and lack the developer community behind them to create the real wave of adoption that this market needs. Think about that – this is a major difference between Nokia/Blackberry and the likes of Apple, Google, and Microsoft. These three powerhouses have very large and strong development communities that will eagerly take on new initiatives using the skillsets they have already cultivated over years of working with the company. This results in a plethora of applications that are then placed on an app store of some kind. The developer gets a cut and then Apple/Google/Microsoft get their cut. It is definitely a win-win. None of the other phone companies and wannabes can provide the same results.
    What Microsoft was missing was the major phone manufacturers coming on board to create and push forward with the phones that are required to start the wave. This is where Nokia can come in and help Microsoft. They have the ability to promote the Windows Phone operating system on a new wave of phones. This does mean that Nokia will sell phones, but they lose out on the application store that they might have been thinking about making some money on, as well as controlling the end-to-end solution.
    What is interesting is to ask oneself whether Microsoft will purchase Nokia. It really depends upon how they want to compete and whom Microsoft views as the major competitor. For instance, they can purchase Nokia and have their own hardware company and distribution network for phones – thereby taking on a model that is quite similar to Apple's. On the other hand, they could just leave it up to the phone hardware companies such as Nokia and others to build and promote phones, in a model that is similar to Google's. Both ways have pluses and minuses. If they own the phone manufacturer, they really can put some thought into the design and technical specifications of a phone that is designed exactly how they want it. Microsoft has shown that they have this ability – especially with the XBOX initiative they have done over the years. Think about how well they have moved forward with XBOX – and I am not talking about just copying what others are doing, but coming up with leapfrog products that are steps ahead of everyone else. Though, if they didn't do it themselves, they could then leave it up to the phone manufacturers to drive each other to build better and better phones that run the Microsoft OS – competition drives better products. We have seen this with the Android line of phones that are out there on the market.
    I have read a lot about Nokia investors being really upset about the new Microsoft relationship – but really, this is a great thing. I for one am a fan of this relationship (I am also a Nokia stockholder btw). This will mean better days for Nokia.

    Read the article

  • Inside Red Gate - Be Reasonable!

    - by simonc
    As I discussed in my previous posts, divisions and project teams within Red Gate are allowed a lot of autonomy to manage themselves. It's not just the teams though; there's an awful lot of freedom given to individual employees within the company as well.
    Reasonableness
    How Red Gate treats its employees is embodied in the phrase 'You will be reasonable with us, and we will be reasonable with you'. As an employee, you are trusted to do your job to the best of your ability. There's no one looking over your shoulder, no one clocking you in and out each day. Everyone is working at the company because they want to, and one of the core ideas of Red Gate is that the company exists to 'let people do the best work of their lives'. Everything is geared towards that.
    To help you do your job, Office Services and the IT department are there. If you need something to help you work better (a third or fourth monitor, footrests, or a new keyboard) then ask people in Information Systems (IS) or Office Services and you will be given it, no questions asked. Everyone has administrator access to their own machines, and you can install whatever you want on them. If there's a particular bit of software you need, then ask IS and they will buy it. As an example, last year I wanted to replace my main hard drive with an SSD; I had a summer job at school working in a computer repair shop, so I knew what to do. I went to IS and asked for 'an SSD, a SATA cable, and a screwdriver'. And I got it there and then, even the screwdriver. Awesome. I screwed it in myself, copied all my main drive files across, and I was good to go. Of course, if you're not happy doing that yourself, then IS will sort it all out for you, no problems.
    If you need something that the company doesn't have (say, a book off Amazon, or you need some specifications printed off & bound), then everyone has an expense limit of £100 that you can use without any sign-off needed from your managers. If you need a company credit card for whatever reason, then you can get it. This freedom extends to working hours and holiday; you're expected to be in the office 11am-3pm each day, but outside those times you can work whenever you want. If you need a half-day holiday on a day's notice, or even the same day, then you'll get it, unless there's a good reason you're needed that day. If you need to work from home for a day or so for whatever reason, then you can. If it's reasonable, then it's allowed.
    Trust issues?
    A lot of trust, and a lot of leeway, is given to all the people in Red Gate. Everyone is expected to work hard, do their jobs to the best of their ability, and there will be a minimum of bureaucratic obstacles that stop you doing your work. What happens if you abuse this trust? Well, an example is company trip expenses. You're free to expense what you like: food, drink, transport, etc., but if your expenses are not reasonable, then you will never travel with the company again. Simple as that. Everyone knows when they're abusing the system, so simply don't do it.
    Along with reasonableness, another phrase used is 'Don't be an a**hole'. If you act like an a**hole, and abuse any of the trust placed in you, even if you're the best tester, salesperson, dev, or manager in the company, then you won't be a part of the company any more. From what I know about other companies, employee trust is highly variable between companies, all the way up to CCTV trained on employees' monitors. As a dev, I want to produce well-written & useful code that solves people's problems. 
Being able to get whatever I need - install whatever tools I need, get time off when I need to, obtain reference books within a day - all let me do my job, and so let Red Gate help other people do their own jobs through the tools we produce. Plus, I don't think I would like working for a company that doesn't allow admin access to your own machine and blocks Facebook! Cross posted from Simple Talk.

    Read the article

  • Dynamic DataGrid columns in WPF DataGrid based on the underlying set of data (and their type)

    - by StatsMan
    Hello everyone, I've got kind of a conceptual question. I am in the process of wrapping some statistics classes I wrote into WPF. For that I have two DataGrid(-Views, currently in WinForms). In one DataGrid each row represents a column in the other. There I can set-up different variables (as in mathematical/statistical variables) with fields like "Header", "DataType", "ValidationBehaviour", "DisplayType". There I can also set-up how it should be displayed. Some Columns can automatically be set to ComboBoxColumns, some TextBoxColumns, and so on and so forth. So, now once I've set-up these Columns I can go to the other grid and enter my data. I may, for instance, have generated (in grid 1) one Column called "Annual Gross Salary" with input of numerical values. Another Column called "Education" with "0=NoEducation", "1=College Level", "3=Universitary" etc. These labels are displayed as text in the combobox and my statistics engine behind then selects the respective value (0-3) for calculations (i.e. ordinal, nominal variables). Sooo. In WinForms I could basically generate all the columns by hand in code and then add my data in the respective cells/rows. Now in WPF I thought that must be easy to realise. However, yesterday I got started with ICustomPropertyDescriptor which (maybe I was too thick) didn't give me the results I was looking for. Basically, I just need to be able to dynamically generate columns (and rows) with different Layout, Controls (ComboBox, simple Input, DateTimes) based on the data that I have. But I don't really know how to go about it? So here in summary: DataGrid 1 Purpose is to display columns that have been specified in DataGrid 2 In rows, the user can add any kind of data in the rows below the columns that is allowed as to the columns specifications DataGrid 2 Each row in this grid represents a column in DataGrid 1 Contains fields like Name/Header, DataType, Validation Behaviour, Default Value, Data Formatting, etc. Also contains a function to be able to set-up how it should be displayed. The user can select from, for instance, ComboBoxColumn (and also add the available options), DateTime, normal TextBox, CheckBox etc. After finishing adding a row it will automatically appear as a new column in DataGrid 1 I'd appreciate any kind of pointer into the right direction. Thanks very, very much in advance! :)

    Read the article

  • Establishing connection from java to access 2010 64bit

    - by user993250
    After going through several tutorials and blog posts, I have unsuccessfully tried to set up a connection from Java to a Microsoft Access database. My system specifications are: Win 7 [64 bit], odbcad32.exe [64 bit], Access 2010 [64 bit], JRE 6 [64 bit]. The code that I wrote for establishing a connection:
        public Connection makeConn() throws ClassNotFoundException, SQLException {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            conn = DriverManager.getConnection("jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\\Users\\ARIFAH\\Desktop\\sampleDb.mdb");
            return conn;
        }
    In the ODBC Data Source Administrator [located at %windir%\system32\odbcad32.exe] I performed the following steps: User DSN tab, Add, Microsoft Access Driver (*.mdb, *.accdb) [file ACEODBC.dll], Finish; Data Source Name: TestDriver; Description: test driver for access 2010 64 bit; Database, Select, browse: C:\Users\ARIFAH\Desktop\sampleDb.mdb, OK, Apply, OK. Now when I try to run my application, this is the error that I am getting:
        java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] Could not find file '(unknown)'.
        at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
        at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
        at sun.jdbc.odbc.JdbcOdbc.SQLDriverConnect(Unknown Source)
        at sun.jdbc.odbc.JdbcOdbcConnection.initialize(Unknown Source)
        at sun.jdbc.odbc.JdbcOdbcDriver.connect(Unknown Source)
        at java.sql.DriverManager.getConnection(Unknown Source)
        at java.sql.DriverManager.getConnection(Unknown Source)
        at dataBase.connection.makeConn(connection.java:22)
        at newmodulewizrd.ui.App.<init>(App.java:32)
        at archetypedcomponent.commands.newModuleHandler.execute(newModuleHandler.java:39)
        at org.eclipse.ui.internal.handlers.HandlerProxy.execute(HandlerProxy.java:294)
        at org.eclipse.core.commands.Command.executeWithChecks(Command.java:476)
        at org.eclipse.core.commands.ParameterizedCommand.executeWithChecks(ParameterizedCommand.java:508)
        at org.eclipse.ui.internal.handlers.HandlerService.executeCommand(HandlerService.java:169)
        at org.eclipse.ui.internal.handlers.SlaveHandlerService.executeCommand(SlaveHandlerService.java:241)
        at org.eclipse.ui.internal.handlers.SlaveHandlerService.executeCommand(SlaveHandlerService.java:241)
        at org.eclipse.ui.menus.CommandContributionItem.handleWidgetSelection(CommandContributionItem.java:770)
        at org.eclipse.ui.menus.CommandContributionItem.access$10(CommandContributionItem.java:756)
        at org.eclipse.ui.menus.CommandContributionItem$5.handleEvent(CommandContributionItem.java:746)
        at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
        at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003)
        at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3910)
        at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3503)
        at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405)
        at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369)
        at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221)
        at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500)
        at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
        at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493)
        at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149)
        at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113)
        at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194)
        at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
        at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559)
        at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514)
        at org.eclipse.equinox.launcher.Main.run(Main.java:1311)
        at org.eclipse.equinox.launcher.Main.main(Main.java:1287)
    Can somebody help with it?
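    One detail worth checking: the DSN-less connection string above names the older Jet driver, "Microsoft Access Driver (*.mdb)", while the driver actually installed in the 64-bit ODBC administrator registers itself as "Microsoft Access Driver (*.mdb, *.accdb)". A rough sketch of the two usual ways to line these up is below; it simply reuses the DSN name and file path from the question, so treat every name in it as an assumption about this particular setup.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class AccessConnectionSketch {
            // Sketch only: assumes the 64-bit ACE ODBC driver and the "TestDriver"
            // DSN described above; the class and method names are made up.
            public Connection makeConn() throws ClassNotFoundException, SQLException {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");

                // Option 1: connect through the DSN created in the 64-bit ODBC admin.
                // return DriverManager.getConnection("jdbc:odbc:TestDriver");

                // Option 2: DSN-less, but the driver name must match the installed
                // ACE driver exactly, including the "*.accdb" part.
                return DriverManager.getConnection(
                        "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
                        + "DBQ=C:\\Users\\ARIFAH\\Desktop\\sampleDb.mdb");
            }
        }
    Either way, the JVM bitness has to match the ODBC driver bitness, so the 64-bit JRE listed above should be paired with the 64-bit administrator at %windir%\system32\odbcad32.exe rather than the 32-bit one under SysWOW64.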

    Read the article

  • Becoming a professional programmer / software engineer

    - by Matt
    This isn't strictly about programming, more about being a programmer, so I'm sorry if its not the right kind of question to ask on this forum (mod, please delete if it isn't) I'm a computer tech in the US Army, and once I'm out I'll have eight years on the job. I'm about to start a degree through an online school (the only way I can get the army to pay for it while I'm still in), and I'm seriously looking at getting a computer science degree. I'm great with computers. I can take one apart and put it back together with my eyes closed. I'm A+ and Network+ certified and I'm getting a couple other CompTIA certs before I get out. I can work Windows as well as anyone on this planet and I'm not terrible with Linux. A job in computers is something I've always wanted. But, aside from being a computer technician, it seems that every job in the field requires programming ability. I like programming as a hobby. I programmed TI BASIC in high school and I'm teaching myself Python, but that's as far as my experience goes. That sort of brings me to my questions: I've always heard that the first language is the most difficult, and once you learn it well then all the others sort of fall into place for you. Is that true? Like, if I spend the next eight months mastering Python, will I pretty much be able to pick up at least fair proficiency in any other OO language within a month of studying it or whatever? How easy is it to burn out? the biggest thing I'm afraid of is just burning out on programming. I can go all day long if I'm programming strictly for my own personal desire, but I can imagine it being really easy to burn out after a few years of programming to deadlines and certain specifications. Especially if its a big project involving a dozen different designers. From what I told you about myself, would I already be qualified to work as a regular technician (geek squad type or maybe running a computer repair shop). Is Python a good base to learn from? I've heard that it makes you hate other languages because they feel more convoluted when learning, but also that its a great beginner language. If you're a professional programmer, did you have any of the same fears? Would you recommend that I stick to computer repair and Python rather than try to get into corporate programming? (just from what you've read in this thread, anyway) Thanks for taking the time to read all this and answer (if you did)

    Read the article

  • Reading Source Code Aloud

    - by Jon Purdy
    After seeing this question, I got to thinking about the various challenges that blind programmers face, and how some of them are applicable even to sighted programmers. Particularly, the problem of reading source code aloud gives me pause. I have been programming for most of my life, and I frequently tutor fellow students in programming, most often in C++ or Java. It is uniquely aggravating to try to verbally convey the essential syntax of a C++ expression. The speaker must give either an idiomatic translation into English, or a full specification of the code in verbal longhand, using explicit yet slow terms such as "opening parenthesis", "bitwise and", et cetera. Neither of these solutions is optimal. On the one hand, an idiomatic translation is only useful to a programmer who can de-translate back into the relevant programming code—which is not usually the case when tutoring a student. In turn, education (or simply getting someone up to speed on a project) is the most common situation in which source is read aloud, and there is a very small margin for error. On the other hand, a literal specification is aggravatingly slow. It takes far far longer to say "pound, include, left angle bracket, iostream, right angle bracket, newline" than it does to simply type #include <iostream>. Indeed, most experienced C++ programmers would read this merely as "include iostream", but again, inexperienced programmers abound and literal specifications are sometimes necessary. So I've had an idea for a potential solution to this problem. In C++, there is a finite set of keywords—63—and operators—54, discounting named operators and treating compound assignment operators and prefix versus postfix auto-increment and decrement as distinct. There are just a few types of literal, a similar number of grouping symbols, and the semicolon. Unless I'm utterly mistaken, that's about it. So would it not then be feasible to simply ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised. Speakers of any language would be able to verbally convey C++ code, and due to the regularity and fixity of the language, speech-to-text software could be optimised to accept C++ speech with a high degree of accuracy. So my question is twofold: first, is my solution feasible; and second, does anyone else have other potential solutions? I intend to take suggestions from here and use them to produce a formal paper with an example implementation of my solution.
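    To make the tail end of that proposal concrete, the whole mechanism is essentially a finite lookup table plus a fallback for identifiers and literals. A minimal sketch follows, written in Java with an invented spoken vocabulary chosen purely for illustration rather than the standardised one the post calls for.
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Sketch of the idea: a fixed, unique "spoken form" per C++ token.
        // The vocabulary below is illustrative only; the point is that the
        // table is small and finite, not that these are the right words.
        public class SpokenCpp {
            private static final Map<String, String> SPOKEN = new HashMap<>();
            static {
                SPOKEN.put("#include", "include");
                SPOKEN.put("<", "open angle");
                SPOKEN.put(">", "close angle");
                SPOKEN.put("::", "scope");
                SPOKEN.put("&&", "and and");
                SPOKEN.put(";", "stop");
            }

            // Turns an already-tokenized line into its spoken form, falling back
            // to the raw token (identifiers, literals) when no table entry exists.
            public static String speak(List<String> tokens) {
                StringBuilder out = new StringBuilder();
                for (String t : tokens) {
                    out.append(SPOKEN.getOrDefault(t, t)).append(' ');
                }
                return out.toString().trim();
            }

            public static void main(String[] args) {
                System.out.println(speak(List.of("#include", "<", "iostream", ">")));
                // prints: include open angle iostream close angle
            }
        }
    The interesting design work is in choosing the vocabulary so that every token's spoken form is short and unambiguous; the lookup code itself stays trivial.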

    Read the article

  • What's the best way to do base36 arithmetic in perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following: Operate on positive N-digit numbers in base 36 (e.g. digits are 0-9 A-Z) N is finite, say 9 Provide basic arithmetic, at the very least the following 3: Addition (A+B) Subtraction (A-B) Whole division, e.g. floor(A/B). Strictly speaking, I don't really need a base10 conversion ability - the numbers will 100% of time be in base36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base 10 and back" or converting to binary, or some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only 3 considerations are: It fits the minimum specifications above It's "standard". Currently we're using and old homegrown module based on base10 conversion done by hand that is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution instead of re-writing my own bicycle from scratch, but I'm perfectly capable of building it if no better standard possibility exists. It must be fast-ish (though not lightning fast). Something that takes 1 second to sum up 2 9-digit base36 numbers is worse than anything I can roll on my own :) P.S. Just to provide some context in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimentions are big both depth- and breadth- wise. The tree is VERY actively updated (inserts and deletes and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is an 9-character string built of A-Z0-9 (e.g. base 36 number). Additional considerations: It is required that the local order is unique per child (and obviously unique per parent), Any complete re-ordering of a parent is somewhat expensive, and thus the implementation is to try and assign - for a parent with X children - the orders which are somewhat evenly distributed between 0 and 36**10-1, so that almost no tree inserts result in a full re-ordering.
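    For what it's worth, the brute-force "convert, compute, convert back" route mentioned above is only a few lines when the language provides radix-36 parsing and formatting. The sketch below is in Java rather than Perl, purely to show the shape of the approach; the helper names and the fixed 9-digit width are assumptions taken from the question, and 9 base-36 digits top out around 1.0e14, so a 64-bit integer holds them comfortably.
        import java.util.Locale;

        // Illustrative only: fixed-width, positive base-36 values, as described above.
        public class Base36Sketch {
            private static final int WIDTH = 9;

            static long parse(String s) {
                return Long.parseLong(s, 36);              // accepts 0-9 and A-Z in either case
            }

            static String format(long v) {
                String t = Long.toString(v, 36).toUpperCase(Locale.ROOT);
                return "0".repeat(WIDTH - t.length()) + t; // pad back to the fixed 9-digit width
            }

            static String add(String a, String b) { return format(parse(a) + parse(b)); }
            static String sub(String a, String b) { return format(parse(a) - parse(b)); }
            static String div(String a, String b) { return format(parse(a) / parse(b)); } // floor for positive values

            public static void main(String[] args) {
                System.out.println(add("00000000Z", "000000001")); // 000000010
                System.out.println(div("00000000K", "000000003")); // 000000006
            }
        }
    In Perl the same shape works with a pair of small helper subs (or an existing CPAN base-36 module, if one fits the project's standards), keeping the arithmetic itself in plain integers.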

    Read the article

  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use-case is basically this:
    1. Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district.
    2. The analyst downloads some data, munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries).
    3. The analyst analyzes the data created in (2), gets close to her goal, but sees that she needs more data and so goes back to (1).
    4. Rinse and repeat until the tables and graphics meet QA/QC and satisfy the client.
    5. Write the report incorporating tables and graphics.
    6. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data with a new download (e.g. get the building permits from the last year) and pressing a "RECALCULATE" button, unless specifications change.
    At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ARCGIS, R, and Unix tools. Thanks!
    PS: Below is a basic Makefile that checks for dependencies on various intermediate datasets (with a ".RData" suffix) and scripts (".R" suffix). Make uses timestamps to check dependencies, so if you 'touch ss07por.csv', it will see that this file is newer than all the files / targets that depend on it, and execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for loading into a SQL database, and a step for a templating language like Sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback! http://www.gnu.org/software/make/manual/html%5Fnode/index.html#Top
        R=/home/wsprague/R-2.9.2/bin/R
        persondata.RData : ImportData.R ../../DATA/ss07por.csv Functions.R
                $R --slave -f ImportData.R
        persondata.Munged.RData : MungeData.R persondata.RData Functions.R
                $R --slave -f MungeData.R
        report.txt : TabulateAndGraph.R persondata.Munged.RData Functions.R
                $R --slave -f TabulateAndGraph.R > report.txt

    Read the article

  • Creating AST for shared and local variables

    - by Rizwan Abbasi
    Here is my grammar:
        grammar simulator;
        options {
            language = Java;
            output = AST;
            ASTLabelType=CommonTree;
        }
        //imaginary tokens
        tokens{
            SHARED;
            LOCALS;
            BOOL;
            RANGE;
            ARRAY;
        }
        parse : declaration+ ;
        declaration : variables ;
        variables : locals ;
        locals : (bool | range | array) ;
        bool : ID 'in' '[' ID ',' ID ']' ('init' ID)? -> ^(BOOL ID ID ID ID?) ;
        range : ID 'in' '[' INT '..' INT ']' ('init' INT)? -> ^(RANGE ID INT INT INT?) ;
        array : ID 'in' 'array' 'of' '[' INT '..' INT ']' ('init' INT)? -> ^(ARRAY ID INT INT INT?) ;
        ID : (('a'..'z' | 'A'..'Z'|'_')('a'..'z' | 'A'..'Z'|'0'..'9'|'_'))* ;
        INT : ('0'..'9')+ ;
        WHITESPACE : ('\t' | ' ' | '\r' | '\n' | '\u000C')+ {$channel = HIDDEN;} ;
    INPUT
        flag in [down, up] init down
        pc in [0..7] init 0
        CA in array of [0..5] init 0
    AST
    It has a small problem. Variables (bool, range or array) can be of two abstract types: 1. locals (each object will have its own variable) 2. shared (think of static in Java, the same for all objects). Now the requirements have changed. I want the user to input like this:
    NEW INPUT
        domains:
        upDown [up,down]
        possibleStates [0-7]
        booleans [true,false]
        locals:
        pc in possibleStates init 0
        flag in upDown init down
        flag1 in upDown init down
        map in array of booleans init false
        shared:
        pcs in possibleStates init 0
        flag in upDown init down
        flag1 in upDown init down
        maps in array of booleans init false
    Again, all the variables can be of two types (of any domain specified): 1. Local 2. Shared. In Domains: upDown [up,down] possibleStates [0-7]; upDown, up, down and possibleStates are of type ID (ID is defined in my above grammar), and 0 and 7 are of type INT.
    Can anybody help me convert my current grammar to meet the new specifications?

    Read the article

  • T-SQL generated from LINQ to SQL is missing a where clause

    - by Jimmy W
    I have extended some functionality to a DataContext object (called "CodeLookupAccessDataContext") such that the object exposes some methods to return results of LINQ to SQL queries. Here are the methods I have defined: public List<CompositeSIDMap> lookupCompositeSIDMap(int regionId, int marketId) { var sidGroupId = CompositeSIDGroupMaps.Where(x => x.RegionID.Equals(regionId) && x.MarketID.Equals(marketId)) .Select(x => x.CompositeSIDGroup); IEnumerator<int> sidGroupIdEnum = sidGroupId.GetEnumerator(); if (sidGroupIdEnum.MoveNext()) return lookupCodeInfo<CompositeSIDMap, CompositeSIDMap>(x => x.CompositeSIDGroup.Equals(sidGroupIdEnum.Current), x => x); else return null; } private List<TResult> lookupCodeInfo<T, TResult>(Func<T, bool> compLambda, Func<T, TResult> selectLambda) where T : class { System.Data.Linq.Table<T> dataTable = this.GetTable<T>(); var codeQueryResult = dataTable.Where(compLambda) .Select(selectLambda); List<TResult> codeList = new List<TResult>(); foreach (TResult row in codeQueryResult) codeList.Add(row); return codeList; } CompositeSIDGroupMap and CompositeSIDMap are both tables in our database that are represented as objects in my DataContext object. I wrote the following code to call these methods and display the T-SQL generated after calling these methods: using (CodeLookupAccessDataContext codeLookup = new CodeLookupAccessDataContext()) { codeLookup.Log = Console.Out; List<CompositeSIDMap> compList = codeLookup.lookupCompositeSIDMap(5, 3); } I got the following results in my log after invoking this code: SELECT [t0].[CompositeSIDGroup] FROM [dbo].[CompositeSIDGroupMap] AS [t0] WHERE ([t0].[RegionID] = @p0) AND ([t0].[MarketID] = @p1) -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [5] -- @p1: Input Int (Size = 0; Prec = 0; Scale = 0) [3] -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.1 SELECT [t0].[PK_CSM], [t0].[CompositeSIDGroup], [t0].[InputSID], [t0].[TargetSID], [t0].[StartOffset], [t0].[EndOffset], [t0].[Scale] FROM [dbo].[CompositeSIDMap] AS [t0] -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.1 The first T-SQL statement contains a where clause as specified and returns one column as expected. However, the second statement is missing a where clause and returns all columns, even though I did specify which rows I wanted to view and which columns were of interest. Why is the second T-SQL statement generated the way it is, and what should I do to ensure that I filter out the data according to specifications via the T-SQL? Also note that I would prefer to keep lookupCodeInfo() and especially am interested in keeping it enabled to accept lambda functions for specifying which rows/columns to return.

    Read the article

  • Understanding NFS4 (Linux server)

    - by drumfire
    I've been a bit bothered by NFS4 on Linux. Some information 'out there' seems to conflict with other information, and other information appears hard to find. So here are a couple of things that caught my attention, hopefully someone out there can shed some light on this. This question focuses exclusively on NFS4 without Kerberos etc. 1. Exports There is ambiguous information in the exports manpage on the structure of /etc/exports. To quote from exports(5): Also, each line may have one or more specifications for default options after the path name, in the form of a dash ("-") followed by an option list. The option list is used for all subsequent exports on that line only. What does "subsequent exports on that line only" mean? 1.2 fsid=0 not required anymore? I was searching for fsid when I found a comment on the linux-nfs list stating fsid=0 is not required anymore. Now I'm just confused, do I need it with nfs4 or not?! 2. Non-exported directory still mountable Say I have the following tree: /exp /exp/users /exp/distr /exp/distr/archlinux /exp/distr/debian And I have the following entries in this fstab entry: /dev/disk/by-label/users /mnt/users ext4 defaults 0 0 /dev/disk/by-label/distr /mnt/distr ext4 defaults 0 0 /mnt/users /exp/users none bind 0 0 /mnt/distr /exp/distr none bind 0 0 And my exports is exactly this: /exp 192.168.1.0/24(fsid=0,rw,async,no_subtree_check,no_root_squash) /exp/distr 192.168.1.0/24(rw,async,no_subtree_check,no_root_squash) And exportfs -arv shows: exporting 192.168.1.0/24:/exp/distr exporting 192.168.1.0/24:/exp Then why am I able to do this and get no error on a client: mount -t nfs4 server:/exp/users /tmp/test Even though /exp/users is not exported? I didn't export this directory, and while I don't see the contents of /dev/disk/by-label/users unless I specify crossmnt, I am still able to write to the directory. Everything I write to there goes to the underlying directory of /exp/users which can be seen when I umount /exp/users; ls /exp/users.. 3. The odd case of showmount -d server As stated by rpc.mountd(8), this command should display directories that are either currently mounted by clients, or stale entries in /var/lib/nfs/rmtab, as can be read: The rpc.mountd daemon registers every successful MNT request by adding an entry to the /var/lib/nfs/rmtab file. When receivng a UMNT request from an NFS client, rpc.mountd simply removes the matching entry from /var/lib/nfs/rmtab, as long as the access control list for that export allows that sender to access the export. (...) Note, however, that there is little to guarantee that the contents of /var/lib/nfs/rmtab are accurate. A client may continue accessing an export even after invoking UMNT. If the client reboots without sending a UMNT request, stale entries remain for that client in /var/lib/nfs/rmtab. After reading this I surely wonder: Isn't it terribly insecure to just expose this type of client information; Aren't unaware server admins bound to have an rmtab with a lot of stale clients; Is this the reason that clients that mount nfs4 directories with mount -v get to see output like "nothing was mounted" even though something was mounted? I have a lot of other questions regarding nfs4, but I'll keep it at this for the moment.. :)

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31  | Next Page >