Search Results

Search found 4580 results on 184 pages for 'faster'.


  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>I'm looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr>

    I have a situation at my company where a database is being used where another solution would be better (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary search). Normally a database would be an OK solution even though the problem does not require one: none of the elements of ACID are needed, the data is read-only, it is updated about every 3-5 years (which also requires other source-code changes), it fits in memory, and it can be keyed into via binary search (a tad faster than a DB, but speed is not an issue).

    The problem is that this code runs on our enterprise server but is shared with several PC platforms (some disconnected, some using a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department. Each of these hits its own platform's version of the database (containing its own copy of the static data). With every implementation and every little change, something different goes wrong. There are many other issues as well. I can't even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases and, of course, its own literal storage, e.g., an in-source table). Other parts of the system use databases in proper, less obscure ways; there is no problem with those parts.

    So why don't we just change it? I don't have the administrative ability to force a change. But I'm affected because sometimes I have to help fix the problems, and mostly because it causes outages and tons of extra IT time by other programmers, and d*mmit that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose solutions that won't work: increase communication, implement more safeguards and standards, etc. But every time, in a different part of the already-pared-down but still multi-step process, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! In general these are not single-person failures, but understandable miscommunications. Communication at our company is actually better than most; people just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically unmaintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and cannot be convinced on the matter.

    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase all database-related effort from this section of the project, along with all the drawbacks that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem and that the real solution is coding around these limits of human nature, just as you would code around a bug in a third-party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and do it not with logic but with authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE (instead of a flatfile or other storage)? If you need...
    - Random reads / transparent search optimization
    - Advanced / varied / customizable searching and sorting capabilities
    - Transactions / rollback
    - Locks, semaphores
    - Concurrency control / shared users
    - Security
    - Easier one-to-many / many-to-many relationships
    - Easy modification
    - Scalability
    - Load balancing
    - Random updates / inserts / deletes
    - Advanced queries
    - Administrative control of design, etc.
    - SQL / learning curve
    - Debugging / logging
    - Centralized / live backup capabilities
    - Cached queries / developed and cached execution plans
    - Interleaved updates and reads
    - Referential integrity; avoiding redundant / missing / corrupt / out-of-sync data
    - Reporting (from an OLAP or OLTP DB) / turnkey generation tools
    Disadvantages:
    - Important to get right the first time (professional design), but only because it's meant to last
    - Software and hardware cost
    - Usually accessed over a network, so speed is an issue (even the best local design requires a separate process with marshalling, network layers, and inter-process communication)
    - Indices and query processing can stand in the way of simple processing (vs. a flatfile)

    WHY USE A FLATFILE? If you only need...
    - Sequential row processing only
    - Limited usage: append only (no reading, no master key/update)
    - To update only the record you're reading (fixed-length records only)
    - Data too big to fit into memory
    - Local disk / read-ahead network connection
    - Portability / small system
    - Email / cut & paste / storage as a document by a novice - simple format
    - Low design learning curve, but high cost later

    WHY USE AN IN-MEMORY TABLE (tables, arrays, etc.)? If you need...
    - Processing of a single DB/flatfile record that was imported
    - A known size of data
    - Static data, if hardcoding the table
    - Narrow, unchanging use (e.g., one program or proc) - this includes a class that will be shared but encapsulates its data manipulation
    - Extreme speed / high transaction frequency
    - Random access - but search is dependent on implementation

    Following are some other posts about the topic:
    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is whether anybody can recommend a hard, authoritative source containing these reasons: a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
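    For the curious, here is a minimal, hypothetical sketch (in Java, purely for illustration; the names RateTable, KEYS, VALUES and lookup are invented for this example and are not from the actual system) of the "static table in source code" approach argued for above: an auto-generated, sorted, read-only pair of arrays searched with a binary search, entirely in memory, with no database and no file permissions involved.

    import java.util.Arrays;

    // GENERATED FILE (illustrative): static, read-only lookup data compiled into the program.
    final class RateTable {
        // Keys must be kept sorted so Arrays.binarySearch() works.
        private static final String[] KEYS   = { "A100", "A200", "B150" };
        private static final double[] VALUES = { 1.25, 2.50, 0.75 };

        private RateTable() { }

        // O(log n) lookup, all in memory - no database round trip, no file I/O.
        static double lookup(String key) {
            int i = Arrays.binarySearch(KEYS, key);
            if (i < 0) {
                throw new IllegalArgumentException("unknown key: " + key);
            }
            return VALUES[i];
        }
    }

    Regenerating this file every few years, alongside the other source changes the data already requires, would replace the per-platform database copies entirely.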

    Read the article

  • Optimizing sorting container of objects with heap-allocated buffers - how to avoid hard-copying buffers

    - by Kache4
    I was making sure I knew how to do the op= and copy constructor correctly in order to sort() properly, so I wrote up a test case. After getting it to work, I realized that the op= was hard-copying all the data_. I figure if I wanted to sort a container with this structure (its elements have heap allocated char buffer arrays), it'd be faster to just swap the pointers around. Is there a way to do that? Would I have to write my own sort/swap function?

    #include <deque>
    //#include <string>
    //#include <utility>
    //#include <cstdlib>
    #include <cstring>
    #include <iostream>
    //#include <algorithm> // I use sort(), so why does this still compile when commented out?
    #include <boost/filesystem.hpp>
    #include <boost/foreach.hpp>

    using namespace std;
    namespace fs = boost::filesystem;

    class Page {
    public:
        // constructor
        Page(const char* path, const char* data, int size)
            : path_(fs::path(path)), size_(size), data_(new char[size]) {
            // cout << "Creating Page..." << endl;
            strncpy(data_, data, size);
            // cout << "done creating Page..." << endl;
        }

        // copy constructor
        Page(const Page& other)
            : path_(fs::path(other.path())), size_(other.size()), data_(new char[other.size()]) {
            // cout << "Copying Page..." << endl;
            strncpy(data_, other.data(), size_);
            // cout << "done copying Page..." << endl;
        }

        // destructor
        ~Page() { delete[] data_; }

        // accessors
        const fs::path& path() const { return path_; }
        const char* data() const { return data_; }
        int size() const { return size_; }

        // operators
        Page& operator = (const Page& other) {
            if (this == &other) return *this;
            char* newImage = new char[other.size()];
            strncpy(newImage, other.data(), other.size());
            delete[] data_;
            data_ = newImage;
            path_ = fs::path(other.path());
            size_ = other.size();
            return *this;
        }

        bool operator < (const Page& other) const { return path_ < other.path(); }

    private:
        fs::path path_;
        int size_;
        char* data_;
    };

    class Book {
    public:
        Book(const char* path) : path_(fs::path(path)) {
            cout << "Creating Book..." << endl;
            cout << "pushing back #1" << endl;
            pages_.push_back(Page("image1.jpg", "firstImageData", 14));
            cout << "pushing back #3" << endl;
            pages_.push_back(Page("image3.jpg", "thirdImageData", 14));
            cout << "pushing back #2" << endl;
            pages_.push_back(Page("image2.jpg", "secondImageData", 15));

            cout << "testing operator <" << endl;
            cout << pages_[0].path().string() << (pages_[0] < pages_[1] ? " < " : " > ") << pages_[1].path().string() << endl;
            cout << pages_[1].path().string() << (pages_[1] < pages_[2] ? " < " : " > ") << pages_[2].path().string() << endl;
            cout << pages_[0].path().string() << (pages_[0] < pages_[2] ? " < " : " > ") << pages_[2].path().string() << endl;

            cout << "sorting" << endl;
            BOOST_FOREACH (Page p, pages_)
                cout << p.path().string() << endl;
            sort(pages_.begin(), pages_.end());
            cout << "done sorting\n";
            BOOST_FOREACH (Page p, pages_)
                cout << p.path().string() << endl;

            cout << "checking datas" << endl;
            BOOST_FOREACH (Page p, pages_) {
                char data[p.size() + 1];
                strncpy((char*)&data, p.data(), p.size());
                data[p.size()] = '\0';
                cout << p.path().string() << " " << data << endl;
            }
            cout << "done Creating Book" << endl;
        }

    private:
        deque<Page> pages_;
        fs::path path_;
    };

    int main() {
        Book* book = new Book("/some/path/");
    }
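    One way to avoid the deep copies is sketched below, assuming the rest of Page stays as written above; this is an illustrative sketch, not a tested drop-in replacement. With a C++11 compiler, move operations let sorting shuffle the raw pointers instead of copying the buffers; on older compilers, a member swap plus a namespace-scope swap overload gives any code that exchanges Pages a cheap pointer swap (std::swap needs <algorithm>, or <utility> in C++11). Another option is to store smart pointers such as boost::shared_ptr<Page> in the deque and sort those with a comparator, so the Page objects themselves never move.

    // Additions to class Page (all existing members unchanged).

    // C++11 move constructor: take over the buffer, leave the source empty but destructible.
    Page(Page&& other)
        : path_(other.path_), size_(other.size_), data_(other.data_) {
        other.data_ = 0;
        other.size_ = 0;
    }

    // C++11 move assignment: free our buffer, then steal the other one's pointer.
    Page& operator = (Page&& other) {
        if (this != &other) {
            delete[] data_;
            path_ = other.path_;
            size_ = other.size_;
            data_ = other.data_;   // pointer handoff - no strncpy of the image bytes
            other.data_ = 0;
            other.size_ = 0;
        }
        return *this;
    }

    // Pre-C++11 alternative: a cheap member swap that only exchanges fields...
    void swap(Page& other) {
        std::swap(path_, other.path_);
        std::swap(size_, other.size_);
        std::swap(data_, other.data_);   // swaps the two raw pointers, not the buffers
    }

    // ...plus a namespace-scope overload (outside the class) found via argument-dependent lookup.
    inline void swap(Page& a, Page& b) { a.swap(b); }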

    Read the article

  • How do I search the MediaStore for a specific directory instead of the entire external storage?

    - by Nick Lopez
    In my app I have an option that allows users to browse for audio files on their phone to add to the app. However, I am having trouble making the query code faster. Currently it searches the entire external storage and causes the phone to prompt a force close/wait warning. I would like to take the code I have posted below and make it more efficient, either by searching in a specific folder on the phone or by streamlining the process to make the file search quicker. I am not sure how to do this, however. Thanks!

    public class BrowseActivity extends DashboardActivity implements OnClickListener, OnItemClickListener {
        private List<Sound> soundsInDevice = new ArrayList<Sound>();
        private List<Sound> checkedList;
        private ListView browsedList;
        private BrowserSoundAdapter adapter;
        private long categoryId;
        private Category category;
        private String currentCategoryName;
        private String description;
        // private Category newCategory ;
        private Button doneButton;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            requestWindowFeature(Window.FEATURE_NO_TITLE);
            setContentView(R.layout.activity_browse);
            checkedList = new ArrayList<Sound>();
            browsedList = (ListView) findViewById(android.R.id.list);
            doneButton = (Button) findViewById(R.id.doneButton);
            soundsInDevice = getMediaSounds();
            if (soundsInDevice.size() > 0) {
                adapter = new BrowserSoundAdapter(this, R.id.browseSoundName, soundsInDevice);
            } else {
                Toast.makeText(getApplicationContext(),
                        getString(R.string.no_sounds_available), Toast.LENGTH_SHORT).show();
            }
            browsedList.setAdapter(adapter);
            browsedList.setOnItemClickListener(this);
            doneButton.setOnClickListener(this);
        }

        private List<Sound> getMediaSounds() {
            List<Sound> mediaSoundList = new ArrayList<Sound>();
            ContentResolver cr = getContentResolver();
            String[] projection = { MediaStore.Audio.Media._ID, MediaStore.Audio.Media.DISPLAY_NAME,
                    MediaStore.Audio.Media.TITLE, MediaStore.Audio.Media.DATA, MediaStore.Audio.Media.DURATION };
            final Uri uri = MediaStore.Audio.Media.EXTERNAL_CONTENT_URI;
            Log.v("MediaStore.Audio.Media.EXTERNAL_CONTENT_URI", "" + uri);
            final Cursor cursor = cr.query(uri, projection, null, null, null);
            int n = cursor.getCount();
            Log.v("count", "" + n);
            if (cursor.moveToFirst()) {
                do {
                    String soundName = cursor.getString(cursor
                            .getColumnIndexOrThrow(MediaStore.Audio.Media.DISPLAY_NAME));
                    Log.v("soundName", "" + soundName);
                    String title = cursor.getString(cursor
                            .getColumnIndexOrThrow(MediaStore.Audio.Media.TITLE));
                    Log.v("title", "" + title);
                    String path = cursor.getString(cursor
                            .getColumnIndexOrThrow(MediaStore.Audio.Media.DATA));
                    Log.v("path", "" + path);
                    Sound browsedSound = new Sound(title, path, false, false, false, false, 0);
                    Log.v("browsedSound", "" + browsedSound);
                    mediaSoundList.add(browsedSound);
                    Log.v("mediaSoundList", "" + mediaSoundList.toString());
                } while (cursor.moveToNext());
            }
            return mediaSoundList;
        }

        public class BrowserSoundAdapter extends ArrayAdapter<Sound> {
            public BrowserSoundAdapter(Context context, int textViewResourceId, List<Sound> objects) {
                super(context, textViewResourceId, objects);
            }

            @Override
            public View getView(final int position, View convertView, ViewGroup parent) {
                ViewHolder viewHolder;
                View view = convertView;
                LayoutInflater inflater = getLayoutInflater();
                if (view == null) {
                    view = inflater.inflate(R.layout.list_item_browse, null);
                    viewHolder = new ViewHolder();
                    viewHolder.soundNameTextView = (TextView) view.findViewById(R.id.browseSoundName);
                    viewHolder.pathTextView = (TextView) view.findViewById(R.id.browseSoundPath);
                    viewHolder.checkToAddSound = (CheckBox) view.findViewById(R.id.browse_checkbox);
                    view.setTag(viewHolder);
                } else {
                    viewHolder = (ViewHolder) view.getTag();
                }
                final Sound sound = soundsInDevice.get(position);
                if (sound.isCheckedState()) {
                    viewHolder.checkToAddSound.setChecked(true);
                } else {
                    viewHolder.checkToAddSound.setChecked(false);
                }
                viewHolder.soundNameTextView.setText(sound.getName());
                viewHolder.pathTextView.setText(sound.getUri());
                viewHolder.checkToAddSound.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        CheckBox cb = (CheckBox) v.findViewById(R.id.browse_checkbox);
                        boolean checked = cb.isChecked();
                        boolean newValue = checked;
                        updateView(position, newValue);
                        doneButtonStatus(checkedList.size());
                    }
                });
                return view;
            }
        }

        // Adapter view holder class
        private class ViewHolder {
            private TextView soundNameTextView;
            private TextView pathTextView;
            private CheckBox checkToAddSound;
        }

        // done button On Click
        @Override
        public void onClick(View view) {
            boolean status = getIntent().getBooleanExtra("FromAddCat", false);
            Log.v("for add category", "enters in if");
            if (status) {
                Log.v("for add category", "enters in if1");
                currentCategoryName = getIntent().getStringExtra("categoryName");
                description = getIntent().getStringExtra("description");
                boolean existCategory = SQLiteHelper.getCategoryStatus(currentCategoryName);
                if (!existCategory) {
                    category = new Category(currentCategoryName, description, false);
                    category.insert();
                    category.update();
                    Log.v("for add category", "enters in if2");
                }
            } else {
                categoryId = getIntent().getLongExtra("categoryId", -1);
                category = SQLiteHelper.getCategory(categoryId);
            }
            for (Sound checkedsound : checkedList) {
                checkedsound.setCheckedState(false);
                checkedsound.insert();
                category.getSounds().add(checkedsound);
                final Intent intent = new Intent(this, CategoriesActivity.class);
                finish();
                startActivity(intent);
            }
        }

        @Override
        public void onItemClick(AdapterView<?> arg0, View view, int position, long arg3) {
            boolean checked = true;
            boolean newValue = false;
            CheckBox cb = (CheckBox) view.findViewById(R.id.browse_checkbox);
            if (cb.isChecked()) {
                cb.setChecked(!checked);
                newValue = !checked;
            } else {
                cb.setChecked(checked);
                newValue = checked;
            }
            updateView(position, newValue);
            doneButtonStatus(checkedList.size());
        }

        private void doneButtonStatus(int size) {
            if (size > 0) {
                doneButton.setEnabled(true);
                doneButton.setBackgroundResource(R.drawable.done_button_drawable);
            } else {
                doneButton.setEnabled(false);
                doneButton.setBackgroundResource(R.drawable.done_btn_disabled);
            }
        }

        private void updateView(int index, boolean newValue) {
            System.out.println(newValue);
            Sound sound = soundsInDevice.get(index);
            if (newValue == true) {
                checkedList.add(sound);
                sound.setCheckedState(newValue);
            } else {
                checkedList.remove(sound);
                sound.setCheckedState(newValue);
            }
        }
    }
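    One way to keep this from scanning all of external storage, sketched below as a variant of the getMediaSounds() method above, is to pass a selection on the DATA (file path) column so the MediaStore only returns rows under a single folder; the folder path is an assumption for illustration, and running the query off the UI thread (for example from an AsyncTask) would also help avoid the force close/wait warning.

    // Sketch: query only one directory by filtering on the file-path column.
    // The folder argument (e.g. "/mnt/sdcard/Music/MyApp/") is a placeholder.
    private List<Sound> getMediaSounds(String folder) {
        List<Sound> mediaSoundList = new ArrayList<Sound>();
        ContentResolver cr = getContentResolver();
        String[] projection = { MediaStore.Audio.Media.TITLE, MediaStore.Audio.Media.DATA };
        String selection = MediaStore.Audio.Media.DATA + " LIKE ?";
        String[] selectionArgs = { folder + "%" };   // only rows whose path starts with the folder
        Cursor cursor = cr.query(MediaStore.Audio.Media.EXTERNAL_CONTENT_URI,
                projection, selection, selectionArgs, null);
        if (cursor != null) {
            try {
                while (cursor.moveToNext()) {
                    String title = cursor.getString(
                            cursor.getColumnIndexOrThrow(MediaStore.Audio.Media.TITLE));
                    String path = cursor.getString(
                            cursor.getColumnIndexOrThrow(MediaStore.Audio.Media.DATA));
                    mediaSoundList.add(new Sound(title, path, false, false, false, false, 0));
                }
            } finally {
                cursor.close();   // always release the cursor
            }
        }
        return mediaSoundList;
    }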

    Read the article

  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest technology events in India, led by Microsoft. It was attended by more than 3,000 technology enthusiasts, making it one of the best-organized events of the year. Though I have attempted to attend almost all the technology events here, I have not seen a bigger or better event in the Indian subcontinent. There were 21 technical tracks at Tech·Ed India 2010, spanning more than 745 learning opportunities. I was fortunate enough to be a part of this whole event, as both a speaker and a delegate.

    TechEd India Speaker Badge and A Token of Lifetime

    Hotel Selection

    I presented three different sessions at TechEd India and was also part of a panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I occasionally stay away from my family. For this reason, I took my wife Nupur and my daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized, so as to maximize my time bonding with my family while still having time to network with the technology community. The hotel, Lalit Ashok, is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. It was a bit pricey, but looking at all the advantages, I decided to book there.

    Hotel Lalit Ashok

    Nupur Dave and Shaivi Dave

    Arrival Day – DAY 0 – April 11, 2010

    I reached the event a day early, and that was a wise decision, for I was able to relax a bit and go over my presentation for the next day's course. I am the kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends, and I even checked out the location where I would be presenting the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft, who helped me out with a few logistics issues that occurred the day before. I was not aware that the very next day he was going to be "The Man" of the TechEd 2010 event. Vinod Kumar from Microsoft was very kind as he talked to me about my upcoming session; he gave me some really helpful suggestions, which I was able to incorporate into my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, extremely busy with logistics; even so, he found some spare time to chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta, Vinod, Bijoy, Harish, Sachin and Abhishek (of course!).

    Map of TechEd India 2010 Event

    Day 1 – April 12, 2010

    From morning until night, this was truly a very busy day for me. I had two presentations and one panel discussion, and needless to say, a few meetings to attend as well. The day started with a keynote from S. Somasegar, in which he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than-life uniform screen. It was truly something to see. The title music of the keynote was very interesting, and it featured Bijoy Singhal as the model. It was fun to talk to him afterwards, when we laughed together at jokes about his modeling assignment.

    TechEd India Keynote Opening Featuring Bijoy

    TechEd India 2010 Keynote – S. Somasegar

    Time: 11:15am – 11:45am
    Session 1: True Lies of SQL Server – SQL Myth Buster

    Following the excellent keynote, I had my very first session, on the subject of SQL Server myth busting. At first I was a bit nervous: it came right after the keynote, it was my very first session, and during my presentation I saw lots of Microsoft Product Team members. Well, it went really well, and I had a very good discussion with the attendees. I felt that well begun was half done, and my confidence returned. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate a few SQL Server myths and their resolutions, backing them up with demos. This demo presentation is a must-attend for all developers and administrators at the event. This is going to be a very quick yet fun session.

    Pinal Presenting session at TechEd India 2010

    Time: 1:00 PM – 2:00 PM
    Lunch with Somasegar

    After the session I went to see my daughter, and then I headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank Abhishek, who made it possible for us; because of his efforts, all the MVPs had the opportunity to meet such a legendary person and to talk with him about Microsoft technology. Though Somasegar holds such a high position at Microsoft, he is very polite and a real gentleman, and how I wish that everybody in the industry were like him. Believe me, if you spread love and kindness, that is what you will receive back. As soon as lunch was over, I ran to the session hall, as my second presentation was about to start.

    Time: 2:30pm – 3:30pm
    Session 2: Master Data Services in Microsoft SQL Server 2008 R2

    Business Intelligence was a subject widely talked about at TechEd. Everybody was interested in it, and I was no exception, so I consider myself fortunate to have presented on Master Data Services. When I initially learned this subject, I was a bit confused about the usage of this tool. Later on, I decided I would tackle how we developers and DBAs fail to understand something as simple as this and, even worse, create confusion about the technology. During system design, it is very important to have reference material or master lookup tables. Well, I talked about exactly that and presented the session keeping it as the center of my talk. The session went very well and I received lots of interesting questions, along with many compliments for discussing this subject in terms of real-life scenarios. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject matter.

    Pinal Presenting session at TechEd India 2010

    The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft's platform appeal.
    This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables a consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, MDS – the Master Data hub, which is a vital component – helps ensure the consistency of reporting across systems and delivers faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise.

    Pinal Presenting session at TechEd India 2010

    The day was still not over for me. I ran into several friends, but we were not able to keep our enthusiasm under control amid all the rumors that SQL Server 2008 R2 was about to be launched in the next day's keynote. I then ran to my third and final technical event of the day - a panel discussion with some of the top technologists of India.

    Time: 5:00pm – 6:00pm
    Panel Discussion: Harness the power of Web – SEO and Technical Blogging

    Having delivered two technical sessions by this time, I was a bit tired, but no less enthusiastic when it came to talking about blogs and technology. We discussed many different topics there. I told them that the most important aspect of any blog is its content. We discussed in depth the issue of plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and which acts are morally and technically wrong. A couple of questions were raised about what kind of liberty a person has when writing blogs. Well, it was generally agreed that a blog is mainly a representation of our ideas and thoughts and should not be governed by external entities. As long as one writes what one really wants to say, without providing incorrect information or practicing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable, so it was extended for 30 more minutes. Finally, we decided to bring the discussion to a close and agreed that we would continue the topic next year.

    TechEd India Panel Discussion on Web, Technology and SEO

    Surprisingly, the day was just beginning after all of this. By this time, I had met almost all the MVPs who had arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too, and I decided to go meet several friends from the Community who communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me in which I spoke in my mother tongue, Gujarati. It was funny that I had talked in Gujarati almost all day, but during the interview I could not find the right Gujarati words. I think we all think in English when we think about technology, so as to address universality. After meeting them, I headed towards the Speakers' Dinner.

    Time: 8:00 PM – onwards
    Speakers Dinner

    The Speakers' Dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked about so many different things, from Xbox to Hindi movies, and from SQL to samosas. I just cannot express how much fun I had. After a long evening, when I returned to my room and met Shaivi, I instantly felt relaxed. Kids are really gifts from God. It was a really long but exciting day. So many things happened in just one day: the Visual Studio launch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, a community leaders meeting, the speakers dinner and, last but not least, playing with my child! A perfect day!

    Day 2 – April 13, 2010

    The day started with a bang with an excellent keynote by Kamal Hathi, who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 million rows in Excel brought lots of applause from the audience.

    Kamal Hathi Presenting Keynote at TechEd India 2010

    The day was a bit easier for me. I had no sessions and no events planned, only a few meetings for the second day of the event. I sat in the speakers' lounge for half a day and met many people there. I attended nearly 9 different meetings, on very different subjects. Here is a list of the topics of the Community-related meetings:
    - SQL PASS and its involvement in India and the subcontinent
    - How to start community blogging
    - Forums and developing an aptitude for technology
    - Ahmedabad/Gandhinagar User Groups and their development
    - SharePoint and SQL
    - Business meeting – a client meeting
    - Business meeting – a potential performance tuning project
    - Business meeting – Solid Quality Mentors (SolidQ)
    - And family friends

    Pinal Dave at TechEd India

    The day passed by quickly amid these meetings. In the evening, I headed to the Partners Expo with friends and checked out a few of the booths. I really wanted to talk about some of the products, but because of the freebies there was such a crowd that I finally decided to just take the partners' contact details. I will now start sending them my queries and, hopefully, I will have my questions answered. Nupur and Shaivi also had one meeting to attend; it was with our family friend Vijay Raj. Vijay is also a person who loves technology, and loves it more than anybody. I see him growing and learning every day, yet still remaining 'human'. I believe that if anyone else acquired as much knowledge as he has, that person would become either a computer or a cyborg; yet Vijay is still a kind gentleman and remains our close family friend. Shaivi was really happy to play with Uncle Vijay.

    Pinal Dave and Vijay Raj

    Renuka Prasad, a Microsoft MVP, impressed me with his passion for and knowledge of SQL. Every time he gives me credit for his success, I realize how humble he is: he has far more certifications than I do and has worked with SQL for many more years. He is an excellent photographer as well; most of the photos in this blog post were taken by him. I told him that if he ever wants a part-time job, he could do photography very well.

    Pinal Dave and Renuka Prasad

    I also met L Srividya from Microsoft, whom I was looking forward to meeting. She is a bundle of knowledge, and everyone would surely learn a lot from her. I was able to get a few minutes with her and, well, I felt confident. She enlightened me on SQL Server BI concepts, domain management, SQL Server security and a few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint, but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I figured out that the most recent training he delivered was on SQL Server 2008 R2.
    I joked with him that it hurts my ego that he is now more popular in SQL training and consulting than I am. I am sure all of you agree that working with good people is a gift from God, and I am fortunate enough to work with the best of the best industry experts. It was a great pleasure to hang out with my Community friends – Ashwin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari, Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!). (Please let me know if I met you at the event and forgot to list your name here.)

    Time: 8:00 PM – onwards
    Community Leaders Dinner

    After lots of meetings, I headed towards the Community Leaders dinner and met almost all the folks I had met in the morning. The discussion was much the same, but the really good thing was that we were enjoying it. The food was really good. Nupur was invited to the event, but Shaivi could not come. When Nupur tried to enter, she was stopped because Shaivi did not have a pass for the dinner. Nupur explained that Shaivi is only 8 months old, does not eat outside food, and cannot stay by herself at this age, but the doorkeeper did not agree and said that without the entry details Shaivi could not go in, though Nupur could. Nupur called me on the phone and asked me to help her out. By the time I was outside, the organizer of the event had reached the door and happily approved Shaivi to join the party. Once at the party, Shaivi had lots of fun meeting so many people.

    Shaivi Dave and Abhishek Kant

    Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com)

    Day 3 – April 14, 2010

    Though it was the last day, I was very excited, as I was about to present my favorite session. Query Optimization and Performance Tuning is my domain expertise, and I make my living consulting and training on it. Today's session was on the same subject and, as an additional twist, it also covered Spatial Database. I have always been intrigued by Spatial Database and have enjoyed learning about it; however, I had never thought much about Spatial Indexing before it was decided that I would do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and review the content. Furthermore, today was really what I call my 'learning day'. So far I had not attended any session at TechEd and I felt a bit down about that. Everybody spends their valuable time and money to learn something new and exciting at TechEd, and I had not attended a single session even though it was already the last day of the event. I did have a plan for the day, though, and I attended two technical sessions before my own session on spatial databases. I attended 2 sessions by Vinod Kumar. Vinod is a natural storyteller, and there was no doubt that his sessions would be jam-packed; people attend his sessions simply because Vinod is the speaker. He has never once disappointed his audience; he is truly a good speaker and knows his stuff very well. I personally do not think anyone in India can be compared to him when it comes to SQL.

    Time: 12:30pm-1:30pm
    SQL Server Query Optimization, Execution and Debugging Query Performance

    I really had a fun time attending this session. Vinod made it very interactive; the entire audience really got into the presentation and started participating. Vinod presented a small Query Tuning problem that any developer would have encountered, and solved it with the audience's help in such a way that each developer felt he or she had resolved it personally. On one question, I was the only one ready to answer, and Vinod told me in a light tone that I was not allowed to answer it! The audience found it very amusing. There was a huge crowd around Vinod after the session.

    Vinod – A master storyteller!

    Time: 3:45pm-4:45pm
    Data Recovery / consistency with CheckDB

    This session was much heavier than the earlier one, and I must say it is the most favorite session I have EVER attended in India. At this TechEd I attended only two sessions, but in my career I have attended numerous technical sessions, not only in India but all over the world, and this session took my breath away. One by one, Vinod took different databases and started to corrupt them in different ways; each database had its own unique way of getting corrupted. Once that was done, Vinod showed DBCC CHECKDB and demonstrated how it can solve your problems. He finally fixed all the databases with this single tool. I have good knowledge of this subject, but let me honestly admit that I learned a lot from this session. I enjoyed and cheered during this session along with the other attendees. I had the satisfaction that, just like everyone else, I took advantage of the event and learned something. I am now TECHnically EDucated.

    Pinal Dave and Vinod Kumar

    After two very interactive and informative SQL sessions from Vinod Kumar, the next turn was mine - presenting on Spatial Database and Indexing. I once again got nervous, but Vinod told me to stay natural and do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better.

    Time: 5:00pm-6:00pm
    Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

    Pinal Presenting session at TechEd India 2010

    Pinal Presenting session at TechEd India 2010

    I kicked off this session with Michael J Swart's beautiful spatial image. This session was the last one of the day but, to my surprise, I had more than 200 attendees. Rain was slowly starting outside and I was worried that the hall would not fill up; despite this, there was not a single seat available within the first five minutes of the session. Thanks to all of you for attending my presentation. I demonstrated the map of the world (and of India) and quickly explained what the Geography and Geometry data types in a spatial database are. The session included an interesting story about indexing and comparison, and about how different traditional indexes are from spatial indexes.

    Pinal Presenting session at TechEd India 2010

    Due to the heavy rain during the event, the power went off for about 22 minutes (just an accident – nobody's fault). During those minutes there was no audio, no video and no light. I continued to address the mass of 200+ people without any audio device or PowerPoint. I must thank the audience, because not a single person left the session. They all stayed in their places, and some moved closer to hear me properly. I noticed that the curiosity and eagerness to learn new things was at its peak even though it was the very last session of TechEd. Everybody wanted to get the maximum knowledge out of the event. I was touched by the support from the audience; they listened to and participated in my session even without any kind of technology (no PPT, no mike, no AC, nothing).
    During those 22 minutes, I completed my theory portion verbally.

    Pinal Presenting session at TechEd India 2010

    After a while we got the projector back online, and we continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get the backup power for the projector up. I had a very interesting demo in which I overlaid Bangalore and Hyderabad on the map of India and found the aerial distance between them. After finding the aerial distance, we browsed online and found that SQL Server estimated the aerial distance between these two cities accurately, compared to the published figure. There was huge applause from the crowd when they saw that SQL Server takes the curvature of the earth into account and finds precise distances from the details given. While finding the distance, I demonstrated a few examples of the indexes, showing how one can use them to find these distances and how they can improve the performance of similar queries. I also demonstrated a few examples where we could see for which data type an index is most useful. We finished the demos with a bit more internal detail.

    Pinal Presenting session at TechEd India 2010

    Despite all the issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that, for a moment, the rain was not audible. I was truly moved by the dedication of the technology enthusiasts.

    Pinal Dave After Presenting session at TechEd India 2010

    The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type for storing geodetic spatial data and performing operations on it, the new geometry data type for storing planar spatial data and performing operations on it, how to take advantage of the new spatial indexes for high-performance queries, how to use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, and how to extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.

    Time: 8:00 PM – onwards
    Dinner by Sponsors

    After the lively sessions of the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing in every respect. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event.

    Pinal Dave and Bijoy Singhal

    It was really a wonderful event. After writing this much, I still have no words to express how much I enjoyed TechEd; and truthfully, I have shared only 1% of the total activities I took part in at the event. There were so many people I met who are not mentioned here, although I wanted to write their names here, too. Anyway, I learned so many things, and even now I am not able to get over all the fun I had at this event.

    Pinal Dave at TechEd India 2010

    The Next Days – April 15, 2010 – till today

    I am still not able to get my mind off the whole experience I had at TechEd India 2010. It was like a whole Microsoft family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience!

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology
    Tagged: TechEd, TechEdIn
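    As a rough illustration of the aerial-distance demo described above (not taken from the actual deck; the coordinates are approximate and purely for illustration), SQL Server 2008's geography type computes the distance along the curved surface of the earth directly:

    -- Approximate coordinates (latitude, longitude, SRID 4326) - illustrative values only
    DECLARE @Bangalore geography = geography::Point(12.97, 77.59, 4326);
    DECLARE @Hyderabad geography = geography::Point(17.38, 78.48, 4326);

    -- STDistance() on the geography type accounts for the curvature of the earth
    -- and returns meters for SRID 4326; divide by 1000 to report kilometers.
    SELECT @Bangalore.STDistance(@Hyderabad) / 1000.0 AS AerialDistanceInKm;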

    Read the article

  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest Technology events in India led by Microsoft. This event was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I attempted to attend almost all the technology events here, I have not seen any bigger or better event in Indian subcontinents other than this. There are 21 Technical Tracks at Tech·Ed India 2010 that span more than 745 learning opportunities. I was fortunate enough to be a part of this whole event as a speaker and a delegate, as well. TechEd India Speaker Badge and A Token of Lifetime Hotel Selection I presented three different sessions at TechEd India and was also a part of panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I stay away from my family occasionally. For this reason, I took my wife – Nupur and daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized so as to maximize my time bonding with my family and to have more time in networking with technology community, at the same time. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The cost of the hotel was a bit pricey, but looking at all the advantages, I had decided to ask for a booking there. Hotel Lalit Ashok Nupur Dave and Shaivi Dave Arrival Day – DAY 0 – April 11, 2010 I reached the event a day earlier, and that was one wise decision for I was able to relax a bit and go over my presentation for the next day’s course. I am a kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends. I even checked out the location where I would be doing presentations the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft who helped me out with a few of the logistics issues that occured the day before. I was not aware of the fact that the very next day he was going to be “The Man” of the TechEd 2010 event. Vinod Kumar from Microsoft was really very kind as he talked to me regarding my subsequent session. He gave me some suggestions which were really helpful that I was able to incorporate them during my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, being extremely busy with logistics; however, in those busy times, he did find some good spare time to have a chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta,Vinod, Bijoy, Harish, Sachin and Ahishek (of course!). Map of TechEd India 2010 Event Day 1 – April 12, 2010 From morning until night time, today was truly a very busy day for me. I had two presentations and one panel discussion for the day. Needless to say, I had a few meetings to attend as well. The day started with a keynote from S. Somaseger where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than- life uniform screen. This was truly one to show. 
The title music of the keynote was very interesting and it featured Bijoy Singhal as the model. It was interesting to talk to him afterwards, when we laughed at jokes together about his modeling assignment. TechEd India Keynote Opening Featuring Bijoy TechEd India 2010 Keynote – S. Somasegar Time: 11:15pm – 11:45pm Session 1: True Lies of SQL Server – SQL Myth Buster Following the excellent keynote, I had my very first session on the subject of SQL Server Myth Buster. At first, I was a bit nervous as right after the keynote, for this was my very first session and during my presentation I saw lots of Microsoft Product Team members. Well, it really went well and I had a really good discussion with attendees of the session. I felt that a well begin was half-done and my confidence was regained. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate few SQL Server Myths and their resolutions as I back them up with some demo. This demo presentation is a must-attend for all developers and administrators who would come to the event. This is going to be a very quick yet fun session. Pinal Presenting session at TechEd India 2010 Time: 1:00 PM – 2:00 PM Lunch with Somasegar After the session I went to see my daughter, and then I headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank to Abhishek who made it possible for us. Because of his efforts, all the MVPs had the opportunity to meet such a legendary person and had to talk with them on Microsoft Technology. Though Somasegar is currently holding such a high position in Microsoft, he is very polite and a real gentleman, and how I wish that everybody in industry is like him. Believe me, if you spread love and kindness, then that is what you will receive back. As soon as lunch time was over, I ran to the session hall as my second presentation was about to start. Time: 2:30pm – 3:30pm Session 2: Master Data Services in Microsoft SQL Server 2008 R2 Business Intelligence is a subject which was widely talked about at TechEd. Everybody was interested in this subject, and I did not excuse myself from this great concept as well. I consider myself fortunate as I was presenting on the subject of Master Data Services at TechEd. When I had initially learned this subject, I had a bit of confusion about the usage of this tool. Later on, I decided that I would tackle about how we all developers and DBAs are not able to understand something so simple such as this, and even worst, creating confusion about the technology. During system designing, it is very important to have a reference material or master lookup tables. Well, I talked about the same subject and presented the session keeping that as my center talk. The session went very well and I received lots of interesting questions. I got many compliments for talking about this subject on the real-life scenario. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject. Pinal Presenting session at TechEd India 2010 The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. 
This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, MDS – Master Data-hub which is a vital component, helps ensure the consistency of reporting across systems and deliver faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Pinal Presenting session at TechEd India 2010 The day was still not over for me. I had ran into several friends but we were not able keep our enthusiasm under control about all the rumors saying that SQL Server 2008 R2 was about to be launched tomorrow in the keynote. I then ran to my third and final technical event for the day- a panel discussion with the top technologies of India. Time: 5:00pm – 6:00pm Panel Discussion: Harness the power of Web – SEO and Technical Blogging As I have delivered two technical sessions by this time, I was a bit tired but  not less enthusiastic when I had to talk about Blog and Technology. We discussed many different topics there. I told them that the most important aspect for any blog is its content. We discussed in depth the issues with plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and what morally and technically wrong acts are. A couple of questions were raised about what type of liberty a person can have in terms of writing blogs. Well, it was generically agreed that a blog is mainly a representation of our ideas and thoughts; it should not be governed by external entities. As long as one is writing what they really want to say, but not providing incorrect information or not practicing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable and so it was extended for 30 minutes more. Finally, we decided to bring to a close the discussion and agreed that we will continue the topic next year. TechEd India Panel Discussion on Web, Technology and SEO Surprisingly, the day was just beginning after doing all of these. By this time, I have almost met all the MVP who arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too. I decided that I would go to meet several friends from the Community and continue to communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me wherein I spoke in my mother’s tongue, Gujarati. It was funny that I talked in Gujarati almost all the day, but when I was talking in the interview I could not find the right Gujarati words to speak. I think we all think in English when we think about Technology, so as to address universality. After meeting them, I headed towards the Speakers’ Dinner. Time: 8:00 PM – onwards Speakers Dinner The Speakers’ dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked so many different things, from XBOX to Hindi Movies, and from SQL to Samosas. I just could not express how much fun I had. After a long evening, when I returned tmy room and met Shaivi, I just felt instantly relaxed. 
Kids are really gifts from God. Today was a really long but exciting day. So many things happened in just one day: the Visual Studio launch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, the community leaders meeting, the speakers dinner and, last but not least, playing with my child! A perfect day!

Day 2 – April 13, 2010

Today started with a bang with the excellent keynote by Kamal Hathi, who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 million rows in Excel brought lots of applause from the audience.

Kamal Hathi Presenting Keynote at TechEd India 2010

The day was a bit easier for me. I had no sessions and no events planned, only a few meetings for the second day of the event. I sat in the speakers' lounge for half a day and met many people there. I attended nearly 9 different meetings today, on very different subjects. Here is a list of the topics of the Community-related meetings:

SQL PASS and its involvement in India and the subcontinent
How to start community blogging
Forums and developing an aptitude towards technology
Ahmedabad/Gandhinagar User Groups and their development
SharePoint and SQL
Business Meeting – a client meeting
Business Meeting – a potential performance tuning project
Business Meeting – Solid Quality Mentors (SolidQ)
And family friends

Pinal Dave at TechEd India

The day passed by very quickly with these meetings. In the evening, I headed to the Partners Expo with friends and checked out a few of the booths. I really wanted to talk about some of the products, but because of the freebies there was such a crowd that I finally decided to just take the partners' contact details. I will now start sending them my queries and, hopefully, will have my questions answered. Nupur and Shaivi also had one meeting to attend; it was with our family friend Vijay Raj. Vijay is also a person who loves technology, and loves it more than anybody. I see him growing and learning every day, yet still remaining 'human'. I believe that anyone else who acquired as much knowledge as he has would turn into either a computer or a cyborg; Vijay, however, is still a kind gentleman and remains our close family friend. Shaivi was really happy to play with Uncle Vijay.

Pinal Dave and Vijay Raj

Renuka Prasad, a Microsoft MVP, impressed me with his passion for and knowledge of SQL. He always gives me credit for his success, which shows how humble he is. He has far more certifications than me and has worked many more years with SQL than I have. He is an excellent photographer as well; most of the photos in this blog post were taken by him. I told him that if he ever wants a part-time job, he could do photography very well.

Pinal Dave and Renuka Prasad

I also met L Srividya from Microsoft, whom I had been looking forward to meeting. She is such a bundle of knowledge that everyone would surely learn a lot from her. I was able to get a few minutes with her, and I felt confident afterwards. She enlightened me on SQL Server BI concepts, domain management, SQL Server security and a few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint, but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I found out that the most recent training he delivered was on SQL Server 2008 R2.
I joked with him that it hurts my ego that he is now more popular in SQL training and consulting than me. I am sure all of you agree that working with good people is a gift from God. I am fortunate enough to work with the best of the best industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari, Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!). (Please let me know if I met you at the event and forgot to list your name here.)

Time: 8:00 PM – onwards
Community Leaders Dinner

After lots of meetings, I headed towards the Community Leaders dinner and met almost all the folks I had met in the morning. The discussion was much the same, but the really good thing was that we were enjoying it. The food was really good. Nupur was invited to the event, but Shaivi was not. When Nupur tried to enter, she was stopped because Shaivi did not have a pass for the dinner. Nupur explained that Shaivi is only 8 months old, does not eat outside food, and cannot stay by herself at this age, but the doorkeeper insisted that without the entry details Shaivi could not go in, although Nupur could. Nupur called me on the phone and asked me to help her out. By the time I got outside, the organizer of the event had reached the door and happily approved Shaivi to join the party. Once inside, Shaivi had lots of fun meeting so many people.

Shaivi Dave and Abhishek Kant

Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com)

Day 3 – April 14, 2010

Though it was the last day, I was very excited, as I was about to present my very favorite session. Query Optimization and Performance Tuning is my domain expertise, and I make my living by consulting and training on it. Today's session was on the same subject and, as an additional twist, also covered Spatial Databases. I had always been intrigued by Spatial Databases and have enjoyed learning about them; however, I had never thought much about Spatial Indexing before it was decided that I would do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and for reviewing the content. Furthermore, today was really what I call my 'learning day'. Until then I had not attended any session at TechEd, and I felt a bit down about that. Everybody spends their valuable time and money to learn something new and exciting at TechEd, and I had not attended a single session, even though it was already the last day of the event. I did have a plan for the day, and I attended two technical sessions before my own session on spatial databases. I attended 2 sessions by Vinod Kumar. Vinod is a natural storyteller, and there was no doubt that his sessions would be jam-packed. People attend his sessions simply because Vinod is the speaker. He has never once disappointed an audience; he is truly a good speaker and knows his stuff very well. I personally do not think anyone in India compares to him on SQL.

Time: 12:30pm-1:30pm
SQL Server Query Optimization, Execution and Debugging Query Performance

I really had a fun time attending this session. Vinod made it very interactive. The entire audience really got into the presentation and started participating.
Vinod presented a small Query Tuning problem, of the kind any developer would have encountered, and solved it with the audience's help in such a way that every developer felt he or she had resolved it themselves. For one question, I was the only one ready to answer, and Vinod told me in a light tone that I was now allowed to answer it! The audience found it very amusing. There was a huge crowd around Vinod after the session.

Vinod – A master storyteller!

Time: 3:45pm-4:45pm
Data Recovery / Consistency with CheckDB

This session was much heavier than the earlier one, and I must say it is the favorite session I have EVER attended in India. At this TechEd I attended only two sessions, but in my career I have attended numerous technical sessions, not only in India but all over the world, and this session took my breath away. One by one, Vinod took different databases and started to corrupt them in different ways; each database has its own unique ways of getting corrupted. Once that was done, Vinod brought out DBCC CHECKDB and demonstrated how it can solve the problem, finally fixing all the databases with this single tool. I have good knowledge of this subject, but let me honestly admit that I learned a lot from this session. I enjoyed and cheered during this session along with the other attendees. I had the total satisfaction that, just like everyone else, I took advantage of the event and learned something. I am now TECHnically EDucated.

Pinal Dave and Vinod Kumar

After two very interactive and informative SQL sessions from Vinod Kumar, it was my turn to present on Spatial Databases and Indexing. I got nervous once again, but Vinod told me to stay natural and do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better.

Time: 5:00pm-6:00pm
Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

Pinal Presenting session at TechEd India 2010

Pinal Presenting session at TechEd India 2010

I kicked off this session with Michael J Swart's beautiful spatial image. This session was the last one of the day but, to my surprise, I had more than 200 attendees. The rain was slowly starting outside and I was worried that the hall would not be full; despite this, there was not a single seat available within the first five minutes of the session. Thanks to all of you for attending my presentation. I demonstrated a map of the world (and of India) and quickly explained what the Geography and Geometry data types in a spatial database are. The session also told an interesting story about indexing and comparison, and how different traditional indexes are from spatial indexes.

Pinal Presenting session at TechEd India 2010

Due to the heavy rain during the event, the power went off for about 22 minutes (just an accident – nobody's fault). During these minutes there was no audio, no video and no light. I continued to address the mass of 200+ people without any audio equipment or PowerPoint. I must thank the audience, because not a single person left the session. They all stayed in their places, and some moved closer so they could hear me properly. I noticed that the curiosity and eagerness to learn new things was at its peak even though it was the very last session of TechEd. Everybody wanted to get the maximum knowledge out of the whole event. I was touched by the support from the audience. They listened and participated in my session even without any kind of technology (no PPT, no mic, no AC, nothing).
During these 22 minutes, I completed my theory verbally.

Pinal Presenting session at TechEd India 2010

After a while, we got the projector back online and we continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get backup power to the projector. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on a map of India and found the aerial distance between them. After finding the aerial distance, we checked online and saw that SQL Server's estimate was very close to the published distance between the two cities. There was huge applause from the crowd at the fact that SQL Server takes the curvature of the earth into account and finds precise distances based on those details. While finding the distance, I demonstrated a few examples of indexes, showing how one can use them in such distance queries and how they can improve the performance of similar queries. I also demonstrated a few examples of which data type an index is most useful for. We finished the demos with a bit more internal detail. (A short sketch of this kind of distance query appears a little later in this post.)

Pinal Presenting session at TechEd India 2010

Despite all the issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that, for a moment, the rain was not audible. I was truly moved by the dedication of these technology enthusiasts.

Pinal Dave After Presenting session at TechEd India 2010

The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high-performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.

Time: 8:00 PM – onwards
Dinner by Sponsors

After the lively sessions during the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing from all angles. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event.

Pinal Dave and Bijoy Singhal

It was really one wonderful event. After writing this much, I still have no words to express how much I enjoyed TechEd. And it is true that I have shared with you only 1% of all the activities I did at the event. There were so many people I met who are not mentioned here, although I wanted to write their names here, too.
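For readers who missed the session, here is a minimal sketch of the kind of aerial-distance query demonstrated above. This is my reconstruction rather than the exact demo script: the coordinates are approximate, and the Cities table in the index example is purely illustrative.

-- Approximate coordinates for Bangalore and Hyderabad (latitude, longitude, SRID 4326)
DECLARE @Bangalore geography, @Hyderabad geography;
SET @Bangalore = geography::Point(12.97, 77.59, 4326);
SET @Hyderabad = geography::Point(17.38, 78.48, 4326);

-- STDistance on the geography type returns meters and accounts for the curvature of the earth
SELECT @Bangalore.STDistance(@Hyderabad) / 1000.0 AS AerialDistanceInKm;

-- A spatial index speeds up similar queries against a table of stored locations
-- (the table, column and index names here are illustrative only)
CREATE TABLE dbo.Cities
(
    CityID   INT IDENTITY(1,1) PRIMARY KEY,
    CityName NVARCHAR(100) NOT NULL,
    Location geography NOT NULL
);

CREATE SPATIAL INDEX IX_Cities_Location
    ON dbo.Cities (Location)
    USING GEOGRAPHY_GRID;

The SELECT returns something on the order of 500 km, which is in the same ballpark as the published aerial distance between the two cities.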
Anyway, I have learned so many things and up until now, I am not able to get over all the fun I had at this event.

Pinal Dave at TechEd India 2010

The Next Days – April 15, 2010 – till today

I am still not able to get my mind out of the whole experience I had at TechEd India 2010. It was like a whole Microsoft Family working together to celebrate a happy occasion.

TechEd India – Truly An Unforgettable Experience!

Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and say what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to writing tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger? 
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grip with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work. 
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you're not going to produce anything nice. You're not going to have joy doing it if you're an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript's unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it's dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It's a reasonable idea. I would like to do it, but I don't think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don't write big code. And having your errors come up in the same place, whether they're statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I'd teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you're experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it's not really about size of codebase. If I go and write up a tiny program, I'm still going to get value out of writing it in C# using ReSharper because I'm experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody's going to use actors. Because everyone's going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews
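For readers who want to see the shape of the actor pattern Alex describes, here is a minimal, hypothetical sketch in plain C#. It is not NAct and does not show NAct's API; it only illustrates the core idea from the interview: one thread owns the mutable state, everyone else talks to it through messages, and an "ask"-style message gets its reply back as an awaitable Task. The CounterActor class and its members are invented for the example.

// A minimal actor sketch: a mailbox drained by a single dedicated thread,
// so the mutable state is never touched by two threads at once and no locks are needed.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class CounterActor
{
    private readonly BlockingCollection<Action> _mailbox = new BlockingCollection<Action>();
    private int _count; // mutable state owned exclusively by the actor's message loop

    public CounterActor()
    {
        // One long-running task processes messages in FIFO order.
        // Shutdown (CompleteAdding) and error handling are omitted for brevity.
        Task.Factory.StartNew(() =>
        {
            foreach (var message in _mailbox.GetConsumingEnumerable())
                message();
        }, TaskCreationOptions.LongRunning);
    }

    // "Tell"-style message: fire and forget.
    public void Increment() => _mailbox.Add(() => _count++);

    // "Ask"-style message: the reply comes back via a Task the caller can await,
    // rather than blocking while waiting for the actor.
    public Task<int> GetCountAsync()
    {
        var reply = new TaskCompletionSource<int>();
        _mailbox.Add(() => reply.SetResult(_count));
        return reply.Task;
    }
}

class Program
{
    static async Task Main()
    {
        var actor = new CounterActor();
        Parallel.For(0, 1000, _ => actor.Increment()); // many threads, no locks, no torn state
        Console.WriteLine(await actor.GetCountAsync()); // prints 1000: the reply message is queued after all increments
    }
}

Because the mailbox preserves ordering and only one thread ever runs the messages, the count is read consistently without any locking, which is the deadlock-avoidance property the interview is getting at.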

    Read the article

  • Windows CE Chat Transcript (March 30, 2010)

    - by Bruce Eitman
    For those of you who missed the chat today, here is the raw transcript.   By raw, I mean that I copied and pasted the discussion without any edits. This is divided into two parts, the top part is the answers from the Microsoft Experts and the bottom part is the questions from the audience. Answers from Microsoft:   Karel Danihelka [MS] (Expert)[2010-3-30 12:2]: Hi everyone, my name is Karel Danihelka and I am developer in partner response team. Sing Wee [MS] (Expert)[2010-3-30 12:2]: Hi, I'm Sing Wee, part of the CoreOS/BSP Test Team. GLanger_MS (Expert)[2010-3-30 12:2]: Hi, I'm Glen Langer, program manager on the Core Team. Karel Danihelka [MS] (Expert)[2010-3-30 12:3]: Q: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals. Where should i start? A: Until you are using CPU with GHz frequency your only chance is use interrupt handler and implement all funcionality there. But it will be really tricky and may reduce system performance. If period will be near to millisecond timeframe you can use normal thread wait for event pattern. Karel Danihelka [MS] (Expert)[2010-3-30 12:5]: Q: I want to partition my NAND Flash device. One partition to use for hive ragistry and the other for the apps and data. The only way to do it is programmatically or setting some registry values ? A: It need to be set in registry - generally you need mark this partition as boot partition. Karel Danihelka [MS] (Expert)[2010-3-30 12:7]: Q: My CPU is Intel celeron M processor 1Ghz. A: In this case you can try use normal approach - in interrupt handler return SYSINTR and start thread in device driver which will spin thread waiting on event attached to this SYSINTR. Karel Danihelka [MS] (Expert)[2010-3-30 12:7]: Q: If i need to implement it using interrupt handlers, What are all the files that I should look at? A: Good quesiton - I would recommend documentation and there was BSP development book to download for free. mikehall_ms (Moderator)[2010-3-30 12:8]: Q: Hi guys, what's the formal way to report bugs back to the core team / product team? The mechanism of calling the support phone number every time is really onerous and time-consuming. Is there another mechanism? A: Using product support is the formal way to report bugs/issues - Product support can then create an issue that can be tracked. Karel Danihelka [MS] (Expert)[2010-3-30 12:9]: Q: But the operation for creating the partitions ? A: This is tricky - if you will make it autopartition & autoformat it will be created by filesystem. But generally it depends on your boot loader. mikehall_ms (Moderator)[2010-3-30 12:10]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? A: At MIX 2010 Charlie Kindel presented a session that described some of the core technologies that make up Windows Phone 7 Series, including the underlying operating system (Windows CE) and the new ISV programming model based on Silverlight and .NET - check out the Mix Online Videos to get more information. davbo-msft (Moderator)[2010-3-30 12:10]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? A: This forum is to discussed released products in the industry. Windows Mobile & Windows CE are based on the same Windows CE Kernel/system. 
Windows CE is focused on deliverying the OS for embedded customers in the market where Windows Mobile is focused on deliverying compelling Windows Phone platform. davbo-msft (Moderator)[2010-3-30 12:11]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? A: http://en.wikipedia.org/wiki/Windows_CE Wikipedia gives a good breakdown of the version history. Travis Hobrla [MS] (Expert)[2010-3-30 12:13]: Q: I created a OS design with KITL and kernel debugger enabled. But I am unable to connect to the target for debugging. I am getting the following error when i try to connect with the device. PB Debugger Cannot initialize the Kernel Debugger. PB Debugger Debugger could not initialize connection. PB Debugger The Kernel Debugger is waiting to connect with target. PB Debugger The Kernel Debugger has been disconnected successfully. A: One possibility is that a rogue cesvchost.exe has co-opted the debugger. I am assuming this is CE 6.0? Can you try exiting visual studio and manually killing the cesvchost.exe process from the Task Manager? davbo-msft (Moderator)[2010-3-30 12:14]: Q: Hi guys, what's the formal way to report bugs back to the core team / product team? The mechanism of calling the support phone number every time is really onerous and time-consuming. Is there another mechanism? A: For info on contacting Microsoft support refer to the support page on the Embedded website: http://msdn.microsoft.com/en-us/windowsembedded/dd897633.aspx Sing Wee [MS] (Expert)[2010-3-30 12:16]: Q: Do u mean ISR/IST implementation? How can i register an interrupt? What kind of interrupt should i register? A: A good introduction to interrupts in WinCE 6.0 can be found here (aside from the documentation on MSDN): http://download.microsoft.com/download/9/c/f/9cffaa58-4000-48d6-a4b2-5fed9e4e6410/Chapter%206%20-%20Developing%20Device%20Drivers.pdf mikehall_ms (Moderator)[2010-3-30 12:16]: Q: What will be different in Windows Compact 7 from CE 6.0? A: Unfortunately we cannot discuss unreleased products on this chat - keep an eye on the Windows Embedded web site and blogs to keep up to date with product announcements. Travis Hobrla [MS] (Expert)[2010-3-30 12:16]: Q: I am using CE 6.0. There is no cesvchost process running in my system. A: What operating system are you using? Karel Danihelka [MS] (Expert)[2010-3-30 12:16]: Q: So...I have to modify file system code to create 2 partition at system startup ?!! I haven't understood.... A: You don't need to modify code, there are registry settings to achive this (look to documentation). But you may need to create partition table in boot loader. Unfortunatelly there isn't simple way how to do it. davbo-msft (Moderator)[2010-3-30 12:18]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Windows Embedded CE 6.0 R3 includes Sliverlight for Windows Embedded. Refer to New Features overview on the embedded web site. http://www.microsoft.com/windowsembedded/en-us/products/windowsce/default.mspx. Silverlight - The power of Silverlight brought to Windows Embedded CE to create rich applications and user interfaces is new part of Windows CE Embedded. mikehall_ms (Moderator)[2010-3-30 12:20]: Q: The link for developing device drivers is not working. can u please check that? 
A: http://msdn.microsoft.com/en-us/library/ms923714.aspx davbo-msft (Moderator)[2010-3-30 12:20]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Sorry misunderstood the question I thought you were asking if embedded CE could handle Silverlight. Please repost so that the question goes back into the active queue because once answered no way to put the status back to open. Travis Hobrla [MS] (Expert)[2010-3-30 12:21]: Q: sorry! Windows XP SP3 A: Can you try exiting VS2005 and confirming cesvchost.exe is not running, then renaming C:\Documents and Settings\USERNAME\Local Settings\Application Data\Microsoft\CoreCon\1.0 to 1.0_backup, then restarting VS2005? Sing Wee [MS] (Expert)[2010-3-30 12:24]: Q: Can I have the book's name please? A: I believe the downloadable version is related to the last link I sent. If you go to the following website, I believe you can download the whole thing: http://msdn.microsoft.com/en-us/windowsembedded/ce/cc294468.aspx davbo-msft (Moderator)[2010-3-30 12:24]: Q: If one has an image on a Silverlight page, it seems to be cached. How would one refresh that cache after changing the underlying image? A: change the URI of the image or use a writeable bitmap if they want to manually toggle the pixels Sing Wee [MS] (Expert)[2010-3-30 12:25]: A: Whoops, hit [ENTER] too early. On the right side, you'll see there an "Exam Preparation Kit" link that can be downloaded in several different languages. Sing Wee [MS] (Expert)[2010-3-30 12:25]: Q: Can I have the book's name please? A: Whoops, hit [ENTER] too early. On the right side, you'll see there an "Exam Preparation Kit" link that can be downloaded in several different languages. Karel Danihelka [MS] (Expert)[2010-3-30 12:26]: Q: I have a NAND Flash on my target device. On this flash I have the hive registry and an application.I have observed that when the NAND flash is fully, the system startup time is longer....is there a degradation of NAND use that influences the startup time ? Why ? A: Yes - on boot flash abstraction library (old one) read metadata from all sectors to rebuild physical - logical mapping table. davbo-msft (Moderator)[2010-3-30 12:27]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Need addition info on this question. Can you provide more details on what you are trying to do in Silverlight? davbo-msft (Moderator)[2010-3-30 12:27]: Q: If one has an image on a Silverlight page, it seems to be cached. How would one refresh that cache after changing the underlying image? A: Additonal Info: if you want to manually touch the pixels use WriteableBitmap if you want to use the underlying HWND then use IXRVisualHost::GetHWND() davbo-msft (Moderator)[2010-3-30 12:28]: Q: Writable bitmap, is there an example of the syntax? A: if you want to manually touch the pixels use WriteableBitmap if you want to use the underlying HWND then use IXRVisualHost::GetHWND() davbo-msft (Moderator)[2010-3-30 12:29]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Can I get more information about this question about what you are trying to accomplish in Silverlight? davbo-msft (Moderator)[2010-3-30 12:31]: Q: IXRVisualHost::GetHWND() exactly what I needed Thanks, A: Your welcome Sing Wee [MS] (Expert)[2010-3-30 12:31]: Q: ok. thanks for the book's link A: No problem. Travis Hobrla [MS] (Expert)[2010-3-30 12:32]: Q: Typically for SoC devices you name your hardware specific libraries in the form "SOCDIRNAME_LIBNAME". 
In our platform "OMAP35XX_TPS659XX_TI_V1" if you do this we cause the catalog parser to die... For example if we have a library "Musbfn_OMAP35XX_TPS659XX_TI_V1.dll" entering this in the catalogs pbcxml file in a <module> section causes the XML parser to fail with : Error 3 The 'urn:Microsoft.PlatformBuilder/Catalog:Module' element is        invalid - The value '012345678901234567890123456789.dll' is invalid         according to its datatype         'urn:Microsoft.PlatformBuilder/Catalog:CatalogFileName' - The actual         length is greater than the MaxLength value. A: There are a couple workarounds I can think of. I believe the Module element is only used when doing SYSGEN parsing to make sure dependent SYSGENs are present when the item is selected, so I believe it is optional to the catalog. The other obvious workaround is to shorten the soc name. I realize neither of these solutions is ideal. This is not something we anticipated when we tested CE6.0, sorry. Travis Hobrla [MS] (Expert)[2010-3-30 12:33]: Q: I am getting this error only when I select the KdStub as the debugger in Target device connectivity. A: Right, but KdStub is the debugger that you should use. Have you tried the steps I suggested? Travis Hobrla [MS] (Expert)[2010-3-30 12:36]: Q: If I select Active KTIL, My OS doesn't boots. It says "loading NK.EXE at 0x<xxxxx> location" after that nothing comes in the debug log. A: Can you look at the serial debug output and see what is happening there? Often it can give you a clue to the KITL driver malfunctioning. Travis Hobrla [MS] (Expert)[2010-3-30 12:38]: Q: I have tried that and I am getting the same error. A: I am assuming you have a device created in Target -> Connectivity Options in Platform Builder. What are the Kernel download / Kernel transport for your device? Travis Hobrla [MS] (Expert)[2010-3-30 12:40]: Q: KITL: *** Device Name CEPC56059 *** WARN: KITL will run in polling mode VBridge:: built on [Jul 10 2009] time [10:20:14] VBridgeInit()...TX = [16384] bytes -- Rx = [16384] bytes Tx buffer [0xA1B84860] to [0xA1B88860]. Rx buffer [0xA1B88880] to [0xA1B8C880]. VBridge:: NK add MAC: [0-60-65-2-DA-FB] Connecting to Desktop KITL: Connected host IP: 1 Port: 1086 .. this is the output of the serial debug A: This looks reasonable and does not give clues as to why boot would halt at that point. If you capture a network trace or turn on KITL debug zones via dpCurSettings in kitl.dll, do you see KITL active after this? Travis Hobrla [MS] (Expert)[2010-3-30 12:41]: Q: Both is happening via Ethernet. A: Only thing I have left to suggest is a Platform Builder installation Repair, then. Karel Danihelka [MS] (Expert)[2010-3-30 12:42]: Q: Hi, I saw that the ATADISK is quite generic and des not have any optimizations. Do you have any advice to consider while tryin to improve the performance of it? A: If I remember correctly sample code has support for some specific hardware controllers (little obsolete now). This should be good start point (if you will not decide take existing driver as sample and write you own). Travis Hobrla [MS] (Expert)[2010-3-30 12:44]: Q: I didn't do that. I have to try. A: I think that's the next valid step. You need to figure out whether KITL is hanging or the device - use instrumented serial debug messages and network trace to determine this. 
Sing Wee [MS] (Expert)[2010-3-30 12:46]: Q: KITL: *** Device Name CEPC56059 *** WARN: KITL will run in polling mode VBridge:: built on [Jul 10 2009] time [10:20:14] VBridgeInit()...TX = [16384] bytes -- Rx = [16384] bytes Tx buffer [0xA1B84860] to [0xA1B88860]. Rx buffer [0xA1B88880] to [0xA1B8C880]. VBridge:: NK add MAC: [0-60-65-2-DA-FB] Connecting to Desktop KITL: Connected host IP: 1 Port: 1086 .. this is the output of the serial debug A: Neo, have you by any chance tried looking into your firewall to see if it might be blocking traffic on any particular ports? Wireshark/netmon might be able to help you here if that's the issue. davbo-msft (Moderator)[2010-3-30 12:48]: Q: I lost spell check, how can i get it back A: Hello - can you give additional details about your question? Is this related to a Windows CE Embedded application? masatos_MSFT (Expert)[2010-3-30 12:51]: Q: When attempting to run the CETK cellcore tests the documentation states the pre-requisites include "stinger.ini", "ltk.ini" but windows CE doesn't provide these or document what they fully need to contain. Implicitly you also need "datatrans.xml" which isn't supplied. If you get around this error and steal these from Windows Mobile instead, when you try and run the CETK tests you get a data abort in radiometricsdll.dll. How should we invoke the cellcore parts of CETK? A: Hi Pev, what version of Windows CE and CETK are you using? I do not have the expertise to answer this question, but can find somebody who can. Travis Hobrla [MS] (Expert)[2010-3-30 12:52]: Q: I don't see a kitlcore.dll in my OS. is my debug image fails to load because of that? A: kitl.dll should be all that's needed, kitlcore.lib is linked into that. Travis Hobrla [MS] (Expert)[2010-3-30 12:55]: Q: I've got a platform (not developed by myself) where I2C bus support has to be provided through the OAL as the kernel needs to talk to devices such as the power management IC and gas gauge so a 'proper' I2C driver hanging off device manager isn't possible. This happens to be a polled driver, so obviously it hits the system hard when either under lots of traffic or an error condition occurs and the driver constantly polls. I originally thought that there was no straightforward way to make such code interrupt driven in the kernel (as it's a cludge) but I realised that that's exactly what ETHDBG drivers do. Is there any reason why I shouldn't have a go at implementing a similar mechanism for our kernel resident I2C driver? If not, are there any obvious pitfalls - I've not seen any other BSP's do this in the past... A: You can make a 'proper' driver that calls down into the OAL to do the actual I2C transactions. Alternatively you can build an interrupt-based version in the OAL where you handle everything in the ISR. There is nothing wrong with that so long as the rest of your drivers and app threads can handle longer times with interrupts off while you are servicing I2C interrupts. Sing Wee [MS] (Expert)[2010-3-30 12:55]: Q: I am having trouble with my mouse, I have the microsoft wireless mobile mouse 3000, when I push the scroll button I am suppose to have autoscroll instead it shows other web pages,Can you help me out tell me what to do!!! A: Sorry, this current chat is about Windows Embedded Compact. Hope you're able to find an answer to your question elsewhere. davbo-msft (Moderator)[2010-3-30 12:56]: Q: Is the Silverlight Animation "Spline" a BezierSpline? 
A: Spline - http://msdn.microsoft.com/en-us/library/ee501495.aspx<BR< a>>   masatos_MSFT (Expert)[2010-3-30 12:57]: Thanks for the info Pev. I will follow up with the CETK experts here and get back to you. davbo-msft (Moderator)[2010-3-30 12:59]: Q: Spline- bad link A: http://msdn.microsoft.com/en-us/library/ee501495.aspx davbo-msft (Moderator)[2010-3-30 13:0]: Q: Sorry, got to tirm the ">" A: No Worries http://msdn.microsoft.com/en-us/library/ee501495.aspx davbo-msft (Moderator)[2010-3-30 13:0]: Hello everyone, we are just about out of time. Thank you for joining us for our Windows Embedded CE 6.0 chat today! <http://www.Microsoft.com/Embedded>; A special thank you to the product group members for coming out. The transcript of today’s chat will be posted online as soon as possible, to <http://msdn.microsoft.com/en-us/chats>;. We’ll see you again for another chat next month. Please check <http://msdn.microsoft.com/en-us/chats>; for the list of upcoming chats. If you still have unanswered questions, let me suggest that you post them on one of our newsgroups on <http://msdn.microsoft.com/en-us/windowsembedded/ce/default.aspx> -Windows Embedded CE 6.0 R3 Now Available! <http://msdn.microsoft.com/windowsembedded/ce/dd630616.aspx>; davbo-msft (Moderator)[2010-3-30 13:1]: Q: hi everybody. I would like to know if there is something know about a bug in RTC API (VOIP), especially when using SIP. According the to the analysis with application verifyier there is a heap link in rtcdllmedia.dll. All of the unreleased chunks seem to have a size of 6560 bytes. A: I will follow up with the Networking Team for a response. davbo-msft (Moderator)[2010-3-30 13:1]: Q: Hi, we've problems with debugging of applications (= breakpoints in Platform Builder will be ignored) over KITL on Windows CE 5.0, if the PDB files are large (over 60MB). Are there any limitations to size of the PDB files? A: I will follow up with the tools team for a response and post with the transcript. Sing Wee [MS] (Expert)[2010-3-30 13:1]: Q: I am unable to use the target control in my development environment. any ideas? A: Make sure you have SYSGEN_SHELL=1 set in your build environment. davbo-msft (Moderator)[2010-3-30 13:3]: Q: what are the main differences between Object Store and RAM disk ? They are both in RAM...are there performance differences ? access differences ? A: I will follow up with the Core Team and get a response posted with the transcript to MSDN  The Questions   [2010-3-30 12:57]: Thanks for the info Pev. I will follow up with the CETK experts here and get back to you. [2010-3-30 12:59]:   [2010-3-30 13:0]:   [2010-3-30 13:0]: Hello everyone, we are just about out of time. Thank you for joining us for our Windows Embedded CE 6.0 chat today! <http://www.Microsoft.com/Embedded>; A special thank you to the product group members for coming out. The transcript of today’s chat will be posted online as soon as possible, to <http://msdn.microsoft.com/en-us/chats>;. We’ll see you again for another chat next month. Please check <http://msdn.microsoft.com/en-us/chats>; for the list of upcoming chats. If you still have unanswered questions, let me suggest that you post them on one of our newsgroups on <http://msdn.microsoft.com/en-us/windowsembedded/ce/default.aspx> -Windows Embedded CE 6.0 R3 Now Available! 
neo (Guest)[2010-3-30 11:37]: Hi all KellyG (Guest)[2010-3-30 11:37]: Hi KellyG (Guest)[2010-3-30 11:37]: I have a question unrelated to windows Ce embedded, can you please help me?? neo (Guest)[2010-3-30 11:38]: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals! c neo (Guest)[2010-3-30 11:38]: yes. post it. May be i cud give a try KellyG (Guest)[2010-3-30 11:38]: My Product key listed on my tower is not the product key I need for microsoft office, but that is the only product key listed. neo (Guest)[2010-3-30 11:39]: I hope this is a chat for windows embedded. please post ur queries in office forums KellyG (Guest)[2010-3-30 11:39]: it is but i could not find a forum for office neo (Guest)[2010-3-30 11:40]: I think moderators will help u out. @ davbo-msft: can u help this guy? neo (Guest)[2010-3-30 11:41]: Q: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals. Where should i start? davbo-msft (Moderator)[2010-3-30 11:50]: Our chat today covers the topic of Windows Embedded CE! 1. This chat will last for one hour. During this hour, our Experts will respond to as many questions as they can. Please understand that there may be some questions we cannot respond to due to lack of information or because the information is not yet public. 2. We encourage you to submit questions for our Experts. To do so, type your questions in the send box, select the “ask the Experts” box and click SEND. Questions sent directly to the Guest Chat room will not be answered by the Experts, but we encourage other community members to assist. 3. We ask that you stay on topic for the duration of the chat. This helps the Guests and Experts follow the conversation more easily. We invite you to ask off topic questions after this chat is over, but not during. 4. Please abide by the Chat Code of Conduct. Chat code of conduct: <http://msdn.microsoft.com/chats/chatroom.aspx?ctl=hlp#Conduct>; Pev (Guest)[2010-3-30 11:54]: Evening! davbo-msft (Moderator)[2010-3-30 11:54]: Hello everyone this is Dave Boyce - I worked in the Multimedia area for Windows CE. neo (Guest)[2010-3-30 11:55]: hello dave neo (Guest)[2010-3-30 11:55]: The chat code of conduct link is not working! Pev (Guest)[2010-3-30 11:56]: Best be polite just in case then ;-) neo (Guest)[2010-3-30 11:56]: davbo-msft (Moderator)[2010-3-30 11:57]: I'll check out the issue w/ the link paolopat (Guest)[2010-3-30 12:0]: Hello davbo-msft (Moderator)[2010-3-30 12:0]: We are pleased to welcome our Experts for today’s chat. I will have them introduce themselves now. Chat will begin in a couple of minutes. <http://www.Microsoft.com/Embedded>; paolopat (Guest)[2010-3-30 12:3]: Hello Experts ! neo (Guest)[2010-3-30 12:3]: Welcome all! paolopat (Guest)[2010-3-30 12:3]: Q: I want to partition my NAND Flash device. One partition to use for hive ragistry and the other for the apps and data. The only way to do it is programmatically or setting some registry values ? neo (Guest)[2010-3-30 12:3]: Q: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals. Where should i start? neo (Guest)[2010-3-30 12:5]: Q: My CPU is Intel celeron M processor 1Ghz.
Pev (Guest)[2010-3-30 12:5]: neo: if your silicon has multiple general purpose timers, pick one that's not in use for the system timer / profiler and set it up to trigger irqs for your purpose. You can't guarantee hard realtime type responses though... GarySwalling (Guest)[2010-3-30 12:5]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? Pev (Guest)[2010-3-30 12:6]: Q: Hi guys, what's the formal way to report bugs back to the core team / product team? The mechanism of calling the support phone number every time is really onerous and time-consuming. Is there another mechanism? paolopat (Guest)[2010-3-30 12:6]: Q: But the operation for creating the partitions ? neo (Guest)[2010-3-30 12:6]: Q: If i need to implement it using interrupt handlers, What are all the files that I should look at? GPM (Guest)[2010-3-30 12:6]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? Jhony (Guest)[2010-3-30 12:7]: Q: I created a OS design with KITL and kernel debugger enabled. But I am unable to connect to the target for debugging. I am getting the following error when i try to connect with the device. PB Debugger Cannot initialize the Kernel Debugger. PB Debugger Debugger could not initialize connection. PB Debugger The Kernel Debugger is waiting to connect with target. PB Debugger The Kernel Debugger has been disconnected successfully. Charles (Guest)[2010-3-30 12:7]: What will be different in Windows Compact 7 from CE 6.0? neo (Guest)[2010-3-30 12:8]: Can I have the book's name please? kiefs_dev (Guest)[2010-3-30 12:8]: Q: hi everybody. I would like to know if there is something know about a bug in RTC API (VOIP), especially when using SIP. According the to the analysis with application verifyier there is a heap link in rtcdllmedia.dll. All of the unreleased chunks seem to have a size of 6560 bytes. paolopat (Guest)[2010-3-30 12:10]: Q: So...I have to modify file system code to create 2 partition at system startup ?!! I haven't understood.... neo (Guest)[2010-3-30 12:10]: Q: Do u mean ISR/IST implementation? How can i register an interrupt? What kind of interrupt should i register? neo (Guest)[2010-3-30 12:11]: Q: Can I have the book's name please? Charles (Guest)[2010-3-30 12:11]: Q: What will be different in Windows Compact 7 from CE 6.0? PaulT (Guest)[2010-3-30 12:11]: neo: I'd say that you really need the docs for YOUR BSP, not generic documents for BSPs in general. Each BSP may be architected differently. If you're using the CEPC BSP, then the documentation that comes with Platform Builder is a reasonable place to look. GPM (Guest)[2010-3-30 12:11]: Q: If one has an image on a Silverlight page, it seems to be cached. How would one refresh that cache after changing the underlying image? Elektrobit (Guest)[2010-3-30 12:12]: Q: Hi, we've problems with debugging of applications (= breakpoints in Platform Builder will be ignored) over KITL on Windows CE 5.0, if the PDB files are large (over 60MB). Are there any limitations to size of the PDB files? Jhony (Guest)[2010-3-30 12:15]: Q: I am using CE 6.0. There is no cesvchost process running in my system. alexquisi (Guest)[2010-3-30 12:15]: Q: Hi, I saw that the ATADISK is quite generic and des not have any optimizations. Do you have any advice to consider while tryin to improve the performance of it? Jhony (Guest)[2010-3-30 12:17]: Windows XP service pack 1 Jhony (Guest)[2010-3-30 12:17]: Q: sorry! 
Windows XP SP3 neo (Guest)[2010-3-30 12:19]: Q: The link for developing device drivers is not working. can u please check that? paolopat (Guest)[2010-3-30 12:20]: Q: I have a NAND Flash on my target device. On this flash I have the hive registry and an application.I have observed that when the NAND flash is fully, the system startup time is longer....is there a degradation of NAND use that influences the startup time ? Why ? Pev (Guest)[2010-3-30 12:20]: Q: When attempting to run the CETK cellcore tests the documentation states the pre-requisites include "stinger.ini", "ltk.ini" but windows CE doesn't provide these or document what they fully need to contain. Implicitly you also need "datatrans.xml" which isn't supplied. If you get around this error and steal these from Windows Mobile instead, when you try and run the CETK tests you get a data abort in radiometricsdll.dll. How should we invoke the cellcore parts of CETK? Pev (Guest)[2010-3-30 12:21]: Hi all, Pev (Guest)[2010-3-30 12:21]: oops Pev (Guest)[2010-3-30 12:21]: :-D Pev (Guest)[2010-3-30 12:24]: Typically for SoC devices you name your hardware specific libraries in the form "SOCDIRNAME_LIBNAME". In our platform "OMAP35XX_TPS659XX_TI_V1" if you do this we cause the catalog parser to die... For example if we have a library "Musbfn_OMAP35XX_TPS659XX_TI_V1.dll" entering this in the catalogs pbcxml file in a <module> section causes the XML parser to fail with : Pev (Guest)[2010-3-30 12:25]: Q: Error 3 The 'urn:Microsoft.PlatformBuilder/Catalog:Module' element is        invalid - The value '012345678901234567890123456789.dll' is invalid         according to its datatype         'urn:Microsoft.PlatformBuilder/Catalog:CatalogFileName' - The actual         length is greater than the MaxLength value. GPM (Guest)[2010-3-30 12:25]: Q: Writable bitmap, is there an example of the syntax? Pev (Guest)[2010-3-30 12:25]: Q: Typically for SoC devices you name your hardware specific libraries in the form "SOCDIRNAME_LIBNAME". In our platform "OMAP35XX_TPS659XX_TI_V1" if you do this we cause the catalog parser to die... For example if we have a library "Musbfn_OMAP35XX_TPS659XX_TI_V1.dll" entering this in the catalogs pbcxml file in a <module> section causes the XML parser to fail with : Error 3 The 'urn:Microsoft.PlatformBuilder/Catalog:Module' element is        invalid - The value '012345678901234567890123456789.dll' is invalid         according to its datatype         'urn:Microsoft.PlatformBuilder/Catalog:CatalogFileName' - The actual         length is greater than the MaxLength value. Pev (Guest)[2010-3-30 12:25]: sorry, messed up submission there! GPM (Guest)[2010-3-30 12:26]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? GPM (Guest)[2010-3-30 12:28]: Q: IXRVisualHost::GetHWND() exactly what I needed Thanks, PaulT (Guest)[2010-3-30 12:29]: GPM: You don't have to keep submitting the questions. The chat experts have an application that they're using to follow the chat and all Ask the Experts questions are logged. Jhony (Guest)[2010-3-30 12:29]: Q: I am getting this error only when I select the KdStub as the debugger in Target device connectivity. neo (Guest)[2010-3-30 12:30]: Q: ok. thanks for the book's link Pev (Guest)[2010-3-30 12:31]: Hm, did those two I submitted get picked up by anyone? neo (Guest)[2010-3-30 12:33]: Q: If I select Active KTIL, My OS doesn't boots. It says "loading NK.EXE at 0x<xxxxx> location" after that nothing comes in the debug log. 
PaulT (Guest)[2010-3-30 12:34]: Pev: PaulT (Guest)[2010-3-30 12:35]: Pev: I'm sure they did. The guys who are actually on the chat may not be experts in that part of things. That's usually the explanation when you don't get an answer in 10 minutes or so. Pev (Guest)[2010-3-30 12:36]: Ah, fair enough Susie (Guest)[2010-3-30 12:36]: My Outlook Express incoming mail is corrput. No ONE has been able to fix the problem, Dell or Norton. I have dial up I'm in a rural area Jhony (Guest)[2010-3-30 12:37]: Q: I have tried that and I am getting the same error. Pev (Guest)[2010-3-30 12:37]: Susie : Use Thunderbird instead :-D PaulT (Guest)[2010-3-30 12:37]: Susie: Sorry, but this chat is not about Windows, but Embedded (like what runs on a phone). Your best chance is to find a local expert or talk to your ISP. paolopat (Guest)[2010-3-30 12:37]: Q: what are the main differences between Object Store and RAM disk ? They are both in RAM...are there performance differences ? access differences ? Susie (Guest)[2010-3-30 12:38]: My computer knowlege is very limited, what is Thunderbird? Pev (Guest)[2010-3-30 12:38]: A different email client :-D neo (Guest)[2010-3-30 12:39]: Q: KITL: *** Device Name CEPC56059 *** WARN: KITL will run in polling mode VBridge:: built on [Jul 10 2009] time [10:20:14] VBridgeInit()...TX = [16384] bytes -- Rx = [16384] bytes Tx buffer [0xA1B84860] to [0xA1B88860]. Rx buffer [0xA1B88880] to [0xA1B8C880]. VBridge:: NK add MAC: [0-60-65-2-DA-FB] Connecting to Desktop KITL: Connected host IP: 1 Port: 1086 .. this is the output of the serial debug Susie (Guest)[2010-3-30 12:39]: Do I need to uninstall Outlook Express youngboyzie (Guest)[2010-3-30 12:39]: I need to start battery calibration for my new battery for my dell inspiron 1525 laptop and should be able to reach the BIOS screen by hitting f2 but this isnt working... help? Pev (Guest)[2010-3-30 12:39]: Nah, you can run it instead - you'll still need help from your ISP to configure it I expect Jhony (Guest)[2010-3-30 12:39]: Q: Both is happening via Ethernet. PaulT (Guest)[2010-3-30 12:40]: youngboyzie: You're off-topic. This is not a general chat for Windows and certainly not for Dell. You'll have to ask Dell how to get to setup; it's their machine. bill (Guest)[2010-3-30 12:41]: I lost spell check, how can i get et back neo (Guest)[2010-3-30 12:42]: Q: I didn't do that. I have to try. Jhony (Guest)[2010-3-30 12:43]: Q: Ok. I will do it then. bill (Guest)[2010-3-30 12:43]: Q: I lost spell check, how can i get it back PaulT (Guest)[2010-3-30 12:44]: bill: This isn't a general Windows chat. There are some Web forums that you might try. GarySwalling (Guest)[2010-3-30 12:45]: Q: Thanks, I found the Phone 7 presentation at http://live.visitmix.com/MIX10/Sessions/CL13 GPM (Guest)[2010-3-30 12:45]: Q: Is the Silverlight Animation "Spline" a BezierSpline? neo (Guest)[2010-3-30 12:46]: Q: ok. I'll do it. thanks Pev (Guest)[2010-3-30 12:46]: Whowever was asking about KITL connection : I've had this loads in the past. I think I started debugging last time by using wireshark to see what was happening on the network then setting up the OAL_ETHER and OAL_FUNC and OAL_VERBOSE as well as OAL_KITL flags to see what was actually happening in the driver.... Pev (Guest)[2010-3-30 12:47]: I'd generally make sure that you're testing though a 10baseT hub (instead of anything faster) and forcing Active KITL in polled mode too... neo (Guest)[2010-3-30 12:48]: Q: I disabled the firewall in my PC. 
Pev (Guest)[2010-3-30 12:50]: Q: I've got a platform (not developed by myself) where I2C bus support has to be provided through the OAL as the kernel needs to talk to devices such as the power management IC and gas gauge so a 'proper' I2C driver hanging off device manager isn't possible. This happens to be a polled driver, so obviously it hits the system hard when either under lots of traffic or an error condition occurs and the driver constantly polls. I originally thought that there was no straightforward way to make such code interrupt driven in the kernel (as it's a cludge) but I realised that that's exactly what ETHDBG drivers do. Is there any reason why I shouldn't have a go at implementing a similar mechanism for our kernel resident I2C driver? If not, are there any obvious pitfalls - I've not seen any other BSP's do this in the past... neo (Guest)[2010-3-30 12:51]: Q: I don't see a kitlcore.dll in my OS. is my debug image fails to load because of that? Pev (Guest)[2010-3-30 12:52]: Q: Hi masatos, I'm using Windows Embedded CE 6.0 with R3 and patched to feb 2010's QFE's (with it's associated CETK version) this is a machine with only CE 6.0 on (no conflicts with earlier CE or WM...) neo (Guest)[2010-3-30 12:54]: ok. Got it neo (Guest)[2010-3-30 12:54]: Q: ok. Got it Roundman (Guest)[2010-3-30 12:55]: Q: I am having trouble with my mouse, I have the microsoft wireless mobile mouse 3000, when I push the scroll button I am suppose to have autoscroll instead it shows other web pages,Can you help me out tell me what to do!!! Pev (Guest)[2010-3-30 12:55]: Hey neo, debugging kitl issues is really frustrating but dont lose heart :-) neo (Guest)[2010-3-30 12:55]: @ pev : u fixed the problem of KITL after that? neo (Guest)[2010-3-30 12:56]: I am getting the same error again and again. I even cleaned my environment and tried in a fresh PC. But didn't succeed yet Pev (Guest)[2010-3-30 12:56]: Well, eventually - my experiences probably won't help you as different platforms have different reasons for doing that neo (Guest)[2010-3-30 12:56]: I think so PaulT (Guest)[2010-3-30 12:58]: neo: Have you searched the old messages in microsoft.public.windowsce.platbuilder? It seems to me that there was a packet size situation where it was possible to have problems with KITL connections based on a setting on the PC. Google Groups, groups.google.com, Advanced Groups Search will allow you to search a single newsgroup or a set of newsgroups easily. GPM (Guest)[2010-3-30 12:58]: Q: Spline- bad link PaulT (Guest)[2010-3-30 12:58]: GPM: without the > at the end does it work? It seems to for me... neo (Guest)[2010-3-30 12:58]: But some times if i try to connect to the device again. The Image information is seen in the serial debug. what does that mean?Download BIN file information: ----------------------------------------------------- [0]: Base Address=0x220000 Length=0x18DAADC Received a broadcast message !CheckUDP: Not UDP (proto = 0x00000001) after this i am getting the old errors. PB debugger cannot initialize ... GPM (Guest)[2010-3-30 12:59]: Q: Sorry, got to tirm the ">" neo (Guest)[2010-3-30 12:59]: ok paul. I ll look into that. davbo-msft (Moderator)[2010-3-30 13:0]: Hello everyone, we are just about out of time. Thank you for joining us for our Windows Embedded CE 6.0 chat today! <http://www.Microsoft.com/Embedded>; A special thank you to the product group members for coming out. The transcript of today’s chat will be posted online as soon as possible, to <http://msdn.microsoft.com/en-us/chats>;. 
We’ll see you again for another chat next month. Please check <http://msdn.microsoft.com/en-us/chats>; for the list of upcoming chats. If you still have unanswered questions, let me suggest that you post them on one of our newsgroups on <http://msdn.microsoft.com/en-us/windowsembedded/ce/default.aspx> -Windows Embedded CE 6.0 R3 Now Available! <http://msdn.microsoft.com/windowsembedded/ce/dd630616.aspx>; neo (Guest)[2010-3-30 13:1]: Q: I am unable to use the target control in my development environment. any ideas? Pev (Guest)[2010-3-30 13:1]: Sure, if KITL isn't connected target control won't work as it runs over kitl... neo (Guest)[2010-3-30 13:1]: ok .thanks pev neo (Guest)[2010-3-30 13:2]: yes. sysgen_shell is set to 1 neo (Guest)[2010-3-30 13:2]: Q: yes. sysgen_shell is set to 1 Marcelovk (Guest)[2010-3-30 13:2]: Q: Is there any way to extract the default command lines of the tests in CETK? I want to have it running unconnected from the desktop.   Copyright © 2010 – Bruce Eitman All Rights Reserved

    Read the article

  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
I recently set up an Ubuntu 12.04 VPS (512 MB / 1 GHz CPU) with Nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo Team's WordPress blog. The blog gets about 3~4k daily hits and uses about 180 MB and 8~20% CPU. Everything seems to be working insanely fast... page load is really good and is about 16x faster than any of our competitors... but there is one problem. When we upload an image, WordPress doesn't resize it, so all we can do is insert the full-size image in the post. If the image is, let's say, 30 KB, it resizes fine... but if the image is 100 KB+, it won't... In the Nginx error logs I see this: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&" It seems to be related to the issue, but I'm not sure. Once that timeout happens, I start getting it when trying to view a post too: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/" Only a restart of php5-fpm fixes it. I tried increasing some timeouts but it did not work, so I guess there is some kind of limit I have not figured out yet. Could someone help me with it, please? /etc/nginx/nginx.conf: user www-data; worker_processes 1; pid /var/run/nginx.pid; events { worker_connections 1024; use epoll; multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 15; keepalive_requests 2000; types_hash_max_size 2048; server_tokens off; server_name_in_redirect off; open_file_cache max=1000 inactive=300s; open_file_cache_valid 360s; open_file_cache_min_uses 2; open_file_cache_errors off; server_names_hash_bucket_size 64; # server_name_in_redirect off; client_body_buffer_size 128K; client_header_buffer_size 1k; client_max_body_size 2m; large_client_header_buffers 4 8k; client_body_timeout 10m; client_header_timeout 10m; send_timeout 10m; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## error_log /var/log/nginx/error.log; access_log off; ## # CloudFlare's IPs (uncomment when site goes live) ## set_real_ip_from 204.93.240.0/24; set_real_ip_from 204.93.177.0/24; set_real_ip_from 199.27.128.0/21; set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.22.200.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; real_ip_header CF-Connecting-IP; set_real_ip_from 127.0.0.1/32; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 9; gzip_min_length 1000; gzip_proxied expired no-cache no-store private auth; gzip_buffers 32 8k; # gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # 
Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; /etc/nginx/sites-avaiable/default: ## # DEFAULT HANDLER # ubuntubrsc.com ## server { listen 8080; # Make site available from main domain server_name www.ubuntubrsc.com; # Root directory root /var/www; index index.php index.html index.htm; include /var/www/nginx.conf; access_log off; location / { try_files $uri $uri/ /index.php?q=$uri&$args; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location ~ /\. { deny all; access_log off; log_not_found off; } location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; } rewrite /wp-admin$ $scheme://$host$uri/ permanent; error_page 404 = @wordpress; log_not_found off; location @wordpress { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_NAME /index.php; fastcgi_param SCRIPT_FILENAME $document_root/index.php; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; if (-f $request_filename) { fastcgi_pass unix:/var/run/php5-fpm.sock; } } } server { listen 8080; server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in; return 301 $scheme://www.ubuntubrsc.com$request_uri; } /var/www/nginx.conf: # BEGIN W3TC Minify cache location ~ /wp-content/w3tc/min.*\.js$ { types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*\.css$ { types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*js\.gzip$ { gzip off; types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } location ~ 
/wp-content/w3tc/min.*css\.gzip$ { gzip off; types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } # END W3TC Minify cache # BEGIN W3TC Browser Cache gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon; location ~ \.(css|js|htc)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header Pragma "public"; add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; try_files $uri $uri/ $uri.html /index.php?$args; } location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } # END W3TC Browser Cache # BEGIN W3TC Minify core rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last; set $w3tc_enc ""; if ($http_accept_encoding ~ gzip) { set $w3tc_enc .gzip; } if (-f $request_filename$w3tc_enc) { rewrite (.*) $1$w3tc_enc break; } rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; # END W3TC Minify core # BEGIN W3TC Skip 404 error handling by WordPress for static files if (-f $request_filename) { break; } if (-d $request_filename) { break; } if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { break; } if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) { return 404; } # END W3TC Skip 404 error handling by WordPress for static files # BEGIN Better WP Security location ~ /\.ht { deny all; } location ~ wp-config.php { deny all; } location ~ readme.html { deny all; } location ~ readme.txt { deny all; } location ~ /install.php { deny all; } set $susquery 0; set $rule_2 0; set $rule_3 0; rewrite ^wp-includes/(.*).php /not_found last; rewrite ^/wp-admin/includes(.*)$ /not_found last; if ($request_method ~* "^(TRACE|DELETE|TRACK)"){ return 403; } set $rule_0 0; if ($request_method ~ "POST"){ set $rule_0 1; } if ($uri ~ "^(.*)wp-comments-post.php*"){ set $rule_0 2$rule_0; } if ($http_user_agent ~ "^$"){ set $rule_0 4$rule_0; } if ($rule_0 = "421"){ return 403; } if ($args ~* "\.\./") { set $susquery 1; } if ($args ~* "boot.ini") { set $susquery 1; } if ($args ~* "tag=") { set $susquery 1; } if ($args ~* "ftp:") { set $susquery 1; } if ($args ~* "http:") { set $susquery 1; } if ($args ~* "https:") { set $susquery 1; } if ($args ~* 
"(<|%3C).*script.*(>|%3E)") { set $susquery 1; } if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; } if ($args ~* "base64_encode") { set $susquery 1; } if ($args ~* "(%24&x)") { set $susquery 1; } if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)"){ set $susquery 1; } if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)"){ set $susquery 1; } if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; } if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; } if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; } if ($http_cookie !~* "wordpress_logged_in_" ) { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; } if ($susquery = 12) { return 403; } # END Better WP Security /etc/php5/fpm/php-fpm.conf: pid = /var/run/php5-fpm.pid error_log = /var/log/php5-fpm.log emergency_restart_threshold = 3 emergency_restart_interval = 1m process_control_timeout = 10s events.mechanism = epoll /etc/php5/fpm/php.ini (only options i changed): open_basedir ="/var/www/" disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec ,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpA ds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget expose_php = off max_execution_time = 30 max_input_time = 60 memory_limit = 128M display_errors = Off post_max_size = 2M allow_url_fopen = off default_socket_timeout = 60 APC settings: [APC] apc.enabled = 1 apc.shm_segments = 1 apc.shm_size = 64M apc.optimization = 0 apc.num_files_hint = 4096 apc.ttl = 60 apc.user_ttl = 7200 apc.gc_ttl = 0 apc.cache_by_default = 1 apc.filters = "" apc.mmap_file_mask = "/tmp/apc.XXXXXX" apc.slam_defense = 0 apc.file_update_protection = 2 apc.enable_cli = 0 apc.max_file_size = 10M apc.stat = 1 apc.write_lock = 1 apc.report_autofilter = 0 apc.include_once_override = 0 apc.localcache = 0 apc.localcache.size = 512 apc.coredump_unmap = 0 apc.stat_ctime = 0 /etc/php5/fpm/pool.d/www.conf user = www-data group = www-data listen = /var/run/php5-fpm.sock listen.owner = www-data listen.group = www-data listen.mode = 0666 pm = ondemand pm.max_children = 5 pm.process_idle_timeout = 3s; pm.max_requests = 50 I also started to get 404 errors in front page if i use W3 Total Cache's Page Cache (Disk Enhanced). It worked fine untill somedays ago, and then, out of nowhere, it started to happen. Tonight i will disable my mobile plugin and activate only W3 Total Cache to see if it's a conflict with them... And to finish all this, i have been getting this error: PHP Warning: apc_store(): Unable to allocate memory for pool. 
in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41 I already modified my APC settings, but no success. So... could anyone help me with those issues, please? In case it helps, I installed PHP like this: sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl And Nginx from the official PPA. Sorry for my bad English, and thanks for your time, people! (:
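For reference, the symptom above (upstream timed out while php-fpm handles a larger upload) is usually attacked by raising the PHP and FastCGI limits together. The following is a minimal sketch of the kind of values that are commonly tried, assuming the pool, php.ini and vhost files quoted above; the numbers are illustrative, not a confirmed fix for this site:

; /etc/php5/fpm/pool.d/www.conf - give a slow resize a chance to finish, allow a few more children
pm.max_children = 10
request_terminate_timeout = 300

; /etc/php5/fpm/php.ini - the resize of a 100 KB+ original needs more time and memory than 30s/128M
max_execution_time = 300
memory_limit = 256M

# nginx vhost (location ~ \.php$) - keep the FastCGI timeouts in line with the PHP limits
fastcgi_read_timeout 300;
fastcgi_send_timeout 300;

If higher timeouts only delay the failure rather than fix it, the next thing to check is whether all five on-demand pool children are busy while a resize is running (enabling pm.status_path in the pool config is one way to see this).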

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and then delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article runs through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
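The generator itself is not listed in the article; the following is a minimal sketch of the kind of console application described above, assuming hypothetical spacing and grid-size values and reusing the chrome pin definition from the scene file. It simply emits one sphere/cylinder pair per grid position, with the second matrix offset by half the pin spacing:

using System;
using System.Globalization;
using System.IO;
using System.Text;

class PinBoardSceneGenerator
{
    // Illustrative values only - the real spacing and grid size are not given in the article.
    const double Spacing = 0.16;
    const int Columns = 76;
    const int Rows = 76;

    static void Main()
    {
        var scene = new StringBuilder();
        var inv = CultureInfo.InvariantCulture;

        // Two intersecting matrices of pins: the second grid is offset by half the pin spacing.
        for (int grid = 0; grid < 2; grid++)
        {
            double offset = grid * Spacing / 2.0;
            for (int row = 0; row < Rows; row++)
            {
                for (int col = 0; col < Columns; col++)
                {
                    double x = -5.0 + offset + col * Spacing;
                    double y = -1.0 + offset + row * Spacing;
                    double z = 0.0; // placeholder; set per pin from the depth image later

                    // One pin = chrome sphere head plus a chrome cylinder shaft behind it.
                    scene.AppendFormat(inv, "object {{ sphere <{0:F3}, {1:F3}, {2:F3}>, 0.08 chrome }}\n", x, y, z);
                    scene.AppendFormat(inv, "object {{ cylinder <{0:F3}, {1:F3}, {2:F3}>, <{0:F3}, {1:F3}, {3:F3}>, 0.04 chrome }}\n",
                        x, y, z, z - 0.6);
                }
            }
        }

        File.WriteAllText("pinboard.pi", scene.ToString());
        Console.WriteLine("Wrote pinboard.pi with {0} pins", 2 * Rows * Columns);
    }
}

The z value written here is only a placeholder; as described next, it is set per pin from the corresponding pixel of a depth image.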
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used. Windows Kinect The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing .NET developers with easy access to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.
Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000
As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.
Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72
Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:
·         On-Premise
o   Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue (a sketch of this step is shown below).
o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
·         Windows Azure
o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
The architecture of each worker role is shown below.   The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. 
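Before any of this runs in the cloud, the on-premise Animation Creator listed above has to get each scene file into blob storage and queue a job for it. That side is not shown in the article, so the following is a minimal sketch of what it might look like, assuming the Windows Azure StorageClient library of that era and the "scenes" container and "renderjobs" queue names that appear in the worker role code further down:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class SceneJobSubmitter
{
    // Hypothetical helper: the real Animation Creator is not shown in the article.
    public static void SubmitFrame(CloudStorageAccount account, string sceneFilePath, string sceneFileName)
    {
        // Upload the PolyRay scene description to the blob container the workers read from.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer scenes = blobClient.GetContainerReference("scenes");
        scenes.CreateIfNotExist();
        scenes.GetBlobReference(sceneFileName).UploadFile(sceneFilePath);

        // Queue a message naming the scene file; each worker role picks up one message per frame.
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue jobs = queueClient.GetQueueReference("renderjobs");
        jobs.CreateIfNotExist();
        jobs.AddMessage(new CloudQueueMessage(sceneFileName));
    }
}

The worker role that consumes these messages is described next, starting with its service definition.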
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding on the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.
·         Windows Azure can provide massive compute power, on demand, in a matter of minutes.
·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
·         The absence of charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm a fairly simple scenario to implement on Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.
·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
·         Windows Azure accounts are typically limited to 20 cores.
If you need to use more than this, a call to support and a credit card check will be required.
·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
·         Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a start-up task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
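To make the queue-based load-balancing pattern concrete, here is a minimal sketch of the polling loop each worker might run, written against the Microsoft.WindowsAzure.StorageClient library that shipped with the 1.x SDKs. The connection string, queue name and RenderFrame helper are illustrative assumptions, not the article's actual code (which hides this plumbing behind its jobQueue, downloadQueue and roleLifecycleDataSource helpers).

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class RenderJobPoller
{
    public static void ProcessJobs(string connectionString)
    {
        // Queue name "renderjobs" is hypothetical.
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue jobQueue = queueClient.GetQueueReference("renderjobs");

        while (true)
        {
            // Every worker instance competes for the next message; this is what
            // spreads the frames evenly across however many instances are running.
            CloudQueueMessage message = jobQueue.GetMessage();
            if (message != null)
            {
                RenderFrame(message.AsString);   // do the actual work (PolyRay/DTA step)
                jobQueue.DeleteMessage(message); // only delete once the frame succeeded
            }
            else
            {
                Thread.Sleep(1000);              // back off when the queue is empty
            }
        }
    }

    private static void RenderFrame(string sceneDescription)
    {
        // Placeholder for the render and image-conversion steps shown earlier.
    }
}

Because a message that is never deleted reappears after its visibility timeout expires, a worker that crashes mid-frame does not lose the job; another instance simply picks it up, which is part of why the queue approach scales so cleanly.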

    Read the article

  • Weblogic is slow to start (11mins) under VM (VirtualBox and VMware)

    - by Vladimir Dyuzhev
(SOLVED! BY FAKING THE SYSTEM RANDOM GENERATOR, SEE BELOW) I'm setting up a VM image for my dev/build team. Inside that VM a Weblogic domain should be running. I use the Ubuntu server distro, WLS 9.2 MP3 + ALSB. Everything works OK, quite fast, but during startup WLS stalls twice for a measurable amount of time. The two stalls together add up to about 10 minutes of delay. For tasks where deployment requires a server restart it's very annoying. :-( The sleeping time is not constant: sometimes the server starts very fast, sometimes so-so, sometimes it takes 10 minutes or more. Interestingly, if I press Enter while looking at the stalled server, it wakes up much faster, sometimes after a few seconds. WLST (the Weblogic Jython shell) also hangs for quite a while when executed in the VM. It doesn't react to Enter, though. There must be some developers here who run WLS in a VM. I wonder if others have the same problem? Was anyone able to solve it? Here's the server output (just in case):
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_12-b04)
Java HotSpot(TM) Client VM (build 1.5.0_12-b04, mixed mode)
Starting WLS with line:
/shared2/beahome/jdk150_12/bin/java -client -Xmx256m -XX:MaxPermSize=128m -Xverify:none -da -Dplatform.home=/shared2/beahome/weblogic92 -Dwls.home=/shared2/beahome/weblogic92/server -Dwli.home=/shared2/beahome/weblogic92/integration -Dweblogic.management.discover=true -Dwlw.iterativeDev= -Dwlw.testConsole= -Dwlw.logErrorsToConsole= -Dweblogic.ext.dirs=/shared2/beahome/patch_weblogic923/profiles/default/sysext_manifest_classpath -Dweblogic.management.username=admin -Dweblogic.management.password=wlsadmin -Dweblogic.Name=LOGMGR-admin -Djava.security.policy=/shared2/beahome/weblogic92/server/lib/weblogic.policy weblogic.Server
<1-Apr-2010 12:47:22 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000395> <Following extensions directory contents added to the end of the classpath: /shared2/beahome/weblogic92/platform/lib/p13n/p13n-schemas.jar:/shared2/beahome/weblogic92/platform/lib/p13n/p13n_common.jar:/shared2/beahome/weblogic92/platform/lib/p13n/p13n_system.jar:/shared2/beahome/weblogic92/platform/lib/wlp/netuix_common.jar:/shared2/beahome/weblogic92/platform/lib/wlp/netuix_schemas.jar:/shared2/beahome/weblogic92/platform/lib/wlp/netuix_system.jar:/shared2/beahome/weblogic92/platform/lib/wlp/wsrp-common.jar>
<1-Apr-2010 12:47:22 o'clock PM GMT-05:00> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) Client VM Version 1.5.0_12-b04 from Sun Microsystems Inc.>
<1-Apr-2010 12:47:23 o'clock PM GMT-05:00> <Info> <Management> <BEA-141107> <Version: WebLogic Server 9.2 MP3 Mon Mar 10 08:28:41 EDT 2008 1096261 >
<1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Info> <WebLogicServer> <BEA-000215> <Loaded License : /shared2/beahome/license.bea>
<1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>
<1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool>
<1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Notice> <Log Management> <BEA-170019> <The server log file /shared2/wldomains/beaadmd/LOGMGR/servers/LOGMGR-admin/logs/LOGMGR-admin.log is opened. All server side log events will be written to this file.>
Here we have the first delay, up to 5 mins...
<1-Apr-2010 12:53:21 o'clock PM GMT-05:00> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.>
<1-Apr-2010 12:53:24 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STANDBY>
<1-Apr-2010 12:53:24 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>
<1-Apr-2010 12:53:25 o'clock PM GMT-05:00> <Notice> <Log Management> <BEA-170027> <The server initialized the domain log broadcaster successfully. Log messages will now be broadcasted to the domain log.>
<1-Apr-2010 12:53:25 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN>
<1-Apr-2010 12:53:25 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RESUMING>
<1-Apr-2010 12:53:28 o'clock PM GMT-05:00> <Notice> <Security> <BEA-090171> <Loading the identity certificate and private key stored under the alias adminuialias from the jks keystore file /shared2/wldomains/beaadmd/LOGMGR/CustomIdentity.jks.>
And here is the second, again up to 5 mins.
<1-Apr-2010 12:58:56 o'clock PM GMT-05:00> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file /shared2/wldomains/beaadmd/LOGMGR/CustomTrust.jks.>
<1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure" is now listening on 192.168.56.102:7002 for protocols iiops, t3s, ldaps, https.>
<1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <Server> <BEA-002613> <Channel "Default" is now listening on 192.168.56.102:8012 for protocols iiop, t3, ldap, http.>
<1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000331> <Started WebLogic Admin Server "LOGMGR-admin" for domain "LOGMGR" running in Development Mode>
<1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING>
<1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000360> <Server started in RUNNING mode>
UPDATE I think I'm on the right track: it must be the random seed initialization. That may explain why generating keyboard events releases the server. I've taken a thread dump, and one thread is in the runnable state, but waiting:
"[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=1 tid=0x0a7b06e8 nid=0xeda runnable [0x728a5000..0x728a6d80]
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:194)
at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:185)
at sun.security.provider.NativePRNG$RandomIO.implGenerateSeed(NativePRNG.java:202)
- locked <0x7d928c78> (a java.lang.Object)
at sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:108)
at sun.security.provider.NativePRNG.engineGenerateSeed(NativePRNG.java:102)
at java.security.SecureRandom.generateSeed(SecureRandom.java:475)
at weblogic.security.AbstractRandomData.ensureInittedAndSeeded(AbstractRandomData.java:83)
SOLVED Weblogic uses SecureRandom to initialize its security subsystem. By default SecureRandom uses the /dev/urandom file. For some reason, reading this file under a VM comes to a halt quite often. Generating console events helps to create more randomness and releases the WLS. For test purposes I changed the securerandom.source property in the jre/lib/security/java.security file to securerandom.source=file:/tmp/big.random.file. Weblogic now starts in 15 seconds.
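A note for readers hitting the same stall: the workaround most commonly cited for this JDK behaviour (not the exact change the author made above) is to point SecureRandom at the non-blocking device, either by editing jre/lib/security/java.security to read

securerandom.source=file:/dev/./urandom

or by adding -Djava.security.egd=file:/dev/./urandom to the server start command. The extra "/./" is deliberate; older JDKs special-case the plain file:/dev/urandom URL and silently fall back to the blocking seed source, which is exactly what the thread dump above shows.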

    Read the article

  • MS Dynamics CRM trapping .NET error before I can handle it

    - by clifgriffin
    This is a fun one. I have written a custom search page that provides faster, more user friendly searches than the default Contacts view and also allows searching of Leads and Contacts simultaneously. It uses GridViews bound to SqlDataSources that query filtered views. I'm sure someone will complain that I'm not using the web services for this, but this is just the design decision we made. These GridViews live in UpdatePanels to enable very slick AJAX updates upon search. It's all working great. Nearly ready to be deployed, except for one thing: Some long running searches are triggering an uncatchable SQL timeout exception. [SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.] at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable) at System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) at System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback) at System.Web.UI.WebControls.DataBoundControl.PerformSelect() at System.Web.UI.WebControls.BaseDataBoundControl.DataBind() at System.Web.UI.WebControls.GridView.DataBind() at System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound() at System.Web.UI.WebControls.CompositeDataBoundControl.CreateChildControls() at System.Web.UI.Control.EnsureChildControls() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, 
Boolean includeStagesAfterAsyncPoint) I found that CRM is doing a server.transfer to capture this error because my UpdatePanels started throwing JavaScript errors when this error would occur. I was only able to get the full error message by using the JavaScript debugger in IE. Having found this error, I thought the solution would be simple. I just needed to wrap my databind calls in try/catch blocks to capture any errors. Unfortunately it seems CRM's IIS configuration has the magic ability to capture this error before it ever gets back to my code. Using the debugger I never see it. It never gets to my catch blocks, but it's clearly happening in the SQL Data Source which is clearly (by the stack trace) being triggered by my GridView bind. Any ideas on this? It's driving me crazy. Code Behind (with some irrelevant functions omitted): protected void Page_Load(object sender, EventArgs e) { //Initialize some stuff this.bannerOracle = new OdbcConnection(ConfigurationManager.ConnectionStrings["OracleConnectionString"].ConnectionString); //Prospect default HideProspects(); HideProspectAddressColumn(); //Contacts default HideContactAddressColumn(); //Default error messages gvContacts.EmptyDataText = "Sad day. Your search returned no contacts."; gvProspects.EmptyDataText = "Sad day. Your search returned no prospects."; //New search try { SearchContact(null, -1); } catch { gvContacts.EmptyDataText = "Oops! An error occurred. This may have been a timeout. Please try your search again."; gvContacts.DataSource = null; gvContacts.DataBind(); } } protected void txtSearchString_TextChanged(object sender, EventArgs e) { if(!String.IsNullOrEmpty(txtSearchString.Text)) { try { SearchContact(txtSearchString.Text, Convert.ToInt16(lstSearchType.SelectedValue)); } catch { gvContacts.EmptyDataText = "Oops! An error occurred. This may have been a timeout. Please try your search again."; gvContacts.DataSource = null; gvContacts.DataBind(); } if (chkProspects.Checked == true) { try { SearchProspect(txtSearchString.Text, Convert.ToInt16(lstSearchType.SelectedValue)); } catch { gvProspects.EmptyDataText = "Oops! An error occurred. This may have been a timeout.
Please try your search again."; gvProspects.DataSource = null; gvProspects.DataBind(); } finally { ShowProspects(); } } else { HideProspects(); } } } protected void SearchContact(string search, int type) { SqlCRM_Contact.ConnectionString = ConfigurationManager.ConnectionStrings["MSSQLConnectionString"].ConnectionString; gvContacts.DataSourceID = "SqlCRM_Contact"; string strQuery = ""; string baseQuery = @"SELECT filteredcontact.contactid, filteredcontact.new_libertyid, filteredcontact.fullname, 'none' AS line1, filteredcontact.emailaddress1, filteredcontact.telephone1, filteredcontact.birthdateutc AS birthdate, filteredcontact.gendercodename FROM filteredcontact "; switch(type) { case LASTFIRST: strQuery = baseQuery + "WHERE fullname LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LAST: strQuery = baseQuery + "WHERE lastname LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case FIRST: strQuery = baseQuery + "WHERE firstname LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LIBERTYID: strQuery = baseQuery + "WHERE new_libertyid LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case EMAIL: strQuery = baseQuery + "WHERE emailaddress1 LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case TELEPHONE: strQuery = baseQuery + "WHERE telephone1 LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case BIRTHDAY: strQuery = baseQuery + "WHERE filteredcontact.birthdateutc BETWEEN @dateStart AND @dateEnd AND filteredcontact.statecode = 0"; try { DateTime temp = DateTime.Parse(search); if (temp.Year < 1753 || temp.Year > 9999) { search = string.Empty; } else { search = temp.ToString("yyyy-MM-dd"); } } catch { search = string.Empty; } SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("dateStart", DbType.String, search.Trim() + " 00:00:00.000"); SqlCRM_Contact.SelectParameters.Add("dateEnd", DbType.String, search.Trim() + " 23:59:59.999"); break; case SSN: //Do something break; case ADDRESS: strQuery = @"SELECT contactid, new_libertyid, fullname, line1, emailaddress1, telephone1, birthdate, gendercodename FROM (SELECT FC.contactid, FC.new_libertyid, FC.fullname, FA.line1, FC.emailaddress1, FC.telephone1, FC.birthdateutc AS birthdate, FC.gendercodename, ROW_NUMBER() OVER(PARTITION BY FC.contactid ORDER BY FC.contactid DESC) AS rn FROM filteredcontact FC INNER JOIN FilteredCustomerAddress FA ON FC.contactid = FA.parentid WHERE FA.line1 LIKE @value AND FA.addressnumber <> 1 AND FC.statecode = 0 ) AS RESULTS WHERE rn = 1"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); ShowContactAddressColumn(); break; default: strQuery = @"SELECT TOP 500 filteredcontact.contactid, filteredcontact.new_libertyid, filteredcontact.fullname, 'none' AS line1, 
filteredcontact.emailaddress1, filteredcontact.telephone1, filteredcontact.birthdateutc AS birthdate, filteredcontact.gendercodename FROM filteredcontact WHERE filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; break; } if (type != ADDRESS) { HideContactAddressColumn(); } gvContacts.PageIndex = 0; //try //{ // SqlCRM_Contact.DataBind(); //} //catch //{ // SqlCRM_Contact.DataBind(); //} gvContacts.DataBind(); } protected void SearchProspect(string search, int type) { SqlCRM_Prospect.ConnectionString = ConfigurationManager.ConnectionStrings["MSSQLConnectionString"].ConnectionString; gvProspects.DataSourceID = "SqlCRM_Prospect"; string strQuery = ""; string baseQuery = @"SELECT filteredlead.leadid, filteredlead.fullname, 'none' AS address1_line1, filteredlead.emailaddress1, filteredlead.telephone1, filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername FROM filteredlead "; switch (type) { case LASTFIRST: strQuery = baseQuery + "WHERE fullname LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LAST: strQuery = baseQuery + "WHERE lastname LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case FIRST: strQuery = baseQuery + "WHERE firstname LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LIBERTYID: strQuery = baseQuery + "WHERE new_libertyid LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case EMAIL: strQuery = baseQuery + "WHERE emailaddress1 LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case TELEPHONE: strQuery = baseQuery + "WHERE telephone1 LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case BIRTHDAY: strQuery = baseQuery + "WHERE filteredlead.lu_dateofbirth BETWEEN @dateStart AND @dateEnd AND filteredlead.statecode = 0"; try { DateTime temp = DateTime.Parse(search); if (temp.Year < 1753 || temp.Year > 9999) { search = string.Empty; } else { search = temp.ToString("yyyy-MM-dd"); } } catch { search = string.Empty; } SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("dateStart", DbType.String, search.Trim() + " 00:00:00.000"); SqlCRM_Prospect.SelectParameters.Add("dateEnd", DbType.String, search.Trim() + " 23:59:59.999"); break; case SSN: //Do nothing break; case ADDRESS: strQuery = @"SELECT filteredlead.leadid, filteredlead.fullname, filteredlead.address1_line1, filteredlead.emailaddress1, filteredlead.telephone1, filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername FROM filteredlead WHERE address1_line1 LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); ShowProspectAddressColumn(); break; default: strQuery = @"SELECT TOP 500 filteredlead.leadid, filteredlead.fullname, 'none' AS address1_line1 
filteredlead.emailaddress1, filteredlead.telephone1, filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername FROM filteredlead WHERE filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; break; } if (type != ADDRESS) { HideProspectAddressColumn(); } gvProspects.PageIndex = 0; //try //{ // SqlCRM_Prospect.DataBind(); //} //catch (Exception ex) //{ // SqlCRM_Prospect.DataBind(); //} gvProspects.DataBind(); }
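Since the exception surfaces inside the SqlDataSource rather than in the page's own try/catch blocks, one avenue worth exploring (offered as a hedged suggestion, not a confirmed fix for the CRM behaviour described above) is to handle both the timeout and the error at the data source itself. ASP.NET raises a Selecting event before the query runs and a Selected event afterwards; the latter exposes the exception and lets you mark it handled before the page framework, or CRM's custom error handling, sees it. The control names below match those in the code-behind (SqlCRM_Contact, gvContacts); the 120-second timeout is an arbitrary illustration.

// Wire these up in markup (OnSelecting="..." OnSelected="...") or in Page_Init.
protected void SqlCRM_Contact_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
{
    // Give the long-running search more headroom than the 30-second default.
    e.Command.CommandTimeout = 120;
}

protected void SqlCRM_Contact_Selected(object sender, SqlDataSourceStatusEventArgs e)
{
    if (e.Exception != null)
    {
        // Swallow the exception here, before it can bubble up to CRM's error page,
        // and fall back to the same friendly message used elsewhere in the page.
        e.ExceptionHandled = true;
        gvContacts.EmptyDataText = "Oops! An error occurred. This may have been a timeout. Please try your search again.";
        gvContacts.DataSource = null;
        gvContacts.DataBind();
    }
}

The same pair of handlers could be attached to SqlCRM_Prospect for the leads grid; whether CRM's pipeline still intercepts anything after ExceptionHandled is set would need to be verified in that environment.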

    Read the article

  • Help Optimizing MySQL Table (~ 500,000 records) and PHP Code.

    - by Pyrite
I have a MySQL table that collects player data from various game servers (Urban Terror). The bot that collects the data runs 24/7, and currently the table is up to about 475,000+ records. Because of this, querying this table from PHP has become quite slow. I wonder what I can do on the database side of things to make it as optimized as possible, then I can focus on the application that queries the database. The table is as follows: CREATE TABLE IF NOT EXISTS `people` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(40) NOT NULL, `ip` int(4) unsigned NOT NULL, `guid` varchar(32) NOT NULL, `server` int(4) unsigned NOT NULL, `date` int(11) NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `Person` (`name`,`ip`,`guid`), KEY `server` (`server`), KEY `date` (`date`), KEY `PlayerName` (`name`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='People that Play on Servers' AUTO_INCREMENT=475843 ; I'm storing the IPv4 addresses (ip and server) as 4-byte integers, and using the MySQL functions INET_ATON()/INET_NTOA() to encode and decode them; I heard that this is faster than using varchar(15). The guid is an md5sum, 32 hex chars. Date is stored as a unix timestamp. I have a unique key on name, ip and guid, so as to avoid duplicates of the same player. Do I have my keys set up right? Is the way I'm storing data efficient? Here is the code to query this table. You search for a name, ip, or guid, and it grabs the results of the query and cross-references other records that match the name, ip, or guid from the results of the first query, and does it for each field. This is kind of hard to explain. But basically, if I search for one player by name, I'll see every other name he has used, every IP he has used and every GUID he has used. <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> Search: <input type="text" name="query" id="query" /><input type="submit" name="btnSubmit" value="Submit" /> </form> <?php if (!empty($_POST['query'])) { ?> <table cellspacing="1" id="1up_people" class="tablesorter" width="300"> <thead> <tr> <th>ID</th> <th>Player Name</th> <th>Player IP</th> <th>Player GUID</th> <th>Server</th> <th>Date</th> </tr> </thead> <tbody> <?php function super_unique($array) { $result = array_map("unserialize", array_unique(array_map("serialize", $array))); foreach ($result as $key => $value) { if ( is_array($value) ) { $result[$key] = super_unique($value); } } return $result; } if (!empty($_POST['query'])) { $query = trim($_POST['query']); $count = 0; $people = array(); $link = mysql_connect('localhost', 'mysqluser', 'yea right!'); if (!$link) { die('Could not connect: ' .
mysql_error()); } mysql_select_db("1up"); $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name LIKE \"%$query%\" OR INET_NTOA(ip) LIKE \"%$query%\" OR guid LIKE \"%$query%\")"; $result = mysql_query($sql, $link); if (!$result) { die(mysql_error()); } // Now take the initial results and parse each column into its own array while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } // now for each name, ip, guid in results, find additonal records $people2 = array(); foreach ($people AS $person) { $ip = $person['ip']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (ip = \"$ip\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people2[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } $people3 = array(); foreach ($people AS $person) { $guid = $person['guid']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (guid = \"$guid\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people3[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } $people4 = array(); foreach ($people AS $person) { $name = $person['name']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name = \"$name\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people4[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } // Combine people and people2 into just people $people = array_merge($people, $people2); $people = array_merge($people, $people3); $people = array_merge($people, $people4); $people = super_unique($people); foreach ($people AS $person) { $date = ($person['date']) ? date("M d, Y", $person['date']) : 'Before 8/1/10'; echo "<tr>\n"; echo "<td>".$person['id']."</td>"; echo "<td>".$person['name']."</td>"; echo "<td>".$person['ip']."</td>"; echo "<td>".$person['guid']."</td>"; echo "<td>".$person['server']."</td>"; echo "<td>".$date."</td>"; echo "</tr>\n"; $count++; } // Find Total Records //$result = mysql_query("SELECT id FROM 1up_people", $link); //$total = mysql_num_rows($result); mysql_close($link); } ?> </tbody> </table> <p> <?php echo $count." Records Found for \"".$_POST['query']."\" out of $total"; ?> </p> <?php } $time_stop = microtime(true); print("Done (ran for ".round($time_stop-$time_start)." seconds)."); ?> Any help at all is appreciated! Thank you.
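The dotted-quad-to-integer scheme the question describes is easy to picture outside of SQL. The sketch below (in C# purely for illustration, since the point is the encoding rather than the language) shows the 32-bit value that MySQL's INET_ATON()/INET_NTOA() pair round-trips for a given address, which is exactly what the unsigned int `ip` and `server` columns store.

using System;

class IpIntDemo
{
    // Equivalent of MySQL's INET_ATON(): pack a dotted-quad IPv4 address
    // into the unsigned 32-bit value the `ip` and `server` columns hold.
    static uint IpToUInt(string dottedQuad)
    {
        string[] parts = dottedQuad.Split('.');
        return (uint.Parse(parts[0]) << 24)
             | (uint.Parse(parts[1]) << 16)
             | (uint.Parse(parts[2]) << 8)
             |  uint.Parse(parts[3]);
    }

    // Equivalent of INET_NTOA(): unpack the integer back to dotted-quad text.
    static string UIntToIp(uint value)
    {
        return string.Format("{0}.{1}.{2}.{3}",
            (value >> 24) & 0xFF, (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF);
    }

    static void Main()
    {
        uint packed = IpToUInt("192.168.56.102");
        Console.WriteLine(packed);           // 3232249958
        Console.WriteLine(UIntToIp(packed)); // 192.168.56.102
    }
}

One practical consequence of this layout is that exact matches and range scans on the integer column can use the index, whereas wrapping the column in INET_NTOA() inside a LIKE comparison, as the search page above does, forces a full scan.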

    Read the article

  • Zen and the Art of File and Folder Organization

    - by Mark Virtue
    Is your desk a paragon of neatness, or does it look like a paper-bomb has gone off? If you’ve been putting off getting organized because the task is too huge or daunting, or you don’t know where to start, we’ve got 40 tips to get you on the path to zen mastery of your filing system. For all those readers who would like to get their files and folders organized, or, if they’re already organized, better organized—we have compiled a complete guide to getting organized and staying organized, a comprehensive article that will hopefully cover every possible tip you could want. Signs that Your Computer is Poorly Organized If your computer is a mess, you’re probably already aware of it.  But just in case you’re not, here are some tell-tale signs: Your Desktop has over 40 icons on it “My Documents” contains over 300 files and 60 folders, including MP3s and digital photos You use the Windows’ built-in search facility whenever you need to find a file You can’t find programs in the out-of-control list of programs in your Start Menu You save all your Word documents in one folder, all your spreadsheets in a second folder, etc Any given file that you’re looking for may be in any one of four different sets of folders But before we start, here are some quick notes: We’re going to assume you know what files and folders are, and how to create, save, rename, copy and delete them The organization principles described in this article apply equally to all computer systems.  However, the screenshots here will reflect how things look on Windows (usually Windows 7).  We will also mention some useful features of Windows that can help you get organized. Everyone has their own favorite methodology of organizing and filing, and it’s all too easy to get into “My Way is Better than Your Way” arguments.  The reality is that there is no perfect way of getting things organized.  When I wrote this article, I tried to keep a generalist and objective viewpoint.  I consider myself to be unusually well organized (to the point of obsession, truth be told), and I’ve had 25 years experience in collecting and organizing files on computers.  So I’ve got a lot to say on the subject.  But the tips I have described here are only one way of doing it.  Hopefully some of these tips will work for you too, but please don’t read this as any sort of “right” way to do it. At the end of the article we’ll be asking you, the reader, for your own organization tips. Why Bother Organizing At All? For some, the answer to this question is self-evident. And yet, in this era of powerful desktop search software (the search capabilities built into the Windows Vista and Windows 7 Start Menus, and third-party programs like Google Desktop Search), the question does need to be asked, and answered. I have a friend who puts every file he ever creates, receives or downloads into his My Documents folder and doesn’t bother filing them into subfolders at all.  He relies on the search functionality built into his Windows operating system to help him find whatever he’s looking for.  And he always finds it.  He’s a Search Samurai.  For him, filing is a waste of valuable time that could be spent enjoying life! It’s tempting to follow suit.  On the face of it, why would anyone bother to take the time to organize their hard disk when such excellent search software is available?  
Well, if all you ever want to do with the files you own is to locate and open them individually (for listening, editing, etc), then there’s no reason to ever bother doing one scrap of organization.  But consider these common tasks that are not achievable with desktop search software: Find files manually.  Often it’s not convenient, speedy or even possible to utilize your desktop search software to find what you want.  It doesn’t work 100% of the time, or you may not even have it installed.  Sometimes its just plain faster to go straight to the file you want, if you know it’s in a particular sub-folder, rather than trawling through hundreds of search results. Find groups of similar files (e.g. all your “work” files, all the photos of your Europe holiday in 2008, all your music videos, all the MP3s from Dark Side of the Moon, all your letters you wrote to your wife, all your tax returns).  Clever naming of the files will only get you so far.  Sometimes it’s the date the file was created that’s important, other times it’s the file format, and other times it’s the purpose of the file.  How do you name a collection of files so that they’re easy to isolate based on any of the above criteria?  Short answer, you can’t. Move files to a new computer.  It’s time to upgrade your computer.  How do you quickly grab all the files that are important to you?  Or you decide to have two computers now – one for home and one for work.  How do you quickly isolate only the work-related files to move them to the work computer? Synchronize files to other computers.  If you have more than one computer, and you need to mirror some of your files onto the other computer (e.g. your music collection), then you need a way to quickly determine which files are to be synced and which are not.  Surely you don’t want to synchronize everything? Choose which files to back up.  If your backup regime calls for multiple backups, or requires speedy backups, then you’ll need to be able to specify which files are to be backed up, and which are not.  This is not possible if they’re all in the same folder. Finally, if you’re simply someone who takes pleasure in being organized, tidy and ordered (me! me!), then you don’t even need a reason.  Being disorganized is simply unthinkable. Tips on Getting Organized Here we present our 40 best tips on how to get organized.  Or, if you’re already organized, to get better organized. Tip #1.  Choose Your Organization System Carefully The reason that most people are not organized is that it takes time.  And the first thing that takes time is deciding upon a system of organization.  This is always a matter of personal preference, and is not something that a geek on a website can tell you.  You should always choose your own system, based on how your own brain is organized (which makes the assumption that your brain is, in fact, organized). We can’t instruct you, but we can make suggestions: You may want to start off with a system based on the users of the computer.  i.e. “My Files”, “My Wife’s Files”, My Son’s Files”, etc.  Inside “My Files”, you might then break it down into “Personal” and “Business”.  You may then realize that there are overlaps.  For example, everyone may want to share access to the music library, or the photos from the school play.  So you may create another folder called “Family”, for the “common” files. You may decide that the highest-level breakdown of your files is based on the “source” of each file.  In other words, who created the files.  
You could have “Files created by ME (business or personal)”, “Files created by people I know (family, friends, etc)”, and finally “Files created by the rest of the world (MP3 music files, downloaded or ripped movies or TV shows, software installation files, gorgeous desktop wallpaper images you’ve collected, etc).”  This system happens to be the one I use myself.  See below:
Mark is for files created by me
VC is for files created by my company (Virtual Creations)
Others is for files created by my friends and family
Data is the rest of the world
Also, Settings is where I store the configuration files and other program data files for my installed software (more on this in tip #34, below).
Each folder will present its own particular set of requirements for further sub-organization.  For example, you may decide to organize your music collection into sub-folders based on the artist’s name, while your digital photos might get organized based on the date they were taken.  It can be different for every sub-folder! Another strategy would be based on “currentness”.  Files you have yet to open and look at live in one folder.  Ones that have been looked at but not yet filed live in another place.  Current, active projects live in yet another place.  All other files (your “archive”, if you like) would live in a fourth folder. (And of course, within that last folder you’d need to create a further sub-system based on one of the previous bullet points). Put some thought into this – changing it when it proves incomplete can be a big hassle!  Before you go to the trouble of implementing any system you come up with, examine a wide cross-section of the files you own and see if they will all be able to find a nice logical place to sit within your system. Tip #2.  When You Decide on Your System, Stick to It! There’s nothing more pointless than going to all the trouble of creating a system and filing all your files, and then whenever you create, receive or download a new file, you simply dump it onto your Desktop.  You need to be disciplined – forever!  Every new file you get, spend those extra few seconds to file it where it belongs!  Otherwise, in just a month or two, you’ll be worse off than before – half your files will be organized and half will be disorganized – and you won’t know which is which! Tip #3.  Choose the Root Folder of Your Structure Carefully Every data file (document, photo, music file, etc) that you create, own or is important to you, no matter where it came from, should be found within one single folder, and that one single folder should be located at the root of your C: drive (as a sub-folder of C:\).  In other words, do not base your folder structure in standard folders like “My Documents”.  If you do, then you’re leaving it up to the operating system engineers to decide what folder structure is best for you.  And every operating system has a different system!  In Windows 7 your files are found in C:\Users\YourName, whilst on Windows XP it was C:\Documents and Settings\YourName\My Documents.  In UNIX systems it’s often /home/YourName. These standard default folders tend to fill up with junk files and folders that are not at all important to you.  “My Documents” is the worst offender.  Every second piece of software you install, it seems, likes to create its own folder in the “My Documents” folder.  These folders usually don’t fit within your organizational structure, so don’t use them!  In fact, don’t even use the “My Documents” folder at all.  
Allow it to fill up with junk, and then simply ignore it.  It sounds heretical, but: Don’t ever visit your “My Documents” folder!  Remove your icons/links to “My Documents” and replace them with links to the folders you created and you care about! Create your own file system from scratch!  Probably the best place to put it would be on your D: drive – if you have one.  This way, all your files live on one drive, while all the operating system and software component files live on the C: drive – simply and elegantly separated.  The benefits of that are profound.  Not only are there obvious organizational benefits (see tip #10, below), but when it comes to migrate your data to a new computer, you can (sometimes) simply unplug your D: drive and plug it in as the D: drive of your new computer (this implies that the D: drive is actually a separate physical disk, and not a partition on the same disk as C:).  You also get a slight speed improvement (again, only if your C: and D: drives are on separate physical disks). Warning:  From tip #12, below, you will see that it’s actually a good idea to have exactly the same file system structure – including the drive it’s filed on – on all of the computers you own.  So if you decide to use the D: drive as the storage system for your own files, make sure you are able to use the D: drive on all the computers you own.  If you can’t ensure that, then you can still use a clever geeky trick to store your files on the D: drive, but still access them all via the C: drive (see tip #17, below). If you only have one hard disk (C:), then create a dedicated folder that will contain all your files – something like C:\Files.  The name of the folder is not important, but make it a single, brief word. There are several reasons for this: When creating a backup regime, it’s easy to decide what files should be backed up – they’re all in the one folder! If you ever decide to trade in your computer for a new one, you know exactly which files to migrate You will always know where to begin a search for any file If you synchronize files with other computers, it makes your synchronization routines very simple.   It also causes all your shortcuts to continue to work on the other machines (more about this in tip #24, below). Once you’ve decided where your files should go, then put all your files in there – Everything!  Completely disregard the standard, default folders that are created for you by the operating system (“My Music”, “My Pictures”, etc).  In fact, you can actually relocate many of those folders into your own structure (more about that below, in tip #6). The more completely you get all your data files (documents, photos, music, etc) and all your configuration settings into that one folder, then the easier it will be to perform all of the above tasks. Once this has been done, and all your files live in one folder, all the other folders in C:\ can be thought of as “operating system” folders, and therefore of little day-to-day interest for us. Here’s a screenshot of a nicely organized C: drive, where all user files are located within the \Files folder:   Tip #4.  Use Sub-Folders This would be our simplest and most obvious tip.  It almost goes without saying.  Any organizational system you decide upon (see tip #1) will require that you create sub-folders for your files.  Get used to creating folders on a regular basis. Tip #5.  Don’t be Shy About Depth Create as many levels of sub-folders as you need.  Don’t be scared to do so.  
Every time you notice an opportunity to group a set of related files into a sub-folder, do so.  Examples might include:  All the MP3s from one music CD, all the photos from one holiday, or all the documents from one client. It’s perfectly okay to put files into a folder called C:\Files\Me\From Others\Services\WestCo Bank\Statements\2009.  That’s only seven levels deep.  Ten levels is not uncommon.  Of course, it’s possible to take this too far.  If you notice yourself creating a sub-folder to hold only one file, then you’ve probably become a little over-zealous.  On the other hand, if you simply create a structure with only two levels (for example C:\Files\Work) then you really haven’t achieved any level of organization at all (unless you own only six files!).  Your “Work” folder will have become a dumping ground, just like your Desktop was, with most likely hundreds of files in it. Tip #6.  Move the Standard User Folders into Your Own Folder Structure Most operating systems, including Windows, create a set of standard folders for each of its users.  These folders then become the default location for files such as documents, music files, digital photos and downloaded Internet files.  In Windows 7, the full list is shown below: Some of these folders you may never use nor care about (for example, the Favorites folder, if you’re not using Internet Explorer as your browser).  Those ones you can leave where they are.  But you may be using some of the other folders to store files that are important to you.  Even if you’re not using them, Windows will still often treat them as the default storage location for many types of files.  When you go to save a standard file type, it can become annoying to be automatically prompted to save it in a folder that’s not part of your own file structure. But there’s a simple solution:  Move the folders you care about into your own folder structure!  If you do, then the next time you go to save a file of the corresponding type, Windows will prompt you to save it in the new, moved location. Moving the folders is easy.  Simply drag-and-drop them to the new location.  Here’s a screenshot of the default My Music folder being moved to my custom personal folder (Mark): Tip #7.  Name Files and Folders Intelligently This is another one that almost goes without saying, but we’ll say it anyway:  Do not allow files to be created that have meaningless names like Document1.doc, or folders called New Folder (2).  Take that extra 20 seconds and come up with a meaningful name for the file/folder – one that accurately divulges its contents without repeating the entire contents in the name. Tip #8.  Watch Out for Long Filenames Another way to tell if you have not yet created enough depth to your folder hierarchy is that your files often require really long names.  If you need to call a file Johnson Sales Figures March 2009.xls (which might happen to live in the same folder as Abercrombie Budget Report 2008.xls), then you might want to create some sub-folders so that the first file could be simply called March.xls, and living in the Clients\Johnson\Sales Figures\2009 folder. A well-placed file needs only a brief filename! Tip #9.  Use Shortcuts!  Everywhere! This is probably the single most useful and important tip we can offer.  A shortcut allows a file to be in two places at once. Why would you want that?  Well, the file and folder structure of every popular operating system on the market today is hierarchical.  
This means that all objects (files and folders) always live within exactly one parent folder.  It’s a bit like a tree.  A tree has branches (folders) and leaves (files).  Each leaf, and each branch, is supported by exactly one parent branch, all the way back to the root of the tree (which, incidentally, is exactly why C:\ is called the “root folder” of the C: drive). That hard disks are structured this way may seem obvious and even necessary, but it’s only one way of organizing data.  There are others:  Relational databases, for example, organize structured data entirely differently.  The main limitation of hierarchical filing structures is that a file can only ever be in one branch of the tree – in only one folder – at a time.  Why is this a problem?  Well, there are two main reasons why this limitation is a problem for computer users: The “correct” place for a file, according to our organizational rationale, is very often a very inconvenient place for that file to be located.  Just because it’s correctly filed doesn’t mean it’s easy to get to.  Your file may be “correctly” buried six levels deep in your sub-folder structure, but you may need regular and speedy access to this file every day.  You could always move it to a more convenient location, but that would mean that you would need to re-file back to its “correct” location it every time you’d finished working on it.  Most unsatisfactory. A file may simply “belong” in two or more different locations within your file structure.  For example, say you’re an accountant and you have just completed the 2009 tax return for John Smith.  It might make sense to you to call this file 2009 Tax Return.doc and file it under Clients\John Smith.  But it may also be important to you to have the 2009 tax returns from all your clients together in the one place.  So you might also want to call the file John Smith.doc and file it under Tax Returns\2009.  The problem is, in a purely hierarchical filing system, you can’t put it in both places.  Grrrrr! Fortunately, Windows (and most other operating systems) offers a way for you to do exactly that:  It’s called a “shortcut” (also known as an “alias” on Macs and a “symbolic link” on UNIX systems).  Shortcuts allow a file to exist in one place, and an icon that represents the file to be created and put anywhere else you please.  In fact, you can create a dozen such icons and scatter them all over your hard disk.  Double-clicking on one of these icons/shortcuts opens up the original file, just as if you had double-clicked on the original file itself. Consider the following two icons: The one on the left is the actual Word document, while the one on the right is a shortcut that represents the Word document.  Double-clicking on either icon will open the same file.  There are two main visual differences between the icons: The shortcut will have a small arrow in the lower-left-hand corner (on Windows, anyway) The shortcut is allowed to have a name that does not include the file extension (the “.docx” part, in this case) You can delete the shortcut at any time without losing any actual data.  The original is still intact.  All you lose is the ability to get to that data from wherever the shortcut was. So why are shortcuts so great?  Because they allow us to easily overcome the main limitation of hierarchical file systems, and put a file in two (or more) places at the same time.  You will always have files that don’t play nice with your organizational rationale, and can’t be filed in only one place.  
They demand to exist in two places.  Shortcuts allow this!  Furthermore, they allow you to collect your most often-opened files and folders together in one spot for convenient access.  The cool part is that the original files stay where they are, safe forever in their perfectly organized location. So your collection of most often-opened files can – and should – become a collection of shortcuts! If you’re still not convinced of the utility of shortcuts, consider the following well-known areas of a typical Windows computer: The Start Menu (and all the programs that live within it) The Quick Launch bar (or the Superbar in Windows 7) The “Favorite folders” area in the top-left corner of the Windows Explorer window (in Windows Vista or Windows 7) Your Internet Explorer Favorites or Firefox Bookmarks Each item in each of these areas is a shortcut!  Each of those areas exists for one purpose only:  For convenience – to provide you with a collection of the files and folders you access most often. It should be easy to see by now that shortcuts are designed for one single purpose:  To make accessing your files more convenient.  Each time you double-click on a shortcut, you are saved the hassle of locating the file (or folder, or program, or drive, or control panel icon) that it represents. Shortcuts allow us to invent a golden rule of file and folder organization: “Only ever have one copy of a file – never have two copies of the same file.  Use a shortcut instead” (this rule doesn’t apply to copies created for backup purposes, of course!) There are also lesser rules, like “don’t move a file into your work area – create a shortcut there instead”, and “any time you find yourself frustrated with how long it takes to locate a file, create a shortcut to it and place that shortcut in a convenient location.” So how do we create these massively useful shortcuts?  There are two main ways: “Copy” the original file or folder (click on it and type Ctrl-C, or right-click on it and select Copy):  Then right-click in an empty area of the destination folder (the place where you want the shortcut to go) and select Paste shortcut: Right-drag (drag with the right mouse button) the file from the source folder to the destination folder.  When you let go of the mouse button at the destination folder, a menu pops up: Select Create shortcuts here. Note that when shortcuts are created, they are often named something like Shortcut to Budget Detail.doc (Windows XP) or Budget Detail – Shortcut.doc (Windows 7).   If you don’t like those extra words, you can easily rename the shortcuts after they’re created, or you can configure Windows to never insert the extra words in the first place (see our article on how to do this). And of course, you can create shortcuts to folders too, not just to files! Bottom line: Whenever you have a file that you’d like to access from somewhere else (whether it’s convenience you’re after, or because the file simply belongs in two places), create a shortcut to the original file in the new location. Tip #10.  Separate Application Files from Data Files Any digital organization guru will drum this rule into you.  Application files are the components of the software you’ve installed (e.g. Microsoft Word, Adobe Photoshop or Internet Explorer).  Data files are the files that you’ve created for yourself using that software (e.g. Word Documents, digital photos, emails or playlists). Software gets installed, uninstalled and upgraded all the time.
Hopefully you always have the original installation media (or downloaded set-up file) kept somewhere safe, and can thus reinstall your software at any time.  This means that the software component files are of little importance.  Whereas the files you have created with that software is, by definition, important.  It’s a good rule to always separate unimportant files from important files. So when your software prompts you to save a file you’ve just created, take a moment and check out where it’s suggesting that you save the file.  If it’s suggesting that you save the file into the same folder as the software itself, then definitely don’t follow that suggestion.  File it in your own folder!  In fact, see if you can find the program’s configuration option that determines where files are saved by default (if it has one), and change it. Tip #11.  Organize Files Based on Purpose, Not on File Type If you have, for example a folder called Work\Clients\Johnson, and within that folder you have two sub-folders, Word Documents and Spreadsheets (in other words, you’re separating “.doc” files from “.xls” files), then chances are that you’re not optimally organized.  It makes little sense to organize your files based on the program that created them.  Instead, create your sub-folders based on the purpose of the file.  For example, it would make more sense to create sub-folders called Correspondence and Financials.  It may well be that all the files in a given sub-folder are of the same file-type, but this should be more of a coincidence and less of a design feature of your organization system. Tip #12.  Maintain the Same Folder Structure on All Your Computers In other words, whatever organizational system you create, apply it to every computer that you can.  There are several benefits to this: There’s less to remember.  No matter where you are, you always know where to look for your files If you copy or synchronize files from one computer to another, then setting up the synchronization job becomes very simple Shortcuts can be copied or moved from one computer to another with ease (assuming the original files are also copied/moved).  There’s no need to find the target of the shortcut all over again on the second computer Ditto for linked files (e.g Word documents that link to data in a separate Excel file), playlists, and any files that reference the exact file locations of other files. This applies even to the drive that your files are stored on.  If your files are stored on C: on one computer, make sure they’re stored on C: on all your computers.  Otherwise all your shortcuts, playlists and linked files will stop working! Tip #13.  Create an “Inbox” Folder Create yourself a folder where you store all files that you’re currently working on, or that you haven’t gotten around to filing yet.  You can think of this folder as your “to-do” list.  You can call it “Inbox” (making it the same metaphor as your email system), or “Work”, or “To-Do”, or “Scratch”, or whatever name makes sense to you.  It doesn’t matter what you call it – just make sure you have one! Once you have finished working on a file, you then move it from the “Inbox” to its correct location within your organizational structure. You may want to use your Desktop as this “Inbox” folder.  Rightly or wrongly, most people do.  It’s not a bad place to put such files, but be careful:  If you do decide that your Desktop represents your “to-do” list, then make sure that no other files find their way there.  
In other words, make sure that your “Inbox”, wherever it is, Desktop or otherwise, is kept free of junk – stray files that don’t belong there. So where should you put this folder, which, almost by definition, lives outside the structure of the rest of your filing system?  Well, first and foremost, it has to be somewhere handy.  This will be one of your most-visited folders, so convenience is key.  Putting it on the Desktop is a great option – especially if you don’t have any other folders on your Desktop:  the folder then becomes supremely easy to find in Windows Explorer: You would then create shortcuts to this folder in convenient spots all over your computer (“Favorite Links”, “Quick Launch”, etc). Tip #14.  Ensure You have Only One “Inbox” Folder Once you’ve created your “Inbox” folder, don’t use any other folder location as your “to-do list”.  Throw every incoming or created file into the Inbox folder as you create/receive it.  This keeps the rest of your computer pristine and free of randomly created or downloaded junk.  The last thing you want to be doing is checking multiple folders to see all your current tasks and projects.  Gather them all together into one folder. Here are some tips to help ensure you only have one Inbox: Set the default “save” location of all your programs to this folder. Set the default “download” location for your browser to this folder. If this folder is not your desktop (recommended) then also see if you can make a point of not putting “to-do” files on your desktop.  This keeps your desktop uncluttered and Zen-like: (the Inbox folder is in the bottom-right corner) Tip #15.  Be Vigilant about Clearing Your “Inbox” Folder This is one of the keys to staying organized.  If you let your “Inbox” overflow (i.e. allow there to be more than, say, 30 files or folders in there), then you’re probably going to start feeling like you’re overwhelmed:  You’re not keeping up with your to-do list.  Once your Inbox gets beyond a certain point (around 30 files, studies have shown), then you’ll simply start to avoid it.  You may continue to put files in there, but you’ll be scared to look at it, fearing the “out of control” feeling that all overworked, chaotic or just plain disorganized people regularly feel. So, here’s what you can do: Visit your Inbox/to-do folder regularly (at least five times per day). Scan the folder regularly for files that you have completed working on and are ready for filing.  File them immediately. Make it a source of pride to keep the number of files in this folder as small as possible.  If you value peace of mind, then make the emptiness of this folder one of your highest (computer) priorities If you know that a particular file has been in the folder for more than, say, six weeks, then admit that you’re not actually going to get around to processing it, and move it to its final resting place. Tip #16.  File Everything Immediately, and Use Shortcuts for Your Active Projects As soon as you create, receive or download a new file, store it away in its “correct” folder immediately.  Then, whenever you need to work on it (possibly straight away), create a shortcut to it in your “Inbox” (“to-do”) folder or your desktop.  That way, all your files are always in their “correct” locations, yet you still have immediate, convenient access to your current, active files.  When you finish working on a file, simply delete the shortcut. Ideally, your “Inbox” folder – and your Desktop – should contain no actual files or folders.  They should simply contain shortcuts. 
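If you’re comfortable with the command line, the Tip #16 workflow can be sketched in two commands.  This is only an illustration – the folder names below are hypothetical examples, and mklink (a cousin of the directory symbolic links covered in the next tip) creates a symbolic link rather than a true .lnk shortcut; the right-drag method from Tip #9 gives you a real shortcut, and mklink usually needs a command prompt opened as administrator:

rem File the document in its "correct" location first...
move "%USERPROFILE%\Desktop\Inbox\2009 Tax Return.doc" "C:\Files\Clients\John Smith\"

rem ...then leave a link to it in the Inbox for as long as you're working on it
mklink "%USERPROFILE%\Desktop\Inbox\2009 Tax Return.doc" "C:\Files\Clients\John Smith\2009 Tax Return.doc"

When you’ve finished with the file, delete the link from the Inbox – the original stays safely filed where it belongs.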
Tip #17.  Use Directory Symbolic Links (or Junctions) to Maintain One Unified Folder Structure Using this tip, we can get around a potential hiccup that we can run into when creating our organizational structure – the issue of having more than one drive on our computer (C:, D:, etc).  We might have files we need to store on the D: drive for space reasons, and yet want to base our organized folder structure on the C: drive (or vice-versa). Your chosen organizational structure may dictate that all your files must be accessed from the C: drive (for example, the root folder of all your files may be something like C:\Files).  And yet you may still have a D: drive and wish to take advantage of the hundreds of spare Gigabytes that it offers.  Did you know that it’s actually possible to store your files on the D: drive and yet access them as if they were on the C: drive?  And no, we’re not talking about shortcuts here (although the concept is very similar). By using the shell command mklink, you can essentially take a folder that lives on one drive and create an alias for it on a different drive (you can do lots more than that with mklink – for a full rundown on this program’s capabilities, see our dedicated article).  These aliases are called directory symbolic links (a close relative of the older junction points).  You can think of them as “virtual” folders.  They function exactly like regular folders, except they’re physically located somewhere else. For example, you may decide that your entire D: drive contains your complete organizational file structure, but that you need to reference all those files as if they were on the C: drive, under C:\Files.  If that was the case, you could create C:\Files as a directory symbolic link – a link to D:, as follows: mklink /d c:\files d:\ Or it may be that the only files you wish to store on the D: drive are your movie collection.  You could locate all your movie files in the root of your D: drive, and then link it to C:\Files\Media\Movies, as follows: mklink /d c:\files\media\movies d:\ (Needless to say, you must run these commands from a command prompt – click the Start button, type cmd and press Enter.  On most systems mklink needs an elevated prompt, so you may have to right-click cmd and choose Run as administrator instead.) Tip #18. Customize Your Folder Icons This is not strictly speaking an organizational tip, but having unique icons for each folder does allow you to more quickly visually identify which folder is which, and thus saves you time when you’re finding files.  An example is below (from my folder that contains all files downloaded from the Internet): To learn how to change your folder icons, please refer to our dedicated article on the subject. Tip #19.  Tidy Your Start Menu The Windows Start Menu is usually one of the messiest parts of any Windows computer.  Every program you install seems to adopt a completely different approach to placing icons in this menu.  Some simply put a single program icon.  Others create a folder based on the name of the software.  And others create a folder based on the name of the software manufacturer.  It’s chaos, and can make it hard to find the software you want to run. Thankfully we can avoid this chaos with useful operating system features like Quick Launch, the Superbar or pinned start menu items. Even so, it would make a lot of sense to get into the guts of the Start Menu itself and give it a good once-over.  All you really need to decide is how you’re going to organize your applications.  A structure based on the purpose of the application is an obvious candidate.
Below is an example of one such structure: In this structure, Utilities means software whose job it is to keep the computer itself running smoothly (configuration tools, backup software, Zip programs, etc).  Applications refers to any productivity software that doesn’t fit under the headings Multimedia, Graphics, Internet, etc. In case you’re not aware, every icon in your Start Menu is a shortcut and can be manipulated like any other shortcut (copied, moved, deleted, etc). With the Windows Start Menu (all versions of Windows), Microsoft has decided that there should be two parallel folder structures to store your Start Menu shortcuts.  One for you (the logged-in user of the computer) and one for all users of the computer.  Having two parallel structures is often redundant:  If you are the only user of the computer, the second structure serves no purpose.  Even if you have several users that regularly log into the computer, most of your installed software will need to be made available to all users, and should thus be moved out of the “just you” version of the Start Menu and into the “all users” area. To take control of your Start Menu, so you can start organizing it, you’ll need to know how to access the actual folders and shortcut files that make up the Start Menu (both versions of it).  To find these folders and files, click the Start button and then right-click on the All Programs text (Windows XP users should right-click on the Start button itself): The Open option refers to the “just you” version of the Start Menu, while the Open All Users option refers to the “all users” version.  (A command-line way to jump straight to these folders is sketched at the end of Tip #20.)  Click on the one you want to organize. A Windows Explorer window then opens with your chosen version of the Start Menu selected.  From there it’s easy.  Double-click on the Programs folder and you’ll see all your folders and shortcuts.  Now you can delete/rename/move until it’s just the way you want it. Note:  When you’re reorganizing your Start Menu, you may want to have two Explorer windows open at the same time – one showing the “just you” version and one showing the “all users” version.  You can drag-and-drop between the windows. Tip #20.  Keep Your Start Menu Tidy Once you have a perfectly organized Start Menu, try to be a little vigilant about keeping it that way.  Every time you install a new piece of software, the icons that get created will almost certainly violate your organizational structure. So to keep your Start Menu pristine and organized, make sure you do the following whenever you install a new piece of software: Check whether the software was installed into the “just you” area of the Start Menu, or the “all users” area, and then move it to the correct area. Remove all the unnecessary icons (like the “Read me” icon, the “Help” icon (you can always open the help from within the software itself when it’s running), the “Uninstall” icon, the link(s) to the manufacturer’s website, etc) Rename the main icon(s) of the software to something brief that makes sense to you.  For example, you might like to rename Microsoft Office Word 2010 to simply Word Move the icon(s) into the correct folder based on your Start Menu organizational structure And don’t forget:  when you uninstall a piece of software, the software’s uninstall routine is no longer going to be able to remove the software’s icon from the Start Menu (because you moved and/or renamed it), so you’ll need to remove that icon manually.
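As promised earlier, here is a command-line way to jump straight to the two Start Menu folders instead of going through the right-click menu.  This is just a sketch – the paths below are the defaults on Windows Vista and Windows 7 and may differ on other versions or customized systems:

rem The "just you" version of the Start Menu
explorer "%APPDATA%\Microsoft\Windows\Start Menu\Programs"

rem The "all users" version of the Start Menu
explorer "%PROGRAMDATA%\Microsoft\Windows\Start Menu\Programs"

Run each line from a command prompt (or paste the quoted path into the Run dialog), and you can organize the two versions side by side, exactly as described in the note above.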
Tip #21.  Tidy C:\ The root of your C: drive (C:\) is a common dumping ground for files and folders – both by the users of your computer and by the software that you install on your computer.  It can become a mess. There’s almost no software these days that requires itself to be installed in C:\.  99% of the time it can and should be installed into C:\Program Files.  And as for your own files, well, it’s clear that they can (and almost always should) be stored somewhere else. In an ideal world, your C:\ folder should look like this (on Windows 7): Note that there are some system files and folders in C:\ that are usually and deliberately “hidden” (such as the Windows virtual memory file pagefile.sys, the boot loader file bootmgr, and the System Volume Information folder).  Hiding these files and folders is a good idea, as they need to stay where they are and almost never need to be opened or even seen by you, the user.  Hiding them prevents you from accidentally messing with them, and enhances your sense of order and well-being when you look at your C: drive folder. Tip #22.  Tidy Your Desktop The Desktop is probably the most abused part of a Windows computer (from an organization point of view).  It usually serves as a dumping ground for all incoming files, as well as holding icons to oft-used applications, plus some regularly opened files and folders.  It often ends up becoming an uncontrolled mess.  See if you can avoid this.  Here’s why… Application icons (Word, Internet Explorer, etc) are often found on the Desktop, but it’s unlikely that this is the optimum place for them.  The “Quick Launch” bar (or the Superbar in Windows 7) is always visible and so represents a perfect location to put your icons.  You’ll only be able to see the icons on your Desktop when all your programs are minimized.  It might be time to get your application icons off your desktop… You may have decided that the Inbox/To-do folder on your computer (see tip #13, above) should be your Desktop.  If so, then enough said.  Simply be vigilant about clearing it and preventing it from being polluted by junk files (see tip #15, above).  On the other hand, if your Desktop is not acting as your “Inbox” folder, then there’s no reason for it to have any data files or folders on it at all, except perhaps a couple of shortcuts to often-opened files and folders (either ongoing or current projects).  Everything else should be moved to your “Inbox” folder. In an ideal world, it might look like this: Tip #23.  Move Permanent Items on Your Desktop Away from the Top-Left Corner When files/folders are dragged onto your desktop in a Windows Explorer window, or when shortcuts are created on your Desktop from Internet Explorer, those icons are always placed in the top-left corner – or as close as they can get.  If you have other files, folders or shortcuts that you keep on the Desktop permanently, then it’s a good idea to separate these permanent icons from the transient ones, so that you can quickly identify which ones the transients are.  An easy way to do this is to move all your permanent icons to the right-hand side of your Desktop.  That should keep them separated from incoming items. Tip #24.  Synchronize If you have more than one computer, you’ll almost certainly want to share files between them.  If the computers are permanently attached to the same local network, then there’s no need to store multiple copies of any one file or folder – shortcuts will suffice.
However, if the computers are not always on the same network, then you will at some point need to copy files between them.  For files that need to permanently live on both computers, the ideal way to do this is to synchronize the files, as opposed to simply copying them. We only have room here to write a brief summary of synchronization, not a full article.  In short, there are several different types of synchronization: Where the contents of one folder are accessible anywhere, such as with Dropbox Where the contents of any number of folders are accessible anywhere, such as with Windows Live Mesh Where any files or folders from anywhere on your computer are synchronized with exactly one other computer, such as with the Windows “Briefcase”, Microsoft SyncToy, or (much more powerful, yet still free) SyncBack from 2BrightSparks.  This only works when both computers are on the same local network, at least temporarily. A great advantage of synchronization solutions is that once you’ve got it configured the way you want it, then the sync process happens automatically, every time.  Click a button (or schedule it to happen automatically) and all your files are automagically put where they’re supposed to be. If you maintain the same file and folder structure on both computers, then you can also sync files that depend upon the correct location of other files, like shortcuts, playlists and office documents that link to other office documents, and the synchronized files still work on the other computer! Tip #25.  Hide Files You Never Need to See If you have your files well organized, you will often be able to tell if a file is out of place just by glancing at the contents of a folder (for example, it should be pretty obvious if you look in a folder that contains all the MP3s from one music CD and see a Word document in there).  This is a good thing – it allows you to determine if there are files out of place with a quick glance.  Yet sometimes there are files in a folder that seem out of place but actually need to be there, such as the “folder art” JPEGs in music folders, and various files in the root of the C: drive.  If such files never need to be opened by you, then a good idea is to simply hide them.  Then, the next time you glance at the folder, you won’t have to remember whether that file was supposed to be there or not, because you won’t see it at all! To hide a file, simply right-click on it and choose Properties: Then simply tick the Hidden tick-box:   Tip #26.  Keep Every Setup File These days most software is downloaded from the Internet.  Whenever you download a piece of software, keep it.  You never know when you’ll need to reinstall the software. Further, keep with it an Internet shortcut that links back to the website where you originally downloaded it, in case you ever need to check for updates. See tip #33 below for a full description of the excellence of organizing your setup files. Tip #27.  Try to Minimize the Number of Folders that Contain Both Files and Sub-folders Some of the folders in your organizational structure will contain only files.  Others will contain only sub-folders.  And you will also have some folders that contain both files and sub-folders.  You will notice slight improvements in how long it takes you to locate a file if you try to avoid this third type of folder.  It’s not always possible, of course – you’ll always have some of these folders, but see if you can avoid it.
One way of doing this is to take all the leftover files that didn’t end up getting stored in a sub-folder and create a special “Miscellaneous” or “Other” folder for them. Tip #28.  Starting a Filename with an Underscore Brings it to the Top of a List Further to the previous tip, if you name that “Miscellaneous” or “Other” folder in such a way that its name begins with an underscore “_”, then it will appear at the top of the list of files/folders. The screenshot below is an example of this.  Each folder in the list contains a set of digital photos.  The folder at the top of the list, _Misc, contains random photos that didn’t deserve their own dedicated folder: Tip #29.  Clean Up those CD-ROMs and (shudder!) Floppy Disks Have you got a pile of CD-ROMs stacked on a shelf of your office?  Old photos, or files you archived off onto CD-ROM (or even worse, floppy disks!) because you didn’t have enough disk space at the time?  In the meantime have you upgraded your computer and now have 500 Gigabytes of space you don’t know what to do with?  If so, isn’t it time you tidied up that stack of disks and filed them into your gorgeous new folder structure? So what are you waiting for?  Bite the bullet, copy them all back onto your computer, file them in their appropriate folders, and then back the whole lot up onto a shiny new 1000Gig external hard drive! Useful Folders to Create This next section suggests some useful folders that you might want to create within your folder structure.  I’ve personally found them to be indispensable. The first three are all about convenience – handy folders to create and then put somewhere that you can always access instantly.  For each one, it’s not so important where the actual folder is located, but it’s very important where you put the shortcut(s) to the folder.  You might want to locate the shortcuts: On your Desktop In your “Quick Launch” area (or pinned to your Windows 7 Superbar) In your Windows Explorer “Favorite Links” area Tip #30.  Create an “Inbox” (“To-Do”) Folder This has already been mentioned in depth (see tip #13), but we wanted to reiterate its importance here.  This folder contains all the recently created, received or downloaded files that you have not yet had a chance to file away properly, and it also may contain files that you have yet to process.  In effect, it becomes a sort of “to-do list”.  It doesn’t have to be called “Inbox” – you can call it whatever you want. Tip #31.  Create a Folder where Your Current Projects are Collected Rather than going hunting for them all the time, or dumping them all on your desktop, create a special folder where you put links (or work folders) for each of the projects you’re currently working on. You can locate this folder in your “Inbox” folder, on your desktop, or anywhere at all – just so long as there’s a way of getting to it quickly, such as putting a link to it in Windows Explorer’s “Favorite Links” area: Tip #32.  Create a Folder for Files and Folders that You Regularly Open You will always have a few files that you open regularly, whether it be a spreadsheet of your current accounts, or a favorite playlist.  These are not necessarily “current projects”, rather they’re simply files that you always find yourself opening.  Typically such files would be located on your desktop (or even better, shortcuts to those files).  Why not collect all such shortcuts together and put them in their own special folder? 
As with the “Current Projects” folder (above), you would want to locate that folder somewhere convenient.  Below is an example of a folder called “Quick links”, with about seven files (shortcuts) in it, that is accessible through the Windows Quick Launch bar: See tip #37 below for a full explanation of the power of the Quick Launch bar. Tip #33.  Create a “Set-ups” Folder A typical computer has dozens of applications installed on it.  For each piece of software, there are often many different pieces of information you need to keep track of, including: The original installation setup file(s).  This can be anything from a simple 100Kb setup.exe file you downloaded from a website, all the way up to a 4Gig ISO file that you copied from a DVD-ROM that you purchased. The home page of the software manufacturer (in case you need to look up something on their support pages, their forum or their online help) The page containing the download link for your actual file (in case you need to re-download it, or download an upgraded version) The serial number Your proof-of-purchase documentation Any other template files, plug-ins, themes, etc that also need to get installed For each piece of software, it’s a great idea to gather all of these files together and put them in a single folder.  The folder can be the name of the software (plus possibly a very brief description of what it’s for – in case you can’t remember what the software does based on its name).  Then you would gather all of these folders together into one place, and call it something like “Software” or “Setups”. If you have enough of these folders (I have several hundred, being a geek, collected over 20 years), then you may want to further categorize them.  My own categorization structure is based on “platform” (operating system): The last seven folders each represent one platform/operating system, while _Operating Systems contains set-up files for installing the operating systems themselves.  _Hardware contains ROMs for hardware I own, such as routers. Within the Windows folder (above), you can see the beginnings of the vast library of software I’ve compiled over the years: An example of a typical application folder looks like this: Tip #34.  Have a “Settings” Folder We all know that our documents are important.  So are our photos and music files.  We save all of these files into folders, and then locate them afterwards and double-click on them to open them.  But there are many files that are important to us that can’t be saved into folders, and then searched for and double-clicked later on.  These files certainly contain important information that we need, but are often created internally by an application, and saved wherever that application feels is appropriate. A good example of this is the “PST” file that Outlook creates for us and uses to store all our emails, contacts, appointments and so forth.  Another example would be the collection of Bookmarks that Firefox stores on your behalf. And yet another example would be the customized settings and configuration files of all our software.  Granted, most Windows programs store their configuration in the Registry, but there are still many programs that use configuration files to store their settings. Imagine if you lost all of the above files!  And yet, when people are backing up their computers, they typically only back up the files they know about – those that are stored in the “My Documents” folder, etc.
If they had a hard disk failure or their computer was lost or stolen, their backup files would not include some of the most vital files they owned.  Also, when migrating to a new computer, it’s vital to ensure that these files make the journey. It can be a very useful idea to create yourself a folder to store all your “settings” – files that are important to you but which you never actually search for by name and double-click on to open them.  Otherwise, next time you go to set up a new computer just the way you want it, you’ll need to spend hours recreating the configuration of your previous computer! So how do we get our important files into this folder?  Well, we have a few options: Some programs (such as Outlook and its PST files) allow you to place these files wherever you want.  If you delve into the program’s options, you will find a setting somewhere that controls the location of the important settings files (or “personal storage” – PST – when it comes to Outlook) Some programs do not allow you to change such locations in any easy way, but if you get into the Registry, you can sometimes find a registry key that refers to the location of the file(s).  Simply move the file into your Settings folder and adjust the registry key to refer to the new location. Some programs stubbornly refuse to allow their settings files to be placed anywhere other than where they stipulate.  When faced with programs like these, you have three choices:  (1) You can ignore those files, (2) You can copy the files into your Settings folder (let’s face it – settings don’t change very often), or (3) you can use synchronization software, such as the Windows Briefcase, to make synchronized copies of all your files in your Settings folder.  All you then have to do is to remember to run your sync software periodically (perhaps just before you run your backup software!). There are some other things you may decide to locate inside this new “Settings” folder: Exports of registry keys (from the many applications that store their configurations in the Registry).  This is useful for backup purposes or for migrating to a new computer Notes you’ve made about all the specific customizations you have made to a particular piece of software (so that you’ll know how to do it all again on your next computer) Shortcuts to webpages that detail how to tweak certain aspects of your operating system or applications so they are just the way you like them (such as how to remove the words “Shortcut to” from the beginning of newly created shortcuts).  In other words, you’d want to create shortcuts to half the pages on the How-To Geek website! Here’s an example of a “Settings” folder: Windows Features that Help with Organization This section details some of the features of Microsoft Windows that are a boon to anyone hoping to stay optimally organized. Tip #35.  Use the “Favorite Links” Area to Access Oft-Used Folders Once you’ve created your great new filing system, work out which folders you access most regularly, or which serve as great starting points for locating the rest of the files in your folder structure, and then put links to those folders in your “Favorite Links” area of the left-hand side of the Windows Explorer window (simply called “Favorites” in Windows 7):   Some ideas for folders you might want to add there include: Your “Inbox” folder (or whatever you’ve called it) – most important! The base of your filing structure (e.g.
C:\Files) A folder containing shortcuts to often-accessed folders on other computers around the network (shown above as Network Folders) A folder containing shortcuts to your current projects (unless that folder is in your “Inbox” folder) Getting folders into this area is very simple – just locate the folder you’re interested in and drag it there! Tip #36.  Customize the Places Bar in the File/Open and File/Save Boxes Consider the screenshot below: The highlighted icons (collectively known as the “Places Bar”) can be customized to refer to any folder location you want, allowing instant access to any part of your organizational structure. Note:  These File/Open and File/Save boxes have been superseded by new versions that use the Windows Vista/Windows 7 “Favorite Links”, but the older versions (shown above) are still used by a surprisingly large number of applications. The easiest way to customize these icons is to use the Group Policy Editor, but not everyone has access to this program.  If you do, open it up and navigate to: User Configuration > Administrative Templates > Windows Components > Windows Explorer > Common Open File Dialog If you don’t have access to the Group Policy Editor, then you’ll need to get into the Registry.  Navigate to: HKEY_CURRENT_USER \ Software \ Microsoft  \ Windows \ CurrentVersion \ Policies \ comdlg32 \ Placesbar It should then be easy to make the desired changes.  Log off and log on again to allow the changes to take effect. Tip #37.  Use the Quick Launch Bar as a Application and File Launcher That Quick Launch bar (to the right of the Start button) is a lot more useful than people give it credit for.  Most people simply have half a dozen icons in it, and use it to start just those programs.  But it can actually be used to instantly access just about anything in your filing system: For complete instructions on how to set this up, visit our dedicated article on this topic. Tip #38.  Put a Shortcut to Windows Explorer into Your Quick Launch Bar This is only necessary in Windows Vista and Windows XP.  The Microsoft boffins finally got wise and added it to the Windows 7 Superbar by default. Windows Explorer – the program used for managing your files and folders – is one of the most useful programs in Windows.  Anyone who considers themselves serious about being organized needs instant access to this program at any time.  A great place to create a shortcut to this program is in the Windows XP and Windows Vista “Quick Launch” bar: To get it there, locate it in your Start Menu (usually under “Accessories”) and then right-drag it down into your Quick Launch bar (and create a copy). Tip #39.  Customize the Starting Folder for Your Windows 7 Explorer Superbar Icon If you’re on Windows 7, your Superbar will include a Windows Explorer icon.  Clicking on the icon will launch Windows Explorer (of course), and will start you off in your “Libraries” folder.  Libraries may be fine as a starting point, but if you have created yourself an “Inbox” folder, then it would probably make more sense to start off in this folder every time you launch Windows Explorer. To change this default/starting folder location, then first right-click the Explorer icon in the Superbar, and then right-click Properties:Then, in Target field of the Windows Explorer Properties box that appears, type %windir%\explorer.exe followed by the path of the folder you wish to start in.  
For example: %windir%\explorer.exe C:\Files If that folder happened to be on the Desktop (and called, say, “Inbox”), then you would use the following cleverness: %windir%\explorer.exe shell:desktop\Inbox Then click OK and test it out. Tip #40.  Ummmmm…. No, that’s it.  I can’t think of another one.  That’s all of the tips I can come up with.  I only created this one because 40 is such a nice round number… Case Study – An Organized PC To finish off the article, I have included a few screenshots of my (main) computer (running Vista).  The aim here is twofold: To give you a sense of what it looks like when the above, sometimes abstract, tips are applied to a real-life computer, and To offer some ideas about folders and structure that you may want to steal to use on your own PC. Let’s start with the C: drive itself.  Very minimal.  All my files are contained within C:\Files.  I’ll confine the rest of the case study to this folder: That folder contains the following: Mark: My personal files VC: My business (Virtual Creations, Australia) Others contains files created by friends and family Data contains files from the rest of the world (can be thought of as “public” files, usually downloaded from the Net) Settings is described above in tip #34 The Data folder contains the following sub-folders: Audio:  Radio plays, audio books, podcasts, etc Development:  Programmer and developer resources, sample source code, etc (see below) Humour:  Jokes, funnies (those emails that we all receive) Movies:  Downloaded and ripped movies (all legal, of course!), their scripts, DVD covers, etc. Music:  (see below) Setups:  Installation files for software (explained in full in tip #33) System:  (see below) TV:  Downloaded TV shows Writings:  Books, instruction manuals, etc (see below) The Music folder contains the following sub-folders: Album covers:  JPEG scans Guitar tabs:  Text files of guitar sheet music Lists:  e.g. “Top 1000 songs of all time” Lyrics:  Text files MIDI:  Electronic music files MP3 (representing 99% of the Music folder):  MP3s, either ripped from CDs or downloaded, sorted by artist/album name Music Video:  Video clips Sheet Music:  usually PDFs The Data\Writings folder contains the following sub-folders: (all pretty self-explanatory) The Data\Development folder contains the following sub-folders: Again, all pretty self-explanatory (if you’re a geek) The Data\System folder contains the following sub-folders: These are usually themes, plug-ins and other downloadable program-specific resources. The Mark folder contains the following sub-folders: From Others:  Usually letters that other people (friends, family, etc) have written to me For Others:  Letters and other things I have created for other people Green Book:  None of your business Playlists:  M3U files that I have compiled of my favorite songs (plus one M3U playlist file for every album I own) Writing:  Fiction, philosophy and other musings of mine Mark Docs:  Shortcut to C:\Users\Mark Settings:  Shortcut to C:\Files\Settings\Mark The Others folder contains the following sub-folders: The VC (Virtual Creations, my business – I develop websites) folder contains the following sub-folders: And again, all of those are pretty self-explanatory. Conclusion These tips have saved my sanity and helped keep me a productive geek, but what about you? What tips and tricks do you have to keep your files organized?  Please share them with us in the comments.  
Come on, don’t be shy…

    Read the article

  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
Abstract   Reference architecture provides needed architectural information that can be provided in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates the IT systems, based on Reference Architecture goals, principles, and standards. It helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture provides customers with the flexibility to view bundled service bills online when multiple services are provisioned. It provides real-time, flexible billing and charging systems, to handle complex promotions, discounts, and settlements with multiple parties. This paper attempts to describe the Reference Architecture for the Telecom Enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes a generic reference architecture for telecom enterprises and moves on to explain how to achieve an Enterprise Reference Architecture by using SOA.   Introduction   A Reference Architecture provides a methodology, set of practices, template, and standards based on a set of successful solutions implemented earlier. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns that describe observations in a number of successful implementations. It serves as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point or the point of comparison for the various departments/business entities of a company, or for the various companies within an enterprise. It provides multiple views for multiple stakeholders.   Major artifacts of the Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.   Purpose of Reference Architecture   In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel, as their peers in other organizations or even the same organization have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.
Reference architecture provides missing architectural information that can be provided in advance to project team members to enable consistent architectural best practices.   Enterprise Reference Architecture helps an enterprise to achieve the following at the abstract level:   ·       Reference architecture is more of a communication channel to an enterprise ·       Helps the business owners to actualize their strategies, vision, objectives, and principles. ·       Evaluates the IT systems based on Reference Architecture Principles ·       Reduces IT spending through increasing functionality, availability, scalability, etc ·       A Real-time Integration Model helps to reduce the latency of data updates ·       Is used to define a single source of information ·       Provides a clear view on how to manage information and security ·       Defines the policy around the data ownership, product boundaries, etc. ·       Helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets ·       Has a shorter implementation time and cost   Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensure that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).     Common pitfalls for Telecom Service Providers   Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments/experiences with telecom players, we have come across the following observations – some of these indicate a lack of maturity of the telecom service provider:   ·       In markets that are growing and not so mature, it has been observed that telcos have a significant amount of in-house or home-grown applications. In some of these markets, the growth has been so rapid that IT has been unable to cope with business demands. Telcos have shown a tendency to come up with workarounds in their IT applications so as to meet business needs. ·       Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications. ·       Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality. ·       Applications face interoperability issues with other applications in the operator's landscape. Integrating a new application or network element requires considerable effort on the part of the other applications. ·       Application boundaries are not clear, and functionality that is not in the initial scope of that application gets pushed onto it. This results in the development of multiple small applications without proper boundaries. ·       Usage of legacy OSS/BSS systems, and poor integration across multiple COTS products and internal systems. Most of the integrations are developed on an ad-hoc basis using point-to-point integration. ·       Redundancy of the business functions in different applications ·       Fragmented data across the different applications and no integrated view of the strategic data ·       Many performance issues due to the use of complex integration across OSS and BSS systems   However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum.
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These could be a good starting point for telcos to clean up their enterprise landscape.   Industry Trends in Telecom Reference Architecture   Telecom reference architectures are evolving rapidly because telcos are facing business and IT challenges.   “The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds – perhaps thousands – of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail.” – Martin Creaner, President & CTO, TM Forum.   The following trends have been observed in telecom reference architecture:   ·       Transformation of business structures to align with customer requirements ·       Adoption of more Internet-like technical architectures. The Web 2.0 concept is increasingly being used. ·       Virtualization of the traditional operations support system (OSS) ·       Adoption of SOA to support development of IP-based services ·       Adoption of frameworks like Service Delivery Platforms (SDPs) and IP Multimedia Subsystem (IMS) to enable seamless deployment of various services over fixed and mobile networks ·       Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products ·       Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products   Drivers of Reference Architecture   The drivers of the Reference Architecture are Reference Architecture Goals, Principles, and Enterprise Vision and Telecom Transformation. The details are depicted in the diagram below. Figure 1. Drivers for Reference Architecture Today’s telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems).
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems and implications for the service provider’s organizational requirements and structure.   Telecom reference architectures are today expected to:   ·       Integrate voice, messaging, email and other VAS over fixed and mobile networks, and back-end systems ·       Be able to provision multiple services and service bundles ·       Deliver converged voice, video and data services ·       Leverage the existing Network Infrastructure ·       Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties. ·       Support charging of advanced data services such as VoIP, On-Demand Services (e.g. Video), IMS/SIP Services, Mobile Money, Content Services and IPTV. ·       Help in faster deployment of new services ·       Serve as an effective platform for collaboration between network IT and business organizations ·       Harness the potential of converging technology, networks, devices and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP) network ·       Ensure better service delivery and zero revenue leakage through real-time balance and credit management ·       Lower operating costs to drive profitability   Enterprise Reference Architecture   The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper attempts to describe the Reference Architecture for Telecom Application Usage and how to achieve the Enterprise Level Reference Architecture using SOA.   • Telecom Reference Architecture • Enterprise SOA based Reference Architecture   Telecom Reference Architecture   TeleManagement Forum’s New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:   ·       The enhanced Telecom Operations Map (eTOM) is a business process framework. ·       The Shared Information/Data (SID) model provides a comprehensive information framework that may be specialized for the needs of a particular organization. ·       The Telecom Application Map (TAM) is an application framework to depict the functional footprint of applications, relative to the horizontal processes within eTOM. ·       The Technology Neutral Architecture (TNA) is an integrated framework. TNA is an architecture that is sustainable through technology changes.   NGOSS Architecture Standards are:   ·       Centralized data ·       Loosely coupled distributed systems ·       Application components/re-use  ·       A technology-neutral system framework with technology-specific implementations ·       Interoperability to service provider data/processes ·       Allows more re-use of business components across multiple business scenarios ·       Workflow automation   The traditional operator systems architecture consists of four layers:   ·       Business Support System (BSS) layer, with focus toward customers and business partners. Manages order, subscriber, pricing, rating, and billing information.
·       Operations Support System (OSS) layer, built around product, service, and resource inventories. ·       Networks layer – consists of Network elements and 3rd Party Systems. ·       Integration Layer – to maximize application communication and overall solution flexibility.   Reference architecture for telecom enterprises is depicted below. Figure 2. Telecom Reference Architecture   The major building blocks of any Telecom Service Provider architecture are as follows:   1. Customer Relationship Management   CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering, and service activation, customer care and support, proactive campaigns, cross sell/up sell, and retention/loyalty.   CRM also includes the collection of customer information and its application to personalize, customize, and integrate delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.   The key functionalities related to Customer Relationship Management are   ·       Manage the end-to-end lifecycle of a customer request for products. ·       Create and manage customer profiles. ·       Manage all interactions with customers – inquiries, requests, and responses. ·       Provide updates to Billing and other southbound systems on customer/account related updates such as customer/account creation, deletion, modification, request bills, final bill, duplicate bills, credit limits through Middleware. ·       Work with Order Management System, Product, and Service Management components within CRM. ·       Manage customer preferences – Involve all the touch points and channels to the customer, including contact center, retail stores, dealers, self service, and field service, as well as via any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.). ·       Support a single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.   CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. Customer requests are sent via fulfillment/provisioning to the billing system for order processing.   2.
2. Billing and Revenue Management

Billing and Revenue Management handles the collection of appropriate usage records and the production of timely and accurate bills – providing pre-bill usage information and billing to customers, processing their payments, and performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.

The key functionalities provided by these applications are:

·       Ensure that enterprise revenue is billed and invoices are delivered appropriately to customers.
·       Manage customers' billing accounts, process their payments, perform payment collections, and monitor the status of the account balance.
·       Ensure the timely and effective fulfillment of all customer bill inquiries and complaints.
·       Collect the usage records from mediation and ensure appropriate rating and discounting of all usage and pricing.
·       Support revenue sharing and split charging, where usage is guided to an account other than that of the service consumer.
·       Support prepaid and postpaid rating.
·       Send notifications on approaching or exceeding the usage thresholds enforced by the subscribed offer and/or set up by the customer.
·       Support prepaid, postpaid, and hybrid (where some services are prepaid and the rest are postpaid) customers, and conversion from postpaid to prepaid and vice versa.
·       Support different billing function requirements such as charge prorating, promotions, discounts, adjustments, waivers, write-offs, accounts receivable, GL interface, late payment fees, credit control, dunning, account or service suspension, re-activation, expiry, termination, contract violation penalties, etc.
·       Initiate direct debit to collect payment against an outstanding invoice.
·       Send notifications to Middleware on different events; for example, payment receipt, pre-suspension, threshold exceeded, etc.

Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can get usage details directly from network elements.

3. Mediation

Mediation systems transform/translate raw or native Usage Data Records into a general format that is acceptable to billing for rating purposes.

The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution:

·       Collects Usage Data Records from different data sources – such as network elements, routers, and servers – via different protocols and interfaces.
·       Processes Usage Data Records according to each source format.
·       Validates Usage Data Records from each source.
·       Segregates Usage Data Records coming from each source into multiple streams, based on the segregation requirements of the end applications.
·       Aggregates Usage Data Records from different sources based on the applicable aggregation rules, if any.
·       Consolidates multiple Usage Data Records from each source.
·       Delivers formatted Usage Data Records to different end applications such as Billing, Interconnect, Fraud Management, etc.
·       Generates audit trail for incoming Usage Data Records and keeps track of all the Usage Data Records at various stages of mediation process. ·       Checks duplicate Usage Data Records across files for a given time window.   4. Fulfillment   This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order, and ensures completion on time, as well as ensuring a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer update on customer order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.   The key functionalities provided by these applications are   ·       Issuing new customer orders, modifying open customer orders, or canceling open customer orders; ·       Verifying whether specific non-standard offerings sought by customers are feasible and supportable; ·       Checking the credit worthiness of customers as part of the customer order process; ·       Testing the completed offering to ensure it is working correctly; ·       Updating of the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled; ·       Assigning and tracking customer provisioning activities; ·       Managing customer provisioning jeopardy conditions; and ·       Reporting progress on customer orders and other processes to customer.   These applications typically get orders from CRM systems. They interact with network elements and billing systems for fulfillment of orders.   5. Enterprise Management   This process area includes those processes that manage enterprise-wide activities and needs, or have application within the enterprise as a whole. They encompass all business management processes that   ·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;   ·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;   ·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.     (i) Enterprise Risk Management:   Enterprise Risk Management focuses on assuring that risks and threats to the enterprise value and/or reputation are identified, and appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts. 
Two key areas covered in Risk Management by telecom operators are:   ·       Revenue Assurance: Revenue assurance system will be responsible for identifying revenue loss scenarios across components/systems, and will help in rectifying the problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution. o   Identify all usage information dropped when networks are being upgraded. o   Interconnect bill verification. o   Identify where services are routinely provisioned but never billed. o   Identify poor sales policies that are intensifying collections problems. o   Find leakage where usage is sent to error bucket and never billed for. o   Find leakage where field service, CRM, and network build-out are not optimized.   ·       Fraud Management: Involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns to report suspicious activity that might suggest fraudulent usage of resources, resulting in revenue losses to the operator.   The key roles and responsibilities of the system component are as follows:   o   Fraud management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is decided by the system and fixed automatically. o   Fraud management will be able to detect the unauthorized access to services for certain subscribers. These subscribers may have been provided unauthorized services by employees. The component will raise the alert to the operator the very first time of such illegal calls or calls which are not billed. o   The solution will be to have an alarm management system that will deliver alarms to the operator/provider whenever it detects a fraud, thus minimizing fraud by catching it the first time it occurs. o   The Fraud Management system will be capable of interfacing with switches, mediation systems, and billing systems   (ii) Knowledge Management   This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions.   Key responsibilities of knowledge base management are to   ·       Maintain knowledge base – Creation and updating of knowledge base on ongoing basis. ·       Search knowledge base – Search of knowledge base on keywords or category browse ·       Maintain metadata – Management of metadata on knowledge base to ensure effective management and search. ·       Run report generator. ·       Provide content – Add content to the knowledge base, e.g., user guides, operational manual, etc.   (iii) Document Management   It focuses on maintaining a repository of all electronic documents or images of paper documents relevant to the enterprise using a system.   (iv) Data Management   It manages data as a valuable resource for any enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management.   Key responsibilities of Data Management are   ·       Using ETL, extract the data from CRM, Billing, web content, ERP, campaign management, financial, network operations, asset management info, customer contact data, customer measures, benchmarks, process data, e.g., process inputs, outputs, and measures, into Enterprise Data Warehouse. 
·       Management of data traceability with source, data related business rules/decisions, data quality, data cleansing data reconciliation, competitors data – storage for all the enterprise data (customer profiles, products, offers, revenues, etc.) ·       Get online update through night time replication or physical backup process at regular frequency. ·       Provide the data access to business intelligence and other systems for their analysis, report generation, and use.   (v) Business Intelligence   It uses the Enterprise Data to provide the various analysis and reports that contain prospects and analytics for customer retention, acquisition of new customers due to the offers, and SLAs. It will generate right and optimized plans – bolt-ons for the customers.   The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the Enterprise Level:   ·       It will do Pattern analysis and reports problem. ·       It will do Data Analysis – Statistical analysis, data profiling, affinity analysis of data, customer segment wise usage patterns on offers, products, service and revenue generation against services and customer segments. ·       It will do Performance (business, system, and forecast) analysis, churn propensity, response time, and SLAs analysis. ·       It will support for online and offline analysis, and report drill down capability. ·       It will collect, store, and report various SLA data. ·       It will provide the necessary intelligence for marketing and working on campaigns, etc., with cost benefit analysis and predictions.   It will advise on customer promotions with additional services based on loyalty and credit history of customer   ·       It will Interface with Enterprise Data Management system for data to run reports and analysis tasks. It will interface with the campaign schedules, based on historical success evidence.   (vi) Stakeholder and External Relations Management   It manages the enterprise's relationship with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.   (vii) Enterprise Resource Planning   It is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform and enterprise wide system environment.   The key roles and responsibilities for Enterprise System are given below:   ·        It will handle responsibilities such as core accounting, financial, and management reporting. ·       It will interface with CRM for capturing customer account and details. ·       It will interface with billing to capture the billing revenue and other financial data. ·       It will be responsible for executing the dunning process. Billing will send the required feed to ERP for execution of dunning. ·       It will interface with the CRM and Billing through batch interfaces. Enterprise management systems are like horizontals in the enterprise and typically interact with all major telecom systems. 
E.g., an ERP system interacts with the CRM, Fulfillment, and Billing systems for different kinds of data exchanges.

6. External Interfaces/Touch Points

The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions between a Service Provider and other parties can be achieved by a variety of mechanisms, including:

·       Exchange of emails or faxes
·       Call centers
·       Web portals
·       Business-to-Business (B2B) automated transactions

These applications provide an Internet-technology-driven interface that external parties can use to undertake a variety of business functions directly for themselves. They can provide fully or partially automated service to external parties through various touch points.

Typical characteristics of these touch points are:

·       A pre-integrated self-service system, including a stand-alone web framework or an integration front end with a portal engine
·       A self-service layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment
·       Portlet-driven connectivity exposing data and services interoperability through a portal engine or web application

These touch points mostly interact with the CRM systems for requests, inquiries, and responses.

7. Middleware

This component is primarily responsible for integrating the different system components under a common platform. It should provide a standards-based platform for building Service Oriented Architecture and composite applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution:

·       Act as an integration framework, covering interfaces in both directions.
·       Provide a web service framework with a service registry.
·       Support an SOA framework with an SOA service registry.
·       Handle data transformation, translation, and mapping of data points on each of the interfaces from/to Middleware and the other components.
·       Receive data from the caller and activate and/or forward the data to the recipient system in XML format.
·       Use standard XML for data exchange.
·       Provide the response back to the service/call initiator.
·       Provide tracking until the response is complete.
·       Keep a store of transitional data against each call/transaction.
·       Provide an interface through which enterprise systems can obtain any information that is available and allowed from the existing systems; e.g., customer profile, customer history, etc.
·       Provide the data in a common, unified format to the SOA calls across systems, and follow the Enterprise Architecture directives.
·       Provide an audit trail for all transactions handled by the component.

8. Network Elements

The term Network Element means a facility or equipment used in the provision of a telecommunications service. The term also includes features, functions, and capabilities that are provided by means of such a facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection or used in the transmission, routing, or other provision of a telecommunications service.

Typical network elements in a GSM network are the Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value-added services like Push-to-Talk (PTT), Ring Back Tone (RBT), etc.
Network elements are invoked when subscribers use their telecom devices for any kind of usage. These elements generate usage data and pass it on to downstream systems like mediation and billing systems for rating and billing. They also integrate with provisioning systems for order/service fulfillment.

9. 3rd Party Applications

3rd-party systems are applications like content providers, payment gateways, point-of-sale terminals, and databases/applications maintained by the Government.

Depending on applicability and the type of functionality provided by the 3rd-party applications, integration with the different telecom systems like CRM, provisioning, and billing will be done.

10. Service Delivery Platform

A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value-added telecom services. SDPs are based on the concepts of SOA and layered architecture. They support the delivery of voice, data services, and content in a network- and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.

SOA Reference Architecture

The SOA concept is based on the principle of developing reusable business services and building applications by composing those services, instead of building monolithic applications in silos. It is about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.

In an SOA, resources are made available to participants in a value net, enterprise, or line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization's business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:

·       The business to specify processes as orchestrations of reusable services
·       Technology-agnostic business design, with technology hidden behind service interfaces
·       A contract-like interaction between business and IT, based on service SLAs
·       Accountability and governance, better aligned to business services
·       Untangling of application interconnections by allowing access only through service interfaces, reducing the daunting side effects of change
·       Reduced pressure to replace legacy applications, and extended lifetimes for them, through encapsulation in services
·       A Cloud Computing paradigm, using web services technologies, that makes service outsourcing possible on an on-demand, utility-like, pay-per-usage basis

The following section presents the logical view of the Reference Architecture for the Telecom Solution. New custom-built applications need to align with this logical architecture in the long run to achieve EA benefits.

Packaged implementation applications, such as ERP and billing applications, need to expose their functions as service providers (which other applications consume) and to interact with other applications as service consumers.

COTS applications need to expose services through wrappers such as adapters, to utilize existing resources and at the same time achieve Enterprise Architecture goals and objectives.
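To make the wrapper idea concrete, here is a minimal, illustrative Java sketch of an adapter that exposes one function of a packaged billing application through a neutral service interface. All names (BalanceInquiryService, LegacyBillingApi, fetchBalanceCents) are hypothetical stand-ins for whatever the actual COTS API provides.

// Minimal sketch (hypothetical names) of wrapping a packaged billing application
// behind a service interface so SOA consumers never touch the legacy API directly.
import java.math.BigDecimal;

public class BillingServiceAdapter {

    // Service contract exposed to the rest of the enterprise (service consumers).
    interface BalanceInquiryService {
        BigDecimal currentBalance(String accountId);
    }

    // Stand-in for the proprietary interface of the packaged/COTS billing system.
    static class LegacyBillingApi {
        double fetchBalanceCents(String legacyAccountKey) {
            return 2350.0; // canned value for the sketch
        }
    }

    // Adapter: translates between the enterprise service contract and the legacy call,
    // handling data mapping (account id format, cents vs. currency units).
    static class LegacyBillingAdapter implements BalanceInquiryService {
        private final LegacyBillingApi legacy = new LegacyBillingApi();

        @Override
        public BigDecimal currentBalance(String accountId) {
            String legacyKey = "ACCT#" + accountId;            // hypothetical key mapping
            double cents = legacy.fetchBalanceCents(legacyKey);
            return BigDecimal.valueOf(cents).movePointLeft(2); // cents -> currency units
        }
    }

    public static void main(String[] args) {
        BalanceInquiryService billing = new LegacyBillingAdapter();
        System.out.println("Balance: " + billing.currentBalance("1001"));
    }
}

The adapter keeps the legacy data model and formats out of the service contract, which is what lets the packaged application be replaced or upgraded without rippling through its consumers.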
The following are the various layers for enterprise-level deployment of SOA. The diagram captures the abstract view of the Enterprise SOA layers and the important components of each layer. Layered architecture means decomposition of services such that most interactions occur between adjacent layers; however, there is no strict rule that top layers should not communicate directly with bottom layers.

The diagram below represents the important logical pieces that would result from the overall SOA transformation.

Figure 3. Enterprise SOA Reference Architecture

1. Operational System Layer: This layer consists of all packaged applications like CRM and ERP, custom-built applications, COTS-based applications like Billing, Revenue Management, and Fulfillment, and the enterprise databases that are essential and contribute directly or indirectly to the Enterprise OSS/BSS transformation.

ERP holds the data for Asset Lifecycle Management, Supply Chain, Advanced Procurement, Human Capital Management, etc.

CRM holds the data related to Order, Sales and Marketing, Customer Care, Partner Relationship Management, Loyalty, etc.

Content Management handles enterprise search and query. The Billing application consists of the following components:

·       Collections Management, Customer Billing Management, Invoices, Real-Time Rating, Discounting, and Applying of Charges

Enterprise databases hold both application and service data, whether structured or unstructured. MDM (Master Data Management) covers master data, which mainly consists of Customer, Order, Product, and Service data.

2. Enterprise Component Layer: This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies such as application servers to implement the components, workload management, high availability, and load balancing.

Application Services: This service layer enables application, technology, and database abstraction so that the complex accessing logic is hidden from the other service layers. It is a basic service layer that exposes application functionalities and data as reusable services.
The three types of Application Access Services are:

·       Application Access Service: This service layer exposes application-level functionalities as reusable services for BSS-to-BSS and BSS-to-OSS integration. It is enabled using disparate technologies such as web services, integration servers, adapters, etc.
·       Data Access Service: This service layer exposes application data as reusable reference data services. This is done via direct interaction with the application data, and it provides federated query capability.
·       Network Access Service: This service layer exposes the provisioning layer as a reusable service for OSS-to-OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.

Common Services encompass the management of structured, semi-structured, and unstructured data, covering information services, portal services, interaction services, infrastructure services, security services, etc.

3. Integration Layer: This consists of service infrastructure components like the service bus, a service gateway for partner integration, the service registry, the service repository, and the BPEL processor. The service bus carries the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary-based routing, distributed caching of routing information, transformations, and all messaging qualities of service such as reliability, scalability, and availability. The service registry holds all service contracts (WSDL) and helps developers locate or discover services at design time or runtime.

·       The BPEL processor is useful for orchestrating services to compose a complex business scenario or process.
·       Workflow and business rules management are also required to support manual triggering of certain activities within a business process, based on the rules set up and on the state machine information.

The application, data, and service mediation layers typically form the overall composite application development framework, or SOA framework.

4. Business Process Layer: This is typically the intermediate services layer and represents Shared Business Process Services. At the enterprise level, these services come from the Customer Management, Order Management, Billing, Finance, and Asset Management application domains.

5. Access Layer: This layer consists of enterprise portals and provides a single view of enterprise information management and dashboard services.

6. Channel Layer: This consists of the various devices, the applications that form part of the extended enterprise, and the browsers through which users access the applications.

7. Client Layer: This designates the different types of users accessing the enterprise applications. The type of user is typically an important factor in determining the level of access to applications.

8. Vertical pieces like management, monitoring, security, and development cut across all horizontal layers. Management and monitoring involves all aspects of SOA, such as services, SLAs, and other QoS lifecycle processes for both applications and services, surrounding SOA governance.

9. EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices: EA Governance is important in providing the overall direction for SOA implementation within the enterprise.
This involves board-level involvement, in addition to business and IT executives. At a high level, it involves managing the implementation of SOA projects, managing the SOA infrastructure, and controlling the entire effort through well-tuned IT processes in accordance with COBIT (Control Objectives for Information and Related Technology).

Devising tools and techniques to promote a reuse culture and the SOA way of doing things requires competency centers to be established, in addition to training the workforce to take up the new roles suited to the SOA journey.

Conclusions

Reference Architectures can serve as the basis for disparate architecture efforts throughout the organization, even if they use different tools and technologies. Reference architectures provide best practices and approaches in a way that is independent of how any particular vendor deals with technology and standards. They model the abstract architectural elements for an enterprise independent of the technologies, protocols, and products that are used to implement an SOA. Telecom enterprises today face significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices can go a long way toward meeting these challenges. Using an SOA-based architecture for communication with each of the external systems, such as Billing and CRM, makes the OSS/BSS architecture very loosely coupled and more flexible: any change in the external systems is absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of the business processes easier, with better performance in terms of lead time, quality, and cost. And because the architecture is based on standards, it lowers the cost of deploying and managing OSS/BSS applications over their lifecycles.


  • Agile Development

    - by James Oloo Onyango
A lot of literature has been, and is being, written about agile development and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, I have found Robert C. Martin's "A Satire Of Two Companies" to be both the most concise and the most thorough! Enjoy the read!

Rufus Inc: Project Kick-Off

Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting.

"We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market.

"We must have this new project up and working by fourth quarter, October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table.

His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter. Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?"

You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts. "Sir, we can't tell you how long the analysis will take until we have some requirements."

"The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?"

Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol. "Good." BB smiles. "Now, how long will it take to do the design?"

"Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mms are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere. "PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?"

Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months." "I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle.

BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working.
Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your crossfunctional team meetings and reports will be needed for next month's quality audit." "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon."   Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!"   You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do.   Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress.   After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describes a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues. More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making. The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus.   At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on. On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159- page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard.   You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return? 
The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is.   Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents.   While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked.   The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism, and have been producing incredibly detailed and arcane diagrams that nobody else can fathom.   On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3. **** And then, a miracle happens.   **** On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?" "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!" "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing."   As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer. They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.)   As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawedno, useless; no, worse than useless. 
But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it."   So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you.     The design is a nightmare. Your boss recently misread a book named The Finish Line in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!" "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrinkwrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle. Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week." Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages. At first, these diagram are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association. Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate. You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it."   You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it." Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents. 
However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude.   You slog your way through them.   On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.   ****   Then, on July 1 another miracle happens! You are done with the design!   Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.   **** They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy. You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."   ****   The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort. The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?   Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. 
His points have grown again, and the Grecian formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!"   You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements. Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek.   Hack, hack, hack, and hack. By September 1st, the thermometer is at 1.2 million lines and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack. Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing, the marketing folks are complaining that the product isn't anything like they specified, and the liquor store won't accept your credit card anymore. Something has to give.    On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!" He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1."   As he leaves the conference room, he is heard to mutter: "Empowermentbah!" * * * Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done? " he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.   
A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many sixty-five hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy.   In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.     Rupert Industries: Project Alpha   Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting. "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it. Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately." "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity."   "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit."   Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product.   * * *   "So, Robert," says Jane at their first meeting, "How does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and possible solution. "OK, wait a second" you respond. "I need to be clear about this." 
And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing.   During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed.   At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system."   A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test. So they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true.   The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation.   As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost.   At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning.   * * *   The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress." Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfectweek for every 3 real-weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it."   Jane looks over her glasses at you as if to say "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry. 
I know the drill by now." Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta test customers. And we'll get good feedback from them."
You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."
"So," says Jane, "can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."
So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.
Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks." "I'll take the initial database generation," says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team, "I've never done a GUI, and kinda wanted to try that one."
"Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."
One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is best to let the developers volunteer for tasks rather than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.
The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once a developer has filled his or her schedule for the iteration, he or she stops signing up for tasks.
Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.
"I was worried that that might happen," you say. "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.
So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper," says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . .," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.
The negotiation is not painless.
Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny."
In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels can be achieved in the next 3 weeks.
And, after all, it still addresses the most important things that Jane wanted in the iteration. "So, Jane," you say when things have quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements, Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour.
"I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration."
* * *
The iteration begins on Monday morning with a flurry of Class, Responsibilities, Collaborators sessions. By midmorning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!"
"Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replies Elaine. "It doesn't do anything at all; there is no code."
"So, consider our task; can you think of something the code should do?" "Sure," Elaine says with youthful assurance. "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughs Elaine. "I think we'd have to get the database object from some registry and call the Connect() method." "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hey!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me." And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case.
As development proceeded, the developers changed partners once or twice a day.
Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team.
Whenever a pair finished something significant, whether a whole task or simply an important part of a task, they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.
The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case.
Jane, as good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do. By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?"
You grimace. You'd been waiting for a good time to mention this to Jane, but now she was forcing the issue. "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration."
Jane eyes you with suspicion. The stress of last Monday's negotiations has still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again, we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."
You look Jane right in the eyes. There's no pleasant way to give someone news like this. So you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."
"Aw forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.
Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."
Painful though the last iteration was, it had calibrated your velocity numbers.
The next iteration went much better, not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.
By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.
The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated. In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.
The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release is done by the end of June and is taken to the trade show. It has less in it than Jane and Russ would have liked, but it does demonstrate the most important features of the system. Although customers at the trade show notice that certain features are missing, they are very impressed overall. You, Russ, and Jane all return from the trade show with smiles on your faces. You all feel as though this project is a winner.
Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.
Indeed, things are looking up!
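The test-first exchange between Joe and Elaine comes down to one concrete first test: pass a database object to a registry's Store() method, pull it back out with Get(), and check that it is the very same object. The story never shows real code, so the following is only a rough C# sketch of that first test and the smallest registry that could pass it; the class name DatabaseRegistry and the use of a plain object as a stand-in database are assumptions made for illustration.

using System;

// Hypothetical registry from the dialogue: it caches a database object
// so other code can look it up later. Names are illustrative, not from the story.
class DatabaseRegistry
{
    private object database;

    public void Store(object db) { database = db; }
    public object Get() { return database; }
}

static class RegistryTest
{
    // The "first test": whatever we Store is exactly what we Get back.
    static void TestStoreThenGetReturnsSameObject()
    {
        var registry = new DatabaseRegistry();
        var database = new object(); // stands in for a real database connection

        registry.Store(database);

        if (!object.ReferenceEquals(registry.Get(), database))
            throw new Exception("Get() did not return the object passed to Store()");
    }

    static void Main()
    {
        TestStoreThenGetReturnsSameObject();
        Console.WriteLine("Registry test passed.");
    }
}

Written this way, the test exists before the registry does, which is Joe's point: the test pins down what the registry must do before any of it is implemented.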

    Read the article

  • Fitting an Image to Screen on Rotation (iPhone / iPad)?

    - by user356937
    I have been playing around with one of the iPhone examples from Apple' web site (ScrollViewSuite) . I am trying to tweak it a bit so that when I rotate the the iPad the image will fit into the screen in landscape mode vertical. I have been successful in getting the image to rotate, but the image is larger than the height of the landscape screen, so the bottom is below the screen. I would like to image to scale to the height of the landscape screen. I have been playing around with various autoSizingMask attributes without success. The imageView is called "zoomView" this is the actual image which loads into a scrollView called imageScrollView. I am trying to achieve the screen to rotate and look like this.... olsonvox.com/photos/correct.png However, this is what My screen is looking like. olsonvox.com/photos/incorrect.png I would really appreciate some advice or guidance. Below is the RootViewController.m for the project. Blade # import "RootViewController.h" #define ZOOM_VIEW_TAG 100 #define ZOOM_STEP 1.5 #define THUMB_HEIGHT 150 #define THUMB_V_PADDING 25 #define THUMB_H_PADDING 25 #define CREDIT_LABEL_HEIGHT 25 #define AUTOSCROLL_THRESHOLD 30 @interface RootViewController (ViewHandlingMethods) - (void)toggleThumbView; - (void)pickImageNamed:(NSString *)name; - (NSArray *)imageNames; - (void)createThumbScrollViewIfNecessary; - (void)createSlideUpViewIfNecessary; @end @interface RootViewController (AutoscrollingMethods) - (void)maybeAutoscrollForThumb:(ThumbImageView *)thumb; - (void)autoscrollTimerFired:(NSTimer *)timer; - (void)legalizeAutoscrollDistance; - (float)autoscrollDistanceForProximityToEdge:(float)proximity; @end @interface RootViewController (UtilityMethods) - (CGRect)zoomRectForScale:(float)scale withCenter:(CGPoint)center; @end @implementation RootViewController - (void)loadView { [super loadView]; imageScrollView = [[UIScrollView alloc] initWithFrame:[[self view]bounds]]; // this code makes the image resize to the width and height properly. imageScrollView.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleRightMargin| UIViewAutoresizingFlexibleBottomMargin| UIViewAutoresizingFlexibleBottomMargin; // TRY SETTNG CENTER HERE SOMEHOW&gt;.... [imageScrollView setBackgroundColor:[UIColor blackColor]]; [imageScrollView setDelegate:self]; [imageScrollView setBouncesZoom:YES]; [[self view] addSubview:imageScrollView]; [self toggleThumbView]; // intitializes with the first image. 
[self pickImageNamed:@"lookbook1"]; } - (void)dealloc { [imageScrollView release]; [slideUpView release]; [thumbScrollView release]; [super dealloc]; } #pragma mark UIScrollViewDelegate methods - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView { UIView *view = nil; if (scrollView == imageScrollView) { view = [imageScrollView viewWithTag:ZOOM_VIEW_TAG]; } return view; } /************************************** NOTE **************************************/ /* The following delegate method works around a known bug in zoomToRect:animated: */ /* In the next release after 3.0 this workaround will no longer be necessary */ /**********************************************************************************/ - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale { [scrollView setZoomScale:scale+0.01 animated:NO]; [scrollView setZoomScale:scale animated:NO]; } #pragma mark TapDetectingImageViewDelegate methods - (void)tapDetectingImageView:(TapDetectingImageView *)view gotSingleTapAtPoint:(CGPoint)tapPoint { // Single tap shows or hides drawer of thumbnails. [self toggleThumbView]; } - (void)tapDetectingImageView:(TapDetectingImageView *)view gotDoubleTapAtPoint:(CGPoint)tapPoint { // double tap zooms in float newScale = [imageScrollView zoomScale] * ZOOM_STEP; CGRect zoomRect = [self zoomRectForScale:newScale withCenter:tapPoint]; [imageScrollView zoomToRect:zoomRect animated:YES]; } - (void)tapDetectingImageView:(TapDetectingImageView *)view gotTwoFingerTapAtPoint:(CGPoint)tapPoint { // two-finger tap zooms out float newScale = [imageScrollView zoomScale] / ZOOM_STEP; CGRect zoomRect = [self zoomRectForScale:newScale withCenter:tapPoint]; [imageScrollView zoomToRect:zoomRect animated:YES]; } #pragma mark ThumbImageViewDelegate methods - (void)thumbImageViewWasTapped:(ThumbImageView *)tiv { [self pickImageNamed:[tiv imageName]]; [self toggleThumbView]; } - (void)thumbImageViewStartedTracking:(ThumbImageView *)tiv { [thumbScrollView bringSubviewToFront:tiv]; } // CONTROLS DRAGGING AND DROPPING THUMBNAILS... - (void)thumbImageViewMoved:(ThumbImageView *)draggingThumb { // check if we've moved close enough to an edge to autoscroll, or far enough away to stop autoscrolling [self maybeAutoscrollForThumb:draggingThumb]; /* The rest of this method handles the reordering of thumbnails in the thumbScrollView. See */ /* ThumbImageView.h and ThumbImageView.m for more information about how this works. */ // we'll reorder only if the thumb is overlapping the scroll view if (CGRectIntersectsRect([draggingThumb frame], [thumbScrollView bounds])) { BOOL draggingRight = [draggingThumb frame].origin.x &gt; [draggingThumb home].origin.x ? YES : NO; /* we're going to shift over all the thumbs who live between the home of the moving thumb */ /* and the current touch location. A thumb counts as living in this area if the midpoint */ /* of its home is contained in the area. */ NSMutableArray *thumbsToShift = [[NSMutableArray alloc] init]; // get the touch location in the coordinate system of the scroll view CGPoint touchLocation = [draggingThumb convertPoint:[draggingThumb touchLocation] toView:thumbScrollView]; // calculate minimum and maximum boundaries of the affected area float minX = draggingRight ? CGRectGetMaxX([draggingThumb home]) : touchLocation.x; float maxX = draggingRight ? 
touchLocation.x : CGRectGetMinX([draggingThumb home]); // iterate through thumbnails and see which ones need to move over for (ThumbImageView *thumb in [thumbScrollView subviews]) { // skip the thumb being dragged if (thumb == draggingThumb) continue; // skip non-thumb subviews of the scroll view (such as the scroll indicators) if (! [thumb isMemberOfClass:[ThumbImageView class]]) continue; float thumbMidpoint = CGRectGetMidX([thumb home]); if (thumbMidpoint &gt;= minX &amp;&amp; thumbMidpoint &lt;= maxX) { [thumbsToShift addObject:thumb]; } } // shift over the other thumbs to make room for the dragging thumb. (if we're dragging right, they shift to the left) float otherThumbShift = ([draggingThumb home].size.width + THUMB_H_PADDING) * (draggingRight ? -1 : 1); // as we shift over the other thumbs, we'll calculate how much the dragging thumb's home is going to move float draggingThumbShift = 0.0; // send each of the shifting thumbs to its new home for (ThumbImageView *otherThumb in thumbsToShift) { CGRect home = [otherThumb home]; home.origin.x += otherThumbShift; [otherThumb setHome:home]; [otherThumb goHome]; draggingThumbShift += ([otherThumb frame].size.width + THUMB_H_PADDING) * (draggingRight ? 1 : -1); } // change the home of the dragging thumb, but don't send it there because it's still being dragged CGRect home = [draggingThumb home]; home.origin.x += draggingThumbShift; [draggingThumb setHome:home]; } } - (void)thumbImageViewStoppedTracking:(ThumbImageView *)tiv { // if the user lets go of the thumb image view, stop autoscrolling [autoscrollTimer invalidate]; autoscrollTimer = nil; } #pragma mark Autoscrolling methods - (void)maybeAutoscrollForThumb:(ThumbImageView *)thumb { autoscrollDistance = 0; // only autoscroll if the thumb is overlapping the thumbScrollView if (CGRectIntersectsRect([thumb frame], [thumbScrollView bounds])) { CGPoint touchLocation = [thumb convertPoint:[thumb touchLocation] toView:thumbScrollView]; float distanceFromLeftEdge = touchLocation.x - CGRectGetMinX([thumbScrollView bounds]); float distanceFromRightEdge = CGRectGetMaxX([thumbScrollView bounds]) - touchLocation.x; if (distanceFromLeftEdge &lt; AUTOSCROLL_THRESHOLD) { autoscrollDistance = [self autoscrollDistanceForProximityToEdge:distanceFromLeftEdge] * -1; // if scrolling left, distance is negative } else if (distanceFromRightEdge &lt; AUTOSCROLL_THRESHOLD) { autoscrollDistance = [self autoscrollDistanceForProximityToEdge:distanceFromRightEdge]; } } // if no autoscrolling, stop and clear timer if (autoscrollDistance == 0) { [autoscrollTimer invalidate]; autoscrollTimer = nil; } // otherwise create and start timer (if we don't already have a timer going) else if (autoscrollTimer == nil) { autoscrollTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0 / 60.0) target:self selector:@selector(autoscrollTimerFired:) userInfo:thumb repeats:YES]; } } - (float)autoscrollDistanceForProximityToEdge:(float)proximity { // the scroll distance grows as the proximity to the edge decreases, so that moving the thumb // further over results in faster scrolling. 
return ceilf((AUTOSCROLL_THRESHOLD - proximity) / 5.0); } - (void)legalizeAutoscrollDistance { // makes sure the autoscroll distance won't result in scrolling past the content of the scroll view float minimumLegalDistance = [thumbScrollView contentOffset].x * -1; float maximumLegalDistance = [thumbScrollView contentSize].width - ([thumbScrollView frame].size.width + [thumbScrollView contentOffset].x); autoscrollDistance = MAX(autoscrollDistance, minimumLegalDistance); autoscrollDistance = MIN(autoscrollDistance, maximumLegalDistance); } - (void)autoscrollTimerFired:(NSTimer*)timer { [self legalizeAutoscrollDistance]; // autoscroll by changing content offset CGPoint contentOffset = [thumbScrollView contentOffset]; contentOffset.x += autoscrollDistance; [thumbScrollView setContentOffset:contentOffset]; // adjust thumb position so it appears to stay still ThumbImageView *thumb = (ThumbImageView *)[timer userInfo]; [thumb moveByOffset:CGPointMake(autoscrollDistance, 0)]; } #pragma mark View handling methods - (void)toggleThumbView { [self createSlideUpViewIfNecessary]; // no-op if slideUpView has already been created CGRect frame = [slideUpView frame]; if (thumbViewShowing) { frame.origin.y = 0; } else { frame.origin.y = -225; } [UIView beginAnimations:nil context:nil]; [UIView setAnimationDuration:0.3]; [slideUpView setFrame:frame]; [UIView commitAnimations]; thumbViewShowing = !thumbViewShowing; } - (void)pickImageNamed:(NSString *)name { // first remove previous image view, if any [[imageScrollView viewWithTag:ZOOM_VIEW_TAG] removeFromSuperview]; UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%@.jpg", name]]; TapDetectingImageView *zoomView = [[TapDetectingImageView alloc] initWithImage:image]; zoomView.autoresizingMask = UIViewAutoresizingFlexibleWidth ; [zoomView setDelegate:self]; [zoomView setTag:ZOOM_VIEW_TAG]; [imageScrollView addSubview:zoomView]; [imageScrollView setContentSize:[zoomView frame].size]; [zoomView release]; // choose minimum scale so image width fits screen float minScale = [imageScrollView frame].size.width / [zoomView frame].size.width; [imageScrollView setMinimumZoomScale:minScale]; [imageScrollView setZoomScale:minScale]; [imageScrollView setContentOffset:CGPointZero]; } - (NSArray *)imageNames { // the filenames are stored in a plist in the app bundle, so create array by reading this plist NSString *path = [[NSBundle mainBundle] pathForResource:@"Images" ofType:@"plist"]; NSData *plistData = [NSData dataWithContentsOfFile:path]; NSString *error; NSPropertyListFormat format; NSArray *imageNames = [NSPropertyListSerialization propertyListFromData:plistData mutabilityOption:NSPropertyListImmutable format:&amp;format errorDescription:&amp;error]; if (!imageNames) { NSLog(@"Failed to read image names. 
Error: %@", error); [error release]; } return imageNames; } - (void)createSlideUpViewIfNecessary { if (!slideUpView) { [self createThumbScrollViewIfNecessary]; CGRect bounds = [[self view] bounds]; float thumbHeight = [thumbScrollView frame].size.height; float labelHeight = CREDIT_LABEL_HEIGHT; // create label giving credit for images UILabel *creditLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, thumbHeight, bounds.size.width, labelHeight)]; [creditLabel setBackgroundColor:[UIColor clearColor]]; [creditLabel setTextColor:[UIColor whiteColor]]; // [creditLabel setFont:[UIFont fontWithName:@"Helvetica" size:16]]; // [creditLabel setText:@"SAMPLE TEXT"]; [creditLabel setTextAlignment:UITextAlignmentCenter]; // create container view that will hold scroll view and label CGRect frame = CGRectMake(0.0, -225.00, bounds.size.width+256, thumbHeight + labelHeight); slideUpView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleTopMargin; slideUpView = [[UIView alloc] initWithFrame:frame]; [slideUpView setBackgroundColor:[UIColor blackColor]]; [slideUpView setOpaque:NO]; [slideUpView setAlpha:.75]; [[self view] addSubview:slideUpView]; // add subviews to container view [slideUpView addSubview:thumbScrollView]; [slideUpView addSubview:creditLabel]; [creditLabel release]; } } - (void)createThumbScrollViewIfNecessary { if (!thumbScrollView) { float scrollViewHeight = THUMB_HEIGHT + THUMB_V_PADDING; float scrollViewWidth = [[self view] bounds].size.width; thumbScrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, scrollViewWidth, scrollViewHeight)]; [thumbScrollView setCanCancelContentTouches:NO]; [thumbScrollView setClipsToBounds:NO]; // now place all the thumb views as subviews of the scroll view // and in the course of doing so calculate the content width float xPosition = THUMB_H_PADDING; for (NSString *name in [self imageNames]) { UIImage *thumbImage = [UIImage imageNamed:[NSString stringWithFormat:@"%@_thumb.jpg", name]]; if (thumbImage) { ThumbImageView *thumbView = [[ThumbImageView alloc] initWithImage:thumbImage]; [thumbView setDelegate:self]; [thumbView setImageName:name]; CGRect frame = [thumbView frame]; frame.origin.y = THUMB_V_PADDING; frame.origin.x = xPosition; [thumbView setFrame:frame]; [thumbView setHome:frame]; [thumbScrollView addSubview:thumbView]; [thumbView release]; xPosition += (frame.size.width + THUMB_H_PADDING); } } [thumbScrollView setContentSize:CGSizeMake(xPosition, scrollViewHeight)]; } } #pragma mark Utility methods - (CGRect)zoomRectForScale:(float)scale withCenter:(CGPoint)center { CGRect zoomRect; // the zoom rect is in the content view's coordinates. // At a zoom scale of 1.0, it would be the size of the imageScrollView's bounds. // As the zoom scale decreases, so more content is visible, the size of the rect grows. zoomRect.size.height = [imageScrollView frame].size.height / scale; zoomRect.size.width = [imageScrollView frame].size.width / scale; // choose an origin so as to get the right center. zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0); zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0); return zoomRect; } #pragma mark - #pragma mark Rotation support // Ensure that the view controller supports rotation and that the split view can therefore show in both portrait and landscape. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } @end

    Read the article

  • game speed problem

    - by Meko
    HI..I made a little game.But this game works on every computer with different speed.I think it is about resolution.I used every thing in paintcomponent.and If I change screen size the game goes slower or faster.And if i run this game on another computer wich has different resolution it also works different. This is my game http://rapidshare.com/files/364597095/ShooterGame.2.6.0.jar and here code public class Shooter extends JFrame implements KeyListener, Runnable { JFrame frame = new JFrame(); String player; Font startFont, startSubFont, timerFont,healthFont; Image img; Image backGround; Graphics dbi; URL url1 = this.getClass().getResource("Images/p2.gif"); URL url2 = this.getClass().getResource("Images/p3.gif"); URL url3 = this.getClass().getResource("Images/p1.gif"); URL url4 = this.getClass().getResource("Images/p4.gif"); URL urlMap = this.getClass().getResource("Images/zemin.jpg"); Player p1 = new Player(5, 150, 10, 40, Color.GREEN, url3); Computer p2 = new Computer(750, 150, 10, 40, Color.BLUE, url1); Computer p3 = new Computer(0, 0, 10, 40, Color.BLUE, url2); Computer p4 = new Computer(0, 0, 10, 40, Color.BLUE, url4); ArrayList<Bullets> b = new ArrayList<Bullets>(); ArrayList<CBullets> cb = new ArrayList<CBullets>(); Thread sheap; boolean a, d, w, s; boolean toUp, toDown; boolean GameOver; boolean Level2; boolean newGame, resart, pause; int S, E; int random; int cbSpeed = 0; long timeStart, timeEnd; int timeElapsed; long GameStart, GameEnd; int GameScore; int Timer = 0; int timerStart, timerEnd; public Shooter() { sheap = new Thread(this); sheap.start(); startFont = new Font("Tiresias PCFont Z", Font.BOLD + Font.ITALIC, 32); startSubFont = new Font("Tiresias PCFont Z", Font.BOLD + Font.ITALIC, 25); timerFont = new Font("Tiresias PCFont Z", Font.BOLD + Font.ITALIC, 16); healthFont = new Font("Tiresias PCFont Z", Font.BOLD + Font.ITALIC, 16); setTitle("Shooter 2.5.1"); setBounds(350, 250, 800, 600); // setResizable(false); setBackground(Color.black); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); addKeyListener(this); a = d = w = s = false; toUp = toDown = true; GameOver = true; newGame = true; Level2 = false; S = E = 0; setVisible(true); } public void paint(Graphics g) { img = createImage(getWidth(), getHeight()); dbi = img.getGraphics(); paintComponent(dbi); g.drawImage(img, 0, 0, this); } public void paintComponent(Graphics g) { repaint(); timeStart = System.currentTimeMillis(); backGround = Toolkit.getDefaultToolkit().getImage(urlMap); g.drawImage(backGround, 0, 0, null); g.setColor(Color.red); g.setFont(healthFont); g.drawString("" + player + " Health : " + p1.health, 30, 50); g.setColor(Color.red); g.drawString("Computer Health : " + (p2.health + p3.health + p4.health), 600, 50); g.setColor(Color.BLUE); g.setFont(timerFont); g.drawString("Time : " + Timer, 330, 50); if (newGame) { g.setColor(Color.LIGHT_GRAY); g.setFont(startFont); g.drawString("Well Come To Shoot Game", 200, 190); g.drawString("Press ENTER To Start", 250, 220); g.setColor(Color.LIGHT_GRAY); g.setFont(startSubFont); g.drawString("Use W,A,S,D and Space For Fire", 200, 250); g.drawString("GOOD LUCK", 250, 280); newGame(); } if (!GameOver) { for (Bullets b1 : b) { b1.draw(g); } for (CBullets b2 : cb) { b2.draw(g); } update(); // Here MOvements for Player and For Fires } if (p1.health <= 0) { g.setColor(p2.col); g.setFont(startFont); g.drawString("Computer Wins ", 200, 190); g.drawString("Press Key R to Restart ", 200, 220); GameOver = true; } else if (p2.health <= 0 && p3.health <= 0 && p4.health <= 0) { 
g.setColor(p1.col); g.setFont(startFont); g.drawString(""+player+" Wins ", 200, 190); g.drawString("Press Key R to Resart ", 200, 220); GameOver = true; g.setColor(Color.MAGENTA); g.drawString(""+player+"`s Score is " + Timer, 200, 120); } if (Level2) { if (p3.health >= 0) { p3.draw(g); for (CBullets b3 : cb) { b3.draw(g); } } else { p3.x = 1000; } if (p4.health >= 0) { p4.draw(g); for (CBullets b4 : cb) { b4.draw(g); } } else { p4.x = 1000; } } if (p1.health >= 0) { p1.draw(g); } if (p2.health >= 0) { p2.draw(g); } else { p2.x = 1000; } } public void update() { if (w && p1.y > 54) { p1.moveUp(); } if (s && p1.y < 547) { p1.moveDown(); } if (a && p1.x > 0) { p1.moveLeft(); } if (d && p1.x < 200) { p1.moveRight(); } random = 1 * (int) (Math.random() * 100); if (random > 96) { if (p2.health >= 0) { CBullets bo = p2.getCBull(); bo.xVel =-1-cbSpeed; cb.add(bo); } if (Level2) { if (p3.health >= 0) { CBullets bo1 = p3.getCBull(); bo1.xVel = -2-cbSpeed; cb.add(bo1); } if (p4.health >= 0) { CBullets bo2 = p4.getCBull(); bo2.xVel = -4-cbSpeed; cb.add(bo2); } } } if (S == 1) { if (p1.health >= 0) { Bullets bu = p1.getBull(); bu.xVel = 5; b.add(bu); S += 1; } } //Here Also Problem .. When COmputer have More fire then it gaves Array Exeption . Or Player have More Fire for (int i = cb.size() -1; i = 0 ; i--) { boolean bremoved = false; for (int j = b.size() -1 ; j =0 ; j--) { if (b.get(j).rect.intersects(cb.get(i).rect) || cb.get(i).rect.intersects(b.get(j).rect)) { bremoved = true; b.remove(j); } } if(bremoved) cb.remove(i); } for (int i = 0; i < b.size(); i++) { b.get(i).move(); if (b.get(i).rect.intersects(p2.rect)) { if (p2.health >= 0) { p2.health--; b.remove(i); // System.out.println("Hited P2"); i--; continue; } } if (b.get(i).rect.intersects(p3.rect)) { if (p3.health >= 0) { p3.health--; b.remove(i); // System.out.println("Hited P3"); i--; continue; } } if (b.get(i).rect.intersects(p4.rect)) { if (p4.health >= 0) { p4.health--; b.remove(i); // System.out.println("Hited P4"); i--; continue; } } if (b.get(i).rect.x > 790) { b.remove(i); } } for (int j = 0; j < cb.size(); j++) { cb.get(j).move(); if (cb.get(j).rect.intersects(p1.rect) && cb.get(j).xVel < 0) { p1.health--; cb.remove(j); j--; continue; } } timeEnd = System.currentTimeMillis(); timeElapsed = (int) (timeEnd - timeStart); } public void level2() { if (p2.health <= 10) { Level2 = true; cbSpeed = 4; p3.x = 750; p4.x = 750; p2.speed = 10; p3.speed = 20; p4.speed = 30; } } public void keyTyped(KeyEvent e) { } public void keyPressed(KeyEvent e) { switch (e.getKeyCode()) { case KeyEvent.VK_ENTER: newGame = false; break; case KeyEvent.VK_P: pause = true; break; case KeyEvent.VK_R: resart = true; break; case KeyEvent.VK_A: a = true; break; case KeyEvent.VK_D: d = true; break; case KeyEvent.VK_W: w = true; break; case KeyEvent.VK_S: s = true; break; case KeyEvent.VK_SPACE: S += 1; break; } } public void keyReleased(KeyEvent e) { switch (e.getKeyCode()) { case KeyEvent.VK_A: a = false; break; case KeyEvent.VK_D: d = false; break; case KeyEvent.VK_W: w = false; break; case KeyEvent.VK_S: s = false; break; case KeyEvent.VK_SPACE: S = 0; break; } } public void newGame() { p1.health = 20; p2.health = 20; p3.health = 20; p4.health = 20; p3.x = 0; p4.x = 0; p2.x = 750; Level2 = false; cbSpeed = 0; p2.speed = 9; b.removeAll(b); cb.removeAll(cb); timerStart = (int) System.currentTimeMillis(); GameOver = false; } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { public void run() { KeyListener k = new Shooter(); } }); } 
@Override public void run() { player = JOptionPane.showInputDialog(frame, "Enter Player Name", "New Player", JOptionPane.DEFAULT_OPTION); while (true) { timerEnd = (int) System.currentTimeMillis(); if (resart) { newGame(); resart = false; } if (pause) { Thread.currentThread().notify(); } try { if (!GameOver) { Timer = timerEnd - timerStart; level2(); if (p1.y < p2.y && p2.y60) { p2.moveUp(); } if (p1.y < p3.y && p3.y43) { p3.moveUp(); } if (p1.y < p4.y && p4.y43) { p4.moveUp(); } if (p1.y > p2.y && p2.y<535) { p2.moveDown(); } if (p1.y > p3.y && p3.y<535) { p3.moveDown(); } if (p1.y > p4.y && p4.y<530) { p4.moveDown(); } } if (timeElapsed < 125) { Thread.currentThread().sleep(125); } } catch (InterruptedException ex) { System.out.print("FInished"); } } } }
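Judging from the code, one likely reason the game runs at different speeds on different machines is that positions change by a fixed number of pixels on every update, and updates are driven by how quickly the machine can repaint; the Thread.sleep(125) also waits a fixed amount no matter how long the frame itself took. The usual remedy is to scale movement by the real time elapsed since the last update ("delta time"), so the distance travelled per second is the same everywhere. The following is a minimal sketch of that pattern; it is written in C# rather than Java, and every name and tuning value in it is invented, but the same arithmetic applies directly to the movement code in the question.

using System;
using System.Diagnostics;
using System.Threading;

static class DeltaTimeLoop
{
    static void Main()
    {
        const double speedPixelsPerSecond = 120.0; // tuning value, not from the original game
        double x = 0.0;

        var clock = Stopwatch.StartNew();
        double previous = clock.Elapsed.TotalSeconds;

        for (int frame = 0; frame < 60; frame++)   // stand-in for the real game loop
        {
            double now = clock.Elapsed.TotalSeconds;
            double dt = now - previous;            // seconds since the last update
            previous = now;

            x += speedPixelsPerSecond * dt;        // movement scales with real time,
                                                   // not with how often the loop runs

            Thread.Sleep(16);                      // roughly 60 updates per second; changing
                                                   // this no longer changes how far things move
        }

        Console.WriteLine("Moved {0:F1} pixels in {1:F2} s", x, clock.Elapsed.TotalSeconds);
    }
}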

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
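Since the question mentions that the web side may move from Classic ASP to ASP.NET at some point, here is a rough C# (ADO.NET) sketch of the same stored-procedure call that the ADODB.Command code above makes. The DataPost procedure and its two ntext parameters come from the post; the method name, how the two XML strings are produced, and what columns the returned recordset contains are assumptions for illustration, and the code obviously needs a reachable database to run.

using System;
using System.Data;
using System.Data.SqlClient;

static class DataPostCaller
{
    // elementXml and dataXml are the two XML documents built from the posted form
    // (the equivalents of XMLFromRecordset(Element) and XMLFromRecordset(Data)).
    public static void PostChanges(string connectionString, string elementXml, string dataXml)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("DataPost", connection))
        {
            command.CommandType = CommandType.StoredProcedure;

            // The procedure declares both parameters as ntext, so pass them as NText here.
            command.Parameters.Add("@ElementMetaData", SqlDbType.NText).Value = elementXml;
            command.Parameters.Add("@ElementData", SqlDbType.NText).Value = dataXml;

            connection.Open();

            // DataPost returns a recordset correlating the page's new ids (n1, n2, ...)
            // with the ElementIDs it created; the column layout is whatever the procedure selects.
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("created element: {0}", reader[0]);
            }
        }
    }
}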

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi. I saw this post here and after reading it, it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx The first way uses 2 ado.net 2.0 features, BulkInsert and BulkCopy; the second one uses linq to sql and OpenXML. The second sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However, as one person pointed out in the posts, it is just going around the issue at the cost of performance (nothing wrong with that, in my opinion). First I will talk about the 2 ways in the first tutorial. I am using VS2010 Express, .net 4.0, MVC 2.0, and SQL Server 2005. Is ado.net 2.0 the most current version? Based on the technology I am using, are there some updates to what I am going to show that would improve it somehow? Is there anything these tutorials left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower than the "BatchBulkCopy" way, and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } The first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? I am sending 500,000 records, so I used a batch size of 500,000. Next, why does it crash when I do this? If I set the batch size to 1000 it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Inserting 500,000 records with an insert batch size of 1000 took 2 minutes and 54 seconds. Of course this is not an official time; I sat there with a stopwatch (I am sure there are better ways, but I was too lazy to look up what they were). So I find that kind of slow compared to all my other ones (except the linq to sql insert one), and I am not really sure why. Next I looked at bulk copy. /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?). BulkCopy had no problem with a 500,000 batch size. So again, why make it smaller than the number of records you want to send? I found that with BulkCopy and a 500,000 batch size it took only 5 seconds to complete. I then tried a batch size of 1,000 and it took only 8 seconds. So much faster than the BulkInsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serialized to an xml document and sent to a stored procedure /// that then does a bulk insert (I think with OpenXML). /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } I like this because I get to use objects, even though it is kind of redundant. However, I don't understand how the SP works; I don't get the whole thing. I don't know if OPENXML does some batch insert under the hood, and I do not even know how to take this example SP and change it to fit my tables since, like I said, I don't know what is going on. I also don't know what would happen if the object had more tables in it, say a ProductName table that has a relationship to a Product table or something like that. In linq to sql you could get the ProductName object and make changes to the Product table through that same object, so I am not sure how to take that into account; I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds. The last way, of course, was just using linq to do it all, and it was pretty bad. /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute. So I have really narrowed it down to the linq to sql bulk insert way or bulk copy. I am just not sure how to handle relationships with either way, and I am not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying it yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes linq to sql nicer since you've got objects, especially if you're first parsing data from an xml file before you insert it into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }
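One pattern worth sketching for the validation and relationship questions above (a sketch under assumptions, not code from either tutorial): SqlBulkCopy cannot target a stored procedure directly, but it can load the raw rows into a staging table, after which a set-based stored procedure validates them and moves them into the real tables. The staging table TBL_TEST_TEST_STAGING and the procedure sp_MoveStagedTestData below are hypothetical placeholder names.

// Sketch: bulk copy into a staging table, then let an SP validate and move rows.
// TBL_TEST_TEST_STAGING and sp_MoveStagedTestData are hypothetical placeholders.
using System.Data;
using System.Data.SqlClient;

static class StagedBulkInsert
{
    public static void Run(string connectionString, DataTable rows)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // 1. Push the raw rows into the staging table as fast as possible.
            using (SqlBulkCopy bulk = new SqlBulkCopy(connection))
            {
                bulk.DestinationTableName = "TBL_TEST_TEST_STAGING";
                bulk.BatchSize = 5000;        // rows per round trip, not the total row count
                bulk.BulkCopyTimeout = 300;   // seconds
                bulk.ColumnMappings.Add("NAME", "NAME");
                bulk.WriteToServer(rows);
            }

            // 2. Validate and insert/update the real tables in one set-based call.
            using (SqlCommand command = new SqlCommand("sp_MoveStagedTestData", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.CommandTimeout = 300;
                command.ExecuteNonQuery();
            }
        }
    }
}

The same split also speaks to the relationship question: parent rows can be staged and inserted first, then child rows joined to the generated keys inside the procedure.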

    Read the article

  • How can this PHP code be improved? What should be changed?

    - by Noctis Skytower
    This is a custom encryption library. I do not know much about PHP's standard library of functions and was wondering if the following code can be improved in any way. The implementation should yield the same results, the API should remain as it is, but ways to make is more PHP-ish would be greatly appreciated. Code <?php /*************************************** Create random major and minor SPICE key. ***************************************/ function crypt_major() { $all = range("\x00", "\xFF"); shuffle($all); $major_key = implode("", $all); return $major_key; } function crypt_minor() { $sample = array(); do { array_push($sample, 0, 1, 2, 3); } while (count($sample) != 256); shuffle($sample); $list = array(); for ($index = 0; $index < 64; $index++) { $b12 = $sample[$index * 4] << 6; $b34 = $sample[$index * 4 + 1] << 4; $b56 = $sample[$index * 4 + 2] << 2; $b78 = $sample[$index * 4 + 3]; array_push($list, $b12 + $b34 + $b56 + $b78); } $minor_key = implode("", array_map("chr", $list)); return $minor_key; } /*************************************** Create the SPICE key via the given name. ***************************************/ function named_major($name) { srand(crc32($name)); return crypt_major(); } function named_minor($name) { srand(crc32($name)); return crypt_minor(); } /*************************************** Check validity for major and minor keys. ***************************************/ function _check_major($key) { if (is_string($key) && strlen($key) == 256) { foreach (range("\x00", "\xFF") as $char) { if (substr_count($key, $char) == 0) { return FALSE; } } return TRUE; } return FALSE; } function _check_minor($key) { if (is_string($key) && strlen($key) == 64) { $indexs = array(); foreach (array_map("ord", str_split($key)) as $byte) { foreach (range(6, 0, 2) as $shift) { array_push($indexs, ($byte >> $shift) & 3); } } $dict = array_count_values($indexs); foreach (range(0, 3) as $index) { if ($dict[$index] != 64) { return FALSE; } } return TRUE; } return FALSE; } /*************************************** Create encode maps for encode functions. ***************************************/ function _encode_map_1($major) { return array_map("ord", str_split($major)); } function _encode_map_2($minor) { $map_2 = array(array(), array(), array(), array()); $list = array(); foreach (array_map("ord", str_split($minor)) as $byte) { foreach (range(6, 0, 2) as $shift) { array_push($list, ($byte >> $shift) & 3); } } for ($byte = 0; $byte < 256; $byte++) { array_push($map_2[$list[$byte]], chr($byte)); } return $map_2; } /*************************************** Create decode maps for decode functions. ***************************************/ function _decode_map_1($minor) { $map_1 = array(); foreach (array_map("ord", str_split($minor)) as $byte) { foreach (range(6, 0, 2) as $shift) { array_push($map_1, ($byte >> $shift) & 3); } } return $map_1; }function _decode_map_2($major) { $map_2 = array(); $temp = array_map("ord", str_split($major)); for ($byte = 0; $byte < 256; $byte++) { $map_2[$temp[$byte]] = chr($byte); } return $map_2; } /*************************************** Encrypt or decrypt the string with maps. 
***************************************/ function _encode($string, $map_1, $map_2) { $cache = ""; foreach (str_split($string) as $char) { $byte = $map_1[ord($char)]; foreach (range(6, 0, 2) as $shift) { $cache .= $map_2[($byte >> $shift) & 3][mt_rand(0, 63)]; } } return $cache; } function _decode($string, $map_1, $map_2) { $cache = ""; $temp = str_split($string); for ($iter = 0; $iter < strlen($string) / 4; $iter++) { $b12 = $map_1[ord($temp[$iter * 4])] << 6; $b34 = $map_1[ord($temp[$iter * 4 + 1])] << 4; $b56 = $map_1[ord($temp[$iter * 4 + 2])] << 2; $b78 = $map_1[ord($temp[$iter * 4 + 3])]; $cache .= $map_2[$b12 + $b34 + $b56 + $b78]; } return $cache; } /*************************************** This is the public interface for coding. ***************************************/ function encode_string($string, $major, $minor) { if (is_string($string)) { if (_check_major($major) && _check_minor($minor)) { $map_1 = _encode_map_1($major); $map_2 = _encode_map_2($minor); return _encode($string, $map_1, $map_2); } } return FALSE; } function decode_string($string, $major, $minor) { if (is_string($string) && strlen($string) % 4 == 0) { if (_check_major($major) && _check_minor($minor)) { $map_1 = _decode_map_1($minor); $map_2 = _decode_map_2($major); return _decode($string, $map_1, $map_2); } } return FALSE; } ?> This is a sample showing how the code is being used. Hex editors may be of help with the input / output. Example <?php # get and process all of the form data @ $input = htmlspecialchars($_POST["input"]); @ $majorname = htmlspecialchars($_POST["majorname"]); @ $minorname = htmlspecialchars($_POST["minorname"]); @ $majorkey = htmlspecialchars($_POST["majorkey"]); @ $minorkey = htmlspecialchars($_POST["minorkey"]); @ $output = htmlspecialchars($_POST["output"]); # process the submissions by operation # CREATE @ $operation = $_POST["operation"]; if ($operation == "Create") { if (strlen($_POST["majorname"]) == 0) { $majorkey = bin2hex(crypt_major()); } if (strlen($_POST["minorname"]) == 0) { $minorkey = bin2hex(crypt_minor()); } if (strlen($_POST["majorname"]) != 0) { $majorkey = bin2hex(named_major($_POST["majorname"])); } if (strlen($_POST["minorname"]) != 0) { $minorkey = bin2hex(named_minor($_POST["minorname"])); } } # ENCRYPT or DECRYPT function is_hex($char) { if ($char == "0"): return TRUE; elseif ($char == "1"): return TRUE; elseif ($char == "2"): return TRUE; elseif ($char == "3"): return TRUE; elseif ($char == "4"): return TRUE; elseif ($char == "5"): return TRUE; elseif ($char == "6"): return TRUE; elseif ($char == "7"): return TRUE; elseif ($char == "8"): return TRUE; elseif ($char == "9"): return TRUE; elseif ($char == "a"): return TRUE; elseif ($char == "b"): return TRUE; elseif ($char == "c"): return TRUE; elseif ($char == "d"): return TRUE; elseif ($char == "e"): return TRUE; elseif ($char == "f"): return TRUE; else: return FALSE; endif; } function hex2bin($str) { if (strlen($str) % 2 == 0): $string = strtolower($str); else: $string = strtolower("0" . $str); endif; $cache = ""; $temp = str_split($str); for ($index = 0; $index < count($temp) / 2; $index++) { $h1 = $temp[$index * 2]; if (is_hex($h1)) { $h2 = $temp[$index * 2 + 1]; if (is_hex($h2)) { $cache .= chr(hexdec($h1 . 
$h2)); } else { return FALSE; } } else { return FALSE; } } return $cache; } if ($operation == "Encrypt" || $operation == "Decrypt") { # CHECK FOR ANY ERROR $errors = array(); if (strlen($_POST["input"]) == 0) { $output = ""; } $binmajor = hex2bin($_POST["majorkey"]); if (strlen($_POST["majorkey"]) == 0) { array_push($errors, "There must be a major key."); } elseif ($binmajor == FALSE) { array_push($errors, "The major key must be in hex."); } elseif (_check_major($binmajor) == FALSE) { array_push($errors, "The major key is corrupt."); } $binminor = hex2bin($_POST["minorkey"]); if (strlen($_POST["minorkey"]) == 0) { array_push($errors, "There must be a minor key."); } elseif ($binminor == FALSE) { array_push($errors, "The minor key must be in hex."); } elseif (_check_minor($binminor) == FALSE) { array_push($errors, "The minor key is corrupt."); } if ($_POST["operation"] == "Decrypt") { $bininput = hex2bin(str_replace("\r", "", str_replace("\n", "", $_POST["input"]))); if ($bininput == FALSE) { if (strlen($_POST["input"]) != 0) { array_push($errors, "The input data must be in hex."); } } elseif (strlen($bininput) % 4 != 0) { array_push($errors, "The input data is corrupt."); } } if (count($errors) != 0) { # ERRORS ARE FOUND $output = "ERROR:"; foreach ($errors as $error) { $output .= "\n" . $error; } } elseif (strlen($_POST["input"]) != 0) { # CONTINUE WORKING if ($_POST["operation"] == "Encrypt") { # ENCRYPT $output = substr(chunk_split(bin2hex(encode_string($_POST["input"], $binmajor, $binminor)), 58), 0, -2); } else { # DECRYPT $output = htmlspecialchars(decode_string($bininput, $binmajor, $binminor)); } } } # echo the form with the values filled echo "<P><TEXTAREA class=maintextarea name=input rows=25 cols=25>" . $input . "</TEXTAREA></P>\n"; echo "<P>Major Name:</P>\n"; echo "<P><INPUT id=textbox1 name=majorname value=\"" . $majorname . "\"></P>\n"; echo "<P>Minor Name:</P>\n"; echo "<P><INPUT id=textbox1 name=minorname value=\"" . $minorname . "\"></P>\n"; echo "<DIV style=\"TEXT-ALIGN: center\"><INPUT class=submit type=submit value=Create name=operation>\n"; echo "</DIV>\n"; echo "<P>Major Key:</P>\n"; echo "<P><INPUT id=textbox1 name=majorkey value=\"" . $majorkey . "\"></P>\n"; echo "<P>Minor Key:</P>\n"; echo "<P><INPUT id=textbox1 name=minorkey value=\"" . $minorkey . "\"></P>\n"; echo "<DIV style=\"TEXT-ALIGN: center\"><INPUT class=submit type=submit value=Encrypt name=operation> \n"; echo "<INPUT class=submit type=submit value=Decrypt name=operation> </DIV>\n"; echo "<P>Result:</P>\n"; echo "<P><TEXTAREA class=maintextarea name=output rows=25 readOnly cols=25>" . $output . "</TEXTAREA></P></DIV></FORM>\n"; ?> What should be editted for better memory efficiency or faster execution?
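One concrete illustration of the kind of tightening being asked about (a sketch only, with the caveat that PHP 5.4+ ships a native hex2bin(), so the helper below uses a different, made-up name): the is_hex() if/elseif chain and the manual hex-to-binary loop can be collapsed with ctype_xdigit() and pack().

<?php
// Sketch only: condenses is_hex()/hex2bin() above using built-ins.
// Behaviour differences from the original: upper-case hex digits are accepted
// and an empty string returns FALSE instead of "".
function hex_to_bin($str) {
    if (strlen($str) % 2 != 0) {
        $str = "0" . $str;                 // pad odd-length input to whole bytes
    }
    if ($str === "" || !ctype_xdigit($str)) {
        return FALSE;                      // replaces the is_hex() if/elseif chain
    }
    return pack("H*", strtolower($str));   // replaces the chr(hexdec()) loop
}
?>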

    Read the article


< Previous Page | 179 180 181 182 183 184  | Next Page >