Search Results

Search found 11978 results on 480 pages for 'shallow copy'.


  • Higher-order function « filter » in C++

    - by Red Hyena
    Hi all. I wanted to write a higher-order function filter in C++. The code I have come up with so far is as follows:

    ```cpp
    #include <iostream>
    #include <string>
    #include <functional>
    #include <algorithm>
    #include <vector>
    #include <list>
    #include <iterator>

    using namespace std;

    bool isOdd(int const i) {
        return i % 2 != 0;
    }

    template <
        template <class, class> class Container,
        class Predicate,
        class Allocator,
        class A
    >
    Container<A, Allocator> filter(Container<A, Allocator> const & container,
                                   Predicate const & pred) {
        Container<A, Allocator> filtered(container);
        container.erase(remove_if(filtered.begin(), filtered.end(), pred), filtered.end());
        return filtered;
    }

    int main() {
        int const a[] = {23, 12, 78, 21, 97, 64};
        vector<int const> const v(a, a + 6);
        vector<int const> const filtered = filter(v, isOdd);
        copy(filtered.begin(), filtered.end(), ostream_iterator<int const>(cout, " "));
    }
    ```

    However, on compiling this code I get the following error messages, which I am unable to understand and hence get rid of:

    ```
    /usr/include/c++/4.3/ext/new_allocator.h: In instantiation of ‘__gnu_cxx::new_allocator<const int>’:
    /usr/include/c++/4.3/bits/allocator.h:84:   instantiated from ‘std::allocator<const int>’
    /usr/include/c++/4.3/bits/stl_vector.h:75:   instantiated from ‘std::_Vector_base<const int, std::allocator<const int> >’
    /usr/include/c++/4.3/bits/stl_vector.h:176:   instantiated from ‘std::vector<const int, std::allocator<const int> >’
    Filter.cpp:29:   instantiated from here
    /usr/include/c++/4.3/ext/new_allocator.h:82: error: ‘const _Tp* __gnu_cxx::new_allocator<_Tp>::address(const _Tp&) const [with _Tp = const int]’ cannot be overloaded
    /usr/include/c++/4.3/ext/new_allocator.h:79: error: with ‘_Tp* __gnu_cxx::new_allocator<_Tp>::address(_Tp&) const [with _Tp = const int]’
    Filter.cpp: In function ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’:
    Filter.cpp:30:   instantiated from here
    Filter.cpp:23: error: passing ‘const std::vector<const int, std::allocator<const int> >’ as ‘this’ argument of ‘__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> > std::vector<_Tp, _Alloc>::erase(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, __gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >) [with _Tp = const int, _Alloc = std::allocator<const int>]’ discards qualifiers
    /usr/include/c++/4.3/bits/stl_algo.h: In function ‘_FIter std::remove_if(_FIter, _FIter, _Predicate) [with _FIter = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _Predicate = bool (*)(int)]’:
    Filter.cpp:23:   instantiated from ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’
    Filter.cpp:30:   instantiated from here
    /usr/include/c++/4.3/bits/stl_algo.h:821: error: assignment of read-only location ‘__result.__gnu_cxx::__normal_iterator<_Iterator, _Container>::operator* [with _Iterator = const int*, _Container = std::vector<const int, std::allocator<const int> >]()’
    /usr/include/c++/4.3/ext/new_allocator.h: In member function ‘void __gnu_cxx::new_allocator<_Tp>::deallocate(_Tp*, size_t) [with _Tp = const int]’:
    /usr/include/c++/4.3/bits/stl_vector.h:150:   instantiated from ‘void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(_Tp*, size_t) [with _Tp = const int, _Alloc = std::allocator<const int>]’
    /usr/include/c++/4.3/bits/stl_vector.h:136:   instantiated from ‘std::_Vector_base<_Tp, _Alloc>::~_Vector_base() [with _Tp = const int, _Alloc = std::allocator<const int>]’
    /usr/include/c++/4.3/bits/stl_vector.h:286:   instantiated from ‘std::vector<_Tp, _Alloc>::vector(_InputIterator, _InputIterator, const _Alloc&) [with _InputIterator = const int*, _Tp = const int, _Alloc = std::allocator<const int>]’
    Filter.cpp:29:   instantiated from here
    /usr/include/c++/4.3/ext/new_allocator.h:98: error: invalid conversion from ‘const void*’ to ‘void*’
    /usr/include/c++/4.3/ext/new_allocator.h:98: error:   initializing argument 1 of ‘void operator delete(void*)’
    /usr/include/c++/4.3/bits/stl_algobase.h: In function ‘_OI std::__copy_move_a(_II, _II, _OI) [with bool _IsMove = false, _II = const int*, _OI = const int*]’:
    /usr/include/c++/4.3/bits/stl_algobase.h:435:   instantiated from ‘_OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false, _II = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _OI = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >]’
    /usr/include/c++/4.3/bits/stl_algobase.h:466:   instantiated from ‘_OI std::copy(_II, _II, _OI) [with _II = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _OI = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >]’
    /usr/include/c++/4.3/bits/vector.tcc:136:   instantiated from ‘__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> > std::vector<_Tp, _Alloc>::erase(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, __gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >) [with _Tp = const int, _Alloc = std::allocator<const int>]’
    Filter.cpp:23:   instantiated from ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’
    Filter.cpp:30:   instantiated from here
    /usr/include/c++/4.3/bits/stl_algobase.h:396: error: no matching function for call to ‘std::__copy_move<false, true, std::random_access_iterator_tag>::__copy_m(const int*&, const int*&, const int*&)’
    ```

    Please tell me what I am doing wrong here and what is the correct way to achieve the kind of higher-order polymorphism I want. Thanks.
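
    For reference, a minimal sketch of a version that compiles, offered as one possible fix rather than the canonical one. Two changes are involved: the erase must be called on the copy (`filtered`), not the const source (`container`), and `std::vector` cannot hold `const` elements (the allocator errors above all stem from `std::allocator<const int>`):

    ```cpp
    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <vector>

    bool isOdd(int i) { return i % 2 != 0; }

    // Copy the container, then erase-remove on the copy; the source stays const.
    // Elements matching pred are removed, mirroring the semantics of the
    // remove_if call in the question.
    template <template <class, class> class Container,
              class Predicate, class Allocator, class A>
    Container<A, Allocator> filter(Container<A, Allocator> const& container,
                                   Predicate pred)
    {
        Container<A, Allocator> filtered(container);
        filtered.erase(std::remove_if(filtered.begin(), filtered.end(), pred),
                       filtered.end());
        return filtered;
    }

    int main()
    {
        int const a[] = {23, 12, 78, 21, 97, 64};
        std::vector<int> v(a, a + 6);            // element type must be non-const
        std::vector<int> evens = filter(v, isOdd);
        std::copy(evens.begin(), evens.end(),
                  std::ostream_iterator<int>(std::cout, " "));  // prints 12 78 64
    }
    ```

    Taking the predicate by value instead of by const reference also sidesteps the `Predicate = bool ()(int)` function-type deduction visible in the error log, since a function name passed by value decays to a plain function pointer.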


  • How do I debug this JavaScript? I don't get an error in Firebug, but it's not working as expected.

    - by Angela
    I installed the plugin better-edit-in-place (http://github.com/nakajima/better-edit-in-place) but I don't seem to be able to make it work. The plugin creates JavaScript, and also automatically creates a rel and class. The expected behavior is an edit-in-place, but currently nothing happens when I mouse over. When I use Firebug, it renders the value to be edited correctly:

    ```html
    <span rel="/emails/1" id="email_1_days" class="editable">7</span>
    ```

    And it shows the full JavaScript, which should work on the class editable. I didn't copy everything, just the chunks that seem like they should be operational if I have the class name in the DOM.

    ```javascript
    // Editable: Better in-place-editing
    // http://github.com/nakajima/nakatype/wikis/better-edit-in-place-editable-js
    var Editable = Class.create({
      initialize: function(element, options) {
        this.element = $(element);
        Object.extend(this, options);
        // Set default values for options
        this.editField = this.editField || {};
        this.editField.type = this.editField.type || 'input';
        this.onLoading = this.onLoading || Prototype.emptyFunction;
        this.onComplete = this.onComplete || Prototype.emptyFunction;
        this.field = this.parseField();
        this.value = this.element.innerHTML;
        this.setupForm();
        this.setupBehaviors();
      },

      // In order to parse the field correctly, it's necessary that the element
      // you want to edit in place for have an id of (model_name)_(id)_(field_name).
      // For example, if you want to edit the "caption" field in a "Photo" model,
      // your id should be something like "photo_#{@photo.id}_caption".
      // If you want to edit the "comment_body" field in a "MemberBlogPost" model,
      // it would be: "member_blog_post_#{@member_blog_post.id}_comment_body"
      parseField: function() {
        var matches = this.element.id.match(/(.*)_\d*_(.*)/);
        this.modelName = matches[1];
        this.fieldName = matches[2];
        if (this.editField.foreignKey) this.fieldName += '_id';
        return this.modelName + '[' + this.fieldName + ']';
      },

      // Create the editing form for the editable and insert it after the element.
      // If window._token is defined, then we add a hidden element that contains the
      // authenticity_token for the AJAX request.
      setupForm: function() {
        this.editForm = new Element('form', {
          'action': this.element.readAttribute('rel'),
          'style': 'display:none',
          'class': 'in-place-editor'
        });
        this.setupInputElement();
        if (this.editField.tag != 'select') {
          this.saveInput = new Element('input', { type: 'submit', value: Editable.options.saveText });
          if (this.submitButtonClass) this.saveInput.addClassName(this.submitButtonClass);
          this.cancelLink = new Element('a', { href: '#' }).update(Editable.options.cancelText);
          if (this.cancelButtonClass) this.cancelLink.addClassName(this.cancelButtonClass);
        }
        var methodInput = new Element('input', { type: 'hidden', value: 'put', name: '_method' });
        if (typeof(window._token) != 'undefined') {
          this.editForm.insert(new Element('input', {
            type: 'hidden', value: window._token, name: 'authenticity_token'
          }));
        }
        this.editForm.insert(this.editField.element);
        if (this.editField.type != 'select') {
          this.editForm.insert(this.saveInput);
          this.editForm.insert(this.cancelLink);
        }
        this.editForm.insert(methodInput);
        this.element.insert({ after: this.editForm });
      },

      // Create input element - text input, text area or select box.
      setupInputElement: function() {
        this.editField.element = new Element(this.editField.type, {
          'name': this.field,
          'id': ('edit_' + this.element.id)
        });
        if (this.editField['class']) this.editField.element.addClassName(this.editField['class']);
        if (this.editField.type == 'select') {
          // Create options
          var options = this.editField.options.map(function(option) {
            return new Option(option[0], option[1]);
          });
          // And assign them to select element
          options.each(function(option, index) {
            this.editField.element.options[index] = options[index];
          }.bind(this));
          // Set selected option
          try {
            this.editField.element.selectedIndex = $A(this.editField.element.options).find(function(option) {
              return option.text == this.element.innerHTML;
            }.bind(this)).index;
          } catch(e) {
            this.editField.element.selectedIndex = 0;
          }
          // Set event handlers to automatically submit form when option is changed
          this.editField.element.observe('blur', this.cancel.bind(this));
          this.editField.element.observe('change', this.save.bind(this));
        } else {
          // Copy value of the element to the input
          this.editField.element.value = this.element.innerHTML;
        }
      },

      // Sets up event handlers for editable.
      setupBehaviors: function() {
        this.element.observe('click', this.edit.bindAsEventListener(this));
        if (this.saveInput) this.editForm.observe('submit', this.save.bindAsEventListener(this));
        if (this.cancelLink) this.cancelLink.observe('click', this.cancel.bindAsEventListener(this));
      },

      // Event handler that activates form and hides element.
      edit: function(event) {
        this.element.hide();
        this.editForm.show();
        this.editField.element.activate ? this.editField.element.activate() : this.editField.element.focus();
        if (event) event.stop();
      },

      // Event handler that makes request to server, then handles a JSON response.
      save: function(event) {
        var pars = this.editForm.serialize(true);
        var url = this.editForm.readAttribute('action');
        this.editForm.disable();
        new Ajax.Request(url + ".json", {
          method: 'put',
          parameters: pars,
          onSuccess: function(transport) {
            var json = transport.responseText.evalJSON();
            var value;
            if (json[this.modelName]) {
              value = json[this.modelName][this.fieldName];
            } else {
              value = json[this.fieldName];
            }
            // If we're using foreign key, read value from the form
            // instead of displaying foreign key ID
            if (this.editField.foreignKey) {
              value = $A(this.editField.element.options).find(function(option) {
                return option.value == value;
              }).text;
            }
            this.value = value;
            this.editField.element.value = this.value;
            this.element.update(this.value);
            this.editForm.enable();
            if (Editable.afterSave) { Editable.afterSave(this); }
            this.cancel();
          }.bind(this),
          onFailure: function(transport) {
            this.cancel();
            alert("Your change could not be saved.");
          }.bind(this),
          onLoading: this.onLoading.bind(this),
          onComplete: this.onComplete.bind(this)
        });
        if (event) { event.stop(); }
      },

      // Event handler that restores original editable value and hides form.
      cancel: function(event) {
        this.element.show();
        this.editField.element.value = this.value;
        this.editForm.hide();
        if (event) { event.stop(); }
      },

      // Removes editable behavior from an element.
      clobber: function() {
        this.element.stopObserving('click');
        try {
          this.editForm.remove();
          delete(this);
        } catch(e) {
          delete(this);
        }
      }
    });

    // Editable class methods.
    Object.extend(Editable, {
      options: {
        saveText: 'Save',
        cancelText: 'Cancel'
      },
      create: function(element) {
        new Editable(element);
      },
      setupAll: function(klass) {
        klass = klass || '.editable';
        $$(klass).each(Editable.create);
      }
    });
    ```

    But when I point my mouse at the element, no in-place-editing action!
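
    One thing the question never shows is the initialization call. Editable only attaches behavior to elements it has been instantiated for, via Editable.setupAll() or Editable.create(), and the handler it wires is click, not mouseover. A first debugging step, offered as a sketch rather than a confirmed fix (assuming the Prototype library the plugin is built on):

    ```javascript
    // Hypothetical initializer: nothing in the plugin runs by itself, so confirm
    // something like this fires once the DOM is ready. Each matching element
    // then gets a click (not mouseover) handler that swaps in the edit form.
    document.observe('dom:loaded', function() {
      Editable.setupAll('.editable');
    });
    ```

    If that handler is already in place, a breakpoint inside Editable#initialize would show whether parseField() throws on the element id before setupBehaviors() ever runs.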


  • Optimizing sorting a container of objects with heap-allocated buffers - how to avoid hard-copying buffers

    - by Kache4
    I was making sure I knew how to do the op= and copy constructor correctly in order to sort() properly, so I wrote up a test case. After getting it to work, I realized that the op= was hard-copying all the data_. I figure if I wanted to sort a container with this structure (its elements have heap-allocated char buffer arrays), it'd be faster to just swap the pointers around. Is there a way to do that? Would I have to write my own sort/swap function?

    ```cpp
    #include <deque>
    //#include <string>
    //#include <utility>
    //#include <cstdlib>
    #include <cstring>
    #include <iostream>
    //#include <algorithm> // I use sort(), so why does this still compile when commented out?
    #include <boost/filesystem.hpp>
    #include <boost/foreach.hpp>

    using namespace std;
    namespace fs = boost::filesystem;

    class Page {
    public:
        // constructor
        Page(const char* path, const char* data, int size)
            : path_(fs::path(path)), size_(size), data_(new char[size]) {
            // cout << "Creating Page..." << endl;
            strncpy(data_, data, size);
            // cout << "done creating Page..." << endl;
        }

        // copy constructor
        Page(const Page& other)
            : path_(fs::path(other.path())), size_(other.size()), data_(new char[other.size()]) {
            // cout << "Copying Page..." << endl;
            strncpy(data_, other.data(), size_);
            // cout << "done copying Page..." << endl;
        }

        // destructor
        ~Page() { delete[] data_; }

        // accessors
        const fs::path& path() const { return path_; }
        const char* data() const { return data_; }
        int size() const { return size_; }

        // operators
        Page& operator = (const Page& other) {
            if (this == &other) return *this;
            char* newImage = new char[other.size()];
            strncpy(newImage, other.data(), other.size());
            delete[] data_;
            data_ = newImage;
            path_ = fs::path(other.path());
            size_ = other.size();
            return *this;
        }

        bool operator < (const Page& other) const { return path_ < other.path(); }

    private:
        fs::path path_;
        int size_;
        char* data_;
    };

    class Book {
    public:
        Book(const char* path) : path_(fs::path(path)) {
            cout << "Creating Book..." << endl;
            cout << "pushing back #1" << endl;
            pages_.push_back(Page("image1.jpg", "firstImageData", 14));
            cout << "pushing back #3" << endl;
            pages_.push_back(Page("image3.jpg", "thirdImageData", 14));
            cout << "pushing back #2" << endl;
            pages_.push_back(Page("image2.jpg", "secondImageData", 15));

            cout << "testing operator <" << endl;
            cout << pages_[0].path().string() << (pages_[0] < pages_[1] ? " < " : " > ") << pages_[1].path().string() << endl;
            cout << pages_[1].path().string() << (pages_[1] < pages_[2] ? " < " : " > ") << pages_[2].path().string() << endl;
            cout << pages_[0].path().string() << (pages_[0] < pages_[2] ? " < " : " > ") << pages_[2].path().string() << endl;

            cout << "sorting" << endl;
            BOOST_FOREACH (Page p, pages_) cout << p.path().string() << endl;
            sort(pages_.begin(), pages_.end());
            cout << "done sorting\n";
            BOOST_FOREACH (Page p, pages_) cout << p.path().string() << endl;

            cout << "checking datas" << endl;
            BOOST_FOREACH (Page p, pages_) {
                char data[p.size() + 1];
                strncpy((char*)&data, p.data(), p.size());
                data[p.size()] = '\0';
                cout << p.path().string() << " " << data << endl;
            }
            cout << "done Creating Book" << endl;
        }

    private:
        deque<Page> pages_;
        fs::path path_;
    };

    int main() {
        Book* book = new Book("/some/path/");
    }
    ```
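
    One way to avoid the hard copies without touching Page at all, offered as a sketch rather than the only answer, is to sort indirectly: leave the elements where they are and sort pointers to them, so the sort only shuffles pointer-sized values around (assuming the Page class from the question):

    ```cpp
    #include <algorithm>
    #include <deque>

    // Compare through the pointers so the order matches Page::operator<.
    struct PagePtrLess {
        bool operator()(const Page* a, const Page* b) const { return *a < *b; }
    };

    // Sorting a deque of pointers never copies a Page or its heap buffer;
    // only the pointers move.
    void sortPages(std::deque<Page*>& pages) {
        std::sort(pages.begin(), pages.end(), PagePtrLess());
    }
    ```

    In C++11 and later the same effect falls out of giving Page a move constructor and move assignment operator; std::sort then transfers the data_ pointer instead of copying the buffer. Note also that `BOOST_FOREACH (Page p, pages_)` copies every element on each pass; iterating with `Page& p` (or `const Page& p`) avoids those extra copy-constructor calls.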


  • 47 memory leaks. STL pointers.

    - by icelated
    I have a major amount of memory leaks. I know that the sets have pointers and I cannot change that! I cannot change anything, but I can clean up the mess I have... I am creating memory with new in just about every function to add information to the sets. I have CD/DVD/Book classes, subclasses of an Item class, and a Library class. In the Library class I have two functions for cleaning up the sets. Also, the CD, DVD and Book destructors are not being called. Here are my potential leaks.

    library.h:

    ```cpp
    #pragma once
    #include <ostream>
    #include <map>
    #include <set>
    #include <string>
    #include "Item.h"

    using namespace std;

    typedef set<Item*> ItemSet;
    typedef map<string, Item*> ItemMap;
    typedef map<string, ItemSet*> ItemSetMap;

    class Library {
    public:
        // general functions
        void addKeywordForItem(const Item* const item, const string& keyword);
        const ItemSet* itemsForKeyword(const string& keyword) const;
        void printItem(ostream& out, const Item* const item) const;

        // book-related functions
        const Item* addBook(const string& title, const string& author, int const nPages);
        const ItemSet* booksByAuthor(const string& author) const;
        const ItemSet* books() const;

        // music-related functions
        const Item* addMusicCD(const string& title, const string& band, const int nSongs);
        void addBandMember(const Item* const musicCD, const string& member);
        const ItemSet* musicByBand(const string& band) const;
        const ItemSet* musicByMusician(const string& musician) const;
        const ItemSet* musicCDs() const;

        // movie-related functions
        const Item* addMovieDVD(const string& title, const string& director, const int nScenes);
        void addCastMember(const Item* const movie, const string& member);
        const ItemSet* moviesByDirector(const string& director) const;
        const ItemSet* moviesByActor(const string& actor) const;
        const ItemSet* movies() const;

        ~Library();
        void Purge(ItemSet& set);
        void Purge(ItemSetMap& map);
    };
    ```

    Here are some functions that add info using new, from library.cpp. Keep in mind I am cutting out a lot of code to keep this post short (closing braces added where the post cut off):

    ```cpp
    #include "Library.h"
    #include "book.h"
    #include "cd.h"
    #include "dvd.h"
    #include <iostream>

    // general functions
    ItemSet allBooks;
    ItemSet allCDS;
    ItemSet allDVDs;
    ItemSetMap allBooksByAuthor;
    ItemSetMap allmoviesByDirector;
    ItemSetMap allmoviesByActor;
    ItemSetMap allMusicByBand;
    ItemSetMap allMusicByMusician;

    const ItemSet* Library::itemsForKeyword(const string& keyword) const {
        const StringSet* kw;
        ItemSet* obj = new ItemSet();
        return obj;
    }

    const Item* Library::addBook(const string& title, const string& author, const int nPages) {
        ItemSet* obj = new ItemSet();
        Book* item = new Book(title, author, nPages);
        allBooks.insert(item); // add to set of all books
        obj->insert(item);
        return item;
    }

    const Item* Library::addMusicCD(const string& title, const string& band, const int nSongs) {
        ItemSet* obj = new ItemSet();
        CD* item = new CD(title, band, nSongs);
        return item;
    }

    void Library::addBandMember(const Item* musicCD, const string& member) {
        ItemSet* obj = new ItemSet();
        (((CD*) musicCD)->addBandMember(member));
        obj->insert((CD*) musicCD);
    }
    ```

    Here is the Library destructor:

    ```cpp
    Library::~Library() {
        Purge(allBooks);
        Purge(allCDS);
        Purge(allDVDs);
        Purge(allBooksByAuthor);
        Purge(allmoviesByDirector);
        Purge(allmoviesByActor);
        Purge(allMusicByBand);
        Purge(allMusicByMusician);
    }

    void Library::Purge(ItemSet& set) {
        for (ItemSet::iterator it = set.begin(); it != set.end(); ++it)
            delete *it;
        set.clear();
    }

    void Library::Purge(ItemSetMap& map) {
        for (ItemSetMap::iterator it = map.begin(); it != map.end(); ++it)
            delete it->second;
        map.clear();
    }
    ```

    So, basically the Item, CD and DVD classes all have a set like this:

    ```cpp
    typedef set<string> StringSet;

    class CD : public Item {
        StringSet* music;
    };
    ```

    and I am deleting it like this, but those subclass destructors are not being called (the Item destructor is):

    ```cpp
    CD::~CD() { delete music; }
    ```

    Do I need a copy constructor, and how do I delete those objects I am creating in the Library class? How can I get the CD and DVD destructors called? Would the addBandMember function located in library.cpp require me to have a copy constructor? Any real help you can provide to clean up this mess, instead of telling me not to use pointers in my sets, I would really appreciate. How can I delete the memory I am creating in those functions? I cannot delete them in the function!!
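
    The reported symptom — the Item destructor runs, but the CD/DVD/Book destructors do not when Purge deletes through Item* — is the classic sign of a non-virtual base-class destructor. A minimal sketch of the fix, using hypothetical class bodies reduced from the question:

    ```cpp
    #include <set>
    #include <string>

    typedef std::set<std::string> StringSet;

    class Item {
    public:
        virtual ~Item() {}   // virtual: delete through Item* now runs the derived dtor
    };

    class CD : public Item {
    public:
        CD() : music(new StringSet) {}
        ~CD() { delete music; }   // reached once ~Item() is virtual
    private:
        StringSet* music;
        // Owning a raw pointer means the compiler-generated copy would be
        // shallow and cause a double delete; declare-but-don't-define to
        // forbid copying (C++03 style), instead of writing a copy constructor.
        CD(const CD&);
        CD& operator=(const CD&);
    };
    ```

    Separately, each `ItemSet* obj = new ItemSet();` in the functions above is stored nowhere once the function returns, so those sets leak regardless of the destructor fix; either insert them into one of the ItemSetMap members (which Purge already deletes) or don't allocate them at all.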


  • Is it possible to pass a structure of delegates from managed to native?

    - by Veiva
    I am writing a wrapper for the game programming library "Allegro" and its less stable 4.9 branch. I have done well so far, except when it comes to wrapping a structure of function pointers. Basically, I can't change the original code, despite having access to it, because that would require me to fork it in some manner. I need to know how I can pass a structure of delegates from managed to native without causing the AccessViolationException that has occurred so far. Now, for the code. Here is the Allegro definition of the structure:

    ```c
    typedef struct ALLEGRO_FILE_INTERFACE {
        AL_METHOD(ALLEGRO_FILE*, fi_fopen, (const char *path, const char *mode));
        AL_METHOD(void, fi_fclose, (ALLEGRO_FILE *handle));
        AL_METHOD(size_t, fi_fread, (ALLEGRO_FILE *f, void *ptr, size_t size));
        AL_METHOD(size_t, fi_fwrite, (ALLEGRO_FILE *f, const void *ptr, size_t size));
        AL_METHOD(bool, fi_fflush, (ALLEGRO_FILE *f));
        AL_METHOD(int64_t, fi_ftell, (ALLEGRO_FILE *f));
        AL_METHOD(bool, fi_fseek, (ALLEGRO_FILE *f, int64_t offset, int whence));
        AL_METHOD(bool, fi_feof, (ALLEGRO_FILE *f));
        AL_METHOD(bool, fi_ferror, (ALLEGRO_FILE *f));
        AL_METHOD(int, fi_fungetc, (ALLEGRO_FILE *f, int c));
        AL_METHOD(off_t, fi_fsize, (ALLEGRO_FILE *f));
    } ALLEGRO_FILE_INTERFACE;
    ```

    My simple attempt at wrapping it:

    ```csharp
    public delegate IntPtr AllegroInternalOpenFileDelegate(string path, string mode);
    public delegate void AllegroInternalCloseFileDelegate(IntPtr file);
    public delegate int AllegroInternalReadFileDelegate(IntPtr file, IntPtr data, int size);
    public delegate int AllegroInternalWriteFileDelegate(IntPtr file, IntPtr data, int size);
    public delegate bool AllegroInternalFlushFileDelegate(IntPtr file);
    public delegate long AllegroInternalTellFileDelegate(IntPtr file);
    public delegate bool AllegroInternalSeekFileDelegate(IntPtr file, long offset, int where);
    public delegate bool AllegroInternalIsEndOfFileDelegate(IntPtr file);
    public delegate bool AllegroInternalIsErrorFileDelegate(IntPtr file);
    public delegate int AllegroInternalUngetCharFileDelegate(IntPtr file, int c);
    public delegate long AllegroInternalFileSizeDelegate(IntPtr file);

    [StructLayout(LayoutKind.Sequential, Pack = 0)]
    public struct AllegroInternalFileInterface
    {
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalOpenFileDelegate fi_fopen;
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalCloseFileDelegate fi_fclose;
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalReadFileDelegate fi_fread;
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalWriteFileDelegate fi_fwrite;
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalFlushFileDelegate fi_fflush;
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalTellFileDelegate fi_ftell;
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalSeekFileDelegate fi_fseek;
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalIsEndOfFileDelegate fi_feof;
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalIsErrorFileDelegate fi_ferror;
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalUngetCharFileDelegate fi_fungetc;
        [MarshalAs(UnmanagedType.FunctionPtr)]
        public AllegroInternalFileSizeDelegate fi_fsize;
    }
    ```

    I have a simple auxiliary wrapper that turns an ALLEGRO_FILE_INTERFACE into an ALLEGRO_FILE, like so:

    ```c
    #define ALLEGRO_NO_MAGIC_MAIN
    #include <allegro5/allegro5.h>
    #include <stdlib.h>
    #include <string.h>
    #include <assert.h>

    __declspec(dllexport) ALLEGRO_FILE * al_aux_create_file(ALLEGRO_FILE_INTERFACE * fi)
    {
        ALLEGRO_FILE * file;

        assert(fi && "`fi' null");

        file = (ALLEGRO_FILE *)malloc(sizeof(ALLEGRO_FILE));
        if (!file)
            return NULL;

        file->vtable = (ALLEGRO_FILE_INTERFACE *)malloc(sizeof(ALLEGRO_FILE_INTERFACE));
        if (!(file->vtable)) {
            free(file);
            return NULL;
        }

        memcpy(file->vtable, fi, sizeof(ALLEGRO_FILE_INTERFACE));
        return file;
    }

    __declspec(dllexport) void al_aux_destroy_file(ALLEGRO_FILE * f)
    {
        assert(f && "`f' null");
        assert(f->vtable && "`f->vtable' null");
        free(f->vtable);
        free(f);
    }
    ```

    Lastly, I have a class that accepts a Stream and provides the proper methods to interact with the stream. Just to make sure, here it is:

    ```csharp
    /// <summary>
    /// A semi-opaque data type that allows one to load fonts, etc. from a stream.
    /// </summary>
    public class AllegroFile : AllegroResource, IDisposable
    {
        AllegroInternalFileInterface fileInterface;
        Stream fileStream;

        /// <summary>
        /// Gets the file interface.
        /// </summary>
        internal AllegroInternalFileInterface FileInterface
        {
            get { return fileInterface; }
        }

        /// <summary>
        /// Constructs an Allegro file from the stream provided.
        /// </summary>
        /// <param name="stream">The stream to use.</param>
        public AllegroFile(Stream stream)
        {
            fileStream = stream;

            fileInterface = new AllegroInternalFileInterface();
            fileInterface.fi_fopen = Open;
            fileInterface.fi_fclose = Close;
            fileInterface.fi_fread = Read;
            fileInterface.fi_fwrite = Write;
            fileInterface.fi_fflush = Flush;
            fileInterface.fi_ftell = GetPosition;
            fileInterface.fi_fseek = Seek;
            fileInterface.fi_feof = GetIsEndOfFile;
            fileInterface.fi_ferror = GetIsError;
            fileInterface.fi_fungetc = UngetCharacter;
            fileInterface.fi_fsize = GetLength;

            Resource = AllegroFunctions.al_aux_create_file(ref fileInterface);
            if (!IsValid)
                throw new AllegroException("Unable to create file");
        }

        /// <summary>
        /// Disposes of all resources.
        /// </summary>
        ~AllegroFile()
        {
            Dispose();
        }

        /// <summary>
        /// Disposes of all resources used.
        /// </summary>
        public void Dispose()
        {
            if (IsValid)
            {
                Resource = IntPtr.Zero; // Should call AllegroFunctions.al_aux_destroy_file
                fileStream.Dispose();
            }
        }

        IntPtr Open(string path, string mode) { return IntPtr.Zero; }
        void Close(IntPtr file) { fileStream.Close(); }

        int Read(IntPtr file, IntPtr data, int size)
        {
            byte[] d = new byte[size];
            int read = fileStream.Read(d, 0, size);
            Marshal.Copy(d, 0, data, size);
            return read;
        }

        int Write(IntPtr file, IntPtr data, int size)
        {
            byte[] d = new byte[size];
            Marshal.Copy(data, d, 0, size);
            fileStream.Write(d, 0, size);
            return size;
        }

        bool Flush(IntPtr file) { fileStream.Flush(); return true; }
        long GetPosition(IntPtr file) { return fileStream.Position; }

        bool Seek(IntPtr file, long offset, int whence)
        {
            SeekOrigin origin = SeekOrigin.Begin;
            if (whence == 1)
                origin = SeekOrigin.Current;
            else if (whence == 2)
                origin = SeekOrigin.End;
            fileStream.Seek(offset, origin);
            return true;
        }

        bool GetIsEndOfFile(IntPtr file) { return fileStream.Position == fileStream.Length; }
        bool GetIsError(IntPtr file) { return false; }
        int UngetCharacter(IntPtr file, int character) { return -1; }
        long GetLength(IntPtr file) { return fileStream.Length; }
    }
    ```

    Now, when I do something like this:

    ```csharp
    AllegroFile file = new AllegroFile(new FileStream("Test.bmp", FileMode.Create, FileAccess.ReadWrite));
    bitmap.SaveToFile(file, ".bmp");
    ```

    ...I get an AccessViolationException. I think I understand why (the garbage collector can relocate structs and classes whenever it likes), but I'd have thought that the method stub created by the framework would take this into consideration and route the calls to the valid objects. However, it seems obvious that I'm wrong. So basically, is there any way I can successfully wrap that structure? (And I'm sorry for all the code! Hope it's not too much...)
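
    Two things are worth ruling out here, offered as suggestions rather than a confirmed diagnosis. First, Allegro is a plain C library, so its function pointers will normally use the cdecl calling convention, while .NET marshals delegates as stdcall unless told otherwise; on x86 that mismatch alone can produce exactly this AccessViolationException. Second, the native side keeps the function pointers after al_aux_create_file returns, so every delegate must stay referenced on the managed side for the native object's whole lifetime — the marshaling thunk of a collected delegate is a dangling pointer. A sketch, reusing one delegate type from the question (the cdecl assumption should be verified against how the Allegro DLL was built):

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    // Assumption: the library is cdecl. The default delegate marshaling is
    // StdCall; a convention mismatch corrupts the stack on x86 and surfaces
    // as an AccessViolationException. The same attribute would go on all
    // eleven delegate types in the question.
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    public delegate long AllegroInternalFileSizeDelegate(IntPtr file);

    public static class DelegateLifetimeNote
    {
        // Rooting the delegate (here in a static field; an instance field on a
        // wrapper that provably outlives the native object also works) keeps
        // its native thunk valid for as long as C code may call it.
        static AllegroInternalFileSizeDelegate keepAlive;

        public static void Register(AllegroInternalFileSizeDelegate d)
        {
            keepAlive = d;
        }
    }
    ```

    The instance field in the question's AllegroFile class does root the delegates, but only while that object is alive; the finalizer shown can run (and tear the resource down) as soon as the GC decides the wrapper is unreachable, even if native code still holds the function pointers.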


  • C socket programming: select() is returning 0 despite messages sent from server

    - by Fantastic Fourier
    Hey all, I'm using select() to recv() messages from a server, over TCP/IP. When I send() messages from the server, it returns a reasonable number of bytes, saying the send was successful. And the messages do get to the client successfully when I use a while loop to just recv():

    ```c
    while (1)
        recv();   /* obviously pseudocode */
    ```

    Everything is fine and dandy. However, when I try to use select(), select() returns 0 from timeout (which is set to 1 second) and for the life of me I cannot figure out why it doesn't see the messages sent from the server. I should also mention that when the server disconnects, select() doesn't see that either, whereas if I were to use recv(), it would return 0 to indicate that the connection using the socket has been closed. Any input or thoughts are deeply appreciated.

    ```c
    #include <arpa/inet.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <strings.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    #define SERVER_PORT 10000
    #define MAX_CONNECTION 20
    #define MAX_MSG 50

    struct client {
        char c_name[MAX_MSG];
        char g_name[MAX_MSG];
        int csock;
        int host; /* 0 = not host of a multicast group */
        struct sockaddr_in client_address;
        struct client * next_host;
        struct client * next_client;
    };

    struct fd_info {
        char c_name[MAX_MSG];
        int socks_inuse[MAX_CONNECTION];
        int sock_fd, max_fd;
        int exit;
        struct client * c_sys;
        struct sockaddr_in c_address[MAX_CONNECTION];
        struct sockaddr_in server_address;
        struct sockaddr_in client_address;
        fd_set read_set;
    };

    struct message {
        char c_name[MAX_MSG];
        char g_name[MAX_MSG];
        char _command[3][MAX_MSG];
        char _payload[MAX_MSG];
        struct sockaddr_in client_address;
        struct client peer;
    };

    int main(int argc, char * argv[])
    {
        char * host;
        char * temp;
        int i, sockfd;
        int msg_len, rv, ready;
        int connection, management, socketread;
        int sockfds[MAX_CONNECTION];

        /* for three threads that handle new connections, user inputs and select() for sockets */
        pthread_t connection_handler, manager, socket_reader;

        struct sockaddr_in server_address, client_address;
        struct hostent * hserver, cserver;
        struct timeval timeout;
        struct message msg;
        struct fd_info info;

        info.exit = 0;    /* exit information: if exit = 1, threads quit */
        info.c_sys = NULL;

        /* looking up from the host database */
        if (argc == 3) {
            host = argv[1];                                 /* server address */
            strncpy(info.c_name, argv[2], strlen(argv[2])); /* client name */
        } else {
            printf("plz read the manual, kthxbai\n");
            exit(1);
        }

        printf("host is %s and hp is %p\n", host, hserver);
        hserver = gethostbyname(host);
        if (hserver) {
            printf("host found: %s\n", hserver->h_name);
        } else {
            printf("host not found\n");
            exit(1);
        }

        /* setting up address and port structure information on server side */
        bzero((char *) &server_address, sizeof(server_address)); /* copy zeroes into string */
        server_address.sin_family = AF_INET;
        memcpy(&server_address.sin_addr, hserver->h_addr, hserver->h_length);
        server_address.sin_port = htons(SERVER_PORT);

        bzero((char *) &client_address, sizeof(client_address));
        client_address.sin_family = AF_INET;
        client_address.sin_addr.s_addr = htonl(INADDR_ANY);
        client_address.sin_port = htons(SERVER_PORT);

        /* opening up socket */
        sockfd = socket(AF_INET, SOCK_STREAM, 0);
        if (sockfd < 0)
            exit(1);
        else {
            printf("socket is opened: %i \n", sockfd);
            info.sock_fd = sockfd;
        }

        /* sets up time out option for the bound socket */
        timeout.tv_sec = 1;  /* seconds */
        timeout.tv_usec = 0; /* microseconds */
        setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(struct timeval));

        /* binding socket to a port */
        rv = bind(sockfd, (struct sockaddr *) &client_address, sizeof(client_address));
        if (rv < 0) {
            printf("MAIN: ERROR bind() %i: %s\n", errno, strerror(errno));
            exit(1);
        } else
            printf("socket is bound\n");

        printf("MAIN: %li \n", client_address.sin_addr.s_addr);

        /* connecting */
        rv = connect(sockfd, (struct sockaddr *) &server_address, sizeof(server_address));
        info.server_address = server_address;
        info.client_address = client_address;
        info.sock_fd = sockfd;
        info.max_fd = sockfd;

        printf("rv = %i\n", rv);
        if (rv < 0) {
            printf("MAIN: ERROR connect() %i: %s\n", errno, strerror(errno));
            exit(1);
        } else
            printf("connected\n");

        fd_set readset;
        FD_ZERO(&readset);
        FD_ZERO(&info.read_set);
        FD_SET(info.sock_fd, &info.read_set);

        while (1) {
            readset = info.read_set;
            printf("MAIN: %i \n", readset);
            ready = select((info.max_fd) + 1, &readset, NULL, NULL, &timeout);
            if (ready == -1) {
                sleep(2);
                printf("TEST: MAIN: ready = -1. %s \n", strerror(errno));
            } else if (ready == 0) {
                sleep(2);
                printf("TEST: MAIN: ready = 0. %s \n", strerror(errno));
            } else if (ready > 0) {
                printf("TEST: MAIN: ready = %i. %s at socket %i \n", ready, strerror(errno), i);
                for (i = 0; i < ((info.max_fd) + 1); i++) {
                    if (FD_ISSET(i, &readset)) {
                        rv = recv(sockfd, &msg, 500, 0);
                        if (rv < 0)
                            continue;
                        else if (rv > 0)
                            printf("MAIN: TEST: %s %s \n", msg._command[0], msg._payload);
                        else if (rv == 0) {
                            sleep(3);
                            printf("MAIN: TEST: SOCKET CLOSEDDDDDD \n");
                        }
                        FD_CLR(i, &readset);
                    }
                }
            }
            info.read_set = readset;
        }

        /* close connection */
        close(sockfd);
        printf("socket closed. BYE! \n");
        return 0;
    }
    ```
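
    For comparison, a minimal sketch of a select()-based receive loop. It handles two details the code above does not: on Linux, select() may modify both the fd_set and the struct timeval it is given, so both are rebuilt before every call (above, timeout is never reset after select() counts it down, and descriptors are FD_CLRed out of the saved set over time); and a closed connection shows up in select() as the socket becoming readable, with the subsequent recv() returning 0.

    ```c
    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Watch one connected socket; reinitialize the fd_set and the timeout
     * on every iteration, since select() may modify both. */
    int receive_loop(int sockfd)
    {
        char buf[512];

        for (;;) {
            fd_set readset;
            struct timeval timeout;

            FD_ZERO(&readset);
            FD_SET(sockfd, &readset);
            timeout.tv_sec = 1;
            timeout.tv_usec = 0;

            int ready = select(sockfd + 1, &readset, NULL, NULL, &timeout);
            if (ready < 0) {
                perror("select");
                return -1;
            }
            if (ready == 0)
                continue;            /* timed out: nothing readable yet */

            if (FD_ISSET(sockfd, &readset)) {
                ssize_t n = recv(sockfd, buf, sizeof buf, 0);
                if (n == 0) {        /* closure is reported as readable + recv()==0 */
                    printf("peer closed the connection\n");
                    return 0;
                }
                if (n > 0)
                    printf("received %zd bytes\n", n);
            }
        }
    }
    ```

    It is also worth noting that the SO_RCVTIMEO option set above is independent of select(); it only affects blocking recv() calls, so it is not needed at all once select() supplies the timeout.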


  • jQuery memory game: run a function when $('.opened').length equals a number

    - by Carl Papworth
    So I'm trying to change the a.heart when there are 24 td.opened elements. I'm not sure what's going wrong, though, since nothing's happening.

    HTML:

    ```html
    <body>
      <header>
        <div id="headerTitle"><a href="index.html">&lt;html<span class="heart">&hearts;</span>ve&gt;</a></div>
        <div id="help">
          <h2>?</h2>
          <div id="helpInfo">
            <p>How many tiles are there? Let's see [calculating] 25...</p>
          </div>
        </div>
      </header>
      <div id="reward">
        <div id="rewardContainer">
          <div id="rewardBG" class="heart">&hearts; </div>
          <p>OMG, this must be luv<br><a href="index.html" class="exit">x</a></p>
        </div>
      </div>
      <div id="pageWrap">
        <div id="mainContent">
          <!-- DON'T BE A CHEATER !-->
          <table id="memory">
            <tr>
              <td class="pair1"><a>&Psi;</a></td>
              <td class="pair2"><a>&para;</a></td>
              <td class="pair3"><a>&Xi;</a></td>
              <td class="pair1"><a>&Psi;</a></td>
              <td class="pair4"><a>&otimes;</a></td>
            </tr>
            <tr>
              <td class="pair5"><a>&spades;</a></td>
              <td class="pair6"><a>&Phi;</a></td>
              <td class="pair7"><a>&sect;</a></td>
              <td class="pair8"><a>&clubs;</a></td>
              <td class="pair4"><a>&otimes;</a></td>
            </tr>
            <tr>
              <td class="pair9"><a>&Omega;</a></td>
              <td class="pair2"><a>&para;</a></td>
              <td id="goal"><a href="#reward" class="heart">&hearts;</a></td>
              <td class="pair10"><a>&copy;</a></td>
              <td class="pair9"><a>&Omega;</a></td>
            </tr>
            <tr>
              <td class="pair11"><a>&there4;</a></td>
              <td class="pair8"><a>&clubs;</a></td>
              <td class="pair12"><a>&dagger;</a></td>
              <td class="pair6"><a>&Phi;</a></td>
              <td class="pair11"><a>&there4;</a></td>
            </tr>
            <tr>
              <td><a class="pair12">&dagger;</a></td>
              <td><a class="pair5">&spades;</a></td>
              <td><a class="pair10">&copy;</a></td>
              <td><a class="pair3">&Xi;</a></td>
              <td><a class="pair7">&sect;</a></td>
            </tr>
          </table>
          <!-- DON'T BE A CHEATER !-->
        </div>
      </div> <!-- END Page Wrap -->
      <footer>
        <div class="heartCollection">
          <p>collect us if u need luv:<p>
          <ul>
            <li><a id="collection1">&hearts;</a></li>
            <li><a id="collection2">&hearts;</a></li>
            <li><a id="collection3">&hearts;</a></li>
            <li><a id="collection4">&hearts;</a></li>
            <li><a id="collection5">&hearts;</a></li>
            <li><a id="collection6">&hearts;</a></li>
          </ul>
        </div>
        <div class="credits">with love from Popm0uth ©2012</div>
      </footer>
    </body>
    </html>
    ```

    JavaScript:

    ```javascript
    var thisCard = $(this).text();
    var activeCard = $('.active').text();
    var openedCards = $('.opened').length;

    $(document).ready(function() {
      $('a.heart').css('color', '#CCCCCC');
      $('a.heart').off('click');

      function reset() {
        $('td').removeClass('opened');
        $('a').removeClass('visible');
        $('td').removeClass('active');
      }

      $('td').click(openCard);

      function openCard() {
        $(this).addClass('opened');
        $(this).find('a').addClass('visible');

        if ($(".active")[0]) {
          if ($(this).text() != $('.active').text()) {
            setTimeout(function() {
              reset();
            }, 1000);
          } else {
            $('.active').removeClass('active');
          }
        } else {
          $(this).addClass("active");
        }

        if (openedCards == 24) {
          $(".active").removeClass("active");
          $("a.heart").css('color', '#ff63ff');
          $("a.heart").off('click');
        }
      }
    });
    ```
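
    A likely culprit, offered as a suggestion: thisCard, activeCard and openedCards are evaluated once, when the script first loads, before any card has been opened, so openedCards stays fixed at 0 and the == 24 branch can never fire. Counting inside the click handler, at the moment of the click, fixes that — a minimal sketch:

    ```javascript
    function openCard() {
      $(this).addClass('opened');
      $(this).find('a').addClass('visible');

      // Count the opened cards *now*, not at page load.
      var openedCards = $('td.opened').length;

      if (openedCards === 24) {
        $('.active').removeClass('active');
        $('a.heart').css('color', '#ff63ff');
        $('a.heart').off('click');
      }
    }
    ```

    The same applies to the unused thisCard/activeCard globals: `$(this)` only means the clicked element inside an event handler, so any per-click value has to be read inside openCard.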


  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation is complete, I play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

    The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature-length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
    ```
    background Midnight_Blue

    static define matte surface { ambient 0.1 diffuse 0.7 }
    define matte_white texture { matte { color white } }
    define matte_black texture { matte { color dark_slate_gray } }
    define position_cylindrical 3
    define lookup_sawtooth 1
    define light_wood <0.6, 0.24, 0.1>
    define median_wood <0.3, 0.12, 0.03>
    define dark_wood <0.05, 0.01, 0.005>

    define wooden texture {
        noise surface {
            ambient 0.2
            diffuse 0.7
            specular white, 0.5
            microfacet Reitz 10
            position_fn position_cylindrical
            position_scale 1
            lookup_fn lookup_sawtooth
            octaves 1
            turbulence 1
            color_map(
                [0.0, 0.2, light_wood, light_wood]
                [0.2, 0.3, light_wood, median_wood]
                [0.3, 0.4, median_wood, light_wood]
                [0.4, 0.7, light_wood, light_wood]
                [0.7, 0.8, light_wood, median_wood]
                [0.8, 0.9, median_wood, light_wood]
                [0.9, 1.0, light_wood, dark_wood])
        }
    }
    define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
    define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
    define steely_blue texture { shiny { color black } }
    define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

    viewpoint {
        from <4.000, -1.000, 1.000>
        at <0.000, 0.000, 0.000>
        up <0, 1, 0>
        angle 60
        resolution 640, 480
        aspect 1.6
        image_format 0
    }

    light <-10, 30, 20>
    light <-10, 30, -20>

    object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
    object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
    object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }
    ```

    After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin-board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

    ```
    object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
    object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
    object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
    object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }
    ```

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
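
    The generator console application itself is not shown in the article; the following is a hypothetical sketch of the idea — nested loops emitting two offset grids of sphere-and-cylinder pins as PolyRay object statements. The grid size and pin dimensions here are illustrative, not the author's values:

    ```csharp
    using System.Globalization;
    using System.IO;

    // Hypothetical sketch: write two interleaved pin grids to a PolyRay scene.
    class PinBoardGenerator
    {
        static void Main()
        {
            using (var scene = new StreamWriter("pinboard.pi"))
            {
                double spacing = 0.1;     // illustrative pin spacing
                for (int row = 0; row < 76; row++)
                {
                    for (int col = 0; col < 76; col++)
                    {
                        // One grid, plus a second grid offset by half a spacing,
                        // giving the two intersecting matrixes described above.
                        WritePin(scene, col * spacing, row * spacing, 0.0);
                        WritePin(scene, (col + 0.5) * spacing, (row + 0.5) * spacing, 0.0);
                    }
                }
            }
        }

        // One pin = a chrome sphere for the head plus a cylinder for the shaft;
        // z is how far the pin is pushed out of the board.
        static void WritePin(TextWriter w, double x, double y, double z)
        {
            w.WriteLine(string.Format(CultureInfo.InvariantCulture,
                "object {{ sphere <{0:F3}, {1:F3}, {2:F3}>, 0.04 chrome }}", x, y, z));
            w.WriteLine(string.Format(CultureInfo.InvariantCulture,
                "object {{ cylinder <{0:F3}, {1:F3}, {2:F3}>, <{0:F3}, {1:F3}, {3:F3}>, 0.02 chrome }}",
                x, y, z, z - 0.5));
        }
    }
    ```

    In the animation described next, the only per-frame change is the z value of each pin, which is what makes a depth image such a direct fit for driving the scene files.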
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.

    The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the Xbox 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the Xbox 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the Xbox Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range, the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below.

    The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine-tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.

    Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used, the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below.

    The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

        Number of Servers    Cost
        1                    $500
        16                   $8,000
        256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

    The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

        Number of Worker Roles    Render Time    Cost
        1                         256 hours      $30.72
        16                        16 hours       $30.72
        256                       1 hour         $30.72

    Using worker roles in Windows Azure provides the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    - On-Premise
      - Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
      - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
      - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
      - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
    - Windows Azure
      - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role, with the local storage configuration highlighted, is shown below.

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="CloudRay" >
      <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
        <Imports>
        </Imports>
        <ConfigurationSettings>
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <LocalResources>
          <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
        </LocalResources>
      </WorkerRole>
    </ServiceDefinition>
    ```

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s.

    Each worker role will use the following process to render the animation frames.

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

    ```csharp
    public override void Run()
    {
        // Set environment variables
        string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
        string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

        LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
        string localStorageRootPath = rayStorage.RootPath;

        JobQueue jobQueue = new JobQueue("renderjobs");
        JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
        CloudRayBlob imageBlob = new CloudRayBlob("images");
        RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

        Frames = 0;

        while (true)
        {
            // Get the render job from the queue
            CloudQueueMessage jobMsg = jobQueue.Get();

            if (jobMsg != null)
            {
                // Get the file details
                string sceneFile = jobMsg.AsString;
                string tgaFile = sceneFile.Replace(".pi", ".tga");
                string jpgFile = sceneFile.Replace(".pi", ".jpg");

                string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                // Copy the scene file to local storage
                sceneBlob.DownloadFile(sceneFilePath);

                // Run the ray tracer.
                string polyrayArguments =
                    string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                Process polyRayProcess = new Process();
                polyRayProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                polyRayProcess.StartInfo.Arguments = polyrayArguments;
                polyRayProcess.Start();
                polyRayProcess.WaitForExit();

                // Convert the image
                string dtaArguments =
                    string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                Process dtaProcess = new Process();
                dtaProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                dtaProcess.StartInfo.Arguments = dtaArguments;
                dtaProcess.Start();
                dtaProcess.WaitForExit();

                // Upload the image to blob storage
                imageBlob.UploadFile(jpgFilePath);

                // Add a download job.
                downloadQueue.Add(jpgFile);

                // Delete the render job message
                jobQueue.Delete(jobMsg);

                Frames++;
            }
            else
            {
                Thread.Sleep(1000);
            }

            // Log the worker role activity.
            roleLifecycleDataSource.Alive("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
        }
    }
    ```

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

    ```csharp
    public class RoleLifecycle : TableServiceEntity
    {
        public string ServerName { get; set; }
        public string Status { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public long SecondsRunning { get; set; }
        public DateTime LastActiveTime { get; set; }
        public int Frames { get; set; }
        public string Comment { get; set; }

        public RoleLifecycle()
        {
        }

        public RoleLifecycle(string roleName)
        {
            PartitionKey = roleName;
            RowKey = Utils.GetAscendingRowKey();
            Status = "Started";
            StartTime = DateTime.UtcNow;
            LastActiveTime = StartTime;
            EndTime = StartTime;
            SecondsRunning = 0;
            Frames = 0;
        }
    }
    ```

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of the use of resources in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups, I often start with 16 instances and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating how many frames that worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.

With 16 worker roles up and running, one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes.

At this rate it would take over 10 hours to render the 2,000 frames of the full animation. To complete the animation in under an hour, more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal: the slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames.

Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

We are now halfway through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.

The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I've uploaded the completed animation to YouTube; a low-resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding on the optimal use of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.

The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out to be very cost-effective.

· Windows Azure can provide massive compute power, on demand, in a matter of minutes.
· The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
· Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
· The absence of charges for inbound data transfer makes uploading large data sets to Windows Azure Storage services cost-effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

· Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
· Windows Azure accounts are typically limited to 20 cores.
If you need to use more than this, a call to support and a credit card check will be required.
· Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
· Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
· If you are deploying third-party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
· Third-party software may also require installation onto the worker roles, which can be accomplished using startup tasks. Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
· Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!


  • Original object is also changed when values of cloned object are changed.

    - by fari
    I am trying to use clone but the original object is also changed when values of cloned object are changed. As you can see KalaGameState does not use any objects so a shallow copy should work. /** * This class represents the current state of a Kala game, including * which player's turn it is along with the state of the board; i.e. the * numbers of stones in each side pit, and each player's 'kala'). */ public class KalaGameState implements Cloneable { // your code goes here private int turn; private int[] sidePit; private boolean game; public Object clone() { try { return super.clone(); } catch (CloneNotSupportedException e) { // This should never happen throw new InternalError(e.toString()); } } /** * Constructs a new GameState with a specified number of stones in each * player's side pits. * @param startingStones the number of starting stones in each side pit. * @throws InvalidStartingStonesException if startingStones not in the range 1-10. */ public KalaGameState(int startingStones) throws InvalidStartingStonesException { game=true; turn=0; sidePit=new int[14]; for (int i=0; i <= 13 ; i++) { sidePit[i] = startingStones; } sidePit[6] =0; sidePit[13] =0; // your code goes here } /** * Returns the ID of the player whose turn it is. * @return A value of 0 = Player A, 1 = Player B. */ public int getTurn() { return turn; // your code goes here } /** * Returns the current kala for a specified player. * @param playerNum A value of 0 for Player A, 1 for Player B. * @throws IllegalPlayerNumException if the playerNum parameter * is not 0 or 1. */ public int getKala(int playerNum) throws IllegalPlayerNumException { if(playerNum!=0 || playerNum!=1) throw new IllegalPlayerNumException(playerNum); if(playerNum==0) return sidePit[6]; else return sidePit[13]; // your code goes here } /** * Returns the current number of stones in the specified pit for * the player whose turn it is. * @param sidePitNum the side pit being queried in the range 1-6. * @throws IllegalSidePitNumException if the sidePitNum parameter. * is not in the range 1-6. */ public int getNumStones(int sidePitNum) throws IllegalSidePitNumException { if(turn==0) { if(sidePitNum>6 ) throw new IllegalSidePitNumException(sidePitNum); } else if(sidePitNum>12) throw new IllegalSidePitNumException(sidePitNum); if(turn==0) return sidePit[sidePitNum]; else return sidePit[sidePitNum+6]; // your code goes here } /** * Returns the current number of stones in the specified pit for a specified player. * @param playerNum the player whose kala is sought. (0 = Player A, 1 = Player B). * @param sidePitNum the side pit being queried (in the range 1-6). * @throws IllegalPlayerNumException if the playerNum parameter is not 0 or 1. * @throws IllegalSidePitNumException if the sidePitNum parameter is not in the * range 1-6. */ public int getNumStones(int playerNum, int sidePitNum) throws IllegalPlayerNumException, IllegalSidePitNumException { /*if(playerNum>2) throw new IllegalPlayerNumException(playerNum); if(turn==0) { if(sidePitNum>6 ) throw new IllegalSidePitNumException(sidePitNum); } else if(sidePitNum>12) throw new IllegalSidePitNumException(sidePitNum); */ if(playerNum==0) return sidePit[sidePitNum]; else if(playerNum==1) return sidePit[sidePitNum+7]; else return sidePit[sidePitNum]; } /** * Returns the current score for a specified player - the player's * kala plus the number of stones in each of their side pits. * @param playerNum the player whose kala is sought. (0 = Player A, 1 = Player B). 
* @throws IllegalPlayerNumException if the playerNum parameter is not 0 or 1. */ public int getScore(int playerNum) throws IllegalPlayerNumException { if(playerNum>1) throw new IllegalPlayerNumException(playerNum); int score=0; if(playerNum==0) { for(int i=0;i<=5;i++) score=score+sidePit[i]; score=score+sidePit[6]; } else { for(int i=7;i<=12;i++) score=score+sidePit[i]; score=score+sidePit[13]; } // your code goes here return score; } private int getSidePitArrayIndex(int sidePitNum) throws IllegalSidePitNumException { if(turn==0) { if(sidePitNum>6 ) throw new IllegalSidePitNumException(sidePitNum); } else if(sidePitNum>12) throw new IllegalSidePitNumException(sidePitNum); if(turn==0) { return sidePitNum--; } else { return sidePitNum+6; } } public boolean gameOver() { int stone=0; if(turn==0) for(int i=0;i<=5;i++) stone=stone+getNumStones(i); else for(int i=7;i<=12;i++) stone=stone+getNumStones(i-7); if (stone==0) game=false; return game; } /** * Makes a move for the player whose turn it is. * @param sidePitNum the side pit being queried (should be in the range 1-6) * @throws IllegalSidePitNumException if the sidePitNum parameter is not in the range 1-6. * @throws IllegalMoveException if the side pit is empty and has no stones in it. */ public void makeMove(int sidePitNum) throws IllegalSidePitNumException, IllegalMoveException { if(turn==0) { if(sidePitNum>6 ) throw new IllegalSidePitNumException(sidePitNum); } else if(sidePitNum>12) throw new IllegalSidePitNumException(sidePitNum); /* if(turn==0) { if(sidePit[sidePitNum-1]==0) throw new IllegalMoveException(sidePitNum); } else { if(sidePit[sidePitNum-1+7]==0) throw new IllegalMoveException(sidePitNum); } */ sidePitNum--; int temp=sidePitNum; int pitNum=sidePitNum+1; int stones=getNumStones(turn,sidePitNum); if(turn==0) sidePit[sidePitNum]=0; else { sidePitNum=sidePitNum+7; sidePit[sidePitNum]=0; pitNum=pitNum+7; } while(stones!=0) { if(turn==0) { sidePit[pitNum]=sidePit[pitNum]+1; stones--; pitNum++; if(pitNum==13) pitNum=0; } else { sidePit[pitNum]=sidePit[pitNum]+1; stones--; pitNum++; if(pitNum==6) pitNum=7; else if(pitNum==14) pitNum=0; } } boolean res=anotherTurn(pitNum); if(!res){ capture(pitNum,temp); if(turn==0) turn=1; else turn=0;} } private boolean anotherTurn(int pitNum) {pitNum--; boolean temp=false; if(turn==0) {if(pitNum==6) {turn=0; temp=true; } } else if(pitNum==-1) {turn=1; temp=true; } return temp; } private void capture(int pitNum, int pit) { pitNum--; if(turn==0){ if(sidePit[pitNum]==1 && pitNum<6) { if(pitNum==0) { sidePit[6]=sidePit[6]+sidePit[12]+1; sidePit[12]=0; } else if(pitNum==1) { sidePit[6]=sidePit[6]+sidePit[11]+1; sidePit[11]=0; } else if(pitNum==2) { sidePit[6]=sidePit[6]+sidePit[10]+1; sidePit[10]=0; } else if(pitNum==3) { sidePit[6]=sidePit[6]+sidePit[9]+1; sidePit[9]=0; } else if(pitNum==4) { sidePit[6]=sidePit[6]+sidePit[8]+1; sidePit[8]=0; } else if(pitNum==5) { sidePit[6]=sidePit[6]+sidePit[7]+1; sidePit[7]=0; } sidePit[pitNum]=0; } } if(turn==1) { //pitNum=pitNum; if(sidePit[pitNum]==1 && pit+7>6) { if(pitNum==7) { sidePit[13]=sidePit[13]+sidePit[5]+1; sidePit[7]=0; } else if(pitNum==8) { sidePit[13]=sidePit[13]+sidePit[4]+1; sidePit[4]=0; } else if(pitNum==9) { sidePit[13]=sidePit[13]+sidePit[3]+1; sidePit[3]=0; } else if(pitNum==10) { sidePit[13]=sidePit[13]+sidePit[2]+1; sidePit[2]=0; } else if(pitNum==11) { sidePit[13]=sidePit[13]+sidePit[1]+1; sidePit[1]=0; } else if(pitNum==12) { sidePit[13]=sidePit[13]+sidePit[0]+1; sidePit[0]=0; } sidePit[pitNum]=0; } } } } import java.io.BufferedReader; 
import java.io.InputStreamReader; public class RandomPlayer extends KalaPlayer{ //KalaGameState state; public int chooseMove(KalaGameState gs) throws NoMoveAvailableException {int[] moves; moves=getMoves(gs); try{ for(int i=0;i<=5;i++) System.out.println(moves[i]); for(int i=0;i<=5;i++) { if(moves[i]==1) { KalaGameState state=(KalaGameState) gs.clone(); state.makeMove(moves[i]); gs.getTurn(); moves[i]=evalValue(state.getScore(0),state.getScore(1)); } } } catch(IllegalMoveException e) { System.out.println(e); //chooseMove(state); } return 10; } private int evalValue(int score0,int score1) { int score=0; //int score0=0; // int score1=0; //getScore(0); //score1=state.getScore(1); //if((state.getTurn())==0) score=score1-score0; //else //score=score1-score0; System.out.println("score: "+score); return score; } public int[] getMoves(KalaGameState gs) { int[] moves=new int[6]; for(int i=1;i<=6;i++) { if(gs.getNumStones(i)!=0) moves[i-1]=1; else moves[i-1]=0; } return moves; } } Can you explain what is going wrong, please?
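One observation that may explain the behaviour above: in Java an int[] is itself an object, so the shallow copy made by Object.clone() duplicates the primitive fields of KalaGameState but copies only the reference to sidePit, leaving the original and the clone sharing one array. A minimal sketch of a clone() that also copies the array (same class as above, with only the array copy added):

public Object clone() {
    try {
        KalaGameState copy = (KalaGameState) super.clone();
        // int[] is an object: without this line the clone shares the
        // original's array, so a move made on one is visible in the other
        copy.sidePit = this.sidePit.clone();
        return copy;
    } catch (CloneNotSupportedException e) {
        // This should never happen, as the class implements Cloneable
        throw new InternalError(e.toString());
    }
}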


  • FreeBSD performance tuning. Sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KENCONF. It was initially based on Igor Sysoev's (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Tunings are for FreeBSD-CURRENT. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. sysctl.conf: # No zero mapping feature # May break wine # (There are also reports about broken samba3) #security.bsd.map_at_zero=0 # If you have really busy webserver with apache13 you may run out of processes #kern.maxproc=10000 # Same for servers with apache2 / Pound #kern.threads.max_threads_per_proc=4096 # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Can cause this on older kernels: # http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=10485760 # Mbuf 2k clusters (on amd64 7.2+ 25600 is default) # For such high value vm.kmem_size must be increased to 3G kern.ipc.nmbclusters=262144 # Jumbo pagesize(_SC_PAGESIZE) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=262144 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=65536 #kern.ipc.nmbjumbo16=32768 # For lower latency you can decrease scheduler's maximum time slice # default: stathz/10 (~ 13) #kern.sched.slice=1 # Increase max command-line length showed in `ps` (e.g for Tomcat/Java) # Default is PAGE_SIZE / 16 or 256 on x86 # This avoids commands to be presented as [executable] in `ps` # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749 kern.ps_arg_cache_limit=4096 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # On some systems HPET is almost 2 times faster than default ACPI-fast # Useful on systems with lots of clock_gettime / gettimeofday calls # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222 kern.timecounter.hardware=HPET # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # This is useful on Fat-Long-Pipes #net.inet.tcp.recvbuf_max=10485760 #net.inet.tcp.recvbuf_inc=65535 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This is useful on Fat-Long-Pipes #net.inet.tcp.sendbuf_max=10485760 #net.inet.tcp.sendbuf_inc=65535 # Turn off receive autotuning # You can play with it. #net.inet.tcp.recvbuf_auto=0 #net.inet.tcp.sendbuf_auto=0 # This should be enabled if you going to use big spaces (>64k) # Also timestamp field is useful when using syncookies net.inet.tcp.rfc1323=1 # Turn this off on high-speed, lossless connections (LAN 1Gbit+) # If you set it there is no need in TCP_NODELAY sockopt (see man tcp) net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. 
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) # This sysctl was removed in 10-CURRENT: # See: http://www.mail-archive.com/[email protected]/msg06178.html #net.inet.tcp.inflight.enable=0 # TCP slowstart algorithm tunings # We assuming we have very fast clients #net.inet.tcp.slowstart_flightsize=100 #net.inet.tcp.local_slowstart_flightsize=100 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't checked it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # # stops route cache degregation during a high-bandwidth flood # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html #net.inet.ip.rtexpire=2 net.inet.ip.rtminexpire=2 net.inet.ip.rtmaxcache=1024 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # # There is also good example of sysctl.conf with comments: # http://www.thern.org/projects/sysctl.conf # # icmp may NOT rst, helpful for those pesky spoofed # icmp/udp floods that end up taking up your outgoing # bandwidth/ifqueue due to all that outgoing RST traffic. # #net.inet.tcp.icmp_may_rst=0 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # IPv6 Security # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6 # Disable Node info replies # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on unprotected node net.inet6.icmp6.nodeinfo=0 # Turn on IPv6 privacy extensions # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html net.inet6.ip6.use_tempaddr=1 net.inet6.ip6.prefer_tempaddr=1 # Disable ICMP redirect net.inet6.icmp6.rediraccept=0 # Disable acceptation of RA and auto linklocal generation if you don't use them #net.inet6.ip6.accept_rtadv=0 #net.inet6.ip6.auto_linklocal=0 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds # (default: 30000. RFC from 1979 recommends 120000) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=200000 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. # So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. 
Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becomes lower) vfs.ufs.dirhash_maxmem=67108864 # Note from commit http://svn.freebsd.org/base/head@211031 : # For systems with RAID volumes and/or virtualization envirnments, where # read performance is very important, increasing this sysctl tunable to 32 # or even more will demonstratively yield additional performance benefits. vfs.read_max=32 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 # ZFS # Enable prefetch. Useful for sequential load type i.e fileserver. # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and # on any amd64 systems with less than 4GB of avaiable memory # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html #vfs.zfs.prefetch_disable=0 # On highload servers you may notice following message in dmesg: # "Approaching the limit on PV entries, consider increasing either the # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable" vm.pmap.shpgperproc=2048 loader.conf: # Accept filters for data, http and DNS requests # Useful when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Linux specific devices in /dev # As for 8.1 it only /dev/full #lindev_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load="YES" #siis_load="YES" # FreeBSD 8.2+ # New Congestion Control for FreeBSD # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html #cc_chd_load="YES" # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! 
# # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # If your server has lots of swap (>4Gb) you should increase following value # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html # Otherwise you'll be getting errors # "kernel: swap zone exhausted, increase kern.maxswzone" # kern.maxswzone="256M" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 10% of mem) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # FreeBSD 9+ # HPET "legacy route" support. It should allow HPET to work per-CPU # See http://www.mail-archive.com/[email protected]/msg03603.html #hint.atrtc.0.clock=0 #hint.attimer.0.clock=0 #hint.hpet.0.legacy_route=1 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=512 net.inet.tcp.syncache.cachelimit=65536 # Increased hostcache # Later host cache can be viewed via net.inet.tcp.hostcache.list hidden sysctl # Very useful for it's RTT RTTVAR # Must be power of two net.inet.tcp.hostcache.hashsize=65536 # hashsize * bucketlimit (which is 30 by default) # It allocates 255Mb (1966080*136) of RAM net.inet.tcp.hostcache.cachelimit=1966080 # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Disable ipfw deny all # Should be uncommented when there is a chance that # kernel and ipfw binary may be out-of sync on next reboot #net.inet.ip.fw.default_to_accept=1 # # SIFTR (Statistical Information For TCP Research) is a kernel module that # logs a range of statistics on active TCP connections to a log file. # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4 # and man 4 sitfr #siftr_load="YES" # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em/igb drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # sysctl dev.em.0.stats=1 ; dmesg # sysctl dev.em.0.debug=1 ; dmesg # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable; # Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. 
See sysctl net.isr #net.isr.maxthreads=4 #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # # FreeBSD 9.x+ # Increase interface send queue length # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554 #net.link.ifqmaxlen=1024 # Nicer boot logo =) loader_logo="beastie" And finally here is KERNCONF: # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU >= 4k # This req. is only for receiving data. # Read more in man zero_copy_sockets # Also this epic thread on kernel trap: # http://kerneltrap.org/node/6506 # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent" #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE # There was stackoverflow found in KAME IPSec stack: # See http://secunia.com/advisories/43995/ # For quick workaround you can use `ipfw add deny proto ipcomp` options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL # On 8.1+ you can disable verbose to see blocked packets on ipfw0 interface. # Also there is no point in compiling verbose into the kernel, because # now there is net.inet.ip.fw.verbose tunable. #options IPFIREWALL_VERBOSE #options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron device amdtemp # Same for Intel processors device coretemp # man 4 cpuctl device cpuctl # CPU control pseudo-device # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. 
#options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # Debug & DTrace options KDB # Kernel debugger related code options KDB_TRACE # Print a stack trace for a panic options KDTRACE_FRAME # amd64-only(?) options KDTRACE_HOOKS # all architectures - enable general DTrace hooks #options DDB #options DDB_CTF # all architectures - kernel ELF linker loads CTF data # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (8.x+) #options TEKEN_UTF8 # FreeBSD 8.1+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html # (FYI: "resolution" is panic so use with caution) #options DEADLKRES # Increase maximum size of Raw I/O and sendfile(2) readahead #options MAXPHYS=(1024*1024) #options MAXBSIZE=(1024*1024) # For scheduler debug enable following option. # Debug will be available via `kern.sched.stats` sysctl # For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup #options SCHED_STATS If you are tuning network for maximum performance you may wish to play with ifconfig options like: # You can list all capabilities via `ifconfig -m` ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu In case you've enabled DDB in kernel config, you should edit your /etc/ddb.conf and add something like this to enable automatic reboot (and textdump as bonus): script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset And do not forget to add ddb_enable="YES" to /etc/rc.conf Since FreeBSD 9 you can select to enable/disable flowcontrol on your NIC: # See http://en.wikipedia.org/wiki/Ethernet_flow_control and # http://www.mail-archive.com/[email protected]/msg07927.html for additional info ifconfig bge0 media auto mediaopt flowcontrol PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. FreeBSD WIP * Whats cooking for FreeBSD 7? * Whats cooking for FreeBSD 8? * Whats cooking for FreeBSD 9? So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!
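For readers trying these settings out: most sysctl.conf values can be tested at runtime before being made persistent, whereas loader.conf tunables are read only at boot, which is why the post keeps them in separate files. A quick sketch of that workflow (the variable chosen is just one example from the lists above):

# Read the current value, and its description
sysctl kern.ipc.somaxconn
sysctl -d kern.ipc.somaxconn

# Try a new value at runtime
sysctl kern.ipc.somaxconn=4096

# Make it persistent across reboots by adding the same line to /etc/sysctl.conf
echo 'kern.ipc.somaxconn=4096' >> /etc/sysctl.conf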


  • trying to use mod_proxy with httpd and tomcat

    - by techsjs2012
    I been trying to use mod_proxy with httpd and tomcat... I have on VirtualBox running Scientific Linux which has httpd and tomcat 6 on it.. I made two nodes of tomcat6. I followed this guide like 10 times and still cant get the 2nd node of tomcat working.. http://www.richardnichols.net/2010/08/5-minute-guide-clustering-apache-tomcat/ Here is the lines from my http.conf file <Proxy balancer://testcluster stickysession=JSESSIONID> BalancerMember ajp://127.0.0.1:8009 min=10 max=100 route=node1 loadfactor=1 BalancerMember ajp://127.0.0.1:8109 min=10 max=100 route=node2 loadfactor=1 </Proxy> ProxyPass /examples balancer://testcluster/examples <Location /balancer-manager> SetHandler balancer-manager AuthType Basic AuthName "Balancer Manager" AuthUserFile "/etc/httpd/conf/.htpasswd" Require valid-user </Location> Now here is my server.xml from node1 <?xml version='1.0' encoding='utf-8'?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --> <Server port="8005" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. 
Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node1"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> </Host> </Engine> </Service> </Server> now here is the server.xml file from node2 <?xml version='1.0' encoding='utf-8'?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. 
See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --> <Server port="8105" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. 
Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node2"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> </Host> </Engine> </Service> </Server> I dont know what it is. but I been trying for days
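Two things worth verifying on a Scientific Linux box before digging deeper into the Tomcat configs (assumptions on my part, since the full httpd.conf and the SELinux state are not shown): all three proxy modules must be loaded, and SELinux by default blocks httpd from opening outbound connections to the AJP ports, which would leave exactly one balancer member unreachable-looking. A sketch, assuming a stock RHEL-style httpd layout:

# In httpd.conf -- module file names assume the distribution's default modules directory
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

# At a shell: allow httpd to make outbound network connections, persistently
setsebool -P httpd_can_network_connect 1

It is also worth confirming that node2 is actually listening on 8109 (for example with netstat -tlnp) before blaming the balancer configuration.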


  • Running MSBuild fails to read SDKToolsPath

    - by Scott Mayfield
Howdy,

I'm having a bit of an issue running a NAnt script that used to properly build my .Net 2.0 based website when compiling with VS2008 and its associated tools. I've recently upgraded all the project/solution files to VS2010, and now my build fails with the following error:

[exec] C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets(2249,9): error MSB3086: Task could not find "sgen.exe" using the SdkToolsPath "" or the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v7.0A". Make sure the SdkToolsPath is set and the tool exists in the correct processor specific location under the SdkToolsPath and that the Microsoft Windows SDK is installed

Now, I DO have prior versions (.Net 3.5) of the Windows SDK installed on the build server, and the full .Net 4.0 framework is installed, but I've not run across a .Net 4.0 specific version of the Windows SDK. After a bit of experimentation and research, I finally just set up a new environment variable "SDKToolsPath" and pointed it to the copy of sgen.exe in my Windows 6.0 SDK folder. This generated the same error, but it got me to notice that even though the SDKToolsPath environment variable IS set (confirmed that I can "echo" it at the command line and it has the expected value), the error message seems to indicate that it's not being read (note the empty quotes).

Most of the information I've found is .Net 3.5 (or earlier) specific. Not much 4.0 related out there yet. Searching for error code MSB3086 generated nothing useful either. Any idea what this might be?

Scott
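One detail that may explain the empty quotes: SdkToolsPath here is an MSBuild property, not an environment variable, so setting it in the shell has no effect. A sketch of two approaches that may be worth trying, assuming the NAnt <exec> task ultimately invokes msbuild.exe (the solution name is a placeholder; the property names are standard MSBuild properties):

rem Skip sgen entirely, so no serialization assembly is generated:
msbuild MySolution.sln /p:GenerateSerializationAssemblies=Off

rem Or point MSBuild at an SDK bin folder that actually contains sgen.exe
rem (the path is hypothetical -- use whichever SDK version is installed):
msbuild MySolution.sln /p:SdkToolsPath="C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin"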


  • How to use QSerialDevice in Qt?

    - by Tobias
Hello,

I am trying to use QSerialDevice in Qt to get a connection to my serial port. I also tried QextSerialPort before (which works on Windows Vista but unfortunately not on Windows XP), but I need an API which supports XP, Vista and Win7. I built the library and configured it this way:

CONFIG += dll
CONFIG += debug

I used the current version from SVN (0.2.0 - 2010-04-05) and the 0.2.0 zip package. After building the library I copied it to my Qt libdir (C:\Qt\2009.05\qt\lib) and also to C:\Windows\system32. Now I try to link against the lib in my project file:

LIBS += -lqserialdevice

I import the needed header (abstractserial.h) and use my own AbstractSerial like this:

// Initialize
this->serialPort->setDeviceName("COM1");
if (!this->serialPort->open(QIODevice::ReadWrite | QIODevice::Unbuffered))
    qWarning() << "Error" << this->serialPort->errorString();

// Configure SerialPort
this->serialPort->setBaudRate(AbstractSerial::BaudRate4800);
this->serialPort->setDataBits(AbstractSerial::DataBits8);
this->serialPort->setFlowControl(AbstractSerial::FlowControlOff);
this->serialPort->setParity(AbstractSerial::ParityNone);
this->serialPort->setStopBits(AbstractSerial::StopBits1);

The problem is that if I run my application, it crashes immediately with exit code -1073741515 (application failed to initialize properly). This is the same error I got using QextSerialPort under Windows XP (it worked with Windows Vista). If I build the QSerialDevice lib with release config and also my program, it crashes immediately but with exit code -1073741819.

Can someone help me with this program or with another solution for getting a serial port to work with Qt (maybe another API or something)? Otherwise I have to use Windows API functions, which would mean that my program won't work with UNIX systems. If you have a solution for the problem with QextSerialPort under WinXP SP3, that is also welcome ;)

Best Regards,
Tobias
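A note on reading those exit codes: -1073741515 is 0xC0000135 (STATUS_DLL_NOT_FOUND: the loader could not find a DLL the executable links against), and -1073741819 is 0xC0000005 (an access violation). That pattern often points at the app picking up a missing or mismatched qserialdevice.dll at startup, for example a debug-built DLL loaded by a release build. A minimal sketch of a more explicit .pro setup, assuming the library was built under a qserialdevice/ folder next to the project (the paths are placeholders for illustration):

# Point the compiler and linker at the build output explicitly,
# instead of relying on the global Qt libdir and system32
INCLUDEPATH += $$PWD/qserialdevice/include
LIBS += -L$$PWD/qserialdevice/lib -lqserialdevice

# Debug builds of the app should link the debug build of the DLL and
# vice versa; mixing debug and release runtimes is a common cause of
# immediate startup crashes on Windows.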


  • Excel 2007 - "The macro may not be available in this workbook" Error

    - by Psycho Bob
We use an Excel sheet that has been protected to prevent modification by end users. All in all, they are only able to edit certain tabs to add information that will then be used to generate information on other tabs using equations and such. On the tab with the equations, a button is present called "Prep for Internal Hard Copy Print." This button runs a macro that selects the information on the tab, unprotects it, then sends a print job to the user's default printer that contains the unprotected content.

Normally this works like a champ. This time around, however, the macro is throwing the following error:

Cannot run the macro "FILENAME.xlsx'!MacroName'. The macro may not be available in this workbook or all macros may be disabled.

As far as I can tell, the macros are still present within the workbook. This sheet is normally a .xlsm, though the user saved it with a different filename as a .xlsx. Also, the macros appear only as MacroName in the .xlsm file and not as "FILENAME.xlsx'!MacroName' as they do in the .xlsx. Finally, when I open the .xlsm it asks if I want to enable the macro content, while the .xlsx does not prompt for this.

Can anyone tell me what's going on with this sheet, or know of a way that I can get the macros working in the .xlsx without having to start over with a different sheet?


  • Unable to determine the URL to the Xap file

    - by Matthew Glace
I started developing a Silverlight (SL) 4 application hosted in an ASP.NET site using Visual Studio 2010. Now I want to change it so that the SL application runs out of browser, as if I had not chosen to host it in a web site when I created the SL project. First, I went to the properties of the web application and removed the SL project from the "Silverlight Applications" tab. Then, I went to the properties of the SL project and made sure the "Out-of-browser Application" option was chosen on the "Debug" tab, with the SL project selected in the drop-down list. Finally, I made sure that the SL project was set as the "StartUp" project for the solution.

The solution builds successfully; however, when I try to run, I get the following message in an error dialog:

Unable to determine the URL to the Xap file from web [web project name].

I'm assuming this is happening because the SL project is trying to copy the Xap file to the ClientBin folder of the web site. Obviously, the SL and web projects are still linked in some way despite my attempt to unlink the two. I have examined each project file in Notepad to find any reference between them, with no luck. What's more frustrating is that I can achieve my desired result by creating a new SL application and unchecking the "Host the Silverlight application in a new or existing Web site in the solution" option.

I know I could start my project over from scratch to solve this problem; however, I'm really curious as to what is causing this error. A Google search yielded no results.
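One place worth checking (an assumption on my part, since the actual project contents aren't shown): Visual Studio records linked Silverlight projects in the web project's .csproj under a SilverlightApplicationList property, which is easy to overlook when scanning the file by eye. A hypothetical example of what a leftover entry can look like, with a placeholder GUID and paths:

<PropertyGroup>
  <SilverlightApplicationList>{12345678-ABCD-1234-ABCD-1234567890AB}|..\MySilverlightApp\MySilverlightApp.csproj|ClientBin|False</SilverlightApplicationList>
</PropertyGroup>

Deleting or emptying that property and reloading the projects should fully unlink the two, if this is indeed the residue causing the error.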


  • java.lang.ClassCastException: java.lang.Integer cannot be cast to java.util.HashMap

    - by kongkea
    I've got this error when I click a ListView item to show the full-size image. How can I solve it?

    Error

        11-20 10:27:47.039: D/AndroidRuntime(5078): Shutting down VM
        11-20 10:27:47.039: W/dalvikvm(5078): threadid=1: thread exiting with uncaught exception (group=0x40c061f8)
        11-20 10:27:47.047: E/AndroidRuntime(5078): FATAL EXCEPTION: main
        11-20 10:27:47.047: E/AndroidRuntime(5078): java.lang.ClassCastException: java.lang.Integer cannot be cast to java.util.HashMap
        11-20 10:27:47.047: E/AndroidRuntime(5078): at com.example.mylistview.MainActivity$1.onItemClick(MainActivity.java:103)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AdapterView.performItemClick(AdapterView.java:292)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AbsListView.performItemClick(AbsListView.java:1173)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AbsListView$PerformClick.run(AbsListView.java:2701)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AbsListView$1.run(AbsListView.java:3453)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.os.Handler.handleCallback(Handler.java:605)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.os.Handler.dispatchMessage(Handler.java:92)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.os.Looper.loop(Looper.java:137)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.app.ActivityThread.main(ActivityThread.java:4514)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at java.lang.reflect.Method.invokeNative(Native Method)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at java.lang.reflect.Method.invoke(Method.java:511)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:790)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:557)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at dalvik.system.NativeStart.main(Native Method)

    MainActivity

        public class MainActivity extends Activity {
            public static final int DIALOG_DOWNLOAD_JSON_PROGRESS = 0;
            private ProgressDialog mProgressDialog;
            ArrayList<HashMap<String, Object>> MyArrList;

            @SuppressLint("NewApi")
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);

                // Permission StrictMode
                if (android.os.Build.VERSION.SDK_INT > 9) {
                    StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
                    StrictMode.setThreadPolicy(policy);
                }

                // Download JSON File
                new DownloadJSONFileAsync().execute();
            }

            @Override
            protected Dialog onCreateDialog(int id) {
                switch (id) {
                case DIALOG_DOWNLOAD_JSON_PROGRESS:
                    mProgressDialog = new ProgressDialog(this);
                    mProgressDialog.setMessage("Downloading.....");
                    mProgressDialog.setProgressStyle(ProgressDialog.STYLE_SPINNER);
                    mProgressDialog.setCancelable(true);
                    mProgressDialog.show();
                    return mProgressDialog;
                default:
                    return null;
                }
            }

            // Show All Content
            public void ShowAllContent() {
                // listView1
                final ListView lstView1 = (ListView) findViewById(R.id.listView1);
                lstView1.setAdapter(new ImageAdapter(MainActivity.this, MyArrList));
                lstView1.setOnItemClickListener(new OnItemClickListener() {
                    @Override
                    public void onItemClick(AdapterView<?> parent, View v, int position, long id) {
                        HashMap<String, Object> hm = (HashMap<String, Object>) lstView1.getAdapter().getItem(position);
                        String imagePath = (String) hm.get("photo");
                        Intent i = new Intent(MainActivity.this, FullImageActivity.class);
                        i.putExtra("fullImage", imagePath);
                        startActivity(i);
                    }
                });
            }

            public class ImageAdapter extends BaseAdapter {
                private Context context;
                private ArrayList<HashMap<String, Object>> MyArr = new ArrayList<HashMap<String, Object>>();

                public ImageAdapter(Context c, ArrayList<HashMap<String, Object>> myArrList) {
                    context = c;
                    MyArr = myArrList;
                }

                public int getCount() {
                    return MyArr.size();
                }

                public Object getItem(int position) {
                    return position;
                }

                public long getItemId(int position) {
                    return position;
                }

                public View getView(int position, View convertView, ViewGroup parent) {
                    LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                    if (convertView == null) {
                        convertView = inflater.inflate(R.layout.activity_column, null);
                    }

                    // ColImage
                    ImageView imageView = (ImageView) convertView.findViewById(R.id.ColImgPath);
                    imageView.getLayoutParams().height = 80;
                    imageView.getLayoutParams().width = 80;
                    imageView.setPadding(5, 5, 5, 5);
                    imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
                    try {
                        imageView.setImageBitmap((Bitmap) MyArr.get(position).get("ImageThumBitmap"));
                    } catch (Exception e) {
                        // When Error
                        imageView.setImageResource(android.R.drawable.ic_menu_report_image);
                    }

                    // ColImgID
                    TextView txtImgID = (TextView) convertView.findViewById(R.id.ColImgID);
                    txtImgID.setPadding(10, 0, 0, 0);
                    txtImgID.setText("ID : " + MyArr.get(position).get("id").toString());

                    // ColImgName
                    TextView txtPicName = (TextView) convertView.findViewById(R.id.ColImgName);
                    txtPicName.setPadding(50, 0, 0, 0);
                    txtPicName.setText("Name : " + MyArr.get(position).get("first_name").toString());

                    return convertView;
                }
            }

            // Download JSON in Background
            public class DownloadJSONFileAsync extends AsyncTask<String, Void, Void> {

                protected void onPreExecute() {
                    super.onPreExecute();
                    showDialog(DIALOG_DOWNLOAD_JSON_PROGRESS);
                }

                @Override
                protected Void doInBackground(String... params) {
                    String url = "http://192.168.10.104/adchara1/";
                    JSONArray data;
                    try {
                        data = new JSONArray(getJSONUrl(url));
                        MyArrList = new ArrayList<HashMap<String, Object>>();
                        HashMap<String, Object> map;
                        for (int i = 0; i < data.length(); i++) {
                            JSONObject c = data.getJSONObject(i);
                            map = new HashMap<String, Object>();
                            map.put("id", (String) c.getString("id"));
                            map.put("first_name", (String) c.getString("first_name"));
                            // Thumbnail Get ImageBitmap To Object
                            map.put("photo", (String) c.getString("photo"));
                            map.put("ImageThumBitmap", (Bitmap) loadBitmap(c.getString("photo")));
                            // Full (for View Popup)
                            map.put("frame", (String) c.getString("frame"));
                            MyArrList.add(map);
                        }
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                    return null;
                }

                protected void onPostExecute(Void unused) {
                    ShowAllContent(); // When Finish Show Content
                    dismissDialog(DIALOG_DOWNLOAD_JSON_PROGRESS);
                    removeDialog(DIALOG_DOWNLOAD_JSON_PROGRESS);
                }
            }

            /*** Get JSON Code from URL ***/
            public String getJSONUrl(String url) {
                StringBuilder str = new StringBuilder();
                HttpClient client = new DefaultHttpClient();
                HttpGet httpGet = new HttpGet(url);
                try {
                    HttpResponse response = client.execute(httpGet);
                    StatusLine statusLine = response.getStatusLine();
                    int statusCode = statusLine.getStatusCode();
                    if (statusCode == 200) { // Download OK
                        HttpEntity entity = response.getEntity();
                        InputStream content = entity.getContent();
                        BufferedReader reader = new BufferedReader(new InputStreamReader(content));
                        String line;
                        while ((line = reader.readLine()) != null) {
                            str.append(line);
                        }
                    } else {
                        Log.e("Log", "Failed to download file..");
                    }
                } catch (ClientProtocolException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                return str.toString();
            }

            /***** Get Image Resource from URL (Start) *****/
            private static final String TAG = "Image";
            private static final int IO_BUFFER_SIZE = 4 * 1024;

            public static Bitmap loadBitmap(String url) {
                Bitmap bitmap = null;
                InputStream in = null;
                BufferedOutputStream out = null;
                try {
                    in = new BufferedInputStream(new URL(url).openStream(), IO_BUFFER_SIZE);
                    final ByteArrayOutputStream dataStream = new ByteArrayOutputStream();
                    out = new BufferedOutputStream(dataStream, IO_BUFFER_SIZE);
                    copy(in, out);
                    out.flush();
                    final byte[] data = dataStream.toByteArray();
                    BitmapFactory.Options options = new BitmapFactory.Options();
                    // options.inSampleSize = 1;
                    bitmap = BitmapFactory.decodeByteArray(data, 0, data.length, options);
                } catch (IOException e) {
                    Log.e(TAG, "Could not load Bitmap from: " + url);
                } finally {
                    closeStream(in);
                    closeStream(out);
                }
                return bitmap;
            }

            private static void closeStream(Closeable stream) {
                if (stream != null) {
                    try {
                        stream.close();
                    } catch (IOException e) {
                        android.util.Log.e(TAG, "Could not close stream", e);
                    }
                }
            }

            private static void copy(InputStream in, OutputStream out) throws IOException {
                byte[] b = new byte[IO_BUFFER_SIZE];
                int read;
                while ((read = in.read(b)) != -1) {
                    out.write(b, 0, read);
                }
            }
            /***** Get Image Resource from URL (End) *****/

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.activity_main, menu);
                return true;
            }
        }

    FullImageActivity

        String imagePath = getIntent().getStringExtra("fullImage");
        if (imagePath != null && !imagePath.isEmpty()) {
            File imageFile = new File(imagePath);
            if (imageFile.exists()) {
                Bitmap myBitmap = BitmapFactory.decodeFile(imageFile.getAbsolutePath());
                ImageView iv = (ImageView) findViewById(R.id.fullimage);
                iv.setImageBitmap(myBitmap);
            }
        }
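    The stack trace explains itself once it is laid beside the adapter: onItemClick casts getAdapter().getItem(position) to HashMap<String, Object>, but ImageAdapter.getItem returns the position itself, which auto-boxes to an Integer. A minimal sketch of the likely fix, assuming the rest of the adapter stays as written:

        // In ImageAdapter: return the backing row, not the index,
        // so the cast to HashMap<String, Object> in onItemClick succeeds.
        public Object getItem(int position) {
            return MyArr.get(position); // was: return position; (auto-boxed Integer)
        }

    Separately, note that FullImageActivity hands the "photo" value to BitmapFactory.decodeFile, but that value appears to be an HTTP URL rather than a local file path, so the imageFile.exists() check would quietly fail; reusing loadBitmap there may be necessary.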

    Read the article

  • Deleting file with SharePoint List web service fails

    - by Robert MacLean
    I am trying to delete a file from SharePoint using the Lists web service, which fails with the following error:

        Error Code: 0x81020030
        Message: Invalid file name
        Detail: The file name you specified could not be used. It may be the name of an existing file or directory, or you may not have permission to access the file.

    The update XML I sent through is:

        <Batch OnError="Continue" PreCalc="TRUE" ListVersion="0" ViewName="{8FE4E2C8-939E-4462-ABA2-D633EED7F76E}">
          <Method ID="1" Cmd="Delete">
            <Field Name="ID">84</Field>
            <Field Name="FileRef">http://win-4h0xp59sn75:40414/Shared%20Documents/del.txt</Field>
          </Method>
        </Batch>

    The SharePoint server error logs indicate:

        ERROR: Failed to OpenThreadToken, LastError=1008
        The file you are attempting to save or retrieve has been blocked from this Web site by the server administrators.

    Things I have tried: I've tried the changes in #1372971, which have not helped. I have also tried the changes recommended on the Microsoft Social site, which have not helped either. I have confirmed that the txt file extension is not blocked, as indicated here. In addition, I can remove the file via the website; it is only through the web service that this fails. The permissions are correct (or rather not in play), as I am running as a SharePoint administrator, which is the same account that uploaded the file via the Copy web service.
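    One experiment worth recording here, offered as a guess rather than a confirmed fix: UpdateListItems examples in the wild pass FileRef both as an absolute URL and as a server-relative one, so a server-relative form may behave differently against this error:

        <Batch OnError="Continue" PreCalc="TRUE" ListVersion="0">
          <Method ID="1" Cmd="Delete">
            <Field Name="ID">84</Field>
            <Field Name="FileRef">/Shared%20Documents/del.txt</Field>
          </Method>
        </Batch>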

    Read the article

  • ASP.NET MVC tries to load older version of Owin assembly

    - by d_mcg
    As a bit of context, I'm developing an ASP.NET MVC 5 application that uses OAuth-based authentication via Microsoft's OWIN implementation, for Facebook and Google only at this stage. Currently (as of v3.0.0, git-commit 4932c2f), FacebookAuthenticationOptions and GoogleOAuth2AuthenticationOptions don't provide any property to force Facebook or Google, respectively, to reauthenticate users (via appending the appropriate query string parameters) when signing in. Initially, I set out to override the following classes:

        FacebookAuthenticationOptions
        GoogleOAuth2AuthenticationOptions
        FacebookAuthenticationHandler (specifically AuthenticateCoreAsync())
        GoogleOAuth2AuthenticationHandler (specifically AuthenticateCoreAsync())

    yet discovered that the ~AuthenticationHandler classes are marked as internal. So I pulled a copy of the source for the Katana project (http://katanaproject.codeplex.com/) and modified the source accordingly. After compiling, I found that several dependencies needed updating in order to use these updated assemblies (Microsoft.Owin.Security.Facebook and Microsoft.Owin.Security.Google) in the MVC project:

        Microsoft.Owin
        Microsoft.Owin.Security
        Microsoft.Owin.Security.Cookies
        Microsoft.Owin.Security.OAuth
        Microsoft.Owin.Host.SystemWeb

    This was done by replacing the existing project references with the 3.0.0 versions and updating those in web.config. Good news: the project compiles successfully. In debugging, I received an exception on startup:

        An exception of type 'System.IO.FileLoadException' occurred in [MVC web assembly].dll but was not handled in user code

        Additional information: Could not load file or assembly 'Microsoft.Owin.Security, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

    The underlying exception indicated that Microsoft.AspNet.Identity.Owin was trying to load v2.1.0 of Microsoft.Owin.Security when calling app.UseExternalSignInCookie() from Startup.ConfigureAuth(IAppBuilder app) in Startup.Auth.cs. Unfortunately, that assembly (and its other dependency, Microsoft.AspNet.Identity.Owin) isn't part of the Project Katana solution, and I can't find any accessible repository for these assemblies online. Are the Microsoft.AspNet.Identity assemblies open source, like the Katana project? Is there a way to fool those assemblies into using the referenced v3.0.0 assemblies instead of v2.1.0? The /bin folder contains the 3.0.0 versions of the Owin assemblies. I've upgraded the NuGet packages for Microsoft.AspNet.Identity.Owin, and this is still an issue. Any ideas on how to resolve this issue?
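    On the "fool those assemblies" question: the standard lever for exactly this FileLoadException is an assembly binding redirect in web.config, which makes any reference to an older Microsoft.Owin.Security resolve to the 3.0.0 copy in /bin. A sketch, using the publicKeyToken from the error message above; the same dependentAssembly stanza would be repeated for each Owin assembly that was bumped:

        <!-- web.config: redirect older Microsoft.Owin.Security references to 3.0.0 -->
        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="Microsoft.Owin.Security"
                                publicKeyToken="31bf3856ad364e35" culture="neutral" />
              <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>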

    Read the article

  • Visual Studio 2008 doesn't create *.refresh files for external DLL references... what am I missing?

    - by Cory Larson
    Hi all-- I've got a question about something that's just been irritating me. A colleague and I are building a support framework for our current client that we want to reference in other projects. The DLL we want as a reference in our project would be an external reference. We're adding it by doing "Add Reference...", then browsing to the location of the .dll. What I want Visual Studio to do is only add the .xml, .pdb, and a .dll.refresh file, but instead it copies the actual .dll (and .xml and .pdb) into the bin. When we rebuild the framework project, the other project that uses its .dll gets all out of whack until we drop and re-add the reference. Everything I've read online says that VS2008 is supposed to create the .dll.refresh files for you, but it never does. Any ideas? Am I missing something or doing something wrong? At this point I'm ready to add a pre-build event to simply copy the framework .dll into my bin, but the .refresh file seems like less of a hassle if it would just work. Thanks, Cory
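    As far as I can tell, .dll.refresh files are generated only for Web Site projects; class libraries and Web Application projects copy the referenced DLL into bin instead, which matches the behaviour described above. If the pre-build event ends up being the fallback, a sketch of what it might look like in the consuming project (the framework path and DLL name are placeholders; $(ConfigurationName) and $(TargetDir) are standard Visual Studio build-event macros):

        rem Pre-build event (hypothetical paths): pull the latest framework build
        rem output over the stale copy in this project's output folder.
        xcopy /y /d "C:\Projects\Framework\bin\$(ConfigurationName)\Framework.Support.dll" "$(TargetDir)"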

    Read the article

  • logparser not matching on a LIKE pattern

    - by user79339
    Hi, I seem to have the strangest problem. I am using LogParser to search an event log for some text that I know is there (I copied and pasted the string from the event into the SQL search string), but the SQL LIKE statement returns empty results, while other LIKE statements seem to be working fine. I have even tried using two '%' symbols in case the shell was trying to replace the search pattern with an environment variable ('%%NavigationOccuredEventHandler%%'), and escaping the % with a \ and with a ', but all of these just give me a "No valid LIKE mask" error.

    My LogParser command:

        "C:\Program Files\Log Parser 2.2\LogParser.exe" "SELECT * FROM D:\Temp\07i132ppa1_app.evt WHERE Message LIKE '%NavigationOccuredEventHandler%'" -i:EVT -o:Datagrid

    The entry in the event log (found using "SELECT * FROM D:\Temp\07i132ppa1_app.evt" and copying the relevant row):

        D:\Temp\07i132ppa1_app.evt 5976788 2010-03-09 11:53:23 2010-03-09 11:53:23 2 1 Error event 0 None ICP Timestamp: 9/03/2010 1:53:23 AM Message: Error # 068464030040-07I132PPA1 System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.NullReferenceException: Object reference not set to an instance of an object. at ClientRegistration.Controller.ContactDetailsController.NavigationOccuredEventHandler(Object sender, NavigateEventArgs e) at Microsoft.ApplicationBlocks.UIProcess.UIPManager.NavigateEventHandler.Invoke(Object sender, NavigateEventArgs e) at Microsoft.ApplicationBlocks.UIProcess.UIPManager.InvokeEventHandlers(State state) in . . . Truncated for brevity

    Output:

        Statistics:
        Elements processed: 240993
        Elements output: 0
        Execution time: 59.47 seconds

    But if I search for the pattern '%object reference not set%' it works fine and returns results. I copied and pasted the string into a dummy SQL table and ran the query there, and it works fine; it just doesn't seem to work in LogParser. Very baffling. Any help would be much appreciated.
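    One guess worth trying: LogParser's EVT input format exposes both a rendered Message field and a raw Strings field (the event's insertion strings), and the rendered text does not always match what Event Viewer shows when the source's message resources can't be resolved. Matching on both fields would distinguish the two cases:

        "C:\Program Files\Log Parser 2.2\LogParser.exe" "SELECT * FROM D:\Temp\07i132ppa1_app.evt WHERE Strings LIKE '%NavigationOccuredEventHandler%' OR Message LIKE '%NavigationOccuredEventHandler%'" -i:EVT -o:Datagrid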

    Read the article

  • Visual Studio: Add Item / Add as link rather than just Add

    - by Pete d'Oronzio
    I'm new to Visual Studio, coming from Delphi. I have a directory tree full of .cs files (root is \Common). I also have a directory tree full of applications (root is \Applications). Finally, I've got a tree full of assemblies (root is \Assemblies). I'd like to keep my .cs files in the Common tree and all the environment voodoo (solutions, projects, settings, metadata, debug data, bin, etc.) in the Assemblies tree. So, for a simple example, I've got an assembly called PdMagic.Common.Math.dll. The solution and project are located in \Assemblies\Common\Math. All of its source (.cs) files are in \Common\Math (matrix.cs, trig.cs, mathtypes.cs, mathfuncs.cs, stats.cs, etc.). When I use Add Existing Item to add matrix.cs to my project, a copy of it is added to the \Assemblies\Common\Math folder. I just want to reference it; I don't want multiple copies lying around. I've tried Add Existing Item and used the drop-down to "Add As Link" rather than just "Add", and that seems to do what I want. Question: What is the "best practice" for this sort of thing? Do most people just put those .cs files all in the same folder as the project? Why isn't "Add As Link" the default? Thanks!
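    For reference, "Add As Link" just records a relative path plus a Link element in the project file, so the source stays put in \Common. A sketch of the resulting .csproj entry, with the relative path following the layout described above:

        <!-- In \Assemblies\Common\Math\*.csproj: compiled from \Common\Math, never copied locally -->
        <Compile Include="..\..\..\Common\Math\matrix.cs">
          <Link>matrix.cs</Link>
        </Compile>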

    Read the article

  • C#, Powershell - Microsoft.Exchange.Management.PowerShell.Admin

    - by Svein Erik
    I'm having trouble using the Microsoft.Exchange.Management.PowerShell.Admin snap-in on a server. The server is not the one running Exchange 2007; it's a remote server (in the same zone). I can't figure out how to add the PowerShell snap-in Microsoft.Exchange.Management.PowerShell.Admin. Is it possible to just take the DLL from the Exchange 2007 server and copy it to the server where my code is running? Can someone please explain what I need to do to get my code running? The exception I'm getting now is: "No Windows PowerShell Snap-ins are available for version 1". This is the code that generates the error:

        public void CreateMailBox(User user)
        {
            // Create a runspace for your cmdlets to run in and include the Exchange Management snap-in...
            RunspaceConfiguration runspaceConf = RunspaceConfiguration.Create();
            PSSnapInException PSException = null;
            PSSnapInInfo info = runspaceConf.AddPSSnapIn("Microsoft.Exchange.Management.PowerShell.Admin", out PSException);
            Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConf);
            runspace.Open();
            Pipeline pipeline = runspace.CreatePipeline();
            Command command = new Command("New-Mailbox");
            command.Parameters.Add("Name", user.UserName);
            ....

    The error comes from the line with PSSnapInInfo info = runspaceConf..... I'm using .NET 3.5.
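    For what it's worth, snap-ins are registered machine-wide in the registry (normally by InstallUtil or the product installer), so copying the DLL alone typically isn't enough; the usual route is installing the Exchange Management Tools on the box that runs this code. A small C# sketch to check whether the snap-in is registered locally; the registry path follows the standard snap-in registration convention and should be treated as an assumption:

        using System;
        using Microsoft.Win32;

        class SnapinCheck
        {
            static void Main()
            {
                // Machine-wide snap-in registrations live under HKLM\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns.
                const string keyPath = @"SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns\Microsoft.Exchange.Management.PowerShell.Admin";
                using (RegistryKey snapin = Registry.LocalMachine.OpenSubKey(keyPath))
                {
                    Console.WriteLine(snapin != null
                        ? "Snap-in registered, ApplicationBase = " + snapin.GetValue("ApplicationBase")
                        : "Snap-in not registered here - install the Exchange Management Tools on this server.");
                }
            }
        }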

    Read the article

  • What is Causing This Memory Leak in Delphi?

    - by lkessler
    I just can't figure out this memory leak that EurekaLog is reporting for my program. I'm using Delphi 2009. Here it is:

        Memory Leak: Type=Data; Total size=26; Count=1;

    The stack is:

        System.pas   _UStrSetLength    17477
        System.pas   _UStrCat          17572
        Process.pas  InputGedcomFile    1145

    That is all there is in the stack. EurekaLog is pointing me to the location where the memory that was not released was first allocated. According to it, the line in my program is line 1145 of InputGedcomFile. That line is:

        CurStruct0Key := 'HEAD' + Level0Key;

    where CurStruct0Key and Level0Key are simply defined in the procedure as local variables that should be dynamically handled by the Delphi memory manager when entering and leaving the procedure:

        var
          CurStruct0Key, Level0Key: string;

    So now I look at the _UStrCat procedure in the System unit. Line 17572 is:

        CALL    _UStrSetLength          // Set length of Dest

    and I go to the _UStrSetLength procedure in the System unit, where the relevant lines are:

        @@isUnicode:
            CMP     [EAX-skew].StrRec.refCnt,1   // !!! MT safety
            JNE     @@copyString                 // not unique, so copy
            SUB     EAX,rOff                     // Offset EAX "S" to start of memory block
            ADD     EDX,EDX                      // Double length to get size
            JO      @@overflow
            ADD     EDX,rOff+2                   // Add string rec size
            JO      @@overflow
            PUSH    EAX                          // Put S on stack
            MOV     EAX,ESP                      // to pass by reference
            CALL    _ReallocMem
            POP     EAX
            ADD     EAX,rOff                     // Readjust
            MOV     [EBX],EAX                    // Store
            MOV     [EAX-skew].StrRec.length,ESI
            MOV     WORD PTR [EAX+ESI*2],0       // Null terminate
            TEST    EDI,EDI                      // Was a temp created?
            JZ      @@exit
            PUSH    EDI
            MOV     EAX,ESP
            CALL    _LStrClr
            POP     EDI
            JMP     @@exit

    where line 17477 is the "CALL _ReallocMem" line. So then what is the memory leak? Surely a simple concatenation of a string constant with a local string variable should not cause a memory leak. Why is EurekaLog pointing me to the ReallocMem line in a _UStrSetLength routine that is part of Delphi? This is Delphi 2009 and I am using the new Unicode strings. Any help or explanation here will be much appreciated.

    Read the article

  • Installing an exe with Powershell DSC Package resource gets return code 1619

    - by Jay Spang
    I'm trying to use PowerShell DSC's Package resource to install an exe... Perforce's P4V, to be specific. Here's my code:

        Configuration PerforceMachine
        {
            Node "SERVERNAME"
            {
                Package P4V
                {
                    Ensure    = "Present"
                    Name      = "Perforce Visual Components"
                    Path      = "\\nas\share\p4vinst64.exe"
                    ProductId = ''
                    Arguments = "/S /V/qn" # args for silent mode
                    LogPath   = "$env:ProgramData\p4v_install.log"
                }
            }
        }

    When I run this, this is the error PowerShell gives me:

        PowerShell provider MSFT_PackageResource failed to execute Set-TargetResource functionality with error message: The return code 1619 was not expected. Configuration is likely not correct
            + CategoryInfo          : InvalidOperation: (:) [], CimException
            + FullyQualifiedErrorId : ProviderOperationExecutionFailure
            + PSComputerName        : SERVERNAME

    According to the documentation, return code 1619 means the MSI package couldn't be opened. However, when I manually log in to the machine and run "\\nas\share\p4vinst64.exe /S /V/qn", the install works flawlessly.

    Does anyone know why this is failing? Alternatively, can anyone tell me how to troubleshoot it? I pasted all the error information I got from the terminal; my log file (p4v_install.log) is a 0-byte file, and there are no events in the event viewer. I don't know how to troubleshoot it any further!

    EDIT: I should note that I also tried using the File resource to copy the file locally and then install it from there. Sadly, that met with the same result.
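    One avenue to rule out, offered as a guess rather than a verified diagnosis (and the EDIT above partially argues against it): the Local Configuration Manager runs as SYSTEM, which usually has no rights to \\nas\share, and the Package resource accepts a Credential for reaching the source path. A sketch, assuming $ShareCred is an account with rights to the share:

        Configuration PerforceMachine
        {
            param ([PSCredential] $ShareCred)  # assumed: account with access to \\nas\share

            Node "SERVERNAME"
            {
                Package P4V
                {
                    Ensure     = "Present"
                    Name       = "Perforce Visual Components"
                    Path       = "\\nas\share\p4vinst64.exe"
                    ProductId  = ''
                    Arguments  = "/S /V/qn"
                    LogPath    = "$env:ProgramData\p4v_install.log"
                    Credential = $ShareCred   # used by the resource to reach the source path
                }
            }
        }

    Note that passing a credential through DSC requires either certificate-based encryption or PSDscAllowPlainTextPassword = $true in the configuration data.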

    Read the article

  • SPException: Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED)) in SharePoint

    - by BeraCim
    I've been trying to programmatically copy a custom content type and its custom columns from one web to another for some time now, and I get different errors or exceptions every time. After yet more tries, I received another strange and cryptic exception from SharePoint after clicking a newly copied custom column in a custom content type. I checked the logs, and this is what I got:

        Unknown SPRequest error occurred. More information: 0x80070002
        Unable to locate the xml-definition for FieldName with FieldId 'guid without braces', exception: Microsoft.SharePoint.SPException: Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED)) ---> System.Runtime.InteropServices.COMException...
        ... at Microsoft.SharePoint.Library.SPRequestInternalClass.GetGlobalContentTypeXml(String bstrUrl, Int32 type, UInt32 lcid, Object varIdBytes...
        Failed to find the content type schema for ct-1033-0x1000blahblahblahcontenttypeId while caching feature data.
        Unknown SPRequest error occurred. More information: 0x8000ffff
        Unable to locate the xml-definition for CType with SPContentTypeId '0x0100MorecontenttypeId', exception: Microsoft.SharePoint.SPException: Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED)) ---> System.Runtime.InteropServices.COMException (0x8000FFFF): Catastrophic failure...
        ... at Microsoft.SharePoint.Library.SPRequestInternalClass.GetGlobalContentTypeXml(String bstrUrl, Int32 type, UInt32 lcid, Object varIdBytes...

    It failed to find quite a few content type schemas. I'm confused about what SharePoint is trying to do here, and why the seemingly simple process of copying a custom content type from one web to another just won't work, in contrast to the information found on the web, e.g. this. I'd appreciate any help getting past this problem. Thanks.
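    The "unable to locate the xml-definition for FieldName" entries suggest the target web cannot resolve the source web's field definitions by GUID. A sketch of one approach, not a confirmed fix: copy each site column into the target web by its schema XML (which preserves the field GUID) before recreating the content type and re-linking the fields. URLs and the content type name here are made up:

        // Requires a reference to Microsoft.SharePoint.dll.
        using (SPSite site = new SPSite("http://server"))           // hypothetical URL
        using (SPWeb sourceWeb = site.OpenWeb("/sourceweb"))        // hypothetical webs
        using (SPWeb targetWeb = site.OpenWeb("/targetweb"))
        {
            SPContentType sourceCt = sourceWeb.ContentTypes["MyCustomType"]; // hypothetical name

            // 1. Give the target web its own copy of each underlying site column.
            foreach (SPFieldLink link in sourceCt.FieldLinks)
            {
                SPField field = sourceWeb.Fields[link.Id];
                if (!targetWeb.Fields.ContainsField(field.InternalName))
                    targetWeb.Fields.AddFieldAsXml(field.SchemaXml);
            }

            // 2. Recreate the content type and re-link the (now local) fields.
            SPContentType copy = new SPContentType(sourceCt.Parent, targetWeb.ContentTypes, sourceCt.Name);
            targetWeb.ContentTypes.Add(copy);
            foreach (SPFieldLink link in sourceCt.FieldLinks)
                if (copy.FieldLinks[link.Id] == null)
                    copy.FieldLinks.Add(new SPFieldLink(targetWeb.Fields[link.Id]));
            copy.Update();
        }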

    Read the article

< Previous Page | 373 374 375 376 377 378 379 380 381 382 383 384  | Next Page >