Search Results

Search found 9273 results on 371 pages for 'complex strings'.


  • For-loop help in a hash cracker homework

    - by aaron burns
    On the homework I am working on we are making a hash cracker. I am implementing it so that my Cracker.java calls Worker.java, and Worker.java implements Runnable. Worker takes the start and end of a slice of a list of chars, the hash it is to crack, and the max length of the password that made the hash. I know I want a loop in run(), but I cannot think of how to write it so that it goes up to the given max password length. I have posted the code I have so far; any directions or areas I should look into are welcome. I thought there was a way to do this with a certain way of writing the loop, but I don't know, or can't find, the correct syntax. Also: in main I divide the work up so that a number of threads can be chosen, and I know that as of right now it only works for numbers that divide the 40 possible chars evenly.

      package HashCracker;

      import java.util.*;
      import java.security.MessageDigest;
      import java.security.NoSuchAlgorithmException;

      public class Cracker {
        // Array of chars used to produce strings
        public static final char[] CHARS = "abcdefghijklmnopqrstuvwxyz0123456789.,-!".toCharArray();
        public static final int numOfChar = 40;

        /* Given a byte[] array, produces a hex String, such as "234a6f",
           with 2 chars for each byte in the array. (provided code) */
        public static String hexToString(byte[] bytes) {
          StringBuffer buff = new StringBuffer();
          for (int i = 0; i < bytes.length; i++) {
            int val = bytes[i];
            val = val & 0xff;                 // remove higher bits, sign
            if (val < 16) buff.append('0');   // leading 0
            buff.append(Integer.toString(val, 16));
          }
          return buff.toString();
        }

        /* Given a string of hex byte values such as "24a26f", creates a byte[]
           array of those values, one byte value -128..127 for each 2 chars.
           (provided code) */
        public static byte[] hexToArray(String hex) {
          byte[] result = new byte[hex.length() / 2];
          for (int i = 0; i < hex.length(); i += 2) {
            result[i / 2] = (byte) Integer.parseInt(hex.substring(i, i + 2), 16);
          }
          return result;
        }

        public static void main(String args[]) throws NoSuchAlgorithmException {
          if (args.length == 1) { // Hash Maker
            // Create a byte array and a message digest, put the password into it,
            // and print the resulting hash value using the provided methods.
            byte[] myByteArray = args[0].getBytes();
            MessageDigest hasher = MessageDigest.getInstance("SHA-1");
            hasher.update(myByteArray);
            byte[] digestedByte = hasher.digest();
            String hashValue = Cracker.hexToString(digestedByte);
            System.out.println(hashValue);
          } else { // Hash Cracker
            ArrayList<Thread> myRunnables = new ArrayList<Thread>();
            int numOfThreads = Integer.parseInt(args[2]);
            int charPerThread = Cracker.numOfChar / numOfThreads;
            int start = 0;
            int end = charPerThread - 1;
            for (int i = 0; i < numOfThreads; i++) {
              // creates, stores and starts threads
              Runnable tempWorker = new Worker(start, end, args[1], Integer.parseInt(args[1]));
              Thread temp = new Thread(tempWorker);
              myRunnables.add(temp);
              temp.start();
              start = end + 1;
              end = end + charPerThread;
            }
          }
        }
      }

      import java.util.*;

      public class Worker implements Runnable {
        private int charStart;
        private int charEnd;
        private String Hash2Crack;
        private int maxLength;

        public Worker(int start, int end, String hashValue, int maxPWlength) {
          charStart = start;
          charEnd = end;
          Hash2Crack = hashValue;
          maxLength = maxPWlength;
        }

        public void run() {
          byte[] myHash2Crack_ = Cracker.hexToArray(Hash2Crack);
          for (int i = charStart; i < charEnd + 1; i++) {
            Cracker.numOfChar[i] ////// this is where I am stuck
          }
        }
      }
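
    One way to cover every length up to maxLength is to recurse instead of loop: seed the first character from this worker's slice of CHARS, then extend the candidate one character at a time, hashing and testing each prefix. A minimal sketch of run() under that approach - the search helper is a hypothetical name, not part of the assignment, and Worker.java would need the java.security imports as well:

      public void run() {
        try {
          MessageDigest hasher = MessageDigest.getInstance("SHA-1");
          for (int i = charStart; i <= charEnd; i++) {
            search(String.valueOf(Cracker.CHARS[i]), hasher);
          }
        } catch (NoSuchAlgorithmException e) {
          e.printStackTrace();
        }
      }

      // Hypothetical helper: hash and test the candidate, then extend it with
      // every character in CHARS until the maximum password length is reached.
      private void search(String candidate, MessageDigest hasher) {
        hasher.reset();
        hasher.update(candidate.getBytes());
        if (Cracker.hexToString(hasher.digest()).equals(Hash2Crack)) {
          System.out.println("Found: " + candidate);
        }
        if (candidate.length() < maxLength) {
          for (char c : Cracker.CHARS) {
            search(candidate + c, hasher);
          }
        }
      }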


  • C - Segmentation fault after the 4th call of the function

    - by FILIaS
    It seems at least weird to me... The program runs normally, but after I call the enter() function for the 4th time, there is a segmentation fault! I would appreciate any help. With the following function enter() I want to add the data from a user command to a list. [Part of the code is already posted in another question of mine, but I think I should post it again, as it's a different problem I'm facing now.]

      /* struct for all the data the user enters into the file */
      typedef struct catalog {
        char short_name[50];
        char surname[50];
        signed int amount;
        char description[1000];
        struct catalog *next;
      } catalog, *catalogPointer;

      catalogPointer current;
      catalogPointer head = NULL;

      void enter(void) /* user command: i <name> <surname> <amount> <description> */
      {
        int n, j = 2, k = 0;
        char temp[1500];
        char *short_name, *surname, *description;
        signed int amount;
        char *params = strchr(command, ' ') + 1; /* strchr returns a pointer to the 1st space in the command;
                                                    we want a pointer to the char right after that space */
        strcpy(temp, params);                    /* params is saved as temp */
        char *curToken = strtok(temp, " ");      /* strtok cuts 'temp' into strings between the spaces
                                                    and saves them to 'curToken' */
        printf("temp is:%s \n", temp);
        printf("\nWhat you entered for saving:\n");
        for (n = 0; curToken; ++n) /* until curToken ends: */
        {
          if (curToken) {
            short_name = malloc(strlen(curToken) + 1);
            strncpy(short_name, curToken, sizeof(short_name));
          }
          printf("Short Name: %s \n", short_name);
          curToken = strtok(NULL, " ");
          if (curToken) {
            surname = malloc(strlen(curToken) + 1);
            strncpy(surname, curToken, sizeof(surname));
          }
          printf("SurName: %s \n", surname);
          curToken = strtok(NULL, " ");
          if (curToken) {
            /* int *amount = malloc(sizeof(signed int *)); */
            char *chk;
            amount = (int) strtol(curToken, &chk, 10);
            if (!isspace(*chk) && *chk != 0)
              fprintf(stderr, "Warning: expected integer value for amount, received %s instead\n", curToken);
          }
          printf("Amount: %d \n", amount);
          curToken = strtok(NULL, "\0");
          if (curToken) {
            description = malloc(strlen(curToken) + 1);
            strncpy(description, curToken, sizeof(description));
          }
          printf("Description: %s \n", description);
          break;
        }
        if (findEntryExists(head, surname, short_name) != NULL) /* see if the entry is already in the catalog */
          printf("\nAn entry for <%s %s> is already in the catalog!\nNew entry not entered.\n", short_name, surname);
        else {
          printf("\nTry to entry <%s %s %d %s> in the catalog list!\n", short_name, surname, amount, description);
          newEntry(&head, short_name, surname, amount, description);
          printf("\n**Entry done!**\n");
        }
        /* Maintain the list in alphabetical order by surname. */
      }

      catalogPointer findEntryExists(catalogPointer head, char num[], char first[])
      {
        catalogPointer p = head;
        while (p != NULL && strcmp(p->surname, num) != 0 && strcmp(p->short_name, first) != 0) {
          p = p->next;
        }
        return p;
      }

      catalogPointer newEntry(catalog **headRef, char short_name[], char surname[], signed int amount, char description[])
      {
        catalogPointer newNode = (catalogPointer) malloc(sizeof(catalog));
        catalogPointer first;
        catalogPointer second;
        catalogPointer tmp;
        first = head;
        second = NULL;
        strcpy(newNode->short_name, short_name);
        strcpy(newNode->surname, surname);
        newNode->amount = amount;
        strcpy(newNode->description, description); /* SEGMENTATION FAULT ON THE 4TH RUN OF THE PROGRAM
                                                      STOPS HERE, depending on the names it gets each time! */
        while (first != NULL) {
          if (strcmp(surname, first->surname) > 0)
            second = first;
          else if (strcmp(surname, first->surname) == 0) {
            if (strcmp(short_name, first->short_name) > 0)
              second = first;
          }
          first = first->next;
        }
        if (second == NULL) {
          newNode->next = head;
          head = newNode;
        } else {
          tmp = second->next;
          newNode->next = tmp;
          first->next = newNode;
        }
      }
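
    A likely culprit, rather than newEntry() itself: every strncpy here is bounded by sizeof applied to a pointer, which is 4 or 8 bytes, not the size of the allocation, so longer tokens get cut off without a terminating '\0', and the later strcpy into the node reads past the end of the buffer. Since each buffer is already sized with strlen(curToken) + 1, a plain strcpy is enough. A sketch for short_name; the same applies to surname and description:

      short_name = malloc(strlen(curToken) + 1);
      if (short_name != NULL)
          strcpy(short_name, curToken);   /* copies the whole token, '\0' included */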


  • How to use a class method as a Win32 application callback (WndProc)? Error: static struct HINSTANCE__

    - by numerical25
    I am receiving errors and at the same time trying to make this work, so please read what I have to say. Thanks. I am creating a C++ application, and the majority of it is encapsulated in a class. That means my WndProc function is a static class method that I use as the callback for my Win32 application. The problem is that every class member I use inside that method has to be static as well, including the HINSTANCE member. Now I receive this error:

      error LNK2001: unresolved external symbol "public: static struct HINSTANCE__

    I need to figure out how to make this all work; my class declaration is below. I believe my static member static HINSTANCE m_hInst is causing the error.

      #pragma once
      #include "stdafx.h"
      #include "resource.h"

      #define MAX_LOADSTRING 100

      class RenderEngine
      {
      protected:
        int m_width;
        int m_height;
        ATOM RegisterEngineClass();
      public:
        static HINSTANCE m_hInst; // <------ This is causing the error
        HWND m_hWnd;
        int m_nCmdShow;
        TCHAR m_szTitle[MAX_LOADSTRING];       // The title bar text
        TCHAR m_szWindowClass[MAX_LOADSTRING]; // the main window class name

        bool InitWindow();
        bool InitDirectX();
        bool InitInstance();

        // static functions
        static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam);
        static INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam);

        int Run();
      };

    Below is the implementation:

      #include "stdafx.h"
      #include "RenderEngine.h"

      bool RenderEngine::InitWindow()
      {
        RenderEngine::m_hInst = NULL;
        // Initialize global strings
        LoadString(m_hInst, IDS_APP_TITLE, m_szTitle, MAX_LOADSTRING);
        LoadString(m_hInst, IDC_RENDERENGINE, m_szWindowClass, MAX_LOADSTRING);
        if (!RegisterEngineClass())
        {
          return false;
        }
        if (!InitInstance())
        {
          return false;
        }
        return true;
      }

      ATOM RenderEngine::RegisterEngineClass()
      {
        WNDCLASSEX wcex;
        wcex.cbSize        = sizeof(WNDCLASSEX);
        wcex.style         = CS_HREDRAW | CS_VREDRAW;
        wcex.lpfnWndProc   = RenderEngine::WndProc;
        wcex.cbClsExtra    = 0;
        wcex.cbWndExtra    = 0;
        wcex.hInstance     = m_hInst;
        wcex.hIcon         = LoadIcon(m_hInst, MAKEINTRESOURCE(IDI_RENDERENGINE));
        wcex.hCursor       = LoadCursor(NULL, IDC_ARROW);
        wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW+1);
        wcex.lpszMenuName  = MAKEINTRESOURCE(IDC_RENDERENGINE);
        wcex.lpszClassName = m_szWindowClass;
        wcex.hIconSm       = LoadIcon(wcex.hInstance, MAKEINTRESOURCE(IDI_SMALL));
        return RegisterClassEx(&wcex);
      }

      LRESULT CALLBACK RenderEngine::WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
      {
        int wmId, wmEvent;
        PAINTSTRUCT ps;
        HDC hdc;
        switch (message)
        {
        case WM_COMMAND:
          wmId    = LOWORD(wParam);
          wmEvent = HIWORD(wParam);
          // Parse the menu selections:
          switch (wmId)
          {
          case IDM_ABOUT:
            DialogBox(m_hInst, MAKEINTRESOURCE(IDD_ABOUTBOX), hWnd, About);
            break;
          case IDM_EXIT:
            DestroyWindow(hWnd);
            break;
          default:
            return DefWindowProc(hWnd, message, wParam, lParam);
          }
          break;
        case WM_PAINT:
          hdc = BeginPaint(hWnd, &ps);
          // TODO: Add any drawing code here...
          EndPaint(hWnd, &ps);
          break;
        case WM_DESTROY:
          PostQuitMessage(0);
          break;
        default:
          return DefWindowProc(hWnd, message, wParam, lParam);
        }
        return 0;
      }

      bool RenderEngine::InitInstance()
      {
        m_hWnd = NULL;
        m_hWnd = CreateWindow(m_szWindowClass, m_szTitle, WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, NULL, NULL, m_hInst, NULL);
        if (!m_hWnd)
        {
          return FALSE;
        }
        if (!ShowWindow(m_hWnd, m_nCmdShow))
        {
          return false;
        }
        UpdateWindow(m_hWnd);
        return true;
      }

      // Message handler for about box.
      INT_PTR CALLBACK RenderEngine::About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam)
      {
        UNREFERENCED_PARAMETER(lParam);
        switch (message)
        {
        case WM_INITDIALOG:
          return (INT_PTR)TRUE;
        case WM_COMMAND:
          if (LOWORD(wParam) == IDOK || LOWORD(wParam) == IDCANCEL)
          {
            EndDialog(hDlg, LOWORD(wParam));
            return (INT_PTR)TRUE;
          }
          break;
        }
        return (INT_PTR)FALSE;
      }

      int RenderEngine::Run()
      {
        MSG msg;
        HACCEL hAccelTable;
        hAccelTable = LoadAccelerators(m_hInst, MAKEINTRESOURCE(IDC_RENDERENGINE));
        // Main message loop:
        while (GetMessage(&msg, NULL, 0, 0))
        {
          if (!TranslateAccelerator(msg.hwnd, hAccelTable, &msg))
          {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
          }
        }
        return (int) msg.wParam;
      }

    And below is the code being used within WinMain:

      RenderEngine go;

      int APIENTRY _tWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow)
      {
        UNREFERENCED_PARAMETER(hPrevInstance);
        UNREFERENCED_PARAMETER(lpCmdLine);
        // TODO: Place code here.
        RenderEngine::m_hInst = hInstance;
        go.m_nCmdShow = nCmdShow;
        if (!go.InitWindow())
        {
          return 0;
        }
        go.Run();
        return 0;
      }

    If anything does not make any sense, then I apologize; I am a newb. Thanks for the help!
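
    LNK2001 on a static data member usually means exactly this: the in-class declaration of m_hInst is only a declaration, and the linker never sees a definition. Defining it once at file scope in the .cpp should resolve the symbol. A sketch:

      // RenderEngine.cpp -- a static data member must be *defined* exactly once
      // in some translation unit, in addition to its in-class declaration:
      #include "RenderEngine.h"

      HINSTANCE RenderEngine::m_hInst = NULL;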


  • Compile error with initializer_list when trying to use it to initialize a class member

    - by ilektron
    I am trying to make a class initializable from an initializer_list in a class constructor's member-initializer list. It works for a std::map, but not for my custom class. I don't see any difference, other than that std::map uses templates.

      #include <iostream>
      #include <initializer_list>
      #include <string>
      #include <sstream>
      #include <map>

      using std::string;

      class text_thing
      {
      private:
        string m_text;

      public:
        text_thing() { }
        text_thing(text_thing& other);
        text_thing(std::initializer_list< std::pair<const string, const string> >& il);
        text_thing& operator=(std::initializer_list< std::pair<const string, const string> >& il);

        operator string() { return m_text; }
      };

      class static_base
      {
      private:
        std::map<string, string> m_test_map;
        text_thing m_thing;
        static_base();

      public:
        static static_base& getInstance()
        {
          static static_base instance;
          return instance;
        }

        string getText() { return (string)m_thing; }
      };

      typedef std::pair<const string, const string> spair;

      text_thing::text_thing(text_thing& other)
      {
        m_text = other.m_text;
      }

      text_thing::text_thing(std::initializer_list< std::pair<const string, const string> >& il)
      {
        std::stringstream text_gen;
        for (auto& apair : il) {
          text_gen << "{" << apair.first << ", " << apair.second << "}" << std::endl;
        }
      }

      text_thing& text_thing::operator=(std::initializer_list< std::pair<const string, const string> >& il)
      {
        std::stringstream text_gen;
        for (auto& apair : il) {
          text_gen << "{" << apair.first << ", " << apair.second << "}" << std::endl;
        }
        return *this;
      }

      static_base::static_base() :
        m_test_map{{"test", "1"}, {"test2", "2"}}, // Compiler fine with this
        m_thing{{"test", "1"}, {"test2", "2"}}     // Compiler doesn't like this
      {
      }

      int main()
      {
        std::cout << "Starting the program" << std::endl;
        std::cout << "The text thing: " << std::endl << static_base::getInstance().getText();
      }

    I get this compiler output:

      g++ -O0 -g3 -Wall -c -fmessage-length=0 -std=c++11 -MMD -MP -MF"static_base.d" -MT"static_base.d" -o "static_base.o" "../static_base.cpp"
      Finished building: ../static_base.cpp

      Building file: ../test.cpp
      Invoking: GCC C++ Compiler
      g++ -O0 -g3 -Wall -c -fmessage-length=0 -std=c++11 -MMD -MP -MF"test.d" -MT"test.d" -o "test.o" "../test.cpp"
      ../test.cpp: In constructor 'static_base::static_base()':
      ../test.cpp:94:40: error: no matching function for call to 'text_thing::text_thing(<brace-enclosed initializer list>)'
        m_thing{{"test", "1"}, {"test2", "2"}}
                                             ^
      ../test.cpp:94:40: note: candidates are:
      ../test.cpp:72:1: note: text_thing::text_thing(std::initializer_list<std::pair<const std::basic_string<char>, const std::basic_string<char> > >&)
        text_thing::text_thing(std::initializer_list< std::pair<const string, const string> >& il)
        ^
      ../test.cpp:72:1: note: candidate expects 1 argument, 2 provided
      ../test.cpp:67:1: note: text_thing::text_thing(text_thing&)
        text_thing::text_thing(text_thing& other)
        ^
      ../test.cpp:67:1: note: candidate expects 1 argument, 2 provided
      ../test.cpp:23:2: note: text_thing::text_thing()
        text_thing()
        ^
      ../test.cpp:23:2: note: candidate expects 0 arguments, 2 provided
      make: *** [test.o] Error 1

    Output of gcc -v:

      Using built-in specs.
      COLLECT_GCC=gcc
      COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.8/lto-wrapper
      Target: x86_64-linux-gnu
      Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.8.1-2ubuntu1~13.04' --with-bugurl=file:///usr/share/doc/gcc-4.8/README.Bugs --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.8 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.8 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.8-amd64 --with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
      Thread model: posix
      gcc version 4.8.1 (Ubuntu 4.8.1-2ubuntu1~13.04)

    It compiles fine with the std::map constructed this way, and if I modify static_base to return the strings from the map, all is fine and dandy. Please help me understand what is going on here.
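
    The braces produce a temporary std::initializer_list, and a temporary cannot bind to the non-const lvalue reference the constructor asks for (std::map's constructor takes its list by value, which is why that member compiles). Taking the list by value is the idiomatic signature; a sketch, which also assigns m_text at the end, since the original constructor builds text_gen but never stores it (an assumed intent):

      text_thing(std::initializer_list< std::pair<const string, const string> > il);

      text_thing::text_thing(std::initializer_list< std::pair<const string, const string> > il)
      {
        std::stringstream text_gen;
        for (auto& apair : il)
          text_gen << "{" << apair.first << ", " << apair.second << "}" << std::endl;
        m_text = text_gen.str(); // assumed intent: keep the generated text
      }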


  • Saving an image from the Gallery to the db - Cursor IllegalStateException

    - by MyWay
    I want to save some strings with an image to the db. The image can be taken from the gallery, or the user can use the sample one. Another activity has a ListView which should present rows with image and name. I have been facing this problem for a long time. It occurs when I want to display the ListView with an image from the gallery; if the sample image is saved in the row, everything works OK. My problem is similar to this one: "how to save image taken from camera and show it to listview - crashes with IllegalStateException", but I can't find a solution for me there. My table in the db looks like this:

      public static final String KEY_ID = "_id";
      public static final String ID_DETAILS = "INTEGER PRIMARY KEY AUTOINCREMENT";
      public static final int ID_COLUMN = 0;

      public static final String KEY_NAME = "name";
      public static final String NAME_DETAILS = "TEXT NOT NULL";
      public static final int NAME_COLUMN = 1;

      public static final String KEY_DESCRIPTION = "description";
      public static final String DESCRIPTION_DETAILS = "TEXT";
      public static final int DESCRIPTION_COLUMN = 2;

      public static final String KEY_IMAGE = "image";
      public static final String IMAGE_DETAILS = "BLOP";
      public static final int IMAGE_COLUMN = 3;

      // method which creates our table
      private static final String CREATE_PRODUCTLIST_IN_DB =
          "CREATE TABLE " + DB_TABLE + "( "
          + KEY_ID + " " + ID_DETAILS + ", "
          + KEY_NAME + " " + NAME_DETAILS + ", "
          + KEY_DESCRIPTION + " " + DESCRIPTION_DETAILS + ", "
          + KEY_IMAGE + " " + IMAGE_DETAILS + ");";

    Inserting statement:

      public long insertToProductList(String name, String description, byte[] image) {
        ContentValues value = new ContentValues();
        // get the id of the column and the value
        value.put(KEY_NAME, name);
        value.put(KEY_DESCRIPTION, description);
        value.put(KEY_IMAGE, image);
        // put into db
        return db.insert(DB_TABLE, null, value);
      }

    The button which adds the picture, and the onActivityResult method which saves the image and puts it into the ImageView:

      public void AddPicture(View v) {
        // creating the specific intent which has to get the data
        Intent intent = new Intent(Intent.ACTION_PICK);
        // from where we want to choose our pictures
        intent.setType("image/*");
        startActivityForResult(intent, PICK_IMAGE);
      }

      @Override
      protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        // TODO Auto-generated method stub
        super.onActivityResult(requestCode, resultCode, data);
        // if the identification code matches the intent, we know it is our picture
        if (requestCode == PICK_IMAGE) {
          // check if the data comes with the intent
          if (data != null) {
            Uri chosenImage = data.getData();
            String[] filePathColumn = {MediaStore.Images.Media.DATA};
            Cursor cursor = getContentResolver().query(chosenImage, filePathColumn, null, null, null);
            cursor.moveToFirst();
            int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
            String filePat = cursor.getString(columnIndex);
            cursor.close();
            ImageOfProduct = BitmapFactory.decodeFile(filePat);
            if (ImageOfProduct != null) {
              picture.setImageBitmap(ImageOfProduct);
            }
            messageDisplayer("got picture, isn't null " + IdOfPicture);
          }
        }
      }

    Then the code which converts a bitmap to byte[]:

      public byte[] bitmapToByteConvert(Bitmap bit) {
        // stream of data for the compressed bitmap
        ByteArrayOutputStream gettedData = new ByteArrayOutputStream();
        // compressing method
        bit.compress(CompressFormat.PNG, 0, gettedData);
        // our byte array
        return gettedData.toByteArray();
      }

    The method which puts the data into the row:

      byte[] image = null;
      // if the name isn't put into the editView
      if (name.getText().toString().trim().length() == 0) {
        messageDisplayer("At least you need to type the name of the product if you want to add it to the DB");
      } else {
        String desc = description.getText().toString();
        if (description.getText().toString().trim().length() == 0) {
          messageDisplayer("the description is set as none");
          desc = "none";
        }
        DB.open();
        if (ImageOfProduct != null) {
          image = bitmapToByteConvert(ImageOfProduct);
          messageDisplayer("image isn't null");
        } else {
          BitmapDrawable drawable = (BitmapDrawable) picture.getDrawable();
          image = bitmapToByteConvert(drawable.getBitmap());
        }
        if (image.length > 0 && image != null) {
          messageDisplayer(Integer.toString(image.length));
        }
        DB.insertToProductList(name.getText().toString(), desc, image);
        DB.close();
        messageDisplayer("well done, you added the product");
        finish();

    You can see that I'm checking the length of the array here to be sure that I don't send an empty one. And here is the place where the error appears, imo; this code is from the activity which presents the ListView with data taken from the db:

      private void LoadOurLayoutListWithInfo() {
        // first we need to open a connection to the db
        db = new sqliteDB(getApplicationContext());
        db.open();
        // creating our custom adapter; its specification will be typed
        // in our own class (MyArrayAdapter), which is created below
        ArrayAdapter<ProductFromTable> customAdapter = new MyArrayAdapter();
        // get the info from the whole table
        tablecursor = db.getAllColumns();
        if (tablecursor != null) {
          startManagingCursor(tablecursor);
          tablecursor.moveToFirst();
        }
        // now we move all info from tablecursor to our list
        if (tablecursor != null && tablecursor.moveToFirst()) {
          do {
            // taking info from a row in the table
            int id = tablecursor.getInt(sqliteDB.ID_COLUMN);
            String name = tablecursor.getString(sqliteDB.NAME_COLUMN);
            String description = tablecursor.getString(sqliteDB.DESCRIPTION_COLUMN);
            byte[] image = tablecursor.getBlob(3);
            tablefromDB.add(new ProductFromTable(id, name, description, image));
            // moving on until we find the last row
          } while (tablecursor.moveToNext());
        }
        listView = (ListView) findViewById(R.id.tagwriter_listoftags);
        // as the documentation says: setAdapter = the ListAdapter which is
        // responsible for maintaining the data backing this list and for
        // producing a view to represent an item in that data set
        listView.setAdapter(customAdapter);
      }

    I put the info from the row into objects which are stored in a list. I read tons of questions but I can't find any solution for me. Everything works when I put the sample image (which is stored in the app's res folder). Thx for any advice
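
    A Cursor-side IllegalStateException while reading rows back is commonly the CursorWindow running out of room on a full-size photo blob: the sample image is small enough to fit, while a Gallery photo usually isn't. Two usual ways out are storing the file path instead of the bytes, or shrinking what goes into the row. A sketch of the latter, reusing the question's converter (the 320x240 thumbnail size and JPEG quality are arbitrary choices):

      public byte[] bitmapToByteConvert(Bitmap bit) {
        // Downscale before compressing so a row stays well under the
        // CursorWindow limit.
        Bitmap thumb = Bitmap.createScaledBitmap(bit, 320, 240, true);
        ByteArrayOutputStream gettedData = new ByteArrayOutputStream();
        thumb.compress(Bitmap.CompressFormat.JPEG, 80, gettedData);
        return gettedData.toByteArray();
      }

    Alternatively, the filePat string from onActivityResult could be stored in a TEXT column and decoded with BitmapFactory.decodeFile() when the row is displayed, which keeps blobs out of the database entirely.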


  • How can I simulate the effects of an observable collection in this situation?

    - by MGSoto
    I am making a configuration editor for another application and am using reflection to pull editable fields from the configuration class. The following class is the base class for my various "DataTypeViewModels" and shows how I get and set the appropriate properties.

      public abstract class DataTypeViewModel<T> : ViewModelBase
      {
          Func<T> getFunction;
          Action<T> setAction;

          public const string ValuePropertyName = "Value";

          public string Label { get; set; }

          public T Value
          {
              get { return getFunction.Invoke(); }
              set
              {
                  if (getFunction.Invoke().Equals(value))
                  {
                      return;
                  }
                  setAction.Invoke(value);
                  // Update bindings, no broadcast
                  RaisePropertyChanged(ValuePropertyName);
              }
          }

          /// <summary>
          /// Initializes a new instance of the StringViewModel class.
          /// </summary>
          public DataTypeViewModel(string sectionName, string label)
          {
              if (IsInDesignMode)
              {
                  // Code runs in Blend --> create design time data.
              }
              else
              {
                  Label = label;
                  getFunction = new Func<T>(() =>
                  {
                      return (T)Settings.Instance.GetType().GetProperty(sectionName).PropertyType
                          .GetProperty(label).GetValue(Settings.Instance.GetType().GetProperty(sectionName).GetValue(Settings.Instance, null), null);
                  });
                  setAction = new Action<T>(value =>
                  {
                      Settings.Instance.GetType().GetProperty(sectionName).PropertyType.GetProperty(label)
                          .SetValue(Settings.Instance.GetType().GetProperty(sectionName).GetValue(Settings.Instance, null), value, null);
                  });
              }
          }
      }

    This part works the way I want it to; the next part is a sample DataTypeViewModel on a list of strings.

      public class StringListViewModel : DataTypeViewModel<ICollection<string>>
      {
          /// <summary>
          /// The <see cref="RemoveItemCommand" /> property's name.
          /// </summary>
          public const string RemoveItemCommandPropertyName = "RemoveItemCommand";

          private RelayCommand<string> _removeItemCommand = null;

          public ObservableCollection<string> ObservableValue { get; set; }

          /// <summary>
          /// Gets the RemoveItemCommand property.
          /// TODO Update documentation:
          /// Changes to that property's value raise the PropertyChanged event.
          /// This property's value is broadcasted by the Messenger's default instance when it changes.
          /// </summary>
          public RelayCommand<string> RemoveItemCommand
          {
              get { return _removeItemCommand; }
              set
              {
                  if (_removeItemCommand == value)
                  {
                      return;
                  }
                  var oldValue = _removeItemCommand;
                  _removeItemCommand = value;
                  // Update bindings, no broadcast
                  RaisePropertyChanged(RemoveItemCommandPropertyName);
              }
          }

          /// <summary>
          /// The <see cref="AddItemCommand" /> property's name.
          /// </summary>
          public const string AddItemCommandPropertyName = "AddItemCommand";

          private RelayCommand<string> _addItemCommand = null;

          /// <summary>
          /// Gets the AddItemCommand property.
          /// TODO Update documentation:
          /// Changes to that property's value raise the PropertyChanged event.
          /// This property's value is broadcasted by the Messenger's default instance when it changes.
          /// </summary>
          public RelayCommand<string> AddItemCommand
          {
              get { return _addItemCommand; }
              set
              {
                  if (_addItemCommand == value)
                  {
                      return;
                  }
                  var oldValue = _addItemCommand;
                  _addItemCommand = value;
                  // Update bindings, no broadcast
                  RaisePropertyChanged(AddItemCommandPropertyName);
              }
          }

          /// <summary>
          /// Initializes a new instance of the StringListViewModel class.
          /// </summary>
          public StringListViewModel(string sectionName, string label) : base(sectionName, label)
          {
              ObservableValue = new ObservableCollection<string>(Value);
              AddItemCommand = new RelayCommand<string>(param =>
              {
                  if (param != string.Empty)
                  {
                      Value.Add(param);
                      ObservableValue.Add(param);
                  }
              });
              RemoveItemCommand = new RelayCommand<string>(param =>
              {
                  if (param != null)
                  {
                      Value.Remove(param);
                      ObservableValue.Remove(param);
                  }
              });
          }
      }

    As you can see in the constructor, I currently have "Value" mirrored into a new ObservableCollection called "ObservableValue", which is then bound to by a ListView in the XAML. It works well this way, but cloning the list seems like such a hacky way to do this. While bound to Value, I've tried adding RaisePropertyChanged("Value") to the AddItemCommand and RemoveItemCommand, but this doesn't work; the ListView won't get updated. What is the proper way to do this?
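
    RaisePropertyChanged("Value") can't help here: the property still returns the same collection reference, and a plain ICollection raises no collection-change notifications, which is what the ListView actually listens for. One way to keep a single source of truth is to let ObservableValue be the only collection the commands touch and forward its changes to the underlying settings collection. A sketch for the constructor, assuming items are only ever added and removed one at a time:

      ObservableValue = new ObservableCollection<string>(Value);
      ObservableValue.CollectionChanged += (sender, e) =>
      {
          // Mirror view-side edits back into the reflected settings collection.
          if (e.NewItems != null)
              foreach (string item in e.NewItems) Value.Add(item);
          if (e.OldItems != null)
              foreach (string item in e.OldItems) Value.Remove(item);
      };

      AddItemCommand = new RelayCommand<string>(param =>
      {
          if (!string.IsNullOrEmpty(param)) ObservableValue.Add(param);
      });

      RemoveItemCommand = new RelayCommand<string>(param =>
      {
          if (param != null) ObservableValue.Remove(param);
      });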


  • Is multithreading the right way to go for my case?

    - by Julien Lebosquain
    Hello, I'm currently designing a multi-client / server application. I'm using plain good old sockets because WCF or similar technology is not what I need. Let me explain: it isn't the classical case of a client simply calling a service; all clients can 'interact' with each other by sending a packet to the server, which will then do some action and possibly re-dispatch an answer message to one or more clients. Although doable with WCF, the application would get pretty complex with hundreds of different messages. For each connected client, I'm of course using asynchronous methods to send and receive bytes. I've got the messages fully working; everything's fine. Except that for each line of code I'm writing, my head just burns because of multithreading issues. Since there could be around 200 clients connected at the same time, I chose to go the fully multithreaded way: each received message on a socket is immediately processed on the thread-pool thread it was received on, not on a single consumer thread. Since each client can interact with other clients, and indirectly with shared objects on the server, I must protect almost every object that is mutable. I first went with a ReaderWriterLockSlim for each resource that must be protected, but quickly noticed that there are more writes overall than reads in the server application, and switched to the well-known Monitor to simplify the code. So far, so good. Each resource is protected, and I have helper classes that I must use to get a lock and its protected resource, so I can't use an object without getting a lock. Moreover, each client has its own lock that is entered as soon as a packet is received from its socket. It's done to prevent other clients from making changes to the state of this client while it has some messages being processed, which is something that will happen frequently.

    Now, I don't just need to protect resources from concurrent accesses. I must keep every client in sync with the server for some collections I have. One tricky part that I'm currently struggling with is the following: I have a collection of clients. Each client has its own unique ID. When a client connects, it must receive the IDs of every connected client, and each one of them must be notified of the newcomer's ID. When a client disconnects, every other client must know it so that its ID is no longer valid for them. Every client must always have, at a given time, the same clients collection as the server, so that I can assume that everybody knows everybody. This way, if I'm sending a message to client #1 telling "Client #2 has done something", I know that it will always be correctly interpreted: client #1 will never wonder "but who is client #2 anyway?".

    My first attempt at handling the connection of a new client (let's call it X) was this pseudo-code (remember that newClient is already locked here):

      lock (clients)
      {
          foreach (var client in clients)
          {
              lock (client)
              {
                  client.Send("newClient with id X has connected");
              }
          }
          clients.Add(newClient);
          newClient.Send("the list of other clients");
      }

    Now imagine that at the same time another client has sent a packet that translates into a message that must be broadcast to every connected client; the pseudo-code would be something like this (remember that the current client - let's call it Y - is already locked here):

      lock (clients)
      {
          foreach (var client in clients)
          {
              lock (client)
              {
                  client.Send("something");
              }
          }
      }

    An obvious deadlock occurs here: on one thread, X is locked, the clients lock has been entered, we've started looping through the clients, and at some moment we must get Y's lock... which is already acquired on the second thread, itself waiting for the clients collection lock to be released! This is not the only case like this in the server application. There are other collections which must be kept in sync with the clients, some properties on a client can be changed by another one, etc. I tried other types of locks, lock-free mechanisms and a bunch of other things. Either there were obvious deadlocks when I was using too many locks for safety, or obvious race conditions otherwise. When I finally found a good middle point between the two, it usually came with very subtle race conditions / deadlocks and other multithreading issues... my head hurts very quickly, since for any single line of code I write I have to review almost the whole application to ensure everything will behave correctly with any number of threads.

    So here's my final question: how would you resolve this specific case, and the general case, and more importantly: am I going the wrong way here? I have few problems with the .NET framework, C#, simple concurrency or algorithms in general. Still, I'm lost here. I know I could use only one thread processing the incoming requests and everything would be fine. However, that won't scale well at all with more clients... but I'm thinking more and more about going this simple way. What do you think? Thanks in advance to you, StackOverflow people who have taken the time to read this huge question. I really had to explain the whole context if I want to get some help.
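
    A pattern worth prototyping before abandoning multithreading entirely: keep the socket I/O asynchronous, but funnel every state mutation through one consumer, so the shared collections have a single owner, no per-object locks, and no lock ordering to get wrong. The I/O threads only parse bytes and enqueue work. A compact sketch with BlockingCollection (class and member names are illustrative, not from the question):

      using System;
      using System.Collections.Concurrent;
      using System.Threading;

      class ServerDispatcher
      {
          private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();

          public ServerDispatcher()
          {
              var consumer = new Thread(() =>
              {
                  // All mutations of clients/shared state run here, one at a time.
                  foreach (var action in _queue.GetConsumingEnumerable())
                      action();
              });
              consumer.IsBackground = true;
              consumer.Start();
          }

          // Called from any async receive callback once a packet is decoded:
          public void Post(Action action)
          {
              _queue.Add(action);
          }
      }

    Sends can still be performed asynchronously from inside the posted actions, so only the state mutation is serialized, not the network I/O; for 200 clients, one dispatch thread handling short actions is rarely the bottleneck.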


  • My window's handle is "unused" and cannot be evaluated

    - by numerical25
    I am trying to encapsulate my Win32 application in a class. My problem occurs when trying to initialize the primary window for the application; below are my declaration and implementation. I notice the issue within my class method InitInstance().

    Declaration:

      #pragma once
      #include "stdafx.h"
      #include "resource.h"

      #define MAX_LOADSTRING 100

      class RenderEngine
      {
      protected:
        int m_width;
        int m_height;
        ATOM RegisterEngineClass();
      public:
        static HINSTANCE m_hInst;
        HWND m_hWnd;
        int m_nCmdShow;
        TCHAR m_szTitle[MAX_LOADSTRING];       // The title bar text
        TCHAR m_szWindowClass[MAX_LOADSTRING]; // the main window class name

        bool InitWindow();
        bool InitDirectX();
        bool InitInstance();

        // static functions
        static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam);
        static INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam);

        int Run();
      };

    Implementation:

      #include "stdafx.h"
      #include "RenderEngine.h"

      HINSTANCE RenderEngine::m_hInst = NULL;

      bool RenderEngine::InitWindow()
      {
        RenderEngine::m_hInst = NULL;
        // Initialize global strings
        LoadString(m_hInst, IDS_APP_TITLE, m_szTitle, MAX_LOADSTRING);
        LoadString(m_hInst, IDC_RENDERENGINE, m_szWindowClass, MAX_LOADSTRING);
        if (!RegisterEngineClass())
        {
          return false;
        }
        if (!InitInstance())
        {
          return false;
        }
        return true;
      }

      ATOM RenderEngine::RegisterEngineClass()
      {
        WNDCLASSEX wcex;
        wcex.cbSize        = sizeof(WNDCLASSEX);
        wcex.style         = CS_HREDRAW | CS_VREDRAW;
        wcex.lpfnWndProc   = RenderEngine::WndProc;
        wcex.cbClsExtra    = 0;
        wcex.cbWndExtra    = 0;
        wcex.hInstance     = m_hInst;
        wcex.hIcon         = LoadIcon(m_hInst, MAKEINTRESOURCE(IDI_RENDERENGINE));
        wcex.hCursor       = LoadCursor(NULL, IDC_ARROW);
        wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW+1);
        wcex.lpszMenuName  = MAKEINTRESOURCE(IDC_RENDERENGINE);
        wcex.lpszClassName = m_szWindowClass;
        wcex.hIconSm       = LoadIcon(wcex.hInstance, MAKEINTRESOURCE(IDI_SMALL));
        return RegisterClassEx(&wcex);
      }

      LRESULT CALLBACK RenderEngine::WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
      {
        int wmId, wmEvent;
        PAINTSTRUCT ps;
        HDC hdc;
        switch (message)
        {
        case WM_COMMAND:
          wmId    = LOWORD(wParam);
          wmEvent = HIWORD(wParam);
          // Parse the menu selections:
          switch (wmId)
          {
          case IDM_ABOUT:
            DialogBox(m_hInst, MAKEINTRESOURCE(IDD_ABOUTBOX), hWnd, About);
            break;
          case IDM_EXIT:
            DestroyWindow(hWnd);
            break;
          default:
            return DefWindowProc(hWnd, message, wParam, lParam);
          }
          break;
        case WM_PAINT:
          hdc = BeginPaint(hWnd, &ps);
          // TODO: Add any drawing code here...
          EndPaint(hWnd, &ps);
          break;
        case WM_DESTROY:
          PostQuitMessage(0);
          break;
        default:
          return DefWindowProc(hWnd, message, wParam, lParam);
        }
        return 0;
      }

      bool RenderEngine::InitInstance()
      {
        m_hWnd = NULL; // When I step into my code it says on this line: 0x000000 unused = ??? expression cannot be evaluated
        m_hWnd = CreateWindow(m_szWindowClass, m_szTitle, WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, NULL, NULL, m_hInst, NULL);
        if (!m_hWnd) // At this point memory has been allocated, unused = ??. It steps over this
        {
          return FALSE;
        }
        if (!ShowWindow(m_hWnd, m_nCmdShow)) // m_nCmdShow = 1, and m_hWnd is still unused and the expression
        {                                    // still cannot be evaluated. This statement is true, and it shuts down.
          return false;
        }
        UpdateWindow(m_hWnd);
        return true;
      }

      // Message handler for about box.
      INT_PTR CALLBACK RenderEngine::About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam)
      {
        UNREFERENCED_PARAMETER(lParam);
        switch (message)
        {
        case WM_INITDIALOG:
          return (INT_PTR)TRUE;
        case WM_COMMAND:
          if (LOWORD(wParam) == IDOK || LOWORD(wParam) == IDCANCEL)
          {
            EndDialog(hDlg, LOWORD(wParam));
            return (INT_PTR)TRUE;
          }
          break;
        }
        return (INT_PTR)FALSE;
      }

      int RenderEngine::Run()
      {
        MSG msg;
        HACCEL hAccelTable;
        hAccelTable = LoadAccelerators(m_hInst, MAKEINTRESOURCE(IDC_RENDERENGINE));
        // Main message loop:
        while (GetMessage(&msg, NULL, 0, 0))
        {
          if (!TranslateAccelerator(msg.hwnd, hAccelTable, &msg))
          {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
          }
        }
        return (int) msg.wParam;
      }

    And my WinMain function that calls the class:

      // RenderEngine.cpp : Defines the entry point for the application.
      #include "stdafx.h"
      #include "RenderEngine.h"

      // Global Variables:
      RenderEngine go;

      int APIENTRY _tWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow)
      {
        UNREFERENCED_PARAMETER(hPrevInstance);
        UNREFERENCED_PARAMETER(lpCmdLine);
        // TODO: Place code here.
        RenderEngine::m_hInst = hInstance;
        go.m_nCmdShow = nCmdShow;
        if (!go.InitWindow())
        {
          return 0;
        }
        go.Run();
        return 0;
      }
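
    Two separate things seem to be going on here. The "unused = ???" watch entry is just the debugger failing to evaluate an artificial expression, and it can be ignored once CreateWindow returns a non-NULL handle. The functional bug is the ShowWindow test: per the Win32 documentation, ShowWindow returns the window's *previous* visibility state, not a success/failure code, so for a freshly created (hidden) window it legitimately returns zero, and InitInstance bails out even though everything worked. A sketch of the fix:

      // ShowWindow's return value reports prior visibility, not an error,
      // so don't treat FALSE as a failure:
      ShowWindow(m_hWnd, m_nCmdShow);
      UpdateWindow(m_hWnd);
      return true;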


  • Placing the matched values of two different XML child elements on a single line with XSLT 2.0

    - by Girikumar Mathivanan
    I have the input XML below:

      <GSKProductHierarchy>
        <GlobalBusinessIdentifier>ZGB001</GlobalBusinessIdentifier>
        <Hierarchy>
          <Material>335165140779</Material>
          <Level1>02</Level1>
          <Level2>02AQ</Level2>
          <Level3>02AQ006</Level3>
          <Level4>02AQ006309</Level4>
          <Level5>02AQ006309</Level5>
          <Level6>02AQ006309</Level6>
          <Level7>02AQ006309</Level7>
          <Level8>02AQ006309</Level8>
        </Hierarchy>
        <Hierarchy>
          <Material>335165140780</Material>
          <Level1>02</Level1>
          <Level2>02AQ</Level2>
          <Level3>02AQ006</Level3>
          <Level4>02AQ006309</Level4>
          <Level5>02AQ006309</Level5>
          <Level6>02AQ006309</Level6>
          <Level7>02AQ006309</Level7>
          <Level8>02AQ006310</Level8>
        </Hierarchy>
        <Texts>
          <ProductHierarchy>02AQ006310</ProductHierarchy>
          <Language>A</Language>
          <Description>CREAM</Description>
        </Texts>
        <Texts>
          <ProductHierarchy>02AQ006309</ProductHierarchy>
          <Language>B</Language>
          <Description>CREAM</Description>
        </Texts>
      </GSKProductHierarchy>

    As per the requirement, the XSL should check the value of GSKProductHierarchy/Hierarchy/Level8 against the GSKProductHierarchy/Texts/ProductHierarchy elements, and it should produce the flat file below:

      335165140779|02|02AQ|02AQ006|02AQ006309|02AQ006309|02AQ006309|02AQ006309|02AQ006309|02AQ006309|A|CREAM|
      335165140780|02|02AQ|02AQ006|02AQ006309|02AQ006309|02AQ006309|02AQ006309|02AQ006310|02AQ006310|B|CREAM|

    Right now I have the XSLT below:

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:exsl="http://exslt.org/common" xmlns:set="http://exslt.org/sets"
          xmlns:str="http://exslt.org/strings" xmlns:java="http://xml.apache.org/xslt/java"
          xmlns:saxon="http://saxon.sf.net/"
          exclude-result-prefixes="exsl set str java saxon">
        <xsl:output method="text" indent="yes"/>
        <xsl:variable name="VarPipe" select="'|'"/>
        <xsl:variable name="VarBreak" select="'&#xa;'"/>

        <xsl:template match="/">
          <xsl:for-each select="GSKProductHierarchy/Hierarchy">
            <xsl:variable name="currentIndex" select="position()"/>
            <xsl:variable name="Level8" select="Level8"/>
            <xsl:variable name="ProductHierarchy" select="../Texts[$currentIndex]/ProductHierarchy"/>
            <xsl:if test="$Level8 = $ProductHierarchy">
              <xsl:value-of select="Material"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="Level1"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="Level2"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="Level3"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="Level4"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="Level5"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="Level6"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="Level7"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="Level8"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="../Texts[$currentIndex]/ProductHierarchy"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="../Texts[$currentIndex]/Language"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:value-of select="../Texts[$currentIndex]/Description"/>
              <xsl:value-of select="$VarPipe"/>
              <xsl:if test="not(position() = last())">
                <xsl:value-of select="$VarBreak"/>
              </xsl:if>
            </xsl:if>
          </xsl:for-each>
        </xsl:template>

    Can anyone please suggest which function I need to use to get the desired result? Regards, Giri
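
    The positional lookup (../Texts[$currentIndex]) only pairs rows correctly when the Texts happen to arrive in the same order as the Hierarchy elements; matching Level8 against ProductHierarchy by value is what keys are for. A sketch of the by-value lookup, dropping the positional variables entirely (it assumes at most one matching Texts per Level8; the [1] guards against duplicates):

      <xsl:key name="textByPH" match="Texts" use="ProductHierarchy"/>

      <xsl:template match="/">
        <xsl:for-each select="GSKProductHierarchy/Hierarchy">
          <xsl:variable name="text" select="key('textByPH', Level8)[1]"/>
          <xsl:value-of select="concat(Material, $VarPipe, Level1, $VarPipe,
              Level2, $VarPipe, Level3, $VarPipe, Level4, $VarPipe,
              Level5, $VarPipe, Level6, $VarPipe, Level7, $VarPipe,
              Level8, $VarPipe, $text/ProductHierarchy, $VarPipe,
              $text/Language, $VarPipe, $text/Description, $VarPipe)"/>
          <xsl:if test="not(position() = last())">
            <xsl:value-of select="$VarBreak"/>
          </xsl:if>
        </xsl:for-each>
      </xsl:template>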


  • Why are this UL and this inline JS giving errors during HTML validation?

    - by thor
    I've just run the homepage of a site I'm working on through the W3C HTML validator, and it's come back with 3 errors and 2 warnings. I've taken a look at them but can't see why they would be causing a problem. I've pasted them in below (I've removed URLs/strings etc. as the site isn't quite ready to be made public yet). This is being validated against XHTML Transitional, by the way. The UL comes back with the following error:

      end tag for "ul" which is not finished
      <ul id='tabs'></ul>

    Here is the markup:

      <ul id='tabs'>
        <li>
          <a href="/en/folder/folder/search?categories[]=cat1" class="tab1" title="tab_title">
            <img alt="img_alt" src="img_src" />
            <span> tab1_text </span>
          </a>
        </li>
        <li>
          <a href="/en/folder/folder/search?categories[]=cat2" class="tab2" title="tab_title">
            <img alt="img_alt" src="img_src" />
            <span> tab2_text </span>
          </a>
        </li>
        <li>
          <a href="/en/folder/folder/search?categories[]=cat3" class="tab3" title="tab_title">
            <img alt="img_alt" src="img_src" />
            <span> tab3_text </span>
          </a>
        </li>
        <li>
          <a href="/en/folder/folder/search?categories[]=cat4" class="tab4" title="tab_title">
            <img alt="img_alt" src="img_src" />
            <span> tab4_text </span>
          </a>
        </li>
        <li>
          <a href="/en/folder/folder/search?categories[]=cat5" class="tab5" title="tab_title">
            <img alt="img_alt" src="img_src" />
            <span> tab5_text </span>
          </a>
        </li>
        <li>
          <a href="/en/folder/folder/search?categories[]=cat6" class="tab6" title="tab_title">
            <img alt="img_alt" src="img_src" />
            <span> tab6_text </span>
          </a>
        </li>
        <li>
          <a href="/en/folder/folder/search?categories[]=cat7" class="tab7" title="tab_title">
            <img alt="img_alt" src="img_src" />
            <span> tab7_text </span>
          </a>
        </li>
        <li>
          <a href="/en/folder/folder/search?categories[]=cat8" class="tab8" title="tab_title">
            <img alt="img_alt" src="img_src" />
            <span> tab8_text </span>
          </a>
        </li>
      </ul>

    For the inline javascript, I'm getting 2 errors and 2 warnings, all for the same thing - I have a simple if statement with &&, and the validator appears to be seeing it as HTML rather than javascript:

      character "&" is the first character of a delimiter but occurred as data
      xmlParseEntityRef: no name

      <script type='text/javascript'>
        if (weather_data != null && weather_data['data'] != null){
          display_weather();
        }
      </script>

    The javascript is placed just before the body close tag at the end of the document. If you need to see the full source, let me know and I can send it over.
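
    Both messages are the XML parser being literal. The first snippet quoted by the validator, <ul id='tabs'></ul>, is an empty list, and XHTML requires a ul to contain at least one li, so that stray empty element needs to be removed (or given content) - the fully populated ul right after it is fine. The ampersands fail because inline script in XHTML is parsed as markup, so a raw && is read as a broken entity reference; wrapping the script body in a CDATA section (or moving it to an external .js file) is the usual fix. A sketch:

      <script type="text/javascript">
      //<![CDATA[
      if (weather_data != null && weather_data['data'] != null) {
          display_weather();
      }
      //]]>
      </script>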


  • Unwanted character being added to string in C

    - by Church
    I have a program that gives you shipping addresses from an input file. However, a number is being added to the beginning of one of the strings, order.add_one, and that number is equivalent to the variable "choice" every time. Why is it doing this?

      #include <stdio.h>
      #include <math.h>
      #include <string.h>

      // structure
      typedef struct {
        char cust_name[25];
        char cust_id[3];
        char add_one[30];
        char add_two[30];
        char bike;
        char risky;
        int number_ordered;
        char cust_information[500];
      } ORDER;

      ORDER order;

      int main(void) {
        fflush(stdin);
        system("clear");
        // initialize variables
        float price;
        float m = 359.95;
        float s = 279.95;
        // while loop, runs until the user declares they no longer wish to input orders
        while (1 == 1) {
          printf("Options: \nEnter Customer information manually : 1 \nSearch Customer by ID(input.txt reader) : 2 \n");
          int option = 0;
          scanf(" %d", &option);
          if (option == 1) {
            // print and scan statements
            printf("Enter Customer Information\n");
            printf("Customer Name: ");
            scanf(" %[^\n]s", &order.cust_name);
            printf("\nEnter Address Line One: ");
            scanf(" %[^\n]s", &order.add_one);
            printf("\nEnter Address Line Two: ");
            scanf(" %[^\n]s", &order.add_two);
            printf("\nHow Many Bicycles Are Ordered: ");
            scanf(" %d", &order.number_ordered);
            printf("\nWhat Type Of Bike Is Ordered\n M Mountain Bike \n S Street Bike");
            printf("\nChoose One (M or S): ");
            scanf(" %c", &order.bike);
            printf("\nIs The Customer Risky (Y/N): ");
            scanf(" %c", &order.risky);
            system("clear");
          }
          if (option == 2) {
            FILE *fpt;
            fpt = fopen("input.txt", "r");
            if (fpt == NULL) {
              printf("Text file did not open\n");
              return 1;
            }
            printf("Enter Customer ID: ");
            scanf("%s", &order.cust_id);
            char choice;
            choice = order.cust_id[0];
            char x[3];
            int w, u, y, z;
            char a[10], b[10], c[10], d[10], e[20], f[10], g[10], i[1], j[1];
            int h;
            printf("%s value of c", c);
            if (choice >= '1') {
              while ((w = fgetc(fpt)) != '\n') { }
            }
            if (choice >= '2') {
              while ((u = fgetc(fpt)) != '\n') { }
            }
            if (choice >= '3') {
              while ((y = fgetc(fpt)) != '\n') { }
            }
            if (choice >= '4') {
              while ((z = fgetc(fpt)) != '\n') { }
            }
            printf("\n");
            fscanf(fpt, "%s", x);
            fscanf(fpt, "%s", a);
            printf("%s", a);
            strcat(order.cust_name, a);
            fscanf(fpt, " %s", b);
            printf(" %s", b);
            strcat(order.cust_name, " ");
            strcat(order.cust_name, b);
            fscanf(fpt, "%s", c);
            printf(" %s", c);
            strcat(order.add_one, "\0");
            strcat(order.add_one, c);
            fscanf(fpt, "%s", d);
            printf(" %s", d);
            strcat(order.add_one, " ");
            strcat(order.add_one, d);
            fscanf(fpt, "%s", e);
            printf(" %s", e);
            strcat(order.add_two, e);
            fscanf(fpt, "%s", f);
            printf(" %s", f);
            strcat(order.add_two, " ");
            strcat(order.add_two, f);
            fscanf(fpt, "%s", g);
            printf(" %s", g);
            strcat(order.add_two, " ");
            strcat(order.add_two, g);
            strcat(order.add_two, "\0");
            fscanf(fpt, "%d", &h);
            printf(" %d", h);
            order.number_ordered = h;
            fscanf(fpt, "%s", i);
            printf(" %s", i);
            order.bike = i[0];
            fscanf(fpt, "%s", j);
            printf(" %s", j);
            order.risky = j[0];
            fclose(fpt);
            printf("%s %s %s %d %c %c", order.cust_name, order.add_one,
                   order.add_two, order.number_ordered, order.bike, order.risky);
          }
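
    Nothing here writes into add_one deliberately before the strcat calls, so the stray character is very likely a buffer overflow between adjacent struct fields: cust_id is only char[3] and sits immediately before add_one in ORDER, and scanf("%s", ...) happily writes a longer ID (plus its '\0') straight across that boundary, so the overflowing characters surface at the start of add_one. The same unbounded %s pattern also overflows the one-byte locals i and j. Giving every conversion an explicit width is the usual fix; a sketch:

      /* Bound each %s to the buffer size minus one (room for '\0') so a
         long token can't spill into the neighbouring field: */
      scanf(" %2s", order.cust_id);      /* cust_id is char[3] */

      char i[2], j[2];                   /* one char plus '\0' */
      fscanf(fpt, " %1s", i);
      fscanf(fpt, " %1s", j);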


  • How do I install PHP with JSON and OAuth on Mac Snow Leopard?

    - by meilas
    I want to use the Dropbox API via this library: http://code.google.com/p/dropbox-php/. I installed MAMP, then I tried "sudo pecl install oauth", but I got:

      downloading oauth-1.0.0.tgz ...
      Starting to download oauth-1.0.0.tgz (42,834 bytes)
      ............done: 42,834 bytes
      6 source files, building
      running: phpize
      Configuring for:
      PHP Api Version:         20090626
      Zend Module Api No:      20090626
      Zend Extension Api No:   220090626
      building in /var/tmp/pear-build-root/oauth-1.0.0
      running: /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/configure
      checking for grep that handles long lines and -e... /usr/bin/grep
      checking for egrep... /usr/bin/grep -E
      checking for a sed that does not truncate output... /opt/local/bin/gsed
      checking for cc... cc
      checking for C compiler default output file name... a.out
      checking whether the C compiler works... yes
      checking whether we are cross compiling... no
      checking for suffix of executables...
      checking for suffix of object files... o
      checking whether we are using the GNU C compiler... yes
      checking whether cc accepts -g... yes
      checking for cc option to accept ISO C89... none needed
      checking how to run the C preprocessor... cc -E
      checking for icc... no
      checking for suncc... no
      checking whether cc understands -c and -o together... yes
      checking for system library directory... lib
      checking if compiler supports -R... no
      checking if compiler supports -Wl,-rpath,... yes
      checking build system type... i686-apple-darwin10.4.0
      checking host system type... i686-apple-darwin10.4.0
      checking target system type... i686-apple-darwin10.4.0
      checking for PHP prefix... /usr
      checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib
      checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626
      checking for PHP installed headers prefix... /usr/include/php
      checking if debug is enabled... no
      checking if zts is enabled... no
      checking for re2c... no
      configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers.
      checking for gawk... gawk
      checking for oauth support... yes, shared
      checking for cURL in default path... found in /usr
      checking for ld used by cc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld
      checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no
      checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r
      checking for BSD-compatible nm... /usr/bin/nm
      checking whether ln -s works... yes
      checking how to recognise dependent libraries... pass_all
      checking for ANSI C header files... yes
      checking for sys/types.h... yes
      checking for sys/stat.h... yes
      checking for stdlib.h... yes
      checking for string.h... yes
      checking for memory.h... yes
      checking for strings.h... yes
      checking for inttypes.h... yes
      checking for stdint.h... yes
      checking for unistd.h... yes
      checking dlfcn.h usability... yes
      checking dlfcn.h presence... yes
      checking for dlfcn.h... yes
      checking the maximum length of command line arguments... 196608
      checking command to parse /usr/bin/nm output from cc object...
      rm: conftest.dSYM: is a directory
      rm: conftest.dSYM: is a directory
      rm: conftest.dSYM: is a directory
      rm: conftest.dSYM: is a directory
      ok
      checking for objdir... .libs
      checking for ar... ar
      checking for ranlib... ranlib
      checking for strip... strip
      rm: conftest.dSYM: is a directory
      rm: conftest.dSYM: is a directory
      checking if cc static flag works...
      rm: conftest.dSYM: is a directory
      yes
      checking if cc supports -fno-rtti -fno-exceptions...
      rm: conftest.dSYM: is a directory
      no
      checking for cc option to produce PIC... -fno-common
      checking if cc PIC flag -fno-common works...
      rm: conftest.dSYM: is a directory
      yes
      checking if cc supports -c -o file.o...
      rm: conftest.dSYM: is a directory
      yes
      checking whether the cc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes
      checking dynamic linker characteristics... darwin10.4.0 dyld
      checking how to hardcode library paths into programs... immediate
      checking whether stripping libraries is possible... yes
      checking if libtool supports shared libraries... yes
      checking whether to build shared libraries... yes
      checking whether to build static libraries... no
      creating libtool
      appending configuration tag "CXX" to libtool
      configure: creating ./config.status
      config.status: creating config.h
      running: make
      /bin/sh /private/var/tmp/pear-build-root/oauth-1.0.0/libtool --mode=compile cc -I. -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c -o oauth.lo
      mkdir .libs
      cc -I. "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c "/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c" -fno-common -DPIC -o .libs/oauth.o
      In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47,
                       from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14:
      /usr/include/php/ext/pcre/php_pcre.h:29:18: error: pcre.h: No such file or directory
      In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47,
                       from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14:
      /usr/include/php/ext/pcre/php_pcre.h:37: error: expected '=', ',', ';', 'asm' or 'attribute' before '*' token
      /usr/include/php/ext/pcre/php_pcre.h:38: error: expected '=', ',', ';', 'asm' or 'attribute' before '*' token
      /usr/include/php/ext/pcre/php_pcre.h:44: error: expected specifier-qualifier-list before 'pcre'
      make: *** [oauth.lo] Error 1
      ERROR: `make' failed
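
    The build dies at /usr/include/php/ext/pcre/php_pcre.h:29: pcre.h: No such file or directory - Snow Leopard ships PHP's headers but not the PCRE header they include. Installing the PCRE headers and pointing the build at them is the usual fix. A sketch with MacPorts (the /opt/local paths in the log suggest it is already installed); Homebrew's pcre package works the same way with /usr/local paths, and whether pecl picks up CFLAGS this way is an assumption worth verifying:

      sudo port install pcre
      sudo CFLAGS="-I/opt/local/include" pecl install oauth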


  • ERR_INCOMPLETE_CHUNKED_ENCODING with Apache 2.4

    - by Bujanca Mihai
    I upgraded my Ubuntu server to 14.04 and Apache 2.4.7. Now my images don't load and console yields net::ERR_INCOMPLETE_CHUNKED_ENCODING. Also, I can sometimes see some of the images load for a little while (1 sec max) and then they disappear. .htaccess RewriteEngine On # Serve the favicon file from img folder RewriteCond %{REQUEST_URI} ^/favicon.ico$ RewriteRule ^(.*)$ /img/$1 [NC,L] # Redirect HTTP traffic to WWW subdomain RewriteCond %{HTTPS} off [NC] RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] # Redirect HTTPS traffic to WWW subdomain RewriteCond %{HTTPS} on [NC] RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L] # Auto Versioning rules RewriteCond %{REQUEST_FILENAME} !-s RewriteRule ^(.*)\.[\d]+\.(css|js)$ $1.$2 [L] # Default Zend rewrite rules RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ index.php [NC,L] VHost <VirtualHost *:80> ServerAdmin admin@localhost ServerName localhost DocumentRoot /home/mihai/ARTD/www/public/website # Omit this in production environment SetEnv APPLICATION_ENV local <Directory /home/mihai/ARTD/www/public/website > Options Indexes FollowSymLinks MultiViews AllowOverride All #Order deny,allow #Allow from all Require all granted </Directory> <IfModule mod_php5.c> php_value memory_limit 128M php_value upload_max_filesize 20M php_value post_max_size 20M </IfModule> ErrorLog /var/log/apache2/ARTD-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/ARTD-access.log combined </VirtualHost> <IfModule mod_ssl.c> <VirtualHost *:443> ServerAdmin admin@localhost ServerName localhost DocumentRoot /home/mihai/ARTD/www/public/website # Omit this in production environment SetEnv APPLICATION_ENV local <Directory /home/mihai/ARTD/www/public/website > Options Indexes FollowSymLinks MultiViews AllowOverride All #Order deny,allow #Allow from all Require all granted </Directory> <IfModule mod_php5.c> php_value memory_limit 128M php_value upload_max_filesize 20M php_value post_max_size 20M </IfModule> ErrorLog /var/log/apache2/ARTD-ssl-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/ARTD.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. 
#SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. # o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. 
#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire #<FilesMatch "\.(cgi|shtml|phtml|php)$"> # SSLOptions +StdEnvVars #</FilesMatch> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. #BrowserMatch ".*MSIE.*" \ # nokeepalive ssl-unclean-shutdown \ # downgrade-1.0 force-response-1.0 </VirtualHost> </IfModule> logs Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.3 OpenSSL/1.0.1f (internal dummy connection) 127.0.0.1 - - [25/Aug/2014:13:09:53 +0300] "GET /img/header/top-nav-separator.png HTTP/1.1" 200 462 "https://localhost/art" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36"
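
    A quick way to narrow this down is to watch the transfer framing directly and then rule out compression. This is only a suggested diagnostic sketch, not from the original post: the URL comes from the access log above, -k allows the self-signed snakeoil certificate, and a2dismod assumes the stock Ubuntu Apache layout.

        # Fetch one failing image; -v prints headers, --raw leaves the chunked
        # body undecoded so a truncated final chunk becomes visible.
        curl -v -k --raw -o /tmp/test.png https://localhost/img/header/top-nav-separator.png

        # Temporarily rule out a gzip interaction, a common suspect after
        # a 2.2 -> 2.4 upgrade, then retest:
        sudo a2dismod deflate
        sudo service apache2 restart

    If the images load with deflate disabled, the problem is in the output filter chain rather than in the rewrite rules or the vhost.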

    Read the article

  • How do I install PHP with JSON and OAuth on Mac Snow Leopard?

    - by meilas
    I want to use the Dropbox API via this library, http://code.google.com/p/dropbox-php/. I installed MAMP, and then I tried sudo pecl install oauth but I got the following. >>>> downloading oauth-1.0.0.tgz ... Starting to download oauth-1.0.0.tgz (42,834 bytes) ............done: 42,834 bytes 6 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 building in /var/tmp/pear-build-root/oauth-1.0.0 running: /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/configure checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for a sed that does not truncate output... /opt/local/bin/gsed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ISO C89... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... i686-apple-darwin10.4.0 checking host system type... i686-apple-darwin10.4.0 checking target system type... i686-apple-darwin10.4.0 checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626 checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... no configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. checking for gawk... gawk checking for oauth support... yes, shared checking for cURL in default path... found in /usr checking for ld used by cc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm checking whether ln -s works... yes checking how to recognise dependent libraries... pass_all checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking the maximum length of command line arguments... 196608 checking command to parse /usr/bin/nm output from cc object... rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... 
strip rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory checking if cc static flag works... rm: conftest.dSYM: is a directory yes checking if cc supports -fno-rtti -fno-exceptions... rm: conftest.dSYM: is a directory no checking for cc option to produce PIC... -fno-common checking if cc PIC flag -fno-common works... rm: conftest.dSYM: is a directory yes checking if cc supports -c -o file.o... rm: conftest.dSYM: is a directory yes checking whether the cc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes checking dynamic linker characteristics... darwin10.4.0 dyld checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no >>>> creating libtool appending configuration tag "CXX" to libtool configure: creating ./config.status config.status: creating config.h running: make /bin/sh /private/var/tmp/pear-build-root/oauth-1.0.0/libtool --mode=compile cc -I. -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c -o oauth.lo mkdir .libs cc -I. "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c "/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c" -fno-common -DPIC -o .libs/oauth.o In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47, from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14: /usr/include/php/ext/pcre/php_pcre.h:29:18: error: pcre.h: No such file or directory In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47, from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14: /usr/include/php/ext/pcre/php_pcre.h:37: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token /usr/include/php/ext/pcre/php_pcre.h:38: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token /usr/include/php/ext/pcre/php_pcre.h:44: error: expected specifier-qualifier-list before 'pcre' make: *** [oauth.lo] Error 1 ERROR: `make' failed </block>
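
    The build log above fails because /usr/include/php/ext/pcre/php_pcre.h cannot find pcre.h, which Snow Leopard's bundled PHP headers do not ship. One commonly suggested workaround, sketched here with an assumed PCRE version and mirror URL (match the version to what your PHP build expects), is to generate pcre.h from the PCRE source and drop it into the PHP include tree before re-running pecl:

        # Version and URL are assumptions -- any PCRE 8.x source tarball should do.
        curl -L -O http://downloads.sourceforge.net/project/pcre/pcre/8.02/pcre-8.02.tar.gz
        tar -xzf pcre-8.02.tar.gz
        cd pcre-8.02
        ./configure            # generates pcre.h from pcre.h.in
        sudo cp pcre.h /usr/include/php/ext/pcre/
        cd ..
        sudo pecl install oauth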

    Read the article

  • 2-Bay External HDD Enclosure in JBOD mode fails to detect both drives (Linux & Windows)

    - by mgc8888
    I recently purchased a couple of USB 3.0 external HDD enclosures to use for storage and backup; the idea was to have one act as backup to the other, with 4 x 3TB drives in total. However, the second drive in each is not accessible in either Linux or Windows, and I could not determine the reason. 1. Situation The two enclosures are slightly different (I couldn't find them in stock at the same time), yet from many little details they appear to be the same Chinese base design with a tweaked outer shell. The models are: Sharkoon 2-Bay RAID Box Fantec MR-35DU3 The drives are Seagate 3TB Barracuda ST33000651AS, firmware CC44, all identical. From reading manuals and online sources, I determined that JBOD would be the optimal setup for my needs -- addressing the two drives separately in each enclosure would be important, making it easy to swap drives and mix & match them if needed; all the other modes implied the controller doing a combination of the drives. The software used was Debian GNU/Linux - testing/wheezy - kernel 2.6.39-2 and Windows 7 Ultimate. 2. Description of the problem Now, here comes the problem: every time I connect either of the enclosures to a PC using the supplied cable (I tried a different one as well), only the HDD in the top bay is readable; the one below is detected yet errors out in various ways. According to the manuals, this should not happen: in JBOD, the system should be able to "see" two separate drives upon connection. This happens with both enclosures and any combination of HDDs (i.e. if I swap them, the same thing happens), so the HDDs are good, and I think so are the enclosures (two different companies making similar products that fail in an identical fashion would be very unlikely). The top HDD can be used fine every time; I actually tried a speed test from Linux and got about 150 MiB/s reads, so all is working as it should. The one below refuses to work every time, so the failure is consistent. To make sure this was not some obscure Linux bug, I tried the same under Windows 7, and the system also only created one drive letter, for a drive of 3TB size (so it was only seeing one instead of both). Placing an older, known-good 2TB drive in the top bay made it the one recognised, so we have the same issue under Windows as well. Log entries under Linux (tested here with a 3TB and a 2TB drive so I could differentiate them; either one works in the top bay, and in the test setup the 3TB one is on top). You can see them being detected; the top one is OK, but for the bottom one there are only errors: Jul 19 23:28:15 media kernel: [260150.582436] usb 6-1: New USB device found, idVendor=1ca1, idProduct=18ae Jul 19 23:28:15 media kernel: [260150.582440] usb 6-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3 Jul 19 23:28:15 media kernel: [260150.582442] usb 6-1: Product: Usb Sata Bridge Jul 19 23:28:15 media kernel: [260150.582444] usb 6-1: Manufacturer: SYMWAVE Jul 19 23:28:15 media kernel: [260150.582446] usb 6-1: SerialNumber: 39584B304C4E3441 Jul 19 23:28:15 media kernel: [260150.870412] scsi11 : usb-storage 6-1:1.0 Jul 19 23:28:16 media kernel: [260151.882087] scsi 11:0:0:0: Direct-Access SYMWAVE ST33000651AS CC44 PQ: 0 ANSI: 4 Jul 19 23:28:16 media kernel: [260151.882242] scsi 11:0:0:1: Direct-Access SYMWAVE ST32000641AS CC12 PQ: 0 ANSI: 4 Jul 19 23:28:16 media kernel: [260151.882677] sd 11:0:0:0: Attached scsi generic sg2 type 0 Jul 19 23:28:16 media kernel: [260151.882774] sd 11:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16). 
Jul 19 23:28:16 media kernel: [260151.882857] sd 11:0:0:1: Attached scsi generic sg3 type 0 Jul 19 23:28:16 media kernel: [260151.882893] sd 11:0:0:0: [sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB) Jul 19 23:28:16 media kernel: [260151.883085] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.883582] sd 11:0:0:0: [sdb] Write Protect is off Jul 19 23:28:16 media kernel: [260151.883961] sd 11:0:0:1: [sdc] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB) Jul 19 23:28:16 media kernel: [260151.884145] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.884570] sd 11:0:0:1: [sdc] Write Protect is off Jul 19 23:28:16 media kernel: [260151.884855] sd 11:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16). Jul 19 23:28:16 media kernel: [260151.885286] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.885807] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.909595] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.910159] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.910163] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.910167] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.910169] sd 11:0:0:1: [sdc] Add. Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.910172] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 Jul 19 23:28:16 media kernel: [260151.910182] quiet_error: 2 callbacks suppressed Jul 19 23:28:16 media kernel: [260151.910570] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.911153] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.911156] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.911159] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.911161] sd 11:0:0:1: [sdc] Add. Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.911164] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 Jul 19 23:28:16 media kernel: [260151.911385] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.911902] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.911905] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.911908] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.911910] sd 11:0:0:1: [sdc] Add. Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.911913] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 Jul 19 23:28:16 media kernel: [260151.912128] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.912650] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.912653] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.912656] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.912657] sd 11:0:0:1: [sdc] Add. 
Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.912660] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 Jul 19 23:28:16 media kernel: [260151.912876] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.913439] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.913442] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.913445] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.913446] sd 11:0:0:1: [sdc] Add. Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.913449] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 Jul 19 23:28:16 media kernel: [260151.945227] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.945863] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.945866] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.945870] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.945871] sd 11:0:0:1: [sdc] Add. Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.945875] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 (...) and so on for like 10 seconds until it gives up (...) 3. Question So, my question would be: what is causing this? Am I missing something, should I configure things differently, is this a known limitation? Searching online for more information did not yield any useful results... Thank you in advance for any help!
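
    Not part of the original question, but a hedged diagnostic sketch that may help pin this down: the failing disk is LUN 1 (11:0:0:1), and the rejected Read(10) CDBs suggest the SYMWAVE bridge mishandles commands addressed to the second LUN. The sg3-utils probes below talk to the bridge directly; /dev/sg3 matches the "Attached scsi generic sg3" line in the log, and host11 matches scsi11.

        sudo apt-get install sg3-utils
        sudo sg_luns /dev/sg3       # which LUNs does the bridge actually report?
        sudo sg_readcap /dev/sg3    # read capacity directly, bypassing the sd driver

        # Force a rescan of the USB SCSI host in case LUN 1 needs a second probe:
        echo "- - -" | sudo tee /sys/class/scsi_host/host11/scan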

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC; it is very technical, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper! A real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never have been two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below: Here are some questions I cannot answer because the tutorials contradict each other: What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s of course), right? Why would I additionally give A1 and B1 a real-time curve of 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s] both have twice the priority of A2 and B2 automatically (still only getting 128 kbit/s on average). Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)? 
    Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other way. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth? Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference? If I use the separate real-time curve to increase priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), why do I still need curves on each one? Why would I want the curves' shapes (not average bandwidth, but their slopes) to be different for real-time and link-share traffic? According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.) Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong? One tutorial said you shall forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see question above; the percentage varies). Question: Is this more or less accurate, or a total misunderstanding of HFSC? And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how do I find out the right slope? I still haven't lost hope that there exists at least a handful of people in this world who really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
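
    (For readers who would rather experiment than collect more contradictory tutorials: below is a minimal Linux tc sketch of the sample hierarchy described above - root 512 kbit/s, inner classes A and B at 256 kbit/s, leaves at 128 kbit/s, with real-time guarantees only on A1 and B1. The interface name and the choice of 1:12 as the default class are assumptions, not part of the question.)

        # Root qdisc; unclassified traffic falls into leaf 1:12 (assumed default).
        tc qdisc add dev eth0 root handle 1: hfsc default 12

        # Root class: 512 kbit/s, hard-capped by an upper-limit curve.
        tc class add dev eth0 parent 1:   classid 1:1  hfsc ls m2 512kbit ul m2 512kbit

        # Inner classes A (1:10) and B (1:20): link-share only.
        tc class add dev eth0 parent 1:1  classid 1:10 hfsc ls m2 256kbit
        tc class add dev eth0 parent 1:1  classid 1:20 hfsc ls m2 256kbit

        # Leaves: A1/B1 carry a real-time guarantee plus link-share; A2/B2 link-share only.
        tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit

    Watching tc -s class show dev eth0 while saturating the link answers at least the "does real-time count towards link-share" question empirically.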

    Read the article

  • Varnish default.vcl grace period

    - by Vladimir
    These are my settings for a grace period (/etc/varnish/default.vcl) sub vcl_recv { .... set req.grace = 360000s; ... } sub vcl_fetch { ... set beresp.grace = 360000s; ... } I tested Varnish using localhost and nodejs as a server. I started localhost, the site was up. Then I disconnected server and the site got disconnected in less than 2 min. It says: Error 503 Service Unavailable Service Unavailable Guru Meditation: XID: 1890127100 Varnish cache server Could you tell me what could be the problem? sub vcl_fetch { if (beresp.ttl < 120s) { ##std.log("Adjusting TTL"); set beresp.ttl = 36000s; ##120s; } # Do not cache the object if the backend application does not want us to. if (beresp.http.Cache-Control ~ "(no-cache|no-store|private|must-revalidate)") { return(hit_for_pass); } # Do not cache the object if the status is not in the 200s if (beresp.status >= 300) { # Remove the Set-Cookie header #remove beresp.http.Set-Cookie; return(hit_for_pass); } # # Everything below here should be cached # # Remove the Set-Cookie header ####remove beresp.http.Set-Cookie; # Set the grace time ## set beresp.grace = 1s; //change this to minutes in case of app shutdown set beresp.grace = 360000s; ## 10 hour - reduce if it has negative impact # Static assets - browser caches tpiphem for a long time. if (req.url ~ "\.(css|js|.js|jpg|jpeg|gif|ico|png)\??\d*$") { /* Remove Expires from backend, it's not long enough */ unset beresp.http.expires; /* Set the clients TTL on this object */ set beresp.http.cache-control = "public, max-age=31536000"; /* marker for vcl_deliver to reset Age: */ set beresp.http.magicmarker = "1"; } else { set beresp.http.Cache-Control = "private, max-age=0, must-revalidate"; set beresp.http.Pragma = "no-cache"; } if (req.url ~ "\.(css|js|min|)\??\d*$") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ##do not duplicate these settings if (req.url ~ ".css") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".js") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".min") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ## If the request to the backend returns a code other than 200, restart the loop ## If the number of restarts reaches the value of the parameter max_restarts, ## the request will be error'ed. max_restarts defaults to 4. This prevents ## an eternal loop in the event that, e.g., the object does not exist at all. 
if (beresp.status != 200 && beresp.status != 403 && beresp.status != 404) { return(restart); } if (beresp.status == 302) { return(deliver); } # Never cache posts if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(hit_for_pass); } ##check this setting to ensure that it does not cause issues for browsers with no gzip if (beresp.http.content-type ~ "text") { set beresp.do_gzip = true; } if (beresp.http.Set-Cookie) { return(deliver); } ##if (req.url == "/index.html") { set beresp.do_esi = true; ##} ## check if this is needed or should be used # return(deliver); the object return(deliver); } sub vcl_recv { ##avoid leeching of images call hot_link; set req.grace = 360000s; ##2m ## if one backend is down - use another if (req.restarts == 0) { set req.backend = cache_director; ##we can specify individual VMs } else if (req.restarts == 1) { set req.backend = cache_director; } ## post calls should not be cached - add cookie for these requests if using micro-caching # Pass requests that are not GET or HEAD if (req.request != "GET" && req.request != "HEAD") { return(pass); ## return(pass) goes to backend - not cache } # Don't cache the result of a redirect if (req.http.Referer ~ "redir" || req.http.Origin ~ "jumpto") { return(pass); } # Don't cache the result of a redirect (asking for logon) if (req.http.Referer ~ "post" || req.http.Referer ~ "submit" || req.http.Referer ~ "add" || req.http.Referer ~ "ask") { return(pass); } # Never cache posts - ensure that we do not use these strings in our URLs' that need to be cached if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(pass); } ## if (req.http.Authorization || req.http.Cookie) { if (req.http.Authorization) { /* Not cacheable by default */ return (pass); } # Handle compression correctly. Different browsers send different # "Accept-Encoding" headers, even though they mostly all support the same # compression mechanisms. By consolidating these compression headers into # a consistent format, we can reduce the size of the cache and get more hits. # @see: http:// varnish.projects.linpro.no/wiki/FAQ/Compression if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|ico)$") { # No point in compressing these remove req.http.Accept-Encoding; } else if (req.http.Accept-Encoding ~ "gzip") { # If the browser supports it, we'll use gzip. set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { # Next, try deflate if it is supported. set req.http.Accept-Encoding = "deflate"; } else { # Unknown algorithm. Remove it and send unencoded. unset req.http.Accept-Encoding; } } # lookup graphics, css, js & ico files in the cache if (req.url ~ "\.(png|gif|jpg|jpeg|css|.js|ico)$") { return(lookup); } ##added on 0918 - check if it causes issues with user specific content if (req.request == "GET" && req.http.cookie) { return(lookup); } # Pipe requests that are non-RFC2616 or CONNECT which is weird. if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { ##closing connection and calling pipe return(pipe); } ##purge content via localhost only if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } ## do we need this? ## return(lookup); }
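
    (Editor's note, a hedged sketch rather than a confirmed fix: in Varnish 3, a long beresp.grace only helps if Varnish knows the backend is sick, which requires a health probe; and objects that took the hit_for_pass path in the VCL above - cookies, Cache-Control private, status >= 300 - are never cached, so there is nothing stale to serve and a dead backend yields exactly this 503. The probe URL and timings below are assumptions.)

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
            .probe = {
                .url = "/";          # assumed health-check path on the node.js app
                .interval = 5s;
                .timeout = 1s;
                .window = 5;
                .threshold = 3;
            }
        }

        sub vcl_recv {
            if (req.backend.healthy) {
                set req.grace = 30s;        # short grace while the backend is up
            } else {
                set req.grace = 360000s;    # serve stale content while it is down
            }
        }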

    Read the article

  • Printing fails after first print with CentOS 6 and HP LaserJet P3015dn printer

    - by Gavin Simpson
    Centos 6 recognises and configures a HP LaserJet P3015dn printer connected via USB. This machine is being configured as a small group file/print server. I can print a test page, which is processed/printed correctly. The next time printing is attempted (say printing a second test page), the page is not printed and the printer is set to disabled. The status of the printer is stated as: Stopped - /usr/lib/cups/backend/hp failed in the printer configuration dialogue. /var/log/cups/error_log contains this information (first two lines were there prior to the failed print job) E [24/Jun/2004:09:12:57 +0100] Returning HTTP Forbidden for Resume-Printer (ipp://localhost/printers/HP-LaserJet-P3010-Series) from localhost E [24/Jun/2004:09:20:59 +0100] Returning HTTP Forbidden for CUPS-Delete-Printer (ipp://localhost/printers/HP-LaserJet-P3010-Series) from localhost D [24/Jun/2004:09:37:28 +0100] [Job 28] The following messages were recorded from 09:36:43 AM to 09:37:28 AM D [24/Jun/2004:09:37:28 +0100] [Job 28] Adding start banner page "none". D [24/Jun/2004:09:37:28 +0100] [Job 28] Adding end banner page "none". D [24/Jun/2004:09:37:28 +0100] [Job 28] File of type application/vnd.cups-banner queued by "gavin". D [24/Jun/2004:09:37:28 +0100] [Job 28] hold_until=0 D [24/Jun/2004:09:37:28 +0100] [Job 28] Queued on "HP-LaserJet-P3010-Series" by "gavin". D [24/Jun/2004:09:37:28 +0100] [Job 28] job-sheets=none,none D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[0]="HP-LaserJet-P3010-Series" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[1]="28" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[2]="gavin" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[3]="Test Page" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[4]="1" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[5]="job-uuid=urn:uuid:b3370a97-4ab6-3451-40a2-6239b13fa3e1 job-originating-host-name=localhost" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[6]="/var/spool/cups/d00028-001" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[0]="CUPS_CACHEDIR=/var/cache/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[1]="CUPS_DATADIR=/usr/share/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[2]="CUPS_DOCROOT=/usr/share/cups/www" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[3]="CUPS_FONTPATH=/usr/share/cups/fonts" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[4]="CUPS_REQUESTROOT=/var/spool/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[5]="CUPS_SERVERBIN=/usr/lib/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[6]="CUPS_SERVERROOT=/etc/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[7]="CUPS_STATEDIR=/var/run/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[8]="HOME=/var/spool/cups/tmp" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[9]="PATH=/usr/lib/cups/filter:/usr/bin:/usr/sbin:/bin:/usr/bin" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[10]="[email protected]" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[11]="SOFTWARE=CUPS/1.4.2" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[12]="TMPDIR=/var/spool/cups/tmp" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[13]="USER=root" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[14]="CUPS_SERVER=/var/run/cups/cups.sock" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[15]="CUPS_ENCRYPTION=IfRequested" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[16]="IPP_PORT=631" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[17]="CHARSET=utf-8" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[18]="LANG=en_US.UTF-8" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[19]="PPD=/etc/cups/ppd/HP-LaserJet-P3010-Series.ppd" D [24/Jun/2004:09:37:28 +0100] [Job 28] 
envp[20]="RIP_MAX_CACHE=8m" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[21]="CONTENT_TYPE=application/vnd.cups-banner" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[22]="DEVICE_URI=hp:/usb/HP_LaserJet_P3010_Series?serial=VNBV993GM4" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[23]="PRINTER_INFO=Hewlett-Packard HP LaserJet P3010 Series" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[24]="PRINTER_LOCATION=electra.geog.ucl.ac.uk" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[25]="PRINTER=HP-LaserJet-P3010-Series" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[26]="CUPS_FILETYPE=document" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[27]="FINAL_CONTENT_TYPE=application/vnd.cups-postscript" D [24/Jun/2004:09:37:28 +0100] [Job 28] Started filter /usr/lib/cups/filter/bannertops (PID 2858) D [24/Jun/2004:09:37:28 +0100] [Job 28] Started filter /usr/lib/cups/filter/pstops (PID 2859) D [24/Jun/2004:09:37:28 +0100] [Job 28] Started backend /usr/lib/cups/backend/hp (PID 2860) D [24/Jun/2004:09:37:28 +0100] [Job 28] load_banner(filename="/var/spool/cups/d00028-001") D [24/Jun/2004:09:37:28 +0100] [Job 28] Page = 612x792; 12,12 to 600,780 D [24/Jun/2004:09:37:28 +0100] [Job 28] Page = 612x792; 12,12 to 600,780 D [24/Jun/2004:09:37:28 +0100] [Job 28] slow_collate=0, slow_duplex=0, slow_order=0 D [24/Jun/2004:09:37:28 +0100] [Job 28] Before copy_comments - %!PS-Adobe-3.0 D [24/Jun/2004:09:37:28 +0100] [Job 28] %!PS-Adobe-3.0 D [24/Jun/2004:09:37:28 +0100] [Job 28] %%BoundingBox: 12 12 600 780 D [24/Jun/2004:09:37:28 +0100] [Job 28] %cupsRotation: 0 D [24/Jun/2004:09:37:28 +0100] [Job 28] %%Creator: bannertops/CUPS v1.4.2 D [24/Jun/2004:09:37:28 +0100] [Job 28] %%CreationDate: Thu 24 Jun 2004 09:36:43 AM BST D [24/Jun/2004:09:37:28 +0100] [Job 28] %%LanguageLevel: 2 D [24/Jun/2004:09:37:28 +0100] [Job 28] %%DocumentData: Clean7Bit D [24/Jun/2004:09:37:28 +0100] [Job 28] %%Title: (Test Page) D [24/Jun/2004:09:37:28 +0100] [Job 28] %%For: (gavin) D [24/Jun/2004:09:37:28 +0100] [Job 28] %%Pages: 1 D [24/Jun/2004:09:37:28 +0100] [Job 28] %%DocumentSuppliedResources: font Monospace D [24/Jun/2004:09:37:28 +0100] [Job 28] %%+ font Monospace-Bold D [24/Jun/2004:09:37:28 +0100] [Job 28] %%+ font Monospace-BoldOblique D [24/Jun/2004:09:37:28 +0100] [Job 28] %%+ font Monospace-Oblique D [24/Jun/2004:09:37:28 +0100] [Job 28] %%EndComments D [24/Jun/2004:09:37:28 +0100] [Job 28] Before copy_prolog - %%BeginProlog D [24/Jun/2004:09:37:28 +0100] [Job 28] STATE: +connecting-to-device D [24/Jun/2004:09:37:28 +0100] [Job 28] prnt/backend/hp.c 762: ERROR: cannot open channel PRINT D [24/Jun/2004:09:37:28 +0100] [Job 28] Backend returned status 1 (failed) D [24/Jun/2004:09:37:28 +0100] [Job 28] Printer stopped due to backend errors; please consult the error_log file for details. 
D [24/Jun/2004:09:37:28 +0100] [Job 28] End of messages D [24/Jun/2004:09:37:28 +0100] [Job 28] printer-state=5(stopped) D [24/Jun/2004:09:37:28 +0100] [Job 28] printer-state-message="/usr/lib/cups/backend/hp failed" D [24/Jun/2004:09:37:28 +0100] [Job 28] printer-state-reasons=paused /var/log/messages contains the following reports associated with the recognition of the printer and the failed print job: Jun 24 09:35:07 electra kernel: usb 1-8: new high speed USB device using ehci_hcd and address 2 Jun 24 09:35:07 electra kernel: usb 1-8: New USB device found, idVendor=03f0, idProduct=8d17 Jun 24 09:35:07 electra kernel: usb 1-8: New USB device strings: Mfr=1, Product=2, SerialNumber=3 Jun 24 09:35:07 electra kernel: usb 1-8: Product: HP LaserJet P3010 Series Jun 24 09:35:07 electra kernel: usb 1-8: Manufacturer: Hewlett-Packard Jun 24 09:35:07 electra kernel: usb 1-8: SerialNumber: VNBV993GM4 Jun 24 09:35:07 electra kernel: usb 1-8: configuration #1 chosen from 1 choice Jun 24 09:35:07 electra kernel: usblp0: USB Bidirectional printer dev 2 if 0 alt 1 proto 2 vid 0x03F0 pid 0x8D17 Jun 24 09:35:07 electra kernel: usbcore: registered new interface driver usblp Jun 24 09:35:07 electra udev-configure-printer: invalid or missing IEEE 1284 Device ID Jun 24 09:35:08 electra hp[1942]: io/hpmud/pp.c 627: unable to read device-id ret=-1 Jun 24 09:35:09 electra python: io/hpmud/pp.c 627: unable to read device-id ret=-1 Jun 24 09:35:51 electra kernel: usblp0: removed Jun 24 09:37:28 electra hp[2860]: io/hpmud/dot4.c 254: unable to read Dot4ReverseReply data: Resource temporarily unavailable exp=2 act=0 Jun 24 09:37:28 electra hp[2860]: io/hpmud/dot4.c 330: invalid DOT4InitReply: cmd=0, result=20#012, revision=0 Jun 24 09:37:28 electra hp[2860]: prnt/backend/hp.c 762: ERROR: cannot open channel PRINT I am now at a loss as to how to proceed to get this printer working on my Centos machine. How can I configure the machine to print more than a single print job without needing to be unplugged/plugged in repeatedly?
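
    The "cannot open channel PRINT" error comes from the HPLIP "hp" backend rather than from CUPS itself. A hedged workaround sketch - not a confirmed fix - is to re-enable the queue and point it at the plain usb backend instead of hp; the queue name comes from the logs above, and the usb:// URI shown is illustrative (use exactly what lpinfo -v prints):

        # Re-enable the queue that CUPS stopped after the backend failure:
        sudo cupsenable HP-LaserJet-P3010-Series

        # List available device URIs; look for the usb://... entry for this printer:
        sudo lpinfo -v

        # Re-point the queue at the kernel usb backend instead of HPLIP's hp:
        sudo lpadmin -p HP-LaserJet-P3010-Series -v "usb://HP/LaserJet%20P3010%20Series?serial=VNBV993GM4"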

    Read the article

  • Our embedded Linux system won't recognize a USB device if it is plugged in before power-up. Suggestions?

    - by Blaine
    We are developing on a small embedded device. This device is a Gumstix Overo board running OpenEmbedded Linux. We have our development almost completely done, and have run into the strangest of bugs that we can't figure out. We have a USB device (a spectrophotometer) that has a USB 2.0 connection and an external power supply for the light source. Typical behavior is that you plug in the power supply, then the USB connection to a host. When the USB connection is detected by the device, the device boots up and enables the light source and fan. The device is then able to be used by the host system. The problem is that if the device is plugged into the Gumstix before we turn the Gumstix on, the USB device is apparently not probed by the system (and hence does not turn on). In a normal situation, when the connection is initialized by plugging in the USB cable, the spectro turns itself on and becomes available to the system (this can typically be seen via "lsusb"). Neither of these things is happening. There is no device detected via "lsusb" and no dmesg errors of any kind that we can see. It is as if the device is not plugged in. The device does show up and work fine if we unplug the USB cable and plug it back in once the system is booted up. It turns on and shows up on the USB bus, and we can access it with our driver. On any other desktop or laptop, it does not matter if the host system is on or off when we plug in the spectrometer. This behavior is what I would consider to be "normal" - that the USB system is probed and initialized at boot time, and the USB devices come online. In other words, our system is fully functional as long as we plug in the USB device after the system is booted up. Unfortunately this isn't possible in our final product - everything comes on at once. Additional info: 1) We have tried a flash drive attached to the system when the system is turned off. Booting up the system brings the flash drive online, as expected. 2) There are no messages regarding the spectro or USB device (using dmesg). "lsusb" only lists the USB hubs/controllers. It is literally as if the device is not present and not plugged in. 3) We have tried a brand new image from Gumstix and an older image from last year. Both images have this problem. This problem exists on all 3 Gumstix devices we use. Does anyone have any suggestions? From what I can tell, it isn't really possible to do a complete "reboot" of the USB system that fully emulates "unplugging" and "replugging" a USB device. I feel like what is happening is that there is no initial probe on the USB bus that would trigger the USB handshaking, but this is somehow specific to the spectro. This seems to be a kernel issue, or at least an issue in how the kernel initializes the USB subsystem. I'm not really sure though. I have tried the Gumstix mailing list, but there doesn't seem to be anyone who has seen this issue before. Any advice or suggestions on where to start looking would be fantastic. Thank you! Blaine Output etc. 
$ uname -a Linux overo 2.6.33 #1 Tue Apr 27 08:35:38 PDT 2010 armv7l GNU/Linux When the system is up and running and spectro is plugged in (working as intended), this is lsusb: Bus 001 Device 116: ID 2457:1022 Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x2457 idProduct 0x1022 bcdDevice 0.02 iManufacturer 1 USB4000 1.01.11 iProduct 2 Ocean Optics USB4000 iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 46 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 400mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x86 EP 6 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) dmesg output: usb usb1: usb auto-resume hub 1-0:1.0: hub_resume usb usb2: usb auto-resume ehci-omap ehci-omap.0: resume root hub hub 1-0:1.0: state 7 ports 1 chg 0000 evt 0000 hub 2-0:1.0: hub_resume hub 2-0:1.0: state 7 ports 3 chg 0000 evt 0000 hub 1-0:1.0: hub_suspend usb usb1: bus auto-suspend hub 2-0:1.0: hub_suspend usb usb2: bus auto-suspend ehci-omap ehci-omap.0: suspend root hub usb usb2: usb resume ehci-omap ehci-omap.0: resume root hub hub 2-0:1.0: hub_resume ehci-omap ehci-omap.0: GetStatus port 2 status 001803 POWER sig=j CSC CONNECT hub 2-0:1.0: port 2: status 0501 change 0001 hub 2-0:1.0: state 7 ports 3 chg 0004 evt 0000 hub 2-0:1.0: port 2, status 0501, change 0000, 480 Mb/s ehci-omap ehci-omap.0: port 2 high speed ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT usb 2-2: new high speed USB device using ehci-omap and address 2 ehci-omap ehci-omap.0: port 2 high speed ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT usb 2-2: default language 0x0409 usb 2-2: udev 2, busnum 2, minor = 129 usb 2-2: New USB device found, idVendor=2457, idProduct=1022 usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 usb 2-2: Product: Ocean Optics USB4000 usb 2-2: Manufacturer: USB4000 1.01.11 usb 2-2: uevent usb 2-2: usb_probe_device usb 2-2: configuration #1 chosen from 1 choice usb 2-2: uevent usb 2-2: adding 2-2:1.0 (config #1, interface 0) usb 2-2:1.0: uevent drivers/usb/core/inode.c: creating file '002' dmesg has nothing to say, and lusb simply lists nothing else but the two default usb controllers / hubs if we plug the device in before the system is 
turned on.
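
    One avenue worth trying (a sketch under assumptions, not a known fix): unbinding and rebinding the OMAP EHCI controller from an init script forces a fresh enumeration of whatever is already attached. The device name ehci-omap.0 appears in the dmesg output above; the driver path is an assumption - check /sys/bus/platform/drivers/ on the target.

        #!/bin/sh
        # Re-probe the EHCI bus late in boot so an already-connected
        # spectrometer gets enumerated as if it had just been plugged in.
        echo ehci-omap.0 > /sys/bus/platform/drivers/ehci-omap/unbind
        sleep 1
        echo ehci-omap.0 > /sys/bus/platform/drivers/ehci-omap/bind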

    Read the article

  • Gmail rejects emails. Openspf.net fails the tests

    - by pablomedok
    I've got a problem with Gmail. It started after one of our trojan-infected PCs sent spam for one day from our IP address. We've fixed the problem, but we ended up on 3 blacklists. We've fixed that, too. But still, every time we send an email to Gmail the message is rejected: So I've checked Google's Bulk Senders guide once again, found an error in our SPF record, and fixed it. Google says everything should become fine after some time, but this doesn't happen. 3 weeks have already passed, but we still can't send emails to Gmail. Our MX setup is a bit complex, but not too much: We have a domain name delo-company.com; it has its own mail @delo-company.com (this one is fine, but the problems are with the sub-domain corp.delo-company.com). The delo-company.com domain has several DNS records for the subdomain:
    corp A 82.209.198.147
    corp MX 20 corp.delo-company.com
    corp.delo-company.com TXT "v=spf1 ip4:82.209.198.147 ~all"
    (I set ~all for testing purposes only; it was -all before that.) These records are for our corporate Exchange 2003 server at 82.209.198.147. Its LAN name is s2.corp.delo-company.com, so its HELO/EHLO greetings are also s2.corp.delo-company.com. To pass the EHLO check we've also created some records in delo-company.com's DNS:
    s2.corp A 82.209.198.147
    s2.corp.delo-company.com TXT "v=spf1 ip4:82.209.198.147 ~all"
    As I understand it, SPF verification should pass in this way: our server s2 connects to the MX of the recipient (Rcp.MX): EHLO s2.corp.delo-company.com. Rcp.MX says OK and makes an SPF check of HELO/EHLO. It does an NSlookup for s2.corp.delo-company.com and gets the above DNS records. The TXT record says that mail from s2.corp.delo-company.com should come only from IP 82.209.198.147, so the check should pass. Then our s2 server says MAIL FROM:, and the Rcp.MX server checks that, too. The values are the same, so they should also be positive. Maybe there is also an rDNS check, but I'm not sure whether HELO or MAIL FROM is checked. Our PTR record for 82.209.198.147 is:
    147.198.209.82.in-addr.arpa. 86400 IN PTR s2.corp.delo-company.com.
    To me everything looks fine, but all emails are still rejected by Gmail. So I've checked MXtoolbox.com - it says everything is fine. I passed the http://www.kitterman.com/spf/validate.html Python check, and I did the 25port.com email test. 
It's fine, too: Return-Path: <[email protected]> Received: from s2.corp.delo-company.com (82.209.198.147) by verifier.port25.com id ha45na11u9cs for <[email protected]>; Fri, 2 Mar 2012 13:03:21 -0500 (envelope-from <[email protected]>) Authentication-Results: verifier.port25.com; spf=pass [email protected] Authentication-Results: verifier.port25.com; domainkeys=neutral (message not signed) [email protected] Authentication-Results: verifier.port25.com; dkim=neutral (message not signed) Authentication-Results: verifier.port25.com; sender-id=pass [email protected] Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: multipart/alternative; boundary="----_=_NextPart_001_01CCF89E.BE02A069" Subject: test Date: Fri, 2 Mar 2012 21:03:15 +0300 X-MimeOLE: Produced By Microsoft Exchange V6.5 Message-ID: <[email protected]> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: test Thread-Index: Acz4jS34oznvbyFQR4S5rXsNQFvTdg== From: =?koi8-r?B?89XQ0tXOwMsg8MHXxcw=?= <[email protected]> To: <[email protected]> I also checked with [email protected], but it FAILs all the time, no matter which SPF records I make: <s2.corp.delo-company.com #5.7.1 smtp;550 5.7.1 <[email protected]>: Recipient address rejected: SPF Tests: Mail-From Result="softfail": Mail From="[email protected]" HELO name="s2.corp.delo-company.com" HELO Result="softfail" Remote IP="82.209.198.147"> I've filled Gmail form twice, but nothing happens. We do not send spam, only emails for our clients. 2 or 3 times we did mass emails (like New Year Greetings and sales promos) from corp.delo-company.com addresses, but they where all complying to Gmail Bulk Sender's Guide (I mean SPF, Open Relays, Precedence: Bulk and Unsubscribe tags). So, this should be not a problem. Please, help me. What am I doing wrong? UPD: I also tried Unlocktheinbox.com test and the server also fails this test. Here is the result: http://bit.ly/wYr39h . Here is one more http://bit.ly/ypWLjr I also tried to send email from that server manually via telnet and everything is fine. Here is what I type: 220 mx.google.com ESMTP g15si4811326anb.170 HELO s2.corp.delo-company.com 250 mx.google.com at your service MAIL FROM: <[email protected]> 250 2.1.0 OK g15si4811326anb.170 RCPT TO: <[email protected]> 250 2.1.5 OK g15si4811326anb.170 DATA 354 Go ahead g15si4811326anb.170 From: [email protected] To: Pavel <[email protected]> Subject: Test 28 This is telnet test . 250 2.0.0 OK 1330795021 g15si4811326anb.170 QUIT 221 2.0.0 closing connection g15si4811326anb.170 And this is what I get: Delivered-To: [email protected] Received: by 10.227.132.73 with SMTP id a9csp96864wbt; Sat, 3 Mar 2012 09:17:02 -0800 (PST) Received: by 10.101.128.12 with SMTP id f12mr4837125ann.49.1330795021572; Sat, 03 Mar 2012 09:17:01 -0800 (PST) Return-Path: <[email protected]> Received: from s2.corp.delo-company.com (s2.corp.delo-company.com. [82.209.198.147]) by mx.google.com with SMTP id g15si4811326anb.170.2012.03.03.09.15.59; Sat, 03 Mar 2012 09:17:00 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 82.209.198.147 as permitted sender) client-ip=82.209.198.147; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 82.209.198.147 as permitted sender) [email protected] Date: Sat, 03 Mar 2012 09:17:00 -0800 (PST) Message-Id: <[email protected]> From: [email protected] To: Pavel <[email protected]> Subject: Test 28 This is telnet test
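
    Two observations that may help. First, the Unlocktheinbox.com tester reports Mail-From Result="softfail", which is exactly what the ~all qualifier produces - strict testers count softfail as a failure, so switching back to -all once you trust the record should change those results. Second, it is worth confirming what the outside world actually sees for each piece of the chain; a quick sketch using only the names and IP quoted above:

        dig +short TXT corp.delo-company.com        # SPF for the MAIL FROM domain
        dig +short TXT s2.corp.delo-company.com     # SPF for the HELO name
        dig +short MX corp.delo-company.com
        dig +short -x 82.209.198.147                # PTR / reverse DNS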

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600, 16 GB DDR3 RAM, and varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and a CDN. My problem is that after 55 hits per second (according to blitz.io) Varnish starts giving out timeouts. CPU usage at that point is hardly 1%, and free memory stays above 10 GB at all times. I tried benchmarking php-fpm directly, with a result of 150 hits/s without any timeouts, but after that the CPU usage goes to 100% and it stops responding. Can you help me optimize it to handle more? As I understand it, nginx has nothing to do here, so I'm not including its config.

    php-fpm config:

    listen = /tmp/php5-fpm.sock
    listen.allowed_clients = 127.0.0.1
    user = nginx
    group = nginx
    pm = dynamic
    pm.max_children = 150
    pm.start_servers = 7
    pm.min_spare_servers = 2
    pm.max_spare_servers = 15
    pm.max_requests = 500
    slowlog = /var/log/php-fpm/www-slow.log
    php_admin_value[error_log] = /var/log/php-fpm/www-error.log
    php_admin_flag[log_errors] = on

    apc:

    extension = apc.so
    apc.enabled=1
    apc.shm_size=512MB
    apc.num_files_hint=0
    apc.user_entries_hint=0
    apc.ttl=7200
    apc.use_request_time=1
    apc.user_ttl=7200
    apc.gc_ttl=3600
    apc.cache_by_default=1
    apc.filters
    apc.mmap_file_mask=/tmp/apc.XXXXXX
    apc.file_update_protection=2
    apc.enable_cli=0
    apc.max_file_size=1M
    apc.stat=1
    apc.stat_ctime=0
    apc.canonicalize=0
    apc.write_lock=1
    apc.report_autofilter=0
    apc.rfc1867=0
    apc.rfc1867_prefix =upload_
    apc.rfc1867_name=APC_UPLOAD_PROGRESS
    apc.rfc1867_freq=0
    apc.rfc1867_ttl=3600
    apc.include_once_override=0
    apc.lazy_classes=0
    apc.lazy_functions=0
    apc.coredump_unmap=0
    apc.file_md5=0
    apc.preload_path

    Varnish VCL:

    backend default {
        .host = "127.0.0.1";
        .port = "8080";
        .connect_timeout = 6s;
        .first_byte_timeout = 6s;
        .between_bytes_timeout = 60s;
    }

    acl purgehosts {
        "localhost";
        "127.0.0.1";
    }

    # Called after a document has been successfully retrieved from the backend.
    sub vcl_fetch {
        # Uncomment to make the default cache "time to live" 5 minutes, handy
        # but it may cache stale pages unless purged. (TODO)
        # By default Varnish will use the headers sent to it by Apache (the backend server)
        # to figure out the correct TTL.
        # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess
        set beresp.ttl = 24h;

        # Strip cookies for static files and set a long cache expiry time.
        if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
            unset beresp.http.set-cookie;
            set beresp.ttl = 24h;
        }

        # If WordPress cookies are found then the page is not cacheable
        if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
            # set beresp.cacheable = false; # versions less than 3
            # beresp.ttl > 0 is cacheable, so 0 will not be cached
            set beresp.ttl = 0s;
        } else {
            # set beresp.cacheable = true;
            set beresp.ttl = 24h; # cache for 24 hrs
        }

        # Varnish determined the object was not cacheable
        # (if ttl is not > 0 seconds then it is not cacheable)
        if (!beresp.ttl > 0s) {
            # set beresp.http.X-Cacheable = "NO:Not Cacheable";
        } else if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
            # You don't wish to cache content for logged-in users
            set beresp.http.X-Cacheable = "NO:Got Session";
            return(hit_for_pass); # previously just pass, changed in v3+
        } else if (beresp.http.Cache-Control ~ "private") {
            # You are respecting the Cache-Control=private header from the backend
            set beresp.http.X-Cacheable = "NO:Cache-Control=private";
            return(hit_for_pass);
        } else if (beresp.ttl < 1s) {
            # You are extending the lifetime of the object artificially
            set beresp.ttl = 300s;
            set beresp.grace = 300s;
            set beresp.http.X-Cacheable = "YES:Forced";
        } else {
            # Varnish determined the object was cacheable
            set beresp.http.X-Cacheable = "YES";
            if (beresp.status == 404 || beresp.status >= 500) {
                set beresp.ttl = 0s;
            }
            # Deliver the content
            return(deliver);
        }
    }

    sub vcl_hash {
        # Each cached page has to be identified by a key that unlocks it.
        # Add the browser cookie only if a WordPress cookie is found.
        if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
            # set req.hash += req.http.Cookie;
            hash_data(req.http.Cookie);
        }
    }

    # vcl_recv is called whenever a request is received
    sub vcl_recv {
        # Remove ?ver=xxxxx strings from urls so css and js files are cached.
        # Watch out when upgrading WordPress: you need to restart Varnish or flush the cache.
        set req.url = regsub(req.url, "\?ver=.*$", "");

        # Remove "replytocom" from requests to make caching better.
        set req.url = regsub(req.url, "\?replytocom=.*$", "");

        remove req.http.X-Forwarded-For;
        set req.http.X-Forwarded-For = client.ip;

        # Exclude this site because it breaks if cached
        if (req.http.host == "sr.ituts.gr") {
            return(pass);
        }

        # Serve objects up to 2 minutes past their expiry if the backend is slow to respond.
        set req.grace = 120s;

        # Strip cookies for static files:
        if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
            unset req.http.Cookie;
            return(lookup);
        }

        # Remove has_js and Google Analytics __* cookies.
        set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
        # Remove a ";" prefix, if present.
        set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
        # Remove empty cookies.
        if (req.http.Cookie ~ "^\s*$") {
            unset req.http.Cookie;
        }

        if (req.request == "PURGE") {
            if (!client.ip ~ purgehosts) {
                error 405 "Not allowed.";
            }
            # in previous versions ban() was purge()
            ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host);
            error 200 "Purged.";
        }

        # Pass anything other than GET and HEAD directly.
        if (req.request != "GET" && req.request != "HEAD") {
            return(pass);
        }
        /* We only deal with GET and HEAD by default */

        # Remove the comments cookie to make caching better.
        set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", "");

        # Never cache the admin pages, the server-status page, or the feed. You may want to... I don't.
        if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) {
            return(pipe);
        }

        # Don't cache authenticated sessions
        if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") {
            return(lookup);
        }

        # Don't cache ajax requests
        if (req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") {
            return(pass);
        }

        return(lookup);
    }

    Varnish daemon options:

    DAEMON_OPTS="-a :80 \
        -T 127.0.0.1:6082 \
        -f /etc/varnish/ituts.vcl \
        -u varnish -g varnish \
        -S /etc/varnish/secret \
        -p thread_pool_add_delay=2 \
        -p thread_pools=8 \
        -p thread_pool_min=100 \
        -p thread_pool_max=1000 \
        -p session_linger=50 \
        -p session_max=150000 \
        -p sess_workspace=262144 \
        -s malloc,5G"

    I'm not sure where to start: should I optimize php-fpm first and then move on to Varnish, or is php-fpm already at its limit, so I should look for the problem in Varnish?
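    Editor's note: before touching the configs above, it is worth sanity-checking whether pm.max_children = 150 even fits in RAM and how many busy workers the observed traffic implies. A rough back-of-the-envelope sketch in Python; the per-worker size and service time are assumptions for illustration (measure yours with ps), not values from the question:

    # Rough capacity sketch: does pm.max_children fit in RAM, and how many
    # workers does the observed traffic actually need (Little's law)?
    ram_free_gb = 10          # free memory reported in the question
    worker_rss_mb = 60        # assumed average RSS of one php-fpm worker
    avg_request_sec = 0.2     # assumed average PHP response time

    max_children_by_ram = (ram_free_gb * 1024) // worker_rss_mb
    print(f"RAM supports up to ~{max_children_by_ram} workers")   # ~170

    # Concurrent workers needed = request rate x service time
    for rps in (55, 150):
        print(f"{rps} req/s needs ~{rps * avg_request_sec:.0f} busy workers")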

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

    kaefert@blechmobil:~$ lsusb -s 2:3
    Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there is some problem that prevents the disk from being mounted:

    kaefert@blechmobil:~$ dmesg
    ...
    [ 113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd
    [ 113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320
    [ 113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
    [ 113.217790] usb 2-1: Product: Expansion Desk
    [ 113.217792] usb 2-1: Manufacturer: Seagate
    [ 113.217794] usb 2-1: SerialNumber: NA4J4N6K
    [ 113.435404] usbcore: registered new interface driver uas
    [ 113.455315] Initializing USB Mass Storage driver...
    [ 113.468051] scsi5 : usb-storage 2-1:1.0
    [ 113.468180] usbcore: registered new interface driver usb-storage
    [ 113.468182] USB Mass Storage support registered.
    [ 114.473105] scsi 5:0:0:0: Direct-Access Seagate Expansion Desk 070B PQ: 0 ANSI: 6
    [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off
    [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
    [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    [ 114.501649] sdb: sdb1
    [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
    [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
    [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted!
    ...

    So I went and fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1.
    This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

    e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925 ). I started this whole thing on Sunday evening (2012-11-04_2200), so about 48 hours ago. This is what htop says about it now (2012-11-06_1900):

    PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
    3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

    Now I found a few posts on the internet that discuss e2fsck running slowly, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613, where they write that it's a good idea to check whether the disk is just that slow because it might be damaged. I think these outputs tell me that this is not the case here:

    kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
    /dev/sdb:
    Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec
    Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec

    kaefert@blechmobil:~$ sudo hdparm /dev/sdb
    /dev/sdb:
    multcount = 0 (off)
    readonly = 0 (off)
    readahead = 256 (on)
    geometry = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, judging by tools like gkrellm or iotop, or this:

    kaefert@blechmobil:~$ iostat -x
    Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU)
    avg-cpu: %user %nice %system %iowait %steal %idle
             14,24 47,81  14,63   0,95   0,00  22,37
    Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
    sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99
    sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04

    Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this:

    kaefert@blechmobil:~$ sudo strace -p3704
    lseek(4, 41026998272, SEEK_SET) = 41026998272
    write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
    lseek(4, 48404766720, SEEK_SET) = 48404766720
    read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
    lseek(4, 41027002368, SEEK_SET) = 41027002368
    write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
    lseek(4, 48404770816, SEEK_SET) = 48404770816
    read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
    lseek(4, 41027006464, SEEK_SET) = 41027006464
    write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
    lseek(4, 48404774912, SEEK_SET) = 48404774912
    read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
    ^CProcess 3704 detached

    I see around 16 of these lines every second, so 4 read and 4 write operations per second, which I don't consider a lot. And finally, my question: will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes. With this being a 3-terabyte disk, that would give me 134 days to go, if the speed stays constant and e2fsck scans the disk like this completely and only once. Do you have some advice for me?
    I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without formatting it anew. I don't think the hardware is damaged, since the disk is only a few months old and I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06_2300); now it looks like this:

    lseek(4, 1419860611072, SEEK_SET) = 1419860611072
    read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
    lseek(4, 43018145792, SEEK_SET) = 43018145792
    write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
    lseek(4, 1419860615168, SEEK_SET) = 1419860615168
    read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
    lseek(4, 43018149888, SEEK_SET) = 43018149888
    write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
    lseek(4, 1419860619264, SEEK_SET) = 1419860619264
    read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
    lseek(4, 43018153984, SEEK_SET) = 43018153984
    write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    So the numbers in the lseek lines before the reads, like 1419860619264, are already a lot bigger - about 1.3 TiB if those numbers are bytes - so it doesn't seem to be linear progress on a large scale. Maybe there are only some areas that need work, with big gaps in between them.

    UPDATE 2: Okay, big disappointment: the numbers are back to very small again (2012-11-07_0720),

    lseek(4, 52174548992, SEEK_SET) = 52174548992
    read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096
    lseek(4, 46603526144, SEEK_SET) = 46603526144
    write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096

    so either e2fsck goes over the data multiple times, or it just hops back and forth repeatedly. Or my assumption that those numbers are bytes is wrong.

    UPDATE 3: Since it's mentioned here http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can run testdisk while e2fsck is running, I tried that, though without much success. When I ask testdisk to display the data of my partition, this is what I get:

    TestDisk 6.13, Data Recovery Utility, November 2011
    Christophe GRENIER <[email protected]>
    http://www.cgsecurity.org
    1 P Linux 0 4 5 45600 40 8 732566272
    Can't open filesystem. Filesystem seems damaged.

    And this is what strace currently gives me (2012-11-07_1030):

    lseek(4, 212460343296, SEEK_SET) = 212460343296
    read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096
    lseek(4, 47347830784, SEEK_SET) = 47347830784
    write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096

    (times are in CET)
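    Editor's note: the poster's back-of-the-envelope ETA is easy to reproduce. A small Python sketch, assuming (as the question does) that the lseek offsets are byte positions, progress is linear, and e2fsck makes a single pass - assumptions the later updates cast doubt on. It lands in the same ballpark as the poster's ~134-day figure:

    # Reproduce the ETA estimate from the strace offsets and dmesg geometry.
    disk_bytes = 732566645 * 4096          # 3.00 TB, from the dmesg output
    offset_bytes = 48404774912             # read-side lseek offset after ~48 h
    elapsed_hours = 48

    rate = offset_bytes / elapsed_hours    # bytes per hour
    eta_days = disk_bytes / rate / 24

    print(f"progress: {offset_bytes / disk_bytes:.2%}")   # ~1.6 %
    print(f"ETA for one full pass: {eta_days:.0f} days")  # ~124 days

    # The observed I/O pattern also explains the slowness:
    # ~4 reads + 4 writes of 4096 bytes per second is only ~32 KiB/s.
    print(f"implied I/O rate: {8 * 4096 / 1024:.0f} KiB/s")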

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular, and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform – including, now, directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery at www.jquery.com.

    For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client-side JavaScript functionality. It has also had a profound impact on my JavaScript skill level, simply by seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today.

    As a long time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library.

    Microsoft and jQuery – making Friends

    jQuery is an open source project, but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client-side support features, including client-side validation, and we can look forward to more integration of client-side functionality via jQuery in both MVC and WebForms in the future. In other words, jQuery is becoming an optional but included component of the ASP.NET platform.

    PSS support means that support staff will answer jQuery-related support questions as part of any support incident related to ASP.NET, which provides some peace of mind to corporate development shops that require end-to-end support from Microsoft.

    In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – the successor to the native ASP.NET AJAX library – included some really cool functionality for client templates, databinding and localization. As it turns out, Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and approved for inclusion in the official jQuery plug-in repository, and they have been taken over by the jQuery team for further improvements and maintenance.
    Even more surprising: the jQuery Templates component has actually been approved for inclusion in the next major update of the jQuery core, jQuery 1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine this – an open source contribution from Microsoft accepted into a major open source project as a core feature improvement. Microsoft has come a long way indeed!

    What the Microsoft Involvement with jQuery means to you

    For Microsoft, jQuery support is a strategic decision that affects their direction in client-side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing – and in fact a large chunk of developers did so readily before Microsoft’s announcement. Official support from Microsoft does bring a few benefits to developers, however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many project types, and a common base for client-side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery, Microsoft is now following along with what most in the ASP.NET community had already been doing, which is likely the reason for Microsoft’s shift in direction in the first place.

    ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality these libraries provided to control developers, they lacked useful, easily accessible functionality for day-to-day client-side application development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than the internal use in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit, plus some third-party vendors, the Microsoft client libraries were largely ignored by the developer community, opening the door for other client-side solutions.

    Microsoft seems to be acknowledging developer choice in this case: many more developers were going down the jQuery path than using the Microsoft-built libraries, and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos to Microsoft for recognizing this and gracefully changing direction.

    Note that even though there will be no further development on the Microsoft client libraries, they will continue to be supported, so if you’re using them in your applications there’s no reason to run for the exit in a panic and start rewriting everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side, so you can leave existing solutions untouched even as you enhance them with jQuery.
    The Microsoft jQuery Plug-ins – Solid Core Features

    One of the most interesting developments in Microsoft’s embrace of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism set up for jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, rewrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that’s exactly what happened with the three plug-ins submitted by Microsoft – with the templating plug-in even slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft:

    jQuery Templates – a client-side template rendering engine
    jQuery Data Link – a client-side databinder that can synchronize changes without code
    jQuery Globalization – provides formatting and conversion features for dates and numbers

    The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client-side applications and provide functionality I’ve previously used in other incarnations, but with more complete implementations. Let’s take a closer look at these plug-ins.

    jQuery Templates
    http://api.jquery.com/category/plugins/templates/

    Client-side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually creating markup by building DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into those templates to render HTML, which you can then inject into the document or use to replace existing content. Output from templates is rendered as a jQuery matched set and can then easily be inserted into the document as needed.

    Templating is key to minimizing client-side code and reducing repeated rendering logic. A single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element.

    I’ve used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP-style templating engine) and a modified version of John Resig’s MicroTemplating engine, which I built into my own set of libraries because it’s such a commonly used feature in my client-side applications. jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig’s original micro template engine, the core of this templating engine generates JavaScript code, which means templates can include JavaScript expressions.

    To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list, then displays them in the document.
    To do this you can create an ‘item’ template that describes how each of the quotes is rendered, as a template inside of the document:

    <script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
    <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
    <div class="label">Last Price:</div><div>${LastPrice}</div>
    <div class="label">Net Change:</div><div>
    {{if NetChange > 0}}
    <b style="color:green" >${NetChange}</b>
    {{else}}
    <b style="color:red" >${NetChange}</b>
    {{/if}}
    </div>
    <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div>
    </div>
    </script>

    The ‘template’ is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions, which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates, as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following:

    <script type="text/javascript">
    //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/");
    $(document).ready(function () {
        $("#btnGetQuotes").click(GetQuotes);
    });
    function GetQuotes() {
        var symbols = $("#txtSymbols").val().split(",");
        $.ajax({
            url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes",
            data: JSON.stringify({ symbols: symbols }), // parameter map
            type: "POST", // data has to be POSTed
            contentType: "application/json",
            timeout: 10000,
            dataType: "json",
            success: function (result) {
                var quotes = result.d;
                var jEl = $("#stockTemplate").tmpl(quotes);
                $("#quoteDisplay").empty().append(jEl);
            },
            error: function (xhr, status) {
                alert(status + "\r\n" + xhr.responseText);
            }
        });
    };
    </script>

    In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object whose .d property (in Microsoft service style) holds the actual array of quotes. The template is applied with:

    var jEl = $("#stockTemplate").tmpl(quotes);

    which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array, the template is automatically applied to each array item. If you pass a single data item – like, say, a single stock quote – the template works exactly the same way but is applied only once.

    Templates also have access to a $data item, which provides the current data item and information about the template that is currently executing. This makes it possible to keep context within the template itself and also to pass context from a parent template to a child template, which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls.
    In short, there are options. The above shows off some of the basics, but there’s much more functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate the functionality. jQuery Templates will become a native component in jQuery core 1.5, so it’s definitely worthwhile checking out the engine today and getting familiar with this interface.

    As much as I’m stoked about templating becoming part of the jQuery core – because it’s such an integral part of many applications – there are also a couple of shortcomings in the current incarnation:

    Lack of Error Handling
    Currently if you embed an expression that is invalid, it’s simply not rendered. There’s no error rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to get error information about what is failing in a template when it’s rendered.

    No String Output
    Templates are always rendered into a jQuery matched set, and there’s no way that I can see to render directly to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output.

    Limited JavaScript Access
    Unlike John Resig’s original MicroTemplating engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can ‘execute’. There’s no arbitrary code execution inside of script code, which means you’re limited to calling expressions available in global objects or on the data item passed in. This may or may not be a big deal depending on the complexity of your template logic.

    Error handling has been discussed quite a bit, and it’s likely there will be some solution to that particular issue by the time jQuery Templates ship. The others are relatively minor issues, but something to think about anyway.

    jQuery Data Link
    http://api.jquery.com/category/plugins/data-link/

    jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object’s properties. The typical scenario is linking a textbox to a property of an object and having the object updated when the text in the textbox changes, and the textbox updated when the value in the object or the entire object changes. The plug-in also supports converter functions that provide the conversion logic from string to some other value – typically necessary for mapping textbox string input to, say, a number property – and that can potentially apply additional formatting and calculations.

    In theory this sounds great; in reality, however, this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data:

    person = { firstName: "rick", lastName: "strahl"};

    $(document).ready( function() {
        // provide for two-way linking of inputs
        $("form").link(person);

        // bind to non-input elements explicitly
        $("#objFirst").link(person, {
            firstName: {
                name: "objFirst",
                convertBack: function (value, source, target) {
                    $(target).text(value);
                }
            }
        });
        $("#objLast").link(person, {
            lastName: {
                name: "objLast",
                convertBack: function (value, source, target) {
                    $(target).text(value);
                }
            }
        });
    });

    This code hooks up two-way linking between a couple of textboxes on the page and the person object.
    The first line in the .ready() handler maps the object to form fields with the same field names as the properties on the object. Note that .link() does NOT bind items into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind. You specify the object and then an explicit mapping, where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two.

    You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately, the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object’s properties with change notification looks like this:

    function updateFirstName() {
        $(person).data("firstName", person.firstName + " (code updated)");
    }

    This works fine in causing any linked fields to be updated – in the bindings above, both the firstName input field and the objFirst DOM element get updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure, you’re binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal.

    As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn’t help much towards that goal in its current incarnation. While you can bind values, the ‘binder’ is too limited to be really useful. If initial values can’t be assigned from the mappings, you’re going to end up duplicating work, loading the data using some other mechanism. There’s no easy way to re-bind data to a different object altogether, since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that’s fairly verbose and, frankly, may be more voluminous than what you might write by hand for manual binding and unbinding.

    Two-way binding can be very useful, but it has to be easy and, most importantly, natural. If it’s more work to hook up a binding than writing a couple of lines to do the binding/unbinding yourself, this sort of thing helps very little in most scenarios. In talking to some of the developers, the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful, get involved and post your recommendations.

    As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be easily refreshable and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object, albeit one that relies on the jQuery library, would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft-created components this is by far the least useful and least polished implementation at this point.
    jQuery Globalization
    http://github.com/jquery/jquery-global

    Globalization in JavaScript applications often gets short shrift, and part of the reason for this is that JavaScript natively has little support for formatting and parsing numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of it. As .NET developers we’re fairly spoiled by the richness of the APIs provided in the framework, and when dealing with client development one really notices the lack of these features.

    Even if you don’t need to localize your application, the globalization plug-in helps with some basic tasks for non-localized applications: dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript, as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next-to-useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. For example, you can write code like the following to format numbers and dates:

    var date = new Date();
    var output = $.format(date, "MMM. dd, yy") + "\r\n" +
                 $.format(date, "d") + "\r\n" + // 10/25/2010
                 $.format(1222.32213, "N2") + "\r\n" +
                 $.format(1222.33, "c") + "\r\n";
    alert(output);

    This becomes even more useful if you combine it with templates, which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded, you can create template expressions that use the $.format function. Here’s the template I used earlier for the stock quote again, with a couple of formats applied:

    <script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
    <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
    <div class="label">Last Price:</div>
    <div>${$.format(LastPrice,"N2")}</div>
    <div class="label">Net Change:</div><div>
    {{if NetChange > 0}}
    <b style="color:green" >${NetChange}</b>
    {{else}}
    <b style="color:red" >${NetChange}</b>
    {{/if}}
    </div>
    <div class="label">Last Update:</div>
    <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div>
    </div>
    </script>

    There are also parsing methods that can parse dates and numbers from strings easily:

    alert($.parseDate("25.10.2010"));
    alert($.parseInt("12.222")); // de-DE uses . for thousands separators

    As you can see, culture-specific options are taken into account when parsing. The globalization plug-in provides rich support for a variety of locales:

    Get a list of all available cultures
    Query cultures for culture items (like currency symbols, separators, etc.)
    Localized string names for all calendar-related items (days of week, months)
    Generated off of .NET’s supported locales

    In short, you get much of the same functionality that you might already be using in .NET on the server side. The plug-in includes a huge number of locales, a Globalization.all.min.js file that contains the text defaults for all of these locales, and small locale-specific script files that define each locale’s settings. It’s highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references only for those languages you explicitly care about. Overall this plug-in is a welcome helper.
    Even if you use it with a single locale (like en-US) and do no other localization, you’ll gain solid support for number and date formatting, which is a vital feature of many applications.

    Changes for Microsoft

    It’s good to see Microsoft coming out of its shell and away from the ‘not-built-here’ mentality that has been so pervasive in the past. It’s especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft’s own internal efforts in terms of design, usage model and… popularity. It’s great to see that Microsoft is paying attention to what customers prefer to use and supporting that sentiment – even if it meant drastically changing course and moving into a more open and sharing environment in the process.

    The additional jQuery support introduced over the last two years has certainly made life easier for many developers on the ASP.NET platform. It’s also nice to see Microsoft submitting proposals through the standard jQuery plug-in process and getting them accepted for several very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in jQuery, ASP.NET

    Read the article

  • Integrating HTML into Silverlight Applications

    - by dwahlin
    Looking for a way to display HTML content within a Silverlight application? If you haven’t tried doing that before, it can be challenging at first until you know a few tricks of the trade. Being able to display HTML is especially handy when you’re required to display RSS feeds (with embedded HTML), SQL Server Reporting Services reports, PDF files (not actually HTML – but the techniques discussed will work), or other HTML content. In this post I'll discuss three options for displaying HTML content in Silverlight applications and describe how my company is using these techniques in client applications.

    Displaying HTML Overlays

    If you need to display HTML over a Silverlight application (such as an RSS feed containing HTML data) you’ll need to set the Silverlight control’s windowless parameter to true. This can be done using the object tag as shown next:

    <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
        <param name="source" value="ClientBin/HTMLAndSilverlight.xap"/>
        <param name="onError" value="onSilverlightError" />
        <param name="background" value="white" />
        <param name="minRuntimeVersion" value="4.0.50401.0" />
        <param name="autoUpgrade" value="true" />
        <param name="windowless" value="true" />
        <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration:none">
            <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
        </a>
    </object>

    By setting the control to “windowless” you can overlay HTML objects by using absolute positioning and other CSS techniques. Keep in mind that on Windows machines the windowless setting can result in a performance hit when complex animations or HD video are running, since the plug-in content is displayed directly by the browser window. It goes without saying that you should only set windowless to true when you really need the functionality it offers. For example, if I want to display my blog’s RSS content on top of a Silverlight application I could set windowless to true and create a user control that grabbed the content and output it using a DataList control:

    <style type="text/css">
        a {text-decoration:none;font-weight:bold;font-size:14pt;}
    </style>
    <div style="margin-top:10px; margin-left:10px;margin-right:5px;">
        <asp:DataList ID="RSSDataList" runat="server" DataSourceID="RSSDataSource">
            <ItemTemplate>
                <a href='<%# XPath("link") %>'><%# XPath("title") %></a>
                <br />
                <%# XPath("description") %>
                <br />
            </ItemTemplate>
        </asp:DataList>
        <asp:XmlDataSource ID="RSSDataSource" DataFile="http://weblogs.asp.net/dwahlin/rss.aspx"
            XPath="rss/channel/item" CacheDuration="60" runat="server" />
    </div>

    The user control can then be placed in the page hosting the Silverlight control as shown below. This example adds a Close button, additional content to display in the overlay window, and the HTML generated from the user control.
    <div id="RSSDiv">
        <div style="background-color:#484848;border:1px solid black;height:35px;width:100%;">
            <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideOverlay();" style="cursor:pointer;" />
        </div>
        <div style="overflow:auto;width:800px;height:565px;">
            <div style="float:left;width:100px;height:103px;margin-left:10px;margin-top:5px;">
                <img src="http://weblogs.asp.net/blogs/dwahlin/dan2008.jpg" style="border:1px solid Gray" />
            </div>
            <div style="float:left;width:300px;height:103px;margin-top:5px;">
                <a href="http://weblogs.asp.net/dwahlin" style="margin-left:10px;font-size:20pt;">Dan Wahlin's Blog</a>
            </div>
            <br /><br /><br />
            <div style="clear:both;margin-top:20px;">
                <uc:BlogRoller ID="BlogRoller" runat="server" />
            </div>
        </div>
    </div>

    Of course, we wouldn’t want the RSS HTML content to be shown until requested. Once it’s requested, the absolute position where it should show above the Silverlight control can be set using standard CSS styles. The following ID selector, named #RSSDiv, handles hiding the overlay div shown above and determines where it will display on the screen:

    #RSSDiv {
        background-color:White;
        position:absolute;
        top:100px;
        left:300px;
        width:800px;
        height:600px;
        border:1px solid black;
        display:none;
    }

    Now that the HTML content to display above the Silverlight control is set, how can we show it as a user clicks a HyperlinkButton or other control in the application? Fortunately, Silverlight provides an excellent HTML bridge that allows direct access to content hosted within a page. The following code shows two JavaScript functions that can be called from Silverlight to handle showing or hiding HTML overlay content. The two functions rely on jQuery (http://www.jQuery.com) to make it easy to select HTML objects and manipulate their properties:

    function ShowOverlay() {
        rssDiv.css('display', 'block');
    }
    function HideOverlay() {
        rssDiv.css('display', 'none');
    }

    Calling the ShowOverlay function is as simple as adding the following code into the Silverlight application within a button’s Click event handler:

    private void OverlayHyperlinkButton_Click(object sender, RoutedEventArgs e)
    {
        HtmlPage.Window.Invoke("ShowOverlay");
    }

    The result of setting the Silverlight control’s windowless parameter to true and showing the HTML overlay content is shown in the following screenshot:

    Thinking Outside the Box to Show HTML Content

    Setting the windowless parameter to true may not be a viable option for some Silverlight applications, or you may simply want to go about showing HTML content a different way. The next technique I’ll show takes advantage of simple HTML, CSS and JavaScript code to handle showing HTML content while a Silverlight application is running in the browser. Keep in mind that with Silverlight’s HTML bridge feature you can always pop up HTML content in a new browser window using code similar to the following:

    System.Windows.Browser.HtmlPage.Window.Navigate(new Uri("http://silverlight.net"), "_blank");

    For this example I’ll demonstrate how to hide the Silverlight application while maximizing a container div holding the HTML content to show. This allows HTML content to take up the full screen area of the browser without having to set windowless to true and, when done right, can make the user feel like they never left the Silverlight application.
    The following HTML shows several div elements that are used to display HTML within the same browser window as the Silverlight application:

    <div id="JobPlanDiv">
        <div style="vertical-align:middle">
            <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideJobPlanIFrame();" style="cursor:pointer;" />
        </div>
        <div id="JobPlan_IFrame_Container" style="height:95%;width:100%;margin-top:37px;"></div>
    </div>

    The JobPlanDiv element acts as a container for two other divs that handle showing a close button and hosting an iframe that is added dynamically at runtime. JobPlanDiv isn’t visible when the Silverlight application loads, due to the following ID selector added into the page:

    #JobPlanDiv {
        position:absolute;
        background-color:#484848;
        overflow:hidden;
        left:0;
        top:0;
        height:100%;
        width:100%;
        display:none;
    }

    When the HTML content needs to be shown or hidden, the JavaScript functions shown next can be used:

    var jobPlanIFrameID = 'JobPlan_IFrame';
    var slHost = null;
    var jobPlanContainer = null;
    var jobPlanIFrameContainer = null;
    var rssDiv = null;

    $(document).ready(function () {
        slHost = $('#silverlightControlHost');
        jobPlanContainer = $('#JobPlanDiv');
        jobPlanIFrameContainer = $('#JobPlan_IFrame_Container');
        rssDiv = $('#RSSDiv');
    });

    function ShowJobPlanIFrame(url) {
        jobPlanContainer.css('display', 'block');
        $('<iframe id="' + jobPlanIFrameID + '" src="' + url + '" style="height:100%;width:100%;" />')
            .appendTo(jobPlanIFrameContainer);
        slHost.css('width', '0%');
    }

    function HideJobPlanIFrame() {
        jobPlanContainer.css('display', 'none');
        $('#' + jobPlanIFrameID).remove();
        slHost.css('width', '100%');
    }

    ShowJobPlanIFrame() handles showing the JobPlanDiv div and adding an iframe into it dynamically. Once JobPlanDiv is shown, the Silverlight control host has its width set to a value of 0%, which keeps the control alive while making it invisible to the user. I found that this technique works better across multiple browsers than manipulating the Silverlight control host div’s display or visibility properties.

    Now that you’ve seen the code to handle showing and hiding the HTML content area, let’s switch focus to the Silverlight application. As a user clicks on a link such as “View Report”, the ShowJobPlanIFrame() JavaScript function needs to be called. The following code handles that task:

    private void ReportHyperlinkButton_Click(object sender, RoutedEventArgs e)
    {
        ShowBrowser(_BaseUrl + "/Report.aspx");
    }

    public void ShowBrowser(string url)
    {
        HtmlPage.Window.Invoke("ShowJobPlanIFrame", url);
    }

    Any URL can be passed into the ShowBrowser() method, which handles invoking the JavaScript function. This includes standard web pages or even PDF files. We’ve used this technique frequently with our SmartPrint control (http://www.smartwebcontrols.com), which converts Silverlight screens into PDF documents and displays them. Here’s an example of the content generated:

    Silverlight 4’s WebBrowser Control

    Both techniques shown to this point work well when Silverlight is running in-browser, but not so well when it’s running out-of-browser, since there’s no host page that you can access using the HTML bridge. Fortunately, Silverlight 4 provides a WebBrowser control that can be used to perform the same functionality quite easily. We’re currently using it in client applications to display PDF documents, SSRS reports and standard HTML content. Using the WebBrowser control simplifies the application quite a bit, since no JavaScript is required if the application only runs out-of-browser.
    Here’s a simple example of defining the WebBrowser control in XAML. I typically define it in MainPage.xaml when a Silverlight Navigation template is used to create the project, so that I can reuse the functionality across multiple screens.

    <Grid x:Name="WebBrowserGrid" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Visibility="Collapsed">
        <StackPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
            <Border Background="#484848" HorizontalAlignment="Stretch" Height="40">
                <Image x:Name="WebBrowserImage" Width="100" Height="33" Cursor="Hand" HorizontalAlignment="Right"
                    Source="/HTMLAndSilverlight;component/Assets/Images/Close.png"
                    MouseLeftButtonDown="WebBrowserImage_MouseLeftButtonDown" />
            </Border>
            <WebBrowser x:Name="JobPlanReportWebBrowser" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" />
        </StackPanel>
    </Grid>

    Looking through the XAML you can see that a close image is defined along with the WebBrowser control. Because the URL that the WebBrowser should navigate to isn’t known at design time, no value is assigned to the control’s Source property. If the XAML shown above is left “as is” you’ll find that any HTML content assigned to the WebBrowser doesn’t display properly. This is due to no height or width being set on the control. To handle this issue, the following code is added into the XAML’s code-behind file to dynamically determine the height and width of the page and assign it to the WebBrowser. This is done by handling the SizeChanged event:

    void MainPage_SizeChanged(object sender, SizeChangedEventArgs e)
    {
        WebBrowserGrid.Height = JobPlanReportWebBrowser.Height = ActualHeight;
        WebBrowserGrid.Width = JobPlanReportWebBrowser.Width = ActualWidth;
    }

    When the user wants to view HTML content they click a button, which executes the code shown next:

    public void ShowBrowser(string url)
    {
        if (Application.Current.IsRunningOutOfBrowser)
        {
            JobPlanReportWebBrowser.NavigateToString("<html><body><iframe src='" + url +
                "' style='width:100%;height:97%;' /></body></html>");
            WebBrowserGrid.Visibility = Visibility.Visible;
        }
        else
        {
            HtmlPage.Window.Invoke("ShowJobPlanIFrame", url);
        }
    }

    private void WebBrowserImage_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
    {
        WebBrowserGrid.Visibility = Visibility.Collapsed;
    }

    Looking through the code you’ll see that it checks whether the Silverlight application is running out-of-browser and then either displays the WebBrowser control or runs the JavaScript function discussed earlier. Although the WebBrowser control’s Source property could be assigned the URI of the page to navigate to, by assigning HTML content using the NavigateToString() method and adding an iframe, content can be shown from any site, including cross-domain sites. This is especially handy when you need to grab a page from a reporting site that’s in a different domain than the Silverlight application.

    Here’s an example of viewing a PDF file inside of an out-of-browser application. The first image shows the application running out-of-browser before the user clicks a PDF HyperlinkButton. The second image shows the PDF being displayed.

    While there are certainly other techniques that can be used, the ones shown here have worked well for us in different applications and provide the ability to display HTML content in-browser or out-of-browser. Feel free to add a comment if you have another tip or trick you like to use when working with HTML content in Silverlight applications.
    Download Code Sample

    For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight, please visit http://www.thewahlingroup.com.

    Read the article

< Previous Page | 363 364 365 366 367 368 369 370 371  | Next Page >