Search Results


  • Lockless queue implementation ends up having a loop under stress

    - by Fozi
    I have lockless queues written in C in form of a linked list that contains requests from several threads posted to and handled in a single thread. After a few hours of stress I end up having the last request's next pointer pointing to itself, which creates an endless loop and locks up the handling thread. The application runs (and fails) on both Linux and Windows. I'm debugging on Windows, where my COMPARE_EXCHANGE_PTR maps to InterlockedCompareExchangePointer. This is the code that pushes a request to the head of the list, and is called from several threads: void push_request(struct request * volatile * root, struct request * request) { assert(request); do { request->next = *root; } while(COMPARE_EXCHANGE_PTR(root, request, request->next) != request->next); } This is the code that gets a request from the end of the list, and is only called by a single thread that is handling them: struct request * pop_request(struct request * volatile * root) { struct request * volatile * p; struct request * request; do { p = root; while(*p && (*p)->next) p = &(*p)->next; // <- loops here request = *p; } while(COMPARE_EXCHANGE_PTR(p, NULL, request) != request); assert(request->next == NULL); return request; } Note that I'm not using a tail pointer because I wanted to avoid the complication of having to deal with the tail pointer in push_request. However I suspect that the problem might be in the way I find the end of the list. There are several places that push a request into the queue, but they all look generaly like this: // device->requests is defined as struct request * volatile requests; struct request * request = malloc(sizeof(struct request)); if(request) { // fill out request fields push_request(&device->requests, request); sem_post(device->request_sem); } The code that handles the request is doing more than that, but in essence does this in a loop: if(sem_wait_timeout(device->request_sem, timeout) == sem_success) { struct request * request = pop_request(&device->requests); // handle request free(request); } I also just added a function that is checking the list for duplicates before and after each operation, but I'm afraid that this check will change the timing so that I will never encounter the point where it fails. (I'm waiting for it to break as I'm writing this.) When I break the hanging program the handler thread loops in pop_request at the marked position. I have a valid list of one or more requests and the last one's next pointer points to itself. The request queues are usually short, I've never seen more then 10, and only 1 and 3 the two times I could take a look at this failure in the debugger. I thought this through as much as I could and I came to the conclusion that I should never be able to end up with a loop in my list unless I push the same request twice. I'm quite sure that this never happens. I'm also fairly sure (although not completely) that it's not the ABA problem. I know that I might pop more than one request at the same time, but I believe this is irrelevant here, and I've never seen it happening. (I'll fix this as well) I thought long and hard about how I can break my function, but I don't see a way to end up with a loop. So the question is: Can someone see a way how this can break? Can someone prove that this can not? 
Eventually I will solve this (maybe by using a tail pointer or some other solution - locking would be a problem because the threads that post should not block, though I do have a RW lock at hand) but I would like to make sure that changing the list actually solves my problem (as opposed to just making it less likely because of different timing).
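    A common way to sidestep this class of bug in a multi-producer / single-consumer queue is to never let the consumer walk a list that producers are still CAS-ing on: the consumer detaches the entire list with one atomic exchange and then works on it privately. The sketch below is not the original code - it assumes C++11 std::atomic in place of COMPARE_EXCHANGE_PTR and replaces "pop one request from the tail" with "pop everything, then hand requests out oldest-first":

        #include <atomic>

        struct request { request* next; /* payload fields */ };

        std::atomic<request*> head{nullptr};

        // Producers: push to the front with a CAS loop, as in the question.
        void push_request(request* r) {
            r->next = head.load(std::memory_order_relaxed);
            while (!head.compare_exchange_weak(r->next, r,
                                               std::memory_order_release,
                                               std::memory_order_relaxed)) {
                // on failure r->next is refreshed with the current head; retry
            }
        }

        // Single consumer: atomically take the whole list, then reverse it
        // locally so requests come out in FIFO order. There is no shared
        // traversal, so no thread can observe (or create) a self-referencing node.
        request* pop_all_fifo() {
            request* grabbed = head.exchange(nullptr, std::memory_order_acquire);
            request* fifo = nullptr;
            while (grabbed) {
                request* next = grabbed->next;
                grabbed->next = fifo;   // reverse in place
                fifo = grabbed;
                grabbed = next;
            }
            return fifo;                // caller walks ->next until nullptr
        }

    If each push still does a sem_post, the handling loop simply skips the iteration when pop_all_fifo() returns null, because an earlier call already drained those requests.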


  • Creating an element and insertBefore is not working...

    - by sologhost
    Ok, I've been banging my head up against the wall on this and I have no clue why it isn't creating the element. Maybe something very small that I overlooked here. Basically, there is this Javascript code that is in a PHP document being outputted, like somewhere in the middle of when the page gets loaded, NOW, unfortunately it can't go into the header. Though I'm not sure that that is the problem anyways, but perhaps it is... hmmmmm. // Setting the variables needed to be set. echo ' <script type="text/javascript" src="' . $settings['default_theme_url'] . '/scripts/dpShoutbox.js"></script>'; echo ' <script type="text/javascript"> var refreshRate = ', $params['refresh_rate'], '; createEventListener(window); window.addEventListener("load", loadShouts, false); function loadShouts() { var alldivs = document.getElementsByTagName(\'div\'); var shoutCount = 0; var divName = "undefined"; for (var i = 0; i<alldivs.length; i++) { var is_counted = 0; divName = alldivs[i].getAttribute(\'name\'); if (divName.indexOf(\'dp_Reserved_Shoutbox\') < 0 && divName.indexOf(\'dp_Reserved_Counted\') < 0) continue; else if(divName == "undefined") continue; else { if (divName.indexOf(\'dp_Reserved_Counted\') == 0) { is_counted = 0; shoutCount++; continue; } else { shoutCount++; is_counted = 1; } } // Empty out the name attr. alldivs[i].name = \'dp_Reserved_Counted\'; var shoutId = \'shoutbox_area\' + shoutCount; // Build the div to be inserted. var shoutHolder = document.createElement(\'div\'); shoutHolder.setAttribute(\'id\', [shoutId]); shoutHolder.setAttribute(\'class\', \'dp_control_flow\'); shoutHolder.style.cssText = \'padding-right: 6px;\'; alldivs[i].parentNode.insertBefore(shoutHolder, alldivs[i]); if (is_counted == 1) { startShouts(refreshRate, shoutId); break; } } } </script>'; Also, I'm sure the other functions that I'm linking to within these functions work just fine. The problem here is that within this function, the div never gets created at all and I can't understand why? Furthermore Firefox, FireBug is telling me that the variable divName is undefined, even though I have attempted to take care of this within the function, though not sure why. Anyways, I need the created div element to be inserted just before the following HTML: echo ' <div name="dp_Reserved_Shoutbox" style="padding-bottom: 9px;"></div>'; I'm using name here instead of id because I don't want duplicate id values which is why I'm changing the name value and incrementing, since this function may be called more than 1 time. For example if there are 3 shoutboxes on the same page (Don't ask why...lol), I need to skip the other names that I already changed to "dp_Reserved_Counted", which I believe I am doing correctly. In any case, if I could I would place this into the header and have it called just once, but this isn't possible as these are loaded and no way of telling which one's they are, so it's directly hard-coded into the actual output on the page of where the shoutbox is within the HTML. Basically, not sure if that is the problem or not, but there must be some sort of work-around, unless the problem is within my code above... arrg Please help me. Thanks :)


  • mysql_close doesn't kill locked sql requests

    - by Nikita
    I use mysqld Ver 5.1.37-2-log for debian-linux-gnu and I perform MySQL calls from C++ code with mysql_query. The problem occurs when mysql_query executes a procedure that blocks on a locked table, so mysql_query hangs. If I send a kill signal to the application, the blocked query remains visible until the table is unlocked. Create the following SQL table and procedure: CREATE TABLE IF NOT EXISTS `tabletolock` ( `id` INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (`id`) ) ENGINE = InnoDB; DELIMITER $$ DROP PROCEDURE IF EXISTS `LOCK_PROCEDURE` $$ CREATE PROCEDURE `LOCK_PROCEDURE`() BEGIN SELECT id INTO @id FROM tabletolock; END $$ DELIMITER ; These SQL commands reproduce the problem: 1. in one terminal execute lock tables tabletolock write; 2. in another terminal execute call LOCK_PROCEDURE(); 3. in the first terminal execute show processlist and see | 2492 | root | localhost | syn_db | Query | 12 | Locked | SELECT id INTO @id FROM tabletolock | Then press Ctrl-C in the second terminal to interrupt the procedure and look at the process list again. It has not changed: the locked SELECT is still there and can only be terminated by unlock tables or a kill command. The problem described occurs with the mysql command line client, and the same problem exists when we use the functions mysql_query and mysql_close. Example C code: #include <iostream> #include <mysql/mysql.h> #include <mysql/errmsg.h> #include <signal.h> // g++ -Wall -g -fPIC -lmysqlclient dbtest.cpp using namespace std; MYSQL * connection = NULL; void closeconnection() { if(connection != NULL) { cout << "close connection !\n"; mysql_close(connection); mysql_thread_end(); delete connection; mysql_library_end(); } } void sigkill(int s) { closeconnection(); signal(SIGINT, NULL); raise(s); } int main(int argc, char ** argv) { signal(SIGINT, sigkill); connection = new MYSQL; mysql_init(connection); mysql_options(connection, MYSQL_READ_DEFAULT_GROUP, "nnfc"); if (!mysql_real_connect(connection, "127.0.0.1", "user", "password", "db", 3306, NULL, CLIENT_MULTI_RESULTS)) { delete connection; cout << "cannot connect\n"; return -1; } cout << "before procedure call\n"; mysql_query(connection, "CALL LOCK_PROCEDURE();"); cout << "after procedure call\n"; closeconnection(); return 0; } Compile it and perform the following actions: 1. in the first terminal execute lock tables tabletolock write; 2. run the program ./a.out 3. interrupt the program with Ctrl-C; on the screen we see that the closeconnection function is called, so the connection is closed. 4. in the first terminal execute show processlist and see that the procedure was not interrupted. My question is: how do I terminate such locked calls from C code? Thank you in advance!
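    One approach that works from the plain C API: record the connection's server-side thread id right after connecting (unsigned long id = mysql_thread_id(connection);) and, when the user interrupts the program, issue KILL QUERY for that id from a second connection. mysql_close() alone only tears down the client side; the server keeps the CALL waiting on the table lock. A minimal sketch - the host/user/password here are placeholders, and the kill is best done from a worker thread rather than directly inside the signal handler, since the client library is not async-signal-safe:

        #include <mysql/mysql.h>
        #include <cstdio>

        // Kill the statement that the blocked connection is running.
        // blocked_id is the value mysql_thread_id() returned for it.
        bool kill_blocked_query(unsigned long blocked_id) {
            MYSQL killer;
            mysql_init(&killer);
            if (!mysql_real_connect(&killer, "127.0.0.1", "user", "password",
                                    NULL, 3306, NULL, 0))
                return false;                      // cannot reach the server

            char sql[64];
            std::snprintf(sql, sizeof(sql), "KILL QUERY %lu", blocked_id);
            bool ok = (mysql_query(&killer, sql) == 0);
            mysql_close(&killer);
            return ok;
        }

    After the kill, the blocked mysql_query() in the main program returns with an error (query execution was interrupted) and closeconnection() can run normally; KILL QUERY does of course require sufficient privileges on the server.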


  • Resolving a Forward Declaration Issue Involving a State Machine in C++

    - by hypersonicninja
    I've recently returned to C++ development after a hiatus, and have a question regarding implementation of the State Design Pattern. I'm using the vanilla pattern, exactly as per the GoF book. My problem is that the state machine itself is based on some hardware used as part of an embedded system - so the design is fixed and can't be changed. This results in a circular dependency between two of the states (in particular), and I'm trying to resolve this. Here's the simplified code (note that I tried to resolve this by using headers as usual but still had problems - I've omitted them in this code snippet): #include <iostream> #include <memory> using namespace std; class Context { public: friend class State; Context() { } private: State* m_state; }; class State { public: State() { } virtual void Trigger1() = 0; virtual void Trigger2() = 0; }; class LLT : public State { public: LLT() { } void Trigger1() { new DH(); } void Trigger2() { new DL(); } }; class ALL : public State { public: ALL() { } void Trigger1() { new LLT(); } void Trigger2() { new DH(); } }; // DL needs to 'know' about DH. class DL : public State { public: DL() { } void Trigger1() { new ALL(); } void Trigger2() { new DH(); } }; class HLT : public State { public: HLT() { } void Trigger1() { new DH(); } void Trigger2() { new DL(); } }; class AHL : public State { public: AHL() { } void Trigger1() { new DH(); } void Trigger2() { new HLT(); } }; // DH needs to 'know' about DL. class DH : public State { public: DH () { } void Trigger1() { new AHL(); } void Trigger2() { new DL(); } }; int main() { auto_ptr<LLT> llt (new LLT); auto_ptr<ALL> all (new ALL); auto_ptr<DL> dl (new DL); auto_ptr<HLT> hlt (new HLT); auto_ptr<AHL> ahl (new AHL); auto_ptr<DH> dh (new DH); return 0; } The problem is basically that in the State Pattern, state transitions are made by invoking the the ChangeState method in the Context class, which invokes the constructor of the next state. Because of the circular dependency, I can't invoke the constructor because it's not possible to pre-define both of the constructors of the 'problem' states. I had a look at this article, and the template method which seemed to be the ideal solution - but it doesn't compile and my knowledge of templates is a rather limited... The other idea I had is to try and introduce a Helper class to the subclassed states, via multiple inheritance, to see if it's possible to specify the base class's constructor and have a reference to the state subclasse's constructor. But I think that was rather ambitious... Finally, would a direct implmentation of the Factory Method Design Pattern be the best way to resolve the entire problem?
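    The usual cure is to stop defining the member functions inside the class bodies: forward-declare the states, keep only declarations in the classes, and move the definitions below the point where every class is complete. A reduced sketch of just the two cyclic states (the transitions are condensed, and the triggers here return the next state instead of discarding it so the Context has something to store - both deviations from the posted code):

        class State {
        public:
            virtual ~State() {}
            virtual State* Trigger1() = 0;
            virtual State* Trigger2() = 0;
        };

        class DH;                      // forward declaration breaks the DL <-> DH cycle

        class DL : public State {
        public:
            State* Trigger1();         // bodies moved out of the class
            State* Trigger2();
        };

        class DH : public State {
        public:
            State* Trigger1();
            State* Trigger2();
        };

        // Both types are complete from here on, so `new DH` / `new DL` compile.
        State* DL::Trigger1() { return new DH; }
        State* DL::Trigger2() { return new DH; }
        State* DH::Trigger1() { return new DL; }
        State* DH::Trigger2() { return new DL; }

    The same split also removes the other ordering problems in the snippet (Context referring to State before it is declared, LLT referring to DH, and so on) without templates, helper base classes, or a factory.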


  • Received memory warning on setimage

    - by Sam Budda
    This problem has completely stumped me. This is for iOS 5.0 with Xcode 4.2 What's going on is that in my app I let user select images from their photo album and I save those images to apps document directory. Pretty straight forward. What I do then is that in one of the viewController.m files I create multiple UIImageViews and I then set the image for the image view from one of the picture that user selected from apps dir. The problem is that after a certain number of UIImage sets I receive a "Received memory warning". It usually happens when there are 10 pictures. If lets say user selected 11 pictures then the app crashes with Error (GBC). NOTE: each of these images are at least 2.5 MB a piece. After hours of testing I finally narrowed down the problem to this line of code [button1AImgVw setImage:image]; If I comment out that code. All compiles fine and no memory errors happen. But if I don't comment out that code I receive memory errors and eventually a crash. Also note it does process the whole CreateViews IBAction but still crashes at the end. I cannot do release or dealloc since I am running this on iOS 5.0 with Xcode 4.2 Here is the code that I used. Can anyone tell me what did I do wrong? - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. [self CreateViews]; } -(IBAction) CreateViews { paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask ,YES); documentsPath = [paths objectAtIndex:0]; //here 15 is for testing purposes for (int i = 0; i < 15; i++) { //Lets not get bogged down here. The problem is not here UIImageView *button1AImgVw = [[UIImageView alloc] initWithFrame:CGRectMake(10*i, 10, 10, 10)]; [self.view addSubview:button1AImgVw]; NSMutableString *picStr1a = [[NSMutableString alloc] init]; NSString *dataFile1a = [[NSString alloc] init]; picStr1a = [NSMutableString stringWithFormat:@"%d.jpg", i]; dataFile1a = [documentsPath stringByAppendingPathComponent:picStr1a]; NSData *potraitImgData1a =[[NSData alloc] initWithContentsOfFile:dataFile1a]; UIImage *image = [[UIImage alloc] initWithData:potraitImgData1a]; // This is causing my app to crash if I load more than 10 images! //[button1AImgVw setImage:image]; } NSLog(@"It went to END!"); } //Error I get when 10 images are selected. App does launch and work 2012-10-07 17:12:51.483 ABC-APP[7548:707] It went to END! 2012-10-07 17:12:51.483 ABC-APP [7531:707] Received memory warning. //App crashes with this error when there are 11 images 2012-10-07 17:30:26.339 ABC-APP[7548:707] It went to END! (gdb)


  • Log4j: Events appear in the wrong logfile

    - by Markus
    Hi there! To be able to log and trace some events I've added a LoggingHandler class to my java project. Inside this class I'm using two different log4j logger instances - one for logging an event and one for tracing an event into different files. The initialization block of the class looks like this: public void initialize() { System.out.print("starting logging server ..."); // create logger instances logLogger = Logger.getLogger("log"); traceLogger = Logger.getLogger("trace"); // create pattern layout String conversionPattern = "%c{2} %d{ABSOLUTE} %r %p %m%n"; try { patternLayout = new PatternLayout(); patternLayout.setConversionPattern(conversionPattern); } catch (Exception e) { System.out.println("error: could not create logger layout pattern"); System.out.println(e); System.exit(1); } // add pattern to file appender try { logFileAppender = new FileAppender(patternLayout, logFilename, false); traceFileAppender = new FileAppender(patternLayout, traceFilename, false); } catch (IOException e) { System.out.println("error: could not add logger layout pattern to corresponding appender"); System.out.println(e); System.exit(1); } // add appenders to loggers logLogger.addAppender(logFileAppender); traceLogger.addAppender(traceFileAppender); // set logger level logLogger.setLevel(Level.INFO); traceLogger.setLevel(Level.INFO); // start logging server loggingServer = new LoggingServer(logLogger, traceLogger, serverPort, this); loggingServer.start(); System.out.println(" done"); } To make sure that only only thread is using the functionality of a logger instance at the same time each logging / tracing method calls the logging method .info() inside a synchronized-block. One example looks like this: public void logMessage(String message) { synchronized (logLogger) { if (logLogger.isInfoEnabled() && logFileAppender != null) { logLogger.info(instanceName + ": " + message); } } } If I look at the log files, I see that sometimes a event appears in the wrong file. One example: trace 10:41:30,773 11080 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1267093 to vehicle 1055293 (slaveControl 1) trace 10:41:30,784 11091 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1156513 to vehicle 1105792 (slaveControl 1) trace 10:41:30,796 11103 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1104306 to vehicle 1055293 (slaveControl 1) trace 10:41:30,808 11115 INFO masterControl(192.168.2.21): vehicle 1327879 was pushed to slave control 1 10:41:30,808 11115 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1101572 to vehicle 106741 (slaveControl 1) trace 10:41:30,820 11127 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1055293 to vehicle 1104306 (slaveControl 1) I think that the problem occures everytime two event happen at the same time (here: 10:41:30,808). Does anybody has an idea how to solve my problem? I already tried to add a sleep() after the method call, but that doesn't helped ... 
BR, Markus Edit (interleaved output shown verbatim): logtrace 11:16:07,75511:16:07,755 1129711297 INFOINFO masterControl(192.168.2.21): string broadcast message was pushed from 1291400 to vehicle 1138272 (slaveControl 1)masterControl(192.168.2.21): vehicle 1333770 was added to slave control 1 or log 11:16:08,562 12104 INFO 11:16:08,562 masterControl(192.168.2.21): string broadcast message was pushed from 117772 to vehicle 1217744 (slaveControl 1) 12104 INFO masterControl(192.168.2.21): vehicle 1169775 was pushed to slave control 1 Edit 2: It seems like the problem only occurs if logging methods are called from inside an RMI thread (my client / server exchange information using RMI connections). ... Edit 3: I solved the problem myself: it seems like log4j is NOT completely thread-safe. After synchronizing all log / trace methods on a separate object, everything works fine. Maybe the library is writing the messages to a thread-unsafe buffer before writing them to the file?


  • Projections.count() and Projections.countDistinct() both result in the same query

    - by Kim L
    EDIT: I've edited this post completely, so that the new description of my problem includes all the details and not only what I previously considered relevant. Maybe this new description will help to solve the problem I'm facing. I have two entity classes, Customer and CustomerGroup. The relation between customer and customer groups is ManyToMany. The customer groups are annotated in the following way in the Customer class. @Entity public class Customer { ... @ManyToMany(mappedBy = "customers", fetch = FetchType.LAZY) public Set<CustomerGroup> getCustomerGroups() { ... } ... public String getUuid() { return uuid; } ... } The customer reference in the customer groups class is annotated in the following way @Entity public class CustomerGroup { ... @ManyToMany public Set<Customer> getCustomers() { ... } ... public String getUuid() { return uuid; } ... } Note that both the CustomerGroup and Customer classes also have an UUID field. The UUID is a unique string (uniqueness is not forced in the datamodel, as you can see, it is handled as any other normal string). What I'm trying to do, is to fetch all customers which do not belong to any customer group OR the customer group is a "valid group". The validity of a customer group is defined with a list of valid UUIDs. I've created the following criteria query Criteria criteria = getSession().createCriteria(Customer.class); criteria.setProjection(Projections.countDistinct("uuid")); criteria = criteria.createCriteria("customerGroups", "groups", Criteria.LEFT_JOIN); List<String> uuids = getValidUUIDs(); Criterion criterion = Restrictions.isNull("groups.uuid"); if (uuids != null && uuids.size() > 0) { criterion = Restrictions.or(criterion, Restrictions.in( "groups.uuid", uuids)); } criteria.add(criterion); When executing the query, it will result in the following SQL query select count(*) as y0_ from Customer this_ left outer join CustomerGroup_Customer customergr3_ on this_.id=customergr3_.customers_id left outer join CustomerGroup groups1_ on customergr3_.customerGroups_id=groups1_.id where groups1_.uuid is null or groups1_.uuid in ( ?, ? ) The query is exactly what I wanted, but with one exception. Since a Customer can belong to multiple CustomerGroups, left joining the CustomerGroup will result in duplicated Customer objects. Hence the count(*) will give a false value, as it only counts how many results there are. I need to get the amount of unique customers and this I expected to achieve by using the Projections.countDistinct("uuid"); -projection. For some reason, as you can see, the projection will still result in a count(*) query instead of the expected count(distinct uuid). Replacing the projection countDistinct with just count("uuid") will result in the exactly same query. Am I doing something wrong or is this a bug? === "Problem" solved. Reason: PEBKAC (Problem Exists Between Keyboard And Chair). I had a branch in my code and didn't realize that the branch was executed. That branch used rowCount() instead of countDistinct().


  • C# IOException: The process cannot access the file because it is being used by another process.

    - by Michiel Bester
    Hi, I have a slight problem. What my application is supose to do, is to watch a folder for any newly copied file with the extention '.XSD' open the file and assign the lines to an array. After that the data from the array should be inserted into a MySQL database, then move the used file to another folder if it's done. The problem is that the application works fine with the first file, but as soon as the next file is copied to the folder I get this exception for example: 'The process cannot access the file 'C:\inetpub\admission\file2.XPD' because it is being used by another process'. If two files on the onther hand is copied at the same time there's no problem at all. The following code is on the main window: public partial class Form1 : Form { static string folder = specified path; static FileProcessor processor; public Form1() { InitializeComponent(); processor = new FileProcessor(); InitializeWatcher(); } static FileSystemWatcher watcher; static void InitializeWatcher() { watcher = new FileSystemWatcher(); watcher.Path = folder; watcher.Created += new FileSystemEventHandler(watcher_Created); watcher.EnableRaisingEvents = true; watcher.Filter = "*.XPD"; } static void watcher_Created(object sender, FileSystemEventArgs e) { processor.QueueInput(e.FullPath); } } As you can see the file's path is entered into a queue for processing which is on another class called FileProcessor: class FileProcessor { private Queue<string> workQueue; private Thread workerThread; private EventWaitHandle waitHandle; public FileProcessor() { workQueue = new Queue<string>(); waitHandle = new AutoResetEvent(true); } public void QueueInput(string filepath) { workQueue.Enqueue(filepath); if (workerThread == null) { workerThread = new Thread(new ThreadStart(Work)); workerThread.Start(); } else if (workerThread.ThreadState == ThreadState.WaitSleepJoin) { waitHandle.Set(); } } private void Work() { while (true) { string filepath = RetrieveFile(); if (filepath != null) ProcessFile(filepath); else waitHandle.WaitOne(); } } private string RetrieveFile() { if (workQueue.Count > 0) return workQueue.Dequeue(); else return null; } private void ProcessFile(string filepath) { string xName = Path.GetFileName(filepath); string fName = Path.GetFileNameWithoutExtension(filepath); string gfolder = specified path; bool fileInUse = true; string line; string[] itemArray = null; int i = 0; #region Declare Db variables //variables for each field of the database is created here #endregion #region Populate array while (fileInUse == true) { FileStream fs = new FileStream(filepath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite); StreamReader reader = new StreamReader(fs); itemArray = new string[75]; while (!reader.EndOfStream == true) { line = reader.ReadLine(); itemArray[i] = line; i++; } fs.Flush(); reader.Close(); reader.Dispose(); i = 0; fileInUse = false; } #endregion #region Assign Db variables //here all the variables get there values from the array #endregion #region MySql Connection //here the connection to mysql is made and the variables are inserted into the db #endregion #region Test and Move file if (System.IO.File.Exists(gfolder + xName)) { System.IO.File.Delete(gfolder + xName); } Directory.Move(filepath, gfolder + xName); #endregion } } The problem I get occurs in the Populate array region. I read alot of other threads and was lead to believe that by flushing the file stream would help... 
I am also thinking of adding a try...catch so that if the file is processed successfully it is moved to gfolder, and if it fails it is moved to bfolder. Any help would be awesome. Thanks.


  • Drawing using Dynamic Array and Buffer Object

    - by user1905910
    I have a problem when creating the vertex array and the indices array. I don't know what really is the problem with the code, but I guess is something with the type of the arrays, can someone please give me a light on this? #define GL_GLEXT_PROTOTYPES #include<GL/glut.h> #include<iostream> using namespace std; #define BUFFER_OFFSET(offset) ((GLfloat*) NULL + offset) const GLuint numDiv = 2; const GLuint numVerts = 9; GLuint VAO; void display(void) { enum vertex {VERTICES, INDICES, NUM_BUFFERS}; GLuint * buffers = new GLuint[NUM_BUFFERS]; GLfloat (*squareVerts)[2] = new GLfloat[numVerts][2]; GLubyte * indices = new GLubyte[numDiv*numDiv*4]; GLuint delta = 80/numDiv; for(GLuint i = 0; i < numVerts; i++) { squareVerts[i][1] = (i/(numDiv+1))*delta; squareVerts[i][0] = (i%(numDiv+1))*delta; } for(GLuint i=0; i < numDiv; i++){ for(GLuint j=0; j < numDiv; j++){ //cada iteracao gera 4 pontos #define NUM_VERT(ii,jj) ((ii)*(numDiv+1)+(jj)) #define INDICE(ii,jj) (4*((ii)*numDiv+(jj))) indices[INDICE(i,j)] = NUM_VERT(i,j); indices[INDICE(i,j)+1] = NUM_VERT(i,j+1); indices[INDICE(i,j)+2] = NUM_VERT(i+1,j+1); indices[INDICE(i,j)+3] = NUM_VERT(i+1,j); } } glGenVertexArrays(1, &VAO); glBindVertexArray(VAO); glGenBuffers(NUM_BUFFERS, buffers); glBindBuffer(GL_ARRAY_BUFFER, buffers[VERTICES]); glBufferData(GL_ARRAY_BUFFER, sizeof(squareVerts), squareVerts, GL_STATIC_DRAW); glVertexPointer(2, GL_FLOAT, 0, BUFFER_OFFSET(0)); glEnableClientState(GL_VERTEX_ARRAY); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[INDICES]); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glClear(GL_COLOR_BUFFER_BIT); glColor3f(1.0,1.0,1.0); glDrawElements(GL_POINTS, 16, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0)); glutSwapBuffers(); } void init() { glClearColor(0.0, 0.0, 0.0, 0.0); gluOrtho2D((GLdouble) -1.0, (GLdouble) 90.0, (GLdouble) -1.0, (GLdouble) 90.0); } int main(int argv, char** argc) { glutInit(&argv, argc); glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE); glutInitWindowSize(500,500); glutInitWindowPosition(100,100); glutCreateWindow("myCode.cpp"); init(); glutDisplayFunc(display); glutMainLoop(); return 0; } Edit: The problem here is that drawing don't work at all. But I don't get any error, this just don't display what I want to display. Even if I put the code that make the vertices and put them in the buffers in a diferent function, this don't work. I just tried to do this: void display(void) { glBindVertexArray(VAO); glClear(GL_COLOR_BUFFER_BIT); glColor3f(1.0,1.0,1.0); glDrawElements(GL_POINTS, 16, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0)); glutSwapBuffers(); } and I placed the rest of the code in display in another function that is called on the start of the program. But the problem still
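    One likely culprit - an assumption based on the snippet, not something the post confirms: squareVerts and indices are heap pointers, so sizeof(squareVerts) and sizeof(indices) evaluate to the size of a pointer (4 or 8 bytes), and glBufferData uploads almost none of the data, which by itself makes the draw show nothing. Passing the byte counts explicitly avoids that; a small sketch that would replace the two upload calls:

        #define GL_GLEXT_PROTOTYPES
        #include <GL/glut.h>

        // Byte sizes are computed from the element counts instead of
        // applying sizeof() to a heap pointer.
        void uploadBuffers(const GLfloat (*verts)[2], GLuint vertCount,
                           const GLubyte* indices, GLuint indexCount,
                           GLuint vertexBuf, GLuint indexBuf) {
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuf);
            glBufferData(GL_ARRAY_BUFFER, vertCount * 2 * sizeof(GLfloat),
                         verts, GL_STATIC_DRAW);

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuf);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLubyte),
                         indices, GL_STATIC_DRAW);
        }

    If that is the issue, it also explains why moving the setup into a separate function changed nothing: the truncated upload happens either way.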


  • migrating Solaris to RH: network latency issue, tcp window size & other tcp parameters

    - by Bastien
    Hello I have a client/server app (Java) that I'm migrating from Solaris to RH Linux. since I started running it in RH, I noticed some issues related to latency. I managed to isolate the problem that looks like this: client sends 5 messages (32 bytes each) in a row (same application timestamp) to the server. server echos messages. client receives replies and prints round trip time for each msg. in Solaris, all is well: I get ALL 5 replies at the same time, roughly 80ms after having sent original messages (client & server are several thousands miles away from each other: my ping RTT is 80ms, all normal). in RH, first 3 messages are echoed normally (they arrive 80ms after they've been sent), however the following 2 arrive 80ms later (so total 160ms RTT). the pattern is always the same. clearly looked like a TCP problem. on my solaris box, I had previously configured the tcp stack with 2 specific options: disable nagle algorithm globally set tcp_deferred_acks_max to 0 on RH, it's not possible to disable nagle globally, but I disabled it on all of my apps' sockets (TCP_NODELAY). so I started playing with tcpdump (on the server machine), and compared both outputs: SOLARIS: 22 2.085645 client server TCP 56150 > 6006 [PSH, ACK] Seq=111 Ack=106 Win=66672 Len=22 "MSG_1 RCV" 23 2.085680 server client TCP 6006 > 56150 [ACK] Seq=106 Ack=133 Win=50400 Len=0 24 2.085908 client server TCP 56150 > 6006 [PSH, ACK] Seq=133 Ack=106 Win=66672 Len=22 "MSG_2 RCV" 25 2.085925 server client TCP 6006 > 56150 [ACK] Seq=106 Ack=155 Win=50400 Len=0 26 2.086175 client server TCP 56150 > 6006 [PSH, ACK] Seq=155 Ack=106 Win=66672 Len=22 "MSG_3 RCV" 27 2.086192 server client TCP 6006 > 56150 [ACK] Seq=106 Ack=177 Win=50400 Len=0 28 2.086243 server client TCP 6006 > 56150 [PSH, ACK] Seq=106 Ack=177 Win=50400 Len=21 "MSG_1 ECHO" 29 2.086440 client server TCP 56150 > 6006 [PSH, ACK] Seq=177 Ack=106 Win=66672 Len=22 "MSG_4 RCV" 30 2.086454 server client TCP 6006 > 56150 [ACK] Seq=127 Ack=199 Win=50400 Len=0 31 2.086659 server client TCP 6006 > 56150 [PSH, ACK] Seq=127 Ack=199 Win=50400 Len=21 "MSG_2 ECHO" 32 2.086708 client server TCP 56150 > 6006 [PSH, ACK] Seq=199 Ack=106 Win=66672 Len=22 "MSG_5 RCV" 33 2.086721 server client TCP 6006 > 56150 [ACK] Seq=148 Ack=221 Win=50400 Len=0 34 2.086947 server client TCP 6006 > 56150 [PSH, ACK] Seq=148 Ack=221 Win=50400 Len=21 "MSG_3 ECHO" 35 2.087196 server client TCP 6006 > 56150 [PSH, ACK] Seq=169 Ack=221 Win=50400 Len=21 "MSG_4 ECHO" 36 2.087500 server client TCP 6006 > 56150 [PSH, ACK] Seq=190 Ack=221 Win=50400 Len=21 "MSG_5 ECHO" 37 2.165390 client server TCP 56150 > 6006 [ACK] Seq=221 Ack=148 Win=66632 Len=0 38 2.166314 client server TCP 56150 > 6006 [ACK] Seq=221 Ack=190 Win=66588 Len=0 39 2.364135 client server TCP 56150 > 6006 [ACK] Seq=221 Ack=211 Win=66568 Len=0 REDHAT: 17 2.081163 client server TCP 55879 > 6006 [PSH, ACK] Seq=111 Ack=106 Win=66672 Len=22 "MSG_1 RCV" 18 2.081178 server client TCP 6006 > 55879 [ACK] Seq=106 Ack=133 Win=5888 Len=0 19 2.081297 server client TCP 6006 > 55879 [PSH, ACK] Seq=106 Ack=133 Win=5888 Len=21 "MSG_1 ECHO" 20 2.081711 client server TCP 55879 > 6006 [PSH, ACK] Seq=133 Ack=106 Win=66672 Len=22 "MSG_2 RCV" 21 2.081761 client server TCP 55879 > 6006 [PSH, ACK] Seq=155 Ack=106 Win=66672 Len=22 "MSG_3 RCV" 22 2.081846 server client TCP 6006 > 55879 [PSH, ACK] Seq=127 Ack=177 Win=5888 Len=21 "MSG_2 ECHO" 23 2.081995 server client TCP 6006 > 55879 [PSH, ACK] Seq=148 Ack=177 Win=5888 Len=21 "MSG_3 ECHO" 24 2.082011 client server TCP 55879 > 
6006 [PSH, ACK] Seq=177 Ack=106 Win=66672 Len=22 "MSG_4 RCV" 25 2.082362 client server TCP 55879 > 6006 [PSH, ACK] Seq=199 Ack=106 Win=66672 Len=22 "MSG_5 RCV" 26 2.082377 server client TCP 6006 > 55879 [ACK] Seq=169 Ack=221 Win=5888 Len=0 27 2.171003 client server TCP 55879 > 6006 [ACK] Seq=221 Ack=148 Win=66632 Len=0 28 2.171019 server client TCP 6006 > 55879 [PSH, ACK] Seq=169 Ack=221 Win=5888 Len=42 "MSG_4 ECHO + MSG_5 ECHO" 29 2.257498 client server TCP 55879 > 6006 [ACK] Seq=221 Ack=211 Win=66568 Len=0 so, I got confirmation things are not working correctly for RH: packet 28 is sent TOO LATE, it looks like the server is waiting for packet 27's ACK before doing anything. seems to me it's the most likely reason... then I realized that the "Win" parameters are different on Solaris & RH dumps: 50400 on Solaris, only 5888 on RH. that's another hint... I read the doc about the slide window & buffer window, and played around with the rcvBuffer & sendBuffer in java on my sockets, but never managed to change this 5888 value to anything else (I checked each time directly with tcpdump). does anybody know how to do this ? I'm having a hard time getting definitive information, as in some cases there's "auto-negotiation" that I might need to bypass, etc... I eventually managed to get only partially rid of my initial problem by setting the "tcp_slow_start_after_idle" parameter to 0 on RH, but it did not change the "win" parameter at all. the same problem was there for the first 4 groups of 5 messages, with TCP retransmission & TCP Dup ACK in tcpdump, then the problem disappeared altogether for all following groups of 5 messages. It doesn't seem like a very clean and/or generic solution to me. I'd really like to reproduce the exact same conditions under both OSes. I'll keep researching, but any help from TCP gurus would be greatly appreciated ! thanks !
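    The advertised window follows the receive buffer, and on Linux the buffer has to be fixed before the connection is established or autotuning (net.ipv4.tcp_rmem) decides it for you; the delayed-ACK behaviour visible in the capture can additionally be suppressed with TCP_QUICKACK. The application in the question is Java - Socket.setReceiveBufferSize() called before connect() plays the same role there - but the underlying socket options look like this sketch (the 256 KB figure is only an example value):

        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>

        // Linux: call before connect()/listen() so the window scaling is
        // negotiated from this buffer size rather than the autotuned default.
        void tune_socket(int fd) {
            int buf = 256 * 1024;   // example size, not a recommendation
            int one = 1;

            setsockopt(fd, SOL_SOCKET,  SO_RCVBUF,    &buf, sizeof(buf));
            setsockopt(fd, SOL_SOCKET,  SO_SNDBUF,    &buf, sizeof(buf));
            setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,  &one, sizeof(one));  // disable Nagle
            setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));  // ack immediately
        }

    TCP_QUICKACK is not sticky - the kernel may clear it again after data arrives - so it has to be re-applied around reads if delayed ACKs turn out to be the real cause of the extra 80 ms.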


  • IE7 doesn't render part of page until the window resizes or switch between tabs

    - by BlackMael
    I have a problem with IE7. I have a fixed layout for keeping the header and a sidepanel fixed on a page leaving only the "main content" area switch can happily scroll it's content. This layout works perfectly fine for IE6 and IE8, but sometimes one page may start "hiding" the content that should be showing in the "main content" area. The page finishes loading just fine. For a split second IE7 will render the main content just fine and then it will immediately hide it from view.. somewhere.. It would also seem that it only experiences this problem when there is enough content to force the "main content" area to scroll. By resizing the window or switching to another open tab and back again will cause IE7 to show the page as it was intended. Note the same problem does occur with IE8 in compatibility mode, but the page is rendered correctly in IE8 mode. If need be I can attach the basic CSS styling I use, but I first want to see if this is a known issue with IE7. Does IE7 have issues with positioned layout and overflow scrolling that is sometimes likes to forgot to finish rendering the page correctly until some window redraw event forces to finish rendering? Please remember, this exact same layout is used across multiple pages in the site as it is set up in a master page. It is just (in this case) one page that is experiencing this problem. Other pages with the exact same layout do render correctly. Even if the main content is full enough to also scroll. UPDATE: A related question which doesn't have an answer at this point. LATE UPDATE: Adding example masterpage and css Please note this same layout is the same for all the pages in the application. My problem with IE7 only occurs on one such page. All other pages have happily render correctly in IE7. Just one page, using the exact same layout, has issues where it sometimes hides the content in the "work-space" div. 
The master page <%@ Master Language="VB" CodeFile="MasterPage.master.vb" Inherits="shared_templates_MasterPage" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> <link rel="Stylesheet" type="text/css" href="~/common/yui/2.7.0/build/reset-fonts/reset-fonts.css" runat="server" /> <link rel="Stylesheet" type="text/css" href="~/shared/css/layout.css" runat="server" /> <asp:ContentPlaceHolder ID="head" runat="server" /> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server" /> <div id="app-header"> </div> <div id="side-panel"> </div> <div id="work-space"> <asp:ContentPlaceHolder ID="WorkSpaceContentPlaceHolder" runat="server" /> </div> <div id="status-bar"> <asp:ContentPlaceHolder ID="StatusBarContentPlaceHolder" runat="server" /> </div> </form> </body> </html> The layout.css html { overflow: hidden; } body { overflow: hidden; padding: 0; margin: 0; width: 100%; height: 100%; background-color: white; } body, table, td, th, select, textarea, input { font-family: Tahoma, Arial, Sans-Serif; font-size: 9pt; } p { padding-left: 1em; margin-bottom: 1em; } #app-header { position: absolute; top: 0; left: 0; width: 100%; height: 80px; background-color: #dcdcdc; border-bottom: solid 4px #000; } #side-panel { position: absolute; top: 84px; left: 0px; bottom: 0px; overflow: auto; padding: 0; margin: 0; width: 227px; background-color: #AABCCA; border-right: solid 1px black; background-repeat: repeat-x; padding-top: 5px; } #work-space { position: absolute; top: 84px; left: 232px; right: 0px; padding: 0; margin: 0; bottom: 22px; overflow: auto; background-color: White; } #status-bar { position: absolute; height: 20px; left: 228px; right: 0px; padding: 0; margin: 0; bottom: 0px; border-top: solid 1px #c0c0c0; background-color: #f0f0f0; } The Default.aspx <%@ Page Title="Test" Language="VB" MasterPageFile="~/shared/templates/MasterPage.master" AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" %> <asp:Content ID="WorkspaceContent" ContentPlaceHolderID="WorkSpaceContentPlaceHolder" Runat="Server"> Workspace <asp:ListView ID="DemoListView" runat="server" DataSourceID="DemoObjectDataSource" ItemPlaceholderID="DemoPlaceHolder"> <LayoutTemplate> <table style="border: 1px solid #a0a0a0; width: 600px"> <colgroup> <col width="80" /> <col /> <col width="80" /> <col width="120" /> </colgroup> <tbody> <asp:PlaceHolder ID="DemoPlaceHolder" runat="server" /> </tbody> </table> </LayoutTemplate> <ItemTemplate> <tr> <th><%#Eval("ID")%></th> <td><%#Eval("Name")%></td> <td><%#Eval("Size")%></td> <td><%#Eval("CreatedOn", "{0:yyyy-MM-dd HH:mm:ss}")%></td> </tr> </ItemTemplate> </asp:ListView> <asp:ObjectDataSource ID="DemoObjectDataSource" runat="server" OldValuesParameterFormatString="original_{0}" SelectMethod="GetData" TypeName="DemoLogic"> <SelectParameters> <asp:Parameter Name="path" Type="String" /> </SelectParameters> </asp:ObjectDataSource> </asp:Content> <asp:Content ID="StatusContent" ContentPlaceHolderID="StatusBarContentPlaceHolder" Runat="Server"> Ready OK. </asp:Content>


  • How to get full query string parameters not UrlDecoded

    - by developerit
    Introduction While developing Developer IT’s website, we came across a problem when the user searches for keywords containing special characters like the plus ‘+’ char. We found it while looking for C++ in our search engine. The request parameter output in ASP.NET was “c “. I found it strange that it removed the ‘++’ and replaced it with a space… Analysis After a bit of Googling and Reflection, it turns out that ASP.NET calls UrlDecode on each parameter retrieved by the Request(“item”) method. The Request.Params property is affected by this too, since it mashes the QueryString, Forms and other collections into a single one. Workaround Finally, I solved the puzzle using the Request.RawUrl property and parsing it with the same RegEx I use in my URL rewriter. RawUrl is not affected by any decoding; as its name says, it’s raw. Published on http://www.developerit.com/


  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.
Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel upload performs better than sequential upload.
- There is no significant performance difference between parallel upload with 4 threads and with 8 threads.
- Transferring chunks as binaries performs better than base64 strings.
- In all cases, a chunk size of 1MB - 2MB gives the best performance.
Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Comments on Comments

    - by Joe Mayo
    I almost tweeted a reply to Capar Kleijne's question about comments on Twitter, but realized that my opinion exceeded 140 characters. The following is based upon my experience with extremes and approaches that I find useful in code comments. There are a couple extremes that I've seen and reasons why people go the distance in each approach. The most common extreme is no comments in the code at all.  A few bad reasons why this happens is because a developer is in a hurry, sloppy, or is interested in job preservation. The unfortunate result is that the code is difficult to understand and hard to maintain. The drawbacks to no comments in code are a primary reason why teachers drill the need for commenting code into our heads.  This viewpoint assumes the lack of comments are bad because the code is bad, but there is another reason for not commenting that is gaining more popularity. I've heard/and read that code should be self documenting. Following this thought pattern, if code is well written with meaningful names, there should not be a reason for comments.  An addendum to this argument is that comments are often neglected and get out-of-date, but the code is what is kept up-to-date. Presumably, if code contained very good naming, it would be easy to maintain.  This is a noble perspective and I like the practice of meaningful naming of identifiers. However, I think it's also an extreme approach that doesn't cover important cases.  i.e. If an identifier is named badly (subjective differences in opinion) or not changed appropriately during maintenance, then the badly named identifier is no more useful than a stale comment. These were the two no-comment extremes, so let's look at the too many comments extreme. On a regular basis, I'll see cases where the code is over-commented; not nearly as often as the no-comment scenarios, but still prevalent.  These are examples of where every single line in the code is commented.  These comments make the code harder to read because they get in the way of the algorithm.  In most cases, the comments parrot what each line of code does.  If a developer understands the language, then most statements are immediately intuitive.  i.e. what use is it to say that I'm assigning foo to bar when it's clear what the code is doing. I think that over-commenting code is a waste of time that slows down initial development and maintenance.  Understandably, the developer's intentions are admirable because they've had it beaten into their heads that they must comment. However, I think it's an extreme and prefer a more moderate approach. I don't think the extremes do justice to code because each can make maintenance harder.  No comments on bad code is obviously a problem, but the other two extremes are subtle and require qualification to address properly. The problem I see with the code-as-documentation approach is that it doesn't lift the developer out of the algorithm to identify dependencies, intentions, and hacks. Any developer can read code and follow an algorithm, but they still need to know where it fits into the big picture of the application. Because of indirections with language features like interfaces, delegates, and virtual members, code can become complex.  Occasionally, it's useful to point out a nuance or reason why a piece of code is there. i.e. 
If you're building an app that communicates via HTTP, you'll have certain headers to include for the endpoint, and it could be useful to point out why the code for setting those header values is there and how they affect the application. An argument against this could be that you should extract that code into a separate method with a meaningful name to describe the scenario. My problem with such an approach would be that your code base becomes even more difficult to navigate and work with because you have all of this extra code just to make the code more meaningful. My opinion is that a simple, well-stated comment explaining the reasons and intention for the code is more natural and convenient to the initial developer and maintainer. I just don't agree with the approach of going out of your way to avoid making a comment. I'm also concerned that some developers would take this approach as an excuse to not comment their bad code. Another area where I like comments is documentation comments. Java has them, and so do C# and VB. They're convenient because we can build automated tools that extract these comments. These extracted comments are often much better than no documentation at all. The "go read the code" answer doesn't always fulfill the need for a quick summary of an API. To summarize, I think that the extremes of no comments and too many comments are less than desirable approaches. I prefer documentation comments to explain each class and member (API level) and code comments as necessary to supplement well-written code. Joe
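To make the documentation-comment point concrete, here is a small, purely illustrative C# sketch (the class, method, and endpoint are invented for this example): the XML documentation comment can be pulled out by automated tools into an API summary, while the single inline comment records intent that the code alone does not convey.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class PaymentClient
{
    /// <summary>
    /// Posts a JSON payload to the payment endpoint and returns the raw response body.
    /// </summary>
    /// <param name="client">An already-configured HttpClient.</param>
    /// <param name="payload">The JSON payload to send.</param>
    /// <returns>The response body as a string.</returns>
    public static async Task<string> PostPaymentAsync(HttpClient client, string payload)
    {
        // The endpoint rejects requests without an explicit Accept header, so set it
        // here rather than relying on the client's defaults.
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        var response = await client.PostAsync(
            "https://example.com/api/payments",
            new StringContent(payload, Encoding.UTF8, "application/json"));

        return await response.Content.ReadAsStringAsync();
    }
}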

    Read the article

  • SQL Server service accounts and SPNs

    - by simonsabin
Service Principal Names (SPNs) are a must for Kerberos authentication, which is a must when using SharePoint, Reporting Services and SQL Server where you access one server that then needs to access another resource; this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation. For this to work there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you; after all, you are not giving the service your password. To do this you need to be using Kerberos. The following is my simple interpretation of how Kerberos works. I find the Kerberos documentation ridiculously complex, so the following might be slightly wrong, but I think it's close enough. Kerberos works on a ticketing system; the principle is that you get a security token from AD and then you can pass that to the service in the middle, which can then use that token to impersonate you. For that to work AD has to be able to identify who is allowed to use the token, in this case the service account. But how do you as a client know what service account the service in the middle is configured with? The answer is SPNs. The SPN is the mapping between your logical connection and the service account. One type of SPN is for the DNS name of the server and the port, i.e. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account? Well, it can be done in two ways: either you can have a mapping defined in AD, or AD can use a default mapping (this is something I didn't know about). To map the SPN in AD you have to add the SPN to the user account; this is documented in the first link below, either directly or using a tool called SetSPN. You might say that is complex; well, it is, and that's why SQL Server tries to do it for you. At start up it tries to connect to AD and set the SPN on the account it is running as. Clearly that can only happen IF SQL is running as a domain account AND, importantly, it has permission to do so. By default a normal domain user account doesn't have the correct permission, which is why so many people have this problem. If the account is a domain admin then it will have permission, but none of us run SQL using domain admin accounts, do we? You might also note that the SPN contains the port number (this isn't a requirement now in SQL 2008, but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance) what do you do? Well, every time the port changes you need to change the SPN allocated to the account. That's why it's advised to let SQL Server register the SPN itself. You may also have thought, well, what happens if I change my service account, won't that lead to two accounts with the same SPN? Possibly. Having two accounts with the same SPN is definitely a problem. Why? Well, because if there are two accounts Kerberos can't identify the exact account that the service is running as, it could be either account, and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs. Reading this you will probably be thinking, "Oh my goodness, this is really difficult." It is. However, I've found today, while investigating something else, that there is an easy option. Use Network Service as your service account. Network Service is a special account and is tied to the computer. It appears that Network Service has the rights in AD to set an SPN mapping for the computer account.
This then allows the SPN mapping to work. I believe this also works for the local system account. To get all the SPNs in your AD run the following; it could produce a large file, so you might want to restrict it to a specific OU or CN:
ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt
You will read in the links below why you need SQL to register the SPN and how this is done:
How to use Kerberos authentication in SQL Server - http://support.microsoft.com/kb/319723
Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx
Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx
Summary The only reason I personally know to use a domain account is when you can't get Kerberos to work and you want to do BULK INSERT or use another network service that requires access to a remote server. In this case you have to resort to using SQL authentication, and SQL Server uses its service account to access the remote service, and thus you need a domain account. You might need this if using some forms of replication. I've always found Kerberos awkward to set up and so fallen back to this domain account approach. So in summary, to get Kerberos to work try using the network service or local system accounts. For a great post from Adam Saxton of the SQL Server support team go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx

    Read the article

  • Hack Extension Files to Make Them Version-Compatible for Firefox

    - by Asian Angel
    A well known drawback in using Firefox is the problem with extension compatibility when a new major version is released. Whether it is for a new extension that you are trying for the first time or an old favorite we have a way to get those extensions working for you again. There are multiple reasons why you might want to choose this method to fix a non-compatible extension: You are uncomfortable with tweaking the “about:config” settings You prefer to maintain the original “about:config” settings in a pristine state and like having compatibility checking active You are looking to gain some “geek cred” Keep in mind that most extensions will work perfectly well with a new version of Firefox and simply have the “version compatibility number” problem. But once in a while there may be one that needs to have some work done on it by the extension’s author. The Problem Here is a perfect example of everyone’s least favorite “extension message”. This is the last thing that you need when all that you want is for your favorite extension (or a new one) to work on a fresh clean install. Note: This works nicely to “replace” non-compatible extensions already present in your browser if you are simply upgrading. Hacking the XPI File For this procedure you will need to manually download the extension to your hard-drive (right click on the extension’s “Install Button” and select “Save As”). Once you have done that you are ready to start hacking the extension. For our example we chose the “GCal Popup Extension”. The best thing to do is place the extension in a new folder (i.e. the Desktop or other convenient location) then unzip it just the same way that you would with any regular zip file. Once it is unzipped you will see the various folders and files that were in the “xpi file” (we had four files here but depending on the extension the number may vary). There is only one file that you need to focus on…the “install.rdf” file. Note: At this point you should move the original extension file to a different location (i.e. outside of the folder) so that it is no longer present. Open the file in “Notepad” so that you can change the number for the “maxVersion”. Here the number is listed as “3.5.*” but we needed to make it higher… Replacing the “5” with a “7” is all that we needed to do. Once you have entered your new “maxVersion” number save the file. At this point you will need to re-zip all of the files back into a single file. Make certain that you “create” a file with the “.zip file extension” otherwise this will not work. Once you have the new zip file created you will need to rename the entire file including the “file extension”. For our example we copied and pasted the original extension name. Once you have changed the name click outside of the “text area”. You will see a small message window like this asking for confirmation…click “Yes” to finish the process. Now your modified/updated extension is ready to install. Drag the extension into your browser to install it and watch that wonderful “Restart to complete the installation.” message appear. As soon as your browser starts you can check the “Add-ons Manager Window” and see the version compatibility numbers for the extension. Looking very very nice! And just like that your extension should be up and running without any problems. 
Conclusion If you are looking to try something new, gain some geek cred, or just want to keep your Firefox install as close to the original condition as possible, this method should get those extensions working nicely for you again.

    Read the article

  • Windows Phone 7 development: first impressions

    - by DigiMortal
    After hard week in work I got some free time to play with Windows Phone 7 CTP developer tools. Although my first test application is still unfinished I think it is good moment to share my first experiences to you. In this posting I will give you quick overview of Windows Phone 7 developer tools from developer perspective. If you are familiar with Visual Studio 2010 then you will feel comfortable because Windows Phone 7 CTP developer tools base on Visual Studio 2010 Express. Project templates There are five project templates available. Three of them are based on Silverlight and two on XNA Game Studio: Windows Phone Application (Silverlight) Windows Phone List Application (Silverlight) Windows Phone Class Library (Silverlight) Windows Phone Game (XNA Game Studio) Windows Phone Game Library (XNA Game Studio) Currently I am writing to test applications. One of them is based on Windows Phone Application and the other on Windows Phone List Application project template. After creating these projects you see the following views in Visual Studio. Windows Phone Application. Click on image to enlarge. Windows Phone List Application. Click on image to enlarge.  I suggest you to use some of these templates to get started more easily. Windows Phone 7 emulator You can run your Windows Phone 7 applications on Windows Phone 7 emulator that comes with developer tools CTP. If you run your application then emulator is started automatically and you can try out how your application works in phone-like emulator. You can see screenshot of emulator on right. Currently there is opened Windows Phone List Application as it is created by default. Click on image to enlarge it. Emulator is a little bit slow and uncomfortable but it works pretty well. This far I have caused only couple of crashes during my experiments. In these cases emulator works but Visual Studio gets stuck because it cannot communicate with emulator. One important note. Emulator is based on virtual machine although you can see only phone screen and options toolbar. If you want to run emulator you must close all virtual machines running on your machine and run Visual Studio 2010 as administrator. Once you run emulator you can keep it open because you can stop your application in Visual Studio, modify, compile and re-deploy it without restarting emulator. Designing user interfaces You can design user interface of your application in Visual Studio. When you open XAML-files it is displayed in window with two panels. Left panel shows you device screen and works as visual design environment while right panel shows you XAML mark-up and let’s you modify XML if you need it. As it is one of my very first Silverlight applications I felt more comfortable with XAML editor because property names in property boxes of visual designer confused me a little bit. Designer panel is not very good because it is visually hard to follow. It has black background that makes dark borders of controls very hard to see. If you have monitor with very high contrast then it is may be not a real problem. I have usual monitor and I have problem. :) Putting controls on design surface, dragging and resizing them is also pretty painful. Some controls are drawn correctly but for some controls you have to set width and height in XML so they can be resized. After some practicing it is not so annoying anymore. On the right you can see toolbox with some controllers. This is all you get out of the box. But it is sufficient to get started. 
After getting some experience you can create your own controls or use existing ones from other vendors or developers. If it is your first time doing stuff with Silverlight, keep Google open – you will need it a lot. After getting over the first shock you get the point very quickly and start developing at normal speed. :) Writing source code Writing source code is the most familiar part of this exercise: the good old Visual Studio code editor with all the nice features it has. But here you also get some surprises: The anatomy of Silverlight controls is a little bit different from that of user controls in web and forms projects. Windows Phone 7 doesn't run on a full version of Windows (I bet it is some version of Windows CE or something like that), so there are fewer system classes you can use. Some familiar classes have fewer methods than in the full version of the .NET Framework, and in these cases you have to write all the code yourself or find libraries or source code somewhere. These problems are really not so much problems as limitations, and you get over them easily. Conclusion Windows Phone 7 CTP developer tools help you do a lot of things on Windows Phone 7. Although I expected better performance from the tools, I think the current performance is not a problem. So far my first test project is going very well and Google has an answer for almost every question. Windows Phone 7 is a mobile device and therefore has fewer hardware resources than a desktop computer. This is why the toolset is so limited. The more memory you need, the slower the device gets and, as you may guess, the more battery it needs. If you are writing apps for mobile devices, do your best to make your application use as few resources as possible and run as fast as possible.

    Read the article

  • Handling Trailing Delimiters in HL7 Messages

    - by Thomas Canter
    Applies to: BizTalk Server 2006 with the HL7 1.3 Accelerator Outline of the problem Trailing Delimiters are empty values at the end of an object in a HL7 ER7 formatted message. Examples: Empty Field NTE|P| NTE|P|| Empty component ORC|1|725^ Empty Subcomponent ORC|1|||||27& Empty repeat OBR|1||||||||027~ Trailing delimiters indicate the following object exists and is empty, which is quite different from null, null is an explicit value indicated by a pair of double quotes -> "". The BizTalk HL7 Accelerator by default does not allow trailing delimiters. There are three methods to allow trailing delimiters. NOTE: All Schemas always allow trailing delimiters in the MSH Segment Using party identifiers MSH3.1 – Receive/inbound processing, using this value as a party allows you to configure the system to allow inbound trailing delimiters. MSH5.1 – Send/outbound processing, using this value as a party allows you to configure the system to allow outbound trailing delimiters. Generally, if you allow inbound trailing delimiters, unless you are willing to programmatically remove all trailing delimiters, then you need to configure the send to allow trailing delimiters. Add the appropriate parties to the BizTalk Parties list from these two fields in your message stream. Open the BizTalk HL7 Configuration tool and for each party check the "Allow trailing delimiters (separators)" check box on the Validation tab. Disadvantage – Each MSH3.1 and MSH5.1 value must be represented in the parties list and configured. Advantage – granular control over system behavior for each inbound/outbound system. Using instance properties of a pipeline used in a send port or receive location. Open the BizTalk Server Administration console locate the send port or receive location that contains the BTAHL72XReceivePipeline or BTAHL72XSendPipeline pipeline. Open the properties To the right of the pipeline selected locate the […] ellipses button In the property list, locate the "TrailingDelimiterAllowed" property and set it to True. Advantage – All messages through a particular Send Port or Receive Location will allow trailing delimiters. Disadvantage – Must configure each Send Port or Receive Location. No granular control over which remote parties will send or receive messages with trailing delimiters. Using a custom pipeline that uses a pre-configured BTA HL7 Pipeline component. Use Visual Studio to construct a custom receive and send pipeline using the appropriate assembler or dissasembler. Set the component property to "TrailingDelimitersAllowed" to True Compile and deploy the custom pipeline Use the custom pipeline instead of the standard pipeline for all HL7 message processing Advantage – All messages using the custom pipeline will automatically allow trailing delimiters. Disadvantage – Requires custom coding and development to create and deploy the custom pipeline. No granular control over which remote parties will send or receive messages with trailing delimiters. What does a Trailing Delimiter do to the XML Schema? Allowing trailing delimiters does not have the impact often expected in the actual XML Schema.The Schema reproduces the message with no data loss.Thus, the message when represented in XML must contain the extra fields, in order to reproduce the outbound message.Thus, a trialing delimiter results in an empty XML field.Trailing Delmiters are not stripped from the inbound message. 
Example:
<PID_21>44172</PID_21>
<PID_21>9257</PID_21> -> the original maximum number of repeats
<PID_21></PID_21> -> the empty repeated field
Allowing trailing delimiters does not remove the trailing delimiters from the message; it simply suppresses the check that would cause a message with trailing delimiters to fail parsing. When can you not fix the problem by enabling trailing delimiters? Each object in a message must have a location in the target BTAHL7 schema for its content to reside. If you have more objects in the message than are contained at that location, then enabling trailing delimiters will not resolve the problem. The schema must be extended to accommodate the empty message content. Examples:
Extra field: NTE|P|||| -> Only 4 fields in the NTE segment; the 4th field exists, but is empty.
Extra component: PID|1|1523|47^^^^^^^ -> Only 5 components in a CX data type; the 5th component exists, but is empty.
Extra subcomponent: ORC|1|||||27&& -> Only 2 subcomponents in a CQ data type; the 3rd subcomponent is empty, but exists.
Extra repeat: PID|1||||||||||||||||||||4419~5217~ -> Only 2 repeats allowed for the field "Mother's identifier"; the repeat is empty, but exists.
In each of these cases, you must locate the failing object and extend the type to allow an additional object of that type.
Field: Add a field of ST to the end of the segment with a suitable name in the segments_nnn.xsd.
Component: Create a new custom CX data type (i.e. CX_XtraComp) in the datatypes_nnn.xsd and add a new component to the custom CX data type. Update the field in the segments_nnn.xsd file to use the custom data type instead of the standard data type.
Subcomponent: Create a new custom CQ data type that accepts an additional TS value at the end of the data type. Create a custom TQ data type that uses the new custom CQ data type as the first subcomponent. Modify the ORC segment to use the new CQ data type at ORC.7 instead of the standard CQ data type.
Repeat: Modify the field definition for PID.21 in the segments_nnn.xsd to allow more repeats in the field.

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who has followed my blog for many years, you know that I’ve been going at POP Forums now for over almost 15 years. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years. One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world, is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that. The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long running page to mail users in real-time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did is launched a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working. So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.) It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing: public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null) { var helper = new UrlHelper(controller.Request.RequestContext); var requestUrl = controller.Request.Url; if (requestUrl == null) return String.Empty; var url = requestUrl.Scheme + "://"; url += requestUrl.Host; url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : ""); url += helper.Action(actionName, controllerName, routeValues); return url; } And yes, that should have been done with a string builder. 
This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work: Func<User, string> unsubscribeLinkGenerator = user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) }); _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator); Cool, right? Actually, not so much. If you look back at the helper, this delegate will then depend on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn't already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request. It's already inefficient that I'm building the entire email for every user, but going back to check the routing table for the right link every time isn't a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you're wondering why I didn't just use the curly braces up front, it's because they get URL encoded: var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" }); baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}"); Func<User, string> unsubscribeLinkGenerator = user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user)); _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator); And wouldn't you know it, the new solution works just fine. It's still kind of hacky and inefficient, but it will work until this somehow breaks too.
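For reference, the earlier "move the thread to a private static variable" fix looks roughly like the sketch below. This is a minimal illustration rather than the actual POP Forums code; the class shape, field name, and body are assumptions (User is the forum's own user type), and it still carries the caveat mentioned above about app recycles killing the thread.

using System;
using System.Threading;

public class MailingListService
{
    // Holding the worker in a private static field keeps a root to the thread after
    // the controller action that started it returns, so it is not collected along
    // with the request that spawned it.
    private static Thread _mailWorker;

    public void MailUsers(string subject, string body, string htmlBody,
        Func<User, string> unsubscribeLinkGenerator)
    {
        _mailWorker = new Thread(() =>
        {
            // Read the subscribed users and write the queued, formatted emails here.
        });
        _mailWorker.IsBackground = true;
        _mailWorker.Start();
    }
}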

    Read the article

  • Anti-Forgery Request in ASP.NET MVC and AJAX

    - by Dixin
    Background To secure websites from cross-site request forgery (CSRF, or XSRF) attack, ASP.NET MVC provides an excellent mechanism: The server prints tokens to cookie and inside the form; When the form is submitted to server, token in cookie and token inside the form are sent by the HTTP request; Server validates the tokens. To print tokens to browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> which writes to token to the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and the cookie: __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, they are both sent to server. [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions to validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, I encountered 2 problems: It is expected to add [ValidateAntiForgeryToken] to each controller, but actually I have to add it for each POST actions, which is a little crazy; After anti-forgery validation is turned on for server side, AJAX POST requests will consistently fail. Specify validation on controller (not on each action) Problem For the first problem, usually a controller contains actions for both HTTP GET and HTTP POST requests, and usually validations are expected for HTTP POST requests. So, if the [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become always invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { [HttpGet] public ActionResult Index() // Index page cannot work at all. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If user sends a HTTP GET request from a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is, [ValidateAntiForgeryToken] attribute must be distributed to each HTTP POST action in the application:public class SomeController : Controller { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... 
} Solution To avoid a large number of [ValidateAntiForgeryToken] attributes (one attribute for one HTTP POST action), I created a wrapper class of ValidateAntiForgeryTokenAttribute, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // Actions for HTTP GET requests are not affected. // Only HTTP POST requests are validated. } Now one single attribute on controller turns on validation for all HTTP POST actions. Submit token via AJAX Problem For AJAX scenarios, when request is sent by JavaScript instead of form:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST requests will always be invalid, because server side code cannot see the token in the posted data. Solution The token must be printed to browser then submitted back to server. So first of all, HtmlHelper.AntiForgeryToken() must be called in the page where the AJAX POST will be sent. Then jQuery must find the printed token in the page, and post it:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated in a tiny jQuery plugin:(function ($) { $.getAntiForgeryToken = function () { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. return $("input[type='hidden'][name='__RequestVerificationToken']").val(); }; var addToken = function (data) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } data = data ? data + "&" : ""; return data + "__RequestVerificationToken=" + encodeURIComponent($.getAntiForgeryToken()); }; $.postAntiForgery = function (url, data, callback, type) { return $.post(url, addToken(data), callback, type); }; $.ajaxAntiForgery = function (settings) { settings.data = addToken(settings.data); return $.ajax(settings); }; })(jQuery); Then in the application just replace $.post() invocation with $.postAntiForgery(), and replace $.ajax() instead of $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. This solution looks hard coded and stupid. If you have more elegant solution, please do tell me.

    Read the article

  • How to build the Darling project on Ubuntu 13.10?

    - by mirror27
    The Darling project is an open source Darwin/OS X emulation layer for Linux. I downloaded the source code with git and tried to build it with cmake but it failed. The document says I need these packages: clang 3.1+ GCC 4.6+ (yes, you still need GCC for header files) libkqueue libbsd gnustep-base ("Foundation") gnustep-gui ("Cocoa") gnustep-corebase ("CoreFoundation") libobjc2 libudev openssl libasound libav libgc but I could not find them on apt or in software center. Also cmake showed this result: No build type selected, default to Debug This is a 64-bit build Building ObjC ABI 2 You have called ADD_LIBRARY for library Carbon without any source files. This typically indicates a problem with your CMakeLists.txt file You have called ADD_LIBRARY for library AppKit without any source files. This typically indicates a problem with your CMakeLists.txt file You have called ADD_LIBRARY for library auto without any source files. This typically indicates a problem with your CMakeLists.txt file CMake Error: The following variables are used in this project, but they are set to NOTFOUND. Please set them or make sure they are set and tested correctly in the CMake files: LIBGNUSTEPCOREBASE_INCLUDE_DIR used as include directory in directory /home/mirror/work/darling/darling/src/motool used as include directory in directory /home/mirror/work/darling/darling/src/util used as include directory in directory /home/mirror/work/darling/darling/src/libmach-o used as include directory in directory /home/mirror/work/darling/darling/src/libdyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/libSystem used as include directory in directory /home/mirror/work/darling/darling/src/libltdl used as include directory in directory /home/mirror/work/darling/darling/src/Cocoa used as include directory in directory /home/mirror/work/darling/darling/src/libobjcdarwin used as include directory in directory /home/mirror/work/darling/darling/src/CoreFoundation used as include directory in directory /home/mirror/work/darling/darling/src/libncurses used as include directory in directory /home/mirror/work/darling/darling/src/CoreSecurity used as include directory in directory /home/mirror/work/darling/darling/src/CoreServices used as include directory in directory /home/mirror/work/darling/darling/src/ExceptionHandling used as include directory in directory /home/mirror/work/darling/darling/src/IOKit used as include directory in directory /home/mirror/work/darling/darling/src/Foundation used as include directory in directory /home/mirror/work/darling/darling/src/Carbon used as include directory in directory /home/mirror/work/darling/darling/src/CoreVideo used as include directory in directory /home/mirror/work/darling/darling/src/OpenGL used as include directory in directory /home/mirror/work/darling/darling/src/thin used as include directory in directory /home/mirror/work/darling/darling/src/thin used as include directory in directory /home/mirror/work/darling/darling/src/libstdc++darwin LIBKQUEUE_INCLUDE_DIR used as include directory in directory /home/mirror/work/darling/darling/src/motool used as include directory in directory /home/mirror/work/darling/darling/src/util used as include directory in directory /home/mirror/work/darling/darling/src/libmach-o used as include directory in directory 
/home/mirror/work/darling/darling/src/libdyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/libSystem used as include directory in directory /home/mirror/work/darling/darling/src/libltdl used as include directory in directory /home/mirror/work/darling/darling/src/Cocoa used as include directory in directory /home/mirror/work/darling/darling/src/libobjcdarwin used as include directory in directory /home/mirror/work/darling/darling/src/CoreFoundation used as include directory in directory /home/mirror/work/darling/darling/src/libncurses used as include directory in directory /home/mirror/work/darling/darling/src/CoreSecurity used as include directory in directory /home/mirror/work/darling/darling/src/CoreServices used as include directory in directory /home/mirror/work/darling/darling/src/ExceptionHandling used as include directory in directory /home/mirror/work/darling/darling/src/IOKit used as include directory in directory /home/mirror/work/darling/darling/src/Foundation used as include directory in directory /home/mirror/work/darling/darling/src/Carbon used as include directory in directory /home/mirror/work/darling/darling/src/CoreVideo used as include directory in directory /home/mirror/work/darling/darling/src/OpenGL used as include directory in directory /home/mirror/work/darling/darling/src/thin used as include directory in directory /home/mirror/work/darling/darling/src/thin used as include directory in directory /home/mirror/work/darling/darling/src/libstdc++darwin LIBOBJC2_INCLUDE_DIR used as include directory in directory /home/mirror/work/darling/darling/src/motool used as include directory in directory /home/mirror/work/darling/darling/src/util used as include directory in directory /home/mirror/work/darling/darling/src/libmach-o used as include directory in directory /home/mirror/work/darling/darling/src/libdyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/libSystem used as include directory in directory /home/mirror/work/darling/darling/src/libltdl used as include directory in directory /home/mirror/work/darling/darling/src/Cocoa used as include directory in directory /home/mirror/work/darling/darling/src/libobjcdarwin used as include directory in directory /home/mirror/work/darling/darling/src/CoreFoundation used as include directory in directory /home/mirror/work/darling/darling/src/libncurses used as include directory in directory /home/mirror/work/darling/darling/src/CoreSecurity used as include directory in directory /home/mirror/work/darling/darling/src/CoreServices used as include directory in directory /home/mirror/work/darling/darling/src/ExceptionHandling used as include directory in directory /home/mirror/work/darling/darling/src/IOKit used as include directory in directory /home/mirror/work/darling/darling/src/Foundation used as include directory in directory /home/mirror/work/darling/darling/src/Carbon used as include directory in directory /home/mirror/work/darling/darling/src/CoreVideo used as include directory in directory /home/mirror/work/darling/darling/src/OpenGL used as include directory in directory /home/mirror/work/darling/darling/src/thin used as include directory in 
directory /home/mirror/work/darling/darling/src/thin used as include directory in directory /home/mirror/work/darling/darling/src/libstdc++darwin Configuring incomplete, errors occurred! How can I build the Darling project?

    Read the article

  • Passing the CAML thru the EY of the NEEDL

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    Passing the CAML thru the EY of the NEEDL

    Definitions: CAML (Collaborative Application Markup Language) is an XML-based markup language used in Microsoft SharePoint technologies. Anonymous: “A camel is a horse designed by committee.” Dov Trietsch: “A CAML is a HORS designed by Microsoft.” I was advised against putting any Camel and Sphinx rhymes in here. Look it up in Google!

    Now that we have dispensed with the dromedary jokes (BTW, I have many more, but they are not fit to print!), here is an interesting problem and its solution. We have built a list where the title must be kept unique, so I needed to verify the existence (or absence) of a list item with a particular title. Two methods came to mind:

    1: Span the list until the title is found (result = found) or until the list ends (result = not found). This is an algorithm of complexity O(N) and for long lists it is a performance sucker.

    2: Use a CAML query instead. Here, for short lists we’ll encounter some overhead, but because the query results in an SQL query on the content database, it is of complexity O(log N), which is significantly better and scales perfectly.

    Obviously I decided to go with the latter, and this is where the CAML s--t hit the fan.

    A CAML query returns an SPListItemCollection, and I simply checked its Count. If it was 0, the item did not already exist and it was safe to add a new item with the given title. Otherwise I cancelled the operation and warned the user. The trouble was that I always got a positive, most of the time a false positive: the count was greater than 0 regardless of the title I checked (except when the list was empty, which happens only once). This was very disturbing indeed. To solve my immediate problem, which was speedy delivery, I reverted to the “Span the list” approach, but the problem bugged me, so I wrote a little console app by which I tested and tweaked and tested, time and again, until I found the solution. Yes, one can pass the proverbial CAML thru the ey of the needle (e’s missing on purpose). So here are my conclusions.

    CAML that does not work (note: QT is my quote character):

    char QT = Convert.ToChar((int)34);
    string titleQuery = "<Query><Where><Eq>";
    titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>";
    titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where></Query>";
    titleQuery += "<ViewFields><FieldRef Name=" + QT + "Title" + QT + "/></ViewFields>";

    Why? Even though U2U generates it, the <Query> and </Query> tags do not belong in the query that you pass. Start your query with the <Where> clause. The <ViewFields> clause does not belong either. I used it to limit the returned collection to a single column, and I still wish to do that; I’ll show how a bit later. When you use the <Query> </Query> tags in your query, it’s as if you did not specify the query at all. What you get is the all-inclusive default query for the list. It returns every column and every item. It is expensive for both server and network because it does all the extra processing and eats plenty of bandwidth.

    Now, here is the CAML that works:

    string titleQuery = "<Where><Eq>";
    titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>";
    titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where>";

    You’ll also notice that inside the unusable <ViewFields> clause above we have a <FieldRef> clause. This is what we pass to the SPQuery object. Here is how:

    SPQuery query = new SPQuery();
    query.Query = titleQuery;
    query.ViewFields = "<FieldRef Name=" + QT + "Title" + QT + "/>";
    query.RowLimit = 1;
    SPListItemCollection col = masterList.GetItems(query);

    Two things to note: we enter the view fields into the SPQuery object, and we also limit the number of rows that the query returns. The latter is not always done, but in an existence test there is no point in returning hundreds of rows. The query will now return one item or none, which is all we need in order to verify the existence (or non-existence) of items. Limiting the number of columns and the number of rows is a great performance enhancer. That’s all folks!!
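
    To keep the pattern reusable, the whole check can be wrapped in a small helper. The following is a minimal sketch of my own, not code from the post: the class and method names (ListHelper, TitleExists) and the masterList parameter are illustrative, and the SecurityElement.Escape call is an extra defensive touch I added so a title containing markup characters cannot break the CAML. It assumes the SharePoint 2007 server object model.

    using System.Security;
    using Microsoft.SharePoint;

    public static class ListHelper
    {
        // Returns true if an item with the given title already exists in the list.
        public static bool TitleExists(SPList masterList, string title)
        {
            // Escape the value so quotes or angle brackets in the title cannot break the CAML.
            string safeTitle = SecurityElement.Escape(title);

            SPQuery query = new SPQuery();

            // No <Query> or <ViewFields> wrapper tags: the query starts at <Where>.
            query.Query =
                "<Where><Eq>" +
                "<FieldRef Name=\"Title\" />" +
                "<Value Type=\"Text\">" + safeTitle + "</Value>" +
                "</Eq></Where>";

            // Limit both the columns and the rows; one hit is enough for an existence test.
            query.ViewFields = "<FieldRef Name=\"Title\" />";
            query.RowLimit = 1;

            SPListItemCollection items = masterList.GetItems(query);
            return items.Count > 0;
        }
    }

    A caller would then guard the insert with something like if (!ListHelper.TitleExists(list, uniqueID)) before adding the new item.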

    Read the article

  • What Did You Do? is a Bad Question

    - by Ajarn Mark Caldwell
    Brian Moran (blog | Twitter) did a great presentation today for the PASS Professional Development Virtual Chapter on The Art of Questions.  One of the points that Brian made was that there are good questions and bad (or at least not-as-good) questions.  Good questions tend to open up the conversation and engender positive reactions (perhaps even trust and respect) between the participants, while bad questions tend to close down a conversation, either through the narrow list of possible responses (e.g. strictly Yes/No) or through the negative reactions they can produce.  And this explains why I so frequently had problems troubleshooting real-time problems with users in the past.  I’ll explain that in more detail below, but before we go on, let me recommend that you watch the recording of Brian’s presentation to learn why the question “Why?” is often problematic in the U.S. and yet we so often resort to it.

    For a short portion (3 years) of my career, I taught basic computer skills and Office applications in an adult vocational school, and this gave me ample opportunity to do live troubleshooting of user challenges with computers.  And like many people who ended up in computer-related jobs, I also have had numerous times where I was called upon by less computer-savvy individuals to help them with some challenge they were having, whether it was part of my job or not.  One of the things that I noticed, especially during my time as a teacher, was that when I was helping somebody, typically the first question I would ask them was, “What did you do?”  This seemed to me like a good way to start my detective work of figuring out what happened, what went wrong, how to fix it, and how to help the person avoid it again in the future.  I always asked it in a polite tone of voice, as I was just trying to gather the facts before diving in deeper.  However, 99.999% of the time I got the same answer: “Nothing!”  For a long time this frustrated me because (remember, I’m in detective mode at that point) I knew it could not possibly be true.  They HAD to have done SOMETHING… just tell me the last actions you took before this problem presented itself.  But no, they always stuck with “Nothing”.  At which point, with frustration growing, and not a little bit of disdain for their lack of helpfulness, I would usually ask them to move aside while I took over their machine and got them out of whatever they had gotten themselves into.  After a while I just grew used to the fact that this was the answer I would usually receive, but I always kept asking, because for the 0.001% of people who would actually tell me, I could then help them understand what went wrong and how to avoid it in the future.

    Now, after hearing Brian’s talk, I understand what the problem was.  Even though I meant to be in pure information-gathering mode, the words I was using, “What did YOU do?”, have such a strong negative connotation that people would instinctively go into defense mode and stop sharing information that might make them look bad.  Many of them probably were not even consciously aware that they had gone on the defensive, but the self-preservation instinct, especially self-preservation of the ego, is so strong that people would end up there without even realizing it.

    So, if “What did you do?” is a bad question, what would have been better?
    Well, one suggestion that Brian makes in his talk is something along the lines of, “Can you tell me what led up to this?” or “What was happening on the computer right before this came up?”  It’s subtle, but the point is to take the focus off of the person and their behavior, instead depersonalizing it and talking about events from more of a third-party observer’s point of view.  With this approach, people will be more likely to talk about what the computer did and what they did in response to it, without feeling that the interrogation spotlight is on them.  They are also more likely to mention other events that occurred around the same time that may or may not be related, but which could certainly help you troubleshoot a larger problem if it is not just user actions.  And that is the ultimate goal of your asking the questions.  So yes, it does matter how you ask the question; and there are such things as good questions and bad questions.  Excellent topic, Brian!  Thanks for getting the thinking gears churning! (Cross-posted to the Professional Development Virtual Chapter blog.)

    Read the article

  • Advanced donut caching: using dynamically loaded controls

    - by DigiMortal
    Yesterday I solved a caching problem on a local community portal. I enabled output caching on SharePoint Server 2007 to make the site faster. Although caching works fine, I needed to do some additional work because there are some controls that show different content to different users. In this example I will show you how to use “donut caching” with user controls, a powerful way to drive some content around the cache.

    About donut caching

    Donut caching means that although you are caching your content, you leave some holes in it so you can still affect the output that goes to the user. For example, you can cache the front page of your site and still show a welcome message that contains the correct user name. To get a better idea of donut caching, I suggest you read ScottGu’s posting Tip/Trick: Implement "Donut Caching" with the ASP.NET 2.0 Output Cache Substitution Feature. Basically, donut caching uses the ASP.NET Substitution control. In the output, this control is replaced by the string you return from the static method bound to the Substitution control. Again, take a look at the ScottGu blog posting I referred to above.

    Problem

    If you look at Scott’s example, its output is pretty plain and easy: all it does is write out the current user name as a string. Here are examples of my login area for anonymous and authenticated users. It is clear that building the mark-up for these views as a string in code is pretty lame. Every little change in the design would end up in a new version of the controls library because some parts of the design would “live” there.

    Solution: using user controls

    I worked out an easy solution to my problem: I used cache substitution and user controls together. I have three user controls:

    LogInControl – this is the proxy control that checks which “real” control to load.
    AnonymousLogInControl – template and logic for the anonymous users’ login area.
    AuthenticatedLogInControl – template and logic for the authenticated users’ login area. This is the control we render for each user separately because it contains the user name and the profile fill percent.

    The anonymous control is not very interesting because it is only about keeping mark-up in a separate file. The interesting parts are LogInControl and AuthenticatedLogInControl.

    Creating the proxy control

    The first thing was to create a control that has a substitution area where the “real” control is loaded. This proxy control must also be able to decide which control to load. The definition of the control is very primitive:

    <%@ Control EnableViewState="false" Inherits="MyPortal.Profiles.LogInControl" %>
    <asp:Substitution runat="server" MethodName="ShowLogInBox" />

    But the code is a little bit tricky. Based on the current user instance we decide which login control to load. Then we create a page instance and load our control through it. When the control is loaded we call its DataBind() method. In this method we evaluate all the data-bound fields in the loaded control (it was the best choice, as Load and the other page events will not be fired). Take a look at the code.

    public static string ShowLogInBox(HttpContext context)
    {
        var user = SPContext.Current.Web.CurrentUser;
        string controlName;

        if (user != null)
            controlName = "AuthenticatedLogInControl.ascx";
        else
            controlName = "AnonymousLogInControl.ascx";

        var path = "~/_controltemplates/" + controlName;
        var output = new StringBuilder(10000);

        using (var page = new Page())
        using (var ctl = page.LoadControl(path))
        using (var writer = new StringWriter(output))
        using (var htmlWriter = new HtmlTextWriter(writer))
        {
            ctl.DataBind();
            ctl.RenderControl(htmlWriter);
        }

        return output.ToString();
    }

    When the control is bound to data, we ask it to render its contents into the StringBuilder. Now we have the output of the control as a string and we can return it from our method. Of course, notice how correct I am with disposing resources. :) The method that returns the contents for the substitution control is a static method that has no connection to a control instance, because when the page is read from cache there are no control instances available.

    Conclusion

    As you saw, it was not very hard to use donut caching with user controls. Instead of writing the mark-up of the controls as strings inside the static method that is bound to the substitution control, we can still use our user controls.
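
    To make the DataBind() step concrete, here is a rough idea of what the AuthenticatedLogInControl code-behind might look like. This is my own sketch, not code from the post: the property names UserName and ProfileFillPercent and the fill-percent calculation are hypothetical; the only assumptions taken from the post are that the control fills its fields when DataBind() is called by the proxy and reads the current user from SPContext.

    using System;
    using System.Web.UI;
    using Microsoft.SharePoint;

    namespace MyPortal.Profiles
    {
        public class AuthenticatedLogInControl : UserControl
        {
            // Bound in the .ascx mark-up, e.g. <%# UserName %> and <%# ProfileFillPercent %>.
            protected string UserName { get; private set; }
            protected int ProfileFillPercent { get; private set; }

            protected override void OnDataBinding(EventArgs e)
            {
                // Runs when the proxy calls DataBind(); Load and the other page
                // events never fire because the control is rendered on a throwaway Page.
                var user = SPContext.Current.Web.CurrentUser;
                UserName = user != null ? user.Name : string.Empty;
                ProfileFillPercent = CalculateProfileFillPercent(user);

                base.OnDataBinding(e);
            }

            private static int CalculateProfileFillPercent(SPUser user)
            {
                // Placeholder for whatever profile-completeness logic the portal uses.
                return user == null ? 0 : 100;
            }
        }
    }

    The matching .ascx would then contain plain data-binding expressions such as <%# UserName %> in its mark-up, and those are evaluated exactly when the proxy calls DataBind().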

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database

    The last place I like to visit is a hospital. With the monsoon season starting and intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting the way he quizzes me. The routine questions of “How many days have you had this?”, “Is there any pattern?”, “Did you get drenched in the rain?”, “Do you have any other symptoms?” and so on. The idea here is that the doctor wants to find an anomaly or a pattern that will point him to a viral or bacterial cause. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out there. SQL Server has its own way of finding out whether the server’s data and files are in a consistent state: the DBCC commands.

    Back to SQL Server

    In real life, a database consistency check is one of the critical operations a DBA generally doesn’t give much priority to. Many readers of my blogs have asked many times: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out whether everything is right or not? My common answer to all of them is – look at the bottom of the CHECKDB (or CHECKTABLE) output and look for the line below.

    CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DatabaseName’.

    The above is a “good sign” because we are seeing zero allocation and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below:

    CHECKDB found 0 allocation errors and 2 consistency errors in database ‘DatabaseName’.
    repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

    If we see non-zero errors then most of the time (not always) we get repair options, depending on the level of corruption. There is a risk involved with the above option (repair_allow_data_loss): we would lose data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

    Among the standard reports there is a report which shows the history of CHECKDB executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose the “Database Consistency History” report. The information in this report is picked from the default trace. If the default trace is disabled, or there has been no CHECKDB run, or the information is no longer in the default trace (because it has rolled over), the report says it very clearly:

    Currently, no execution history of CHECKDB is available or default trace is not enabled.

    To demonstrate, I caused corruption in one of the databases and did the steps below.

    Run CHECKDB so that errors are reported.
    Fix the corruption, losing the data, using the repair option.
    Run CHECKDB again to check whether the corruption is cleared.

    After that I launched the report and it showed the history of those CHECKDB runs. If you are lazy like me and don’t want to run the report manually for each database, the query below is handy: it provides the same report for all databases. This query is run behind the scenes by the report; all I have done is remove the filter for the database name (commented out at the end).

    DECLARE @curr_tracefilename VARCHAR(500);
    DECLARE @base_tracefilename VARCHAR(500);
    DECLARE @indx INT;

    SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

    SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36, PATINDEX('%executed%', TEXTData) - 36) AS command,
           LoginName,
           StartTime,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%found%', TEXTData) + 6, PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%repaired%', TEXTData) + 9, PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%time:%', TEXTData) + 6, PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':' +
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%hours%', TEXTData) + 6, PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':' +
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%minutes%', TEXTData) + 8, PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
    FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
    WHERE EventClass = 22
      AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
    -- AND DatabaseName = @DatabaseName;

    Don’t get worried about the logic above. All it is doing is reading the trace files and parsing entries like the one below, pulling out the command, the number of errors found, the number repaired, and the elapsed time.

    DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.

    Hopefully from now on you will run CHECKDB and understand its importance. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it on your production environment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL. Tagged: SQL Reports
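
    If you want to automate the consistency check itself rather than just read its history, the same idea can be driven from a small client program. The sketch below is my own illustration, not from the original post: the connection string and database name are placeholders, and it assumes that the corruption errors raised by DBCC CHECKDB surface to the client as SqlException errors, while WITH NO_INFOMSGS keeps a clean run silent.

    using System;
    using System.Data.SqlClient;

    class CheckDbRunner
    {
        static void Main()
        {
            // Placeholders: point these at your own server and database.
            const string connectionString = "Server=.;Database=master;Integrated Security=true";
            const string databaseName = "MyDatabase";

            using (var connection = new SqlConnection(connectionString))
            using (var command = connection.CreateCommand())
            {
                command.CommandText = "DBCC CHECKDB([" + databaseName + "]) WITH NO_INFOMSGS";
                command.CommandTimeout = 0; // CHECKDB can run for a long time on large databases.

                connection.Open();
                try
                {
                    command.ExecuteNonQuery();
                    Console.WriteLine("No allocation or consistency errors reported.");
                }
                catch (SqlException ex)
                {
                    // Each corruption message arrives as a separate entry in the error collection.
                    foreach (SqlError error in ex.Errors)
                    {
                        Console.WriteLine("Error {0} (severity {1}): {2}",
                            error.Number, error.Class, error.Message);
                    }
                }
            }
        }
    }

    Running something like this on a schedule is also what keeps the Database Consistency History report from coming back empty, since the report is fed by the CHECKDB entries recorded in the default trace.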

    Read the article
