Search Results

Search found 6729 results on 270 pages for 'practical answers'.


  • Assertion failure when trying to write (INSERT, UPDATE) to sqlite database on iPhone.

    - by Mark McFarlane
    I have a really frustrating error that I've spent hours looking at and cannot fix. I can get data from my db with no problem using this code, but inserting or updating gives me these errors:

        *** Assertion failure in +[Functions db_insert_answer:question_type:score:], /Volumes/Xcode/Kanji/Classes/Functions.m:129
        *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Error inserting: db_insert_answer:question_type:score:'

    Here is the code I'm using to call the functions:

        [Functions db_insert_answer:[[dict_question objectForKey:@"JISDec"] intValue]
                      question_type:@"kanji_meaning"
                              score:arc4random() % 100];
        //update EF, Next_question, n here
        [Functions db_update_EF:[dict_question objectForKey:@"question"] EF:EF];

    And these are the functions themselves:

        +(sqlite3_stmt *)db_query:(NSString *)queryText {
            sqlite3 *database = [self get_db];
            sqlite3_stmt *statement;
            NSLog(queryText);
            if (sqlite3_prepare_v2(database, [queryText UTF8String], -1, &statement, nil) == SQLITE_OK) {
            } else {
                NSLog(@"HMM, COULDNT RUN QUERY: %s\n", sqlite3_errmsg(database));
            }
            sqlite3_close(database);
            return statement;
        }

        +(void)db_insert_answer:(int)obj_id question_type:(NSString *)question_type score:(int)score {
            sqlite3 *database = [self get_db];
            sqlite3_stmt *statement;
            char *errorMsg;
            char *update = "INSERT INTO Answers (obj_id, question_type, score, date) VALUES (?, ?, ?, DATE())";
            if (sqlite3_prepare_v2(database, update, -1, &statement, nil) == SQLITE_OK) {
                sqlite3_bind_int(statement, 1, obj_id);
                sqlite3_bind_text(statement, 2, [question_type UTF8String], -1, NULL);
                sqlite3_bind_int(statement, 3, score);
            }
            if (sqlite3_step(statement) != SQLITE_DONE) {
                NSAssert1(0, @"Error inserting: %s", errorMsg);
            }
            sqlite3_finalize(statement);
            sqlite3_close(database);
            NSLog(@"Answer saved");
        }

        +(void)db_update_EF:(NSString *)kanji EF:(int)EF {
            sqlite3 *database = [self get_db];
            sqlite3_stmt *statement;
            //NSLog(queryText);
            char *errorMsg;
            char *update = "UPDATE Kanji SET EF = ? WHERE Kanji = '?'";
            if (sqlite3_prepare_v2(database, update, -1, &statement, nil) == SQLITE_OK) {
                sqlite3_bind_int(statement, 1, EF);
                sqlite3_bind_text(statement, 2, [kanji UTF8String], -1, NULL);
            } else {
                NSLog(@"HMM, COULDNT RUN QUERY: %s\n", sqlite3_errmsg(database));
            }
            if (sqlite3_step(statement) != SQLITE_DONE) {
                NSAssert1(0, @"Error updating: %s", errorMsg);
            }
            sqlite3_finalize(statement);
            sqlite3_close(database);
            NSLog(@"Update saved");
        }

        +(sqlite3 *)get_db {
            sqlite3 *database;
            NSFileManager *fileManager = [NSFileManager defaultManager];
            NSString *copyFrom = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"/kanji_training.sqlite"];
            if ([fileManager fileExistsAtPath:[self dataFilePath]]) {
                //NSLog(@"DB FILE ALREADY EXISTS");
            } else {
                [fileManager copyItemAtPath:copyFrom toPath:[self dataFilePath] error:nil];
                NSLog(@"COPIED DB TO DOCUMENTS BECAUSE IT DIDNT EXIST: NEW INSTALL");
            }
            if (sqlite3_open([[self dataFilePath] UTF8String], &database) != SQLITE_OK) {
                sqlite3_close(database);
                NSAssert(0, @"Failed to open database");
                NSLog(@"FAILED TO OPEN DB");
            } else {
                if ([fileManager fileExistsAtPath:[self dataFilePath]]) {
                    //NSLog(@"DB PATH:");
                    //NSLog([self dataFilePath]);
                }
            }
            return database;
        }

        + (NSString *)dataFilePath {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            return [documentsDirectory stringByAppendingPathComponent:@"kanji_training.sqlite"];
        }

    I really can't work it out! Can anyone help me? Many thanks.
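    A few things stand out on a close read, offered as guesses rather than a confirmed diagnosis: errorMsg is never assigned before the NSAssert1, so the real SQLite message is lost; db_query closes the database before the returned statement is used; and in db_update_EF the placeholder in WHERE Kanji = '?' is quoted, making it a literal string rather than a bindable parameter. A minimal C sketch of the insert with proper error checking (table and column names are taken from the question; everything else is assumed):

        #include <sqlite3.h>
        #include <stdio.h>

        /* Sketch: check every return code and report sqlite3_errmsg()
           instead of an uninitialized errorMsg pointer. */
        static int insert_answer(sqlite3 *db, int obj_id,
                                 const char *question_type, int score)
        {
            const char *sql = "INSERT INTO Answers (obj_id, question_type, score, date) "
                              "VALUES (?, ?, ?, DATE())";
            sqlite3_stmt *stmt = NULL;
            int rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);

            if (rc != SQLITE_OK) {
                fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
                return rc;
            }
            sqlite3_bind_int(stmt, 1, obj_id);
            sqlite3_bind_text(stmt, 2, question_type, -1, SQLITE_TRANSIENT);
            sqlite3_bind_int(stmt, 3, score);

            rc = sqlite3_step(stmt);
            if (rc != SQLITE_DONE)
                fprintf(stderr, "step failed: %s\n", sqlite3_errmsg(db));

            sqlite3_finalize(stmt);   /* finalize before the connection is closed */
            return rc;
        }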


  • Point in polygon OR point on polygon using LINQ

    - by wageoghe
    As noted in an earlier question, How to Zip enumerable with itself, I am working on some math algorithms based on lists of points. I am currently working on point in polygon. I have the code for how to do that and have found several good references here on SO, such as this link: Hit test. So, I can figure out whether or not a point is in a polygon. As part of determining that, I want to determine if the point is actually on the polygon. This I can also do. If I can do all of that, what is my question, you might ask? Can I do it efficiently using LINQ?

    I can already do something like the following (assuming a Pairwise extension method as described in my earlier question, as well as in the links to which my questions/answers link, and assuming a Position type that has X and Y members). I have not tested much, so the lambda might not be 100% correct. Also, it does not take very small differences into account.

        public static PointInPolygonLocation PointInPolygon(IEnumerable<Position> pts, Position pt)
        {
            int numIntersections = pts.Pairwise(
                (p1, p2) =>
                {
                    if (p1.Y != p2.Y)
                    {
                        if ((p1.Y >= pt.Y && p2.Y < pt.Y) || (p1.Y < pt.Y && p2.Y >= pt.Y))
                        {
                            if (p1.X < p1.X && p2.X < pt.X)
                            {
                                return 1;
                            }
                            if (p1.X < pt.X || p2.X < pt.X)
                            {
                                if (((pt.Y - p1.Y) * ((p1.X - p2.X) / (p1.Y - p2.Y)) * p1.X) < pt.X)
                                {
                                    return 1;
                                }
                            }
                        }
                    }
                    return 0;
                }).Sum();

            if (numIntersections % 2 == 0)
            {
                return PointInPolygonLocation.Outside;
            }
            else
            {
                return PointInPolygonLocation.Inside;
            }
        }

    This function, PointInPolygon, takes the input Position, pt, iterates over the input sequence of position values, and uses the Jordan curve method to determine how many times a ray extended from pt to the left intersects the polygon. The lambda expression will yield, into the "zipped" list, 1 for every segment that is crossed, and 0 for the rest. The sum of these values determines if pt is inside or outside of the polygon (odd == inside, even == outside). So far, so good.

    Now, for any consecutive pair of position values in the sequence (i.e. in any execution of the lambda), we can also determine if pt is ON the segment p1, p2. If that is the case, we can stop the calculation because we have our answer.

    Ultimately, my question is this: can I perform this calculation (maybe using Aggregate?) such that we will iterate over the sequence no more than 1 time AND can we stop the iteration if we encounter a segment that pt is ON? In other words, if pt is ON the very first segment, there is no need to examine the rest of the segments because we have the answer. It might very well be that this operation (particularly the requirement/desire to possibly stop the iteration early) does not really lend itself well to the LINQ approach.

    It just occurred to me that maybe the lambda expression could yield a tuple: the intersection value (1 or 0, or maybe true or false) and the "on" value (true or false). Maybe then I could use TakeWhile(anontype.PointOnPolygon == false). If I Sum the tuples and ON == 1, then the point is ON the polygon. Otherwise, the oddness or evenness of the sum of the other part of the tuple tells if the point is inside or outside.
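    For what it's worth, here is one shape the single-pass, early-exit version could take; a hedged sketch only, where IsOnSegment and CrossesRay are hypothetical helpers standing in for the tests already written in the lambda above, and an On member is assumed on the PointInPolygonLocation enum:

        public static PointInPolygonLocation PointInPolygon(IEnumerable<Position> pts, Position pt)
        {
            int crossings = 0;
            foreach (var pair in pts.Pairwise((a, b) => new { P1 = a, P2 = b }))
            {
                if (IsOnSegment(pair.P1, pair.P2, pt))   // hypothetical helper
                    return PointInPolygonLocation.On;    // stop early: the answer is known
                if (CrossesRay(pair.P1, pair.P2, pt))    // hypothetical helper
                    crossings++;
            }
            return crossings % 2 == 0
                ? PointInPolygonLocation.Outside
                : PointInPolygonLocation.Inside;
        }

    A plain foreach keeps the single enumeration and the early exit without fighting LINQ for it; the TakeWhile-over-tuples idea would work too, but the loop says the same thing more directly.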


  • Enable button based on TextBox value (WPF)

    - by zendar
    This is an MVVM application. There is a window and a related view model class. There is a TextBox, a Button and a ListBox on the form. The Button is bound to a DelegateCommand that has a CanExecute function. The idea is that the user enters some data in the text box, presses the button, and the data is appended to the list box. I would like to enable the command (and button) when the user enters correct data in the TextBox.

    Things work like this now: the CanExecute() method contains code that checks if the data in the property bound to the text box is correct; the text box is bound to a property in the view model; UpdateSourceTrigger is set to PropertyChanged, so the property in the view model is updated after each key the user presses. The problem is that CanExecute() does not fire when the user enters data in the text box. It doesn't fire even when the text box loses focus. How could I make this work?

    Edit: Re Yanko's comment: DelegateCommand is implemented in the MVVM toolkit template, and when you create a new MVVM project, there is a DelegateCommand in the solution. As far as I saw in the Prism videos, this should be the same class (or at least very similar).

    Here is a XAML snippet:

        ...
        <UserControl.Resources>
            <views:CommandReference x:Key="AddObjectCommandReference"
                                    Command="{Binding AddObjectCommand}" />
        </UserControl.Resources>
        ...
        <TextBox Text="{Binding ObjectName, UpdateSourceTrigger=PropertyChanged}">
        </TextBox>
        <Button Command="{StaticResource AddObjectCommandReference}">Add</Button>
        ...

    View model:

        // Property bound to textbox
        public string ObjectName
        {
            get { return objectName; }
            set
            {
                objectName = value;
                OnPropertyChanged("ObjectName");
            }
        }

        // Command bound to button
        public ICommand AddObjectCommand
        {
            get
            {
                if (addObjectCommand == null)
                {
                    addObjectCommand = new DelegateCommand(AddObject, CanAddObject);
                }
                return addObjectCommand;
            }
        }

        private void AddObject()
        {
            if (ObjectName == null || ObjectName.Length == 0)
                return;
            objectNames.AddSourceFile(ObjectName);
            OnPropertyChanged("ObjectNames"); // refresh listbox
        }

        private bool CanAddObject()
        {
            return ObjectName != null && ObjectName.Length > 0;
        }

    As I wrote in the first part of the question, the following things work: the property setter for ObjectName is triggered on every keypress in the textbox, and if I put return true; in CanAddObject(), the command is active (button too). So the binding looks correct to me. The thing I don't know is how to make CanExecute() fire in the setter of the ObjectName property in the above code.

    Re Ben's and Abe's answers: CanExecuteChanged is an event and the compiler complains:

        The event 'System.Windows.Input.ICommand.CanExecuteChanged' can only appear on the left hand side of += or -=

    There are only two other members of ICommand: Execute() and CanExecute(). Do you have some example that shows how I can make the command call CanExecute()? I found a command manager helper class in DelegateCommand.cs and I'll look into it; maybe there is some mechanism there that could help. Anyway, the idea that in order to activate a command based on user input one needs to "nudge" the command object in property setter code looks clumsy. It will introduce dependencies, and one of the big points of MVVM is reducing them. Is it possible to solve this problem by using dependency properties?
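    A hedged sketch of the usual workaround: have the setter ask WPF to re-query command states. Whether DelegateCommand picks this up depends on whether its CanExecuteChanged is wired to CommandManager.RequerySuggested; the Prism/MVVM-toolkit versions typically also expose a RaiseCanExecuteChanged method you can call directly (that name is an assumption about your copy of the class):

        // In the view model
        public string ObjectName
        {
            get { return objectName; }
            set
            {
                objectName = value;
                OnPropertyChanged("ObjectName");
                // Ask WPF to re-evaluate CanExecute on commands bound in the UI.
                System.Windows.Input.CommandManager.InvalidateRequerySuggested();
                // Or, if your DelegateCommand exposes it (assumed name):
                // addObjectCommand.RaiseCanExecuteChanged();
            }
        }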


  • Building an interleaved buffer for pyopengl and numpy

    - by Nick Sonneveld
    I'm trying to batch up a bunch of vertices and texture coords in an interleaved array before sending it to PyOpenGL's glInterleavedArrays/glDrawArrays. The only problem is that I'm unable to find a suitably fast way to append data into a numpy array. Is there a better way to do this? I would have thought it would be quicker to preallocate the array and then fill it with data, but instead, generating a python list and converting it to a numpy array is "faster". Although 15ms for 4096 quads seems slow.

    I have included some example code and their timings.

        #!/usr/bin/python

        import timeit
        import numpy
        import ctypes
        import random

        USE_RANDOM = True
        USE_STATIC_BUFFER = True

        STATIC_BUFFER = numpy.empty(4096*20, dtype=numpy.float32)

        def render(i):
            # pretend these are different each time
            if USE_RANDOM:
                tex_left, tex_right, tex_top, tex_bottom = random.random(), random.random(), random.random(), random.random()
                left, right, top, bottom = random.random(), random.random(), random.random(), random.random()
            else:
                tex_left, tex_right, tex_top, tex_bottom = 0.0, 1.0, 1.0, 0.0
                left, right, top, bottom = -1.0, 1.0, 1.0, -1.0

            ibuffer = (
                tex_left, tex_bottom,  left, bottom, 0.0,   # Lower left corner
                tex_right, tex_bottom, right, bottom, 0.0,  # Lower right corner
                tex_right, tex_top,    right, top, 0.0,     # Upper right corner
                tex_left, tex_top,     left, top, 0.0,      # upper left
            )
            return ibuffer

        # create python list.. convert to numpy array at end
        def create_array_1():
            ibuffer = []
            for x in xrange(4096):
                data = render(x)
                ibuffer += data
            ibuffer = numpy.array(ibuffer, dtype=numpy.float32)
            return ibuffer

        # numpy.array, placing individually by index
        def create_array_2():
            if USE_STATIC_BUFFER:
                ibuffer = STATIC_BUFFER
            else:
                ibuffer = numpy.empty(4096*20, dtype=numpy.float32)
            index = 0
            for x in xrange(4096):
                data = render(x)
                for v in data:
                    ibuffer[index] = v
                    index += 1
            return ibuffer

        # using slicing
        def create_array_3():
            if USE_STATIC_BUFFER:
                ibuffer = STATIC_BUFFER
            else:
                ibuffer = numpy.empty(4096*20, dtype=numpy.float32)
            index = 0
            for x in xrange(4096):
                data = render(x)
                ibuffer[index:index+20] = data
                index += 20
            return ibuffer

        # using numpy.concat on a list of ibuffers
        def create_array_4():
            ibuffer_concat = []
            for x in xrange(4096):
                data = render(x)
                # converting makes a diff!
                data = numpy.array(data, dtype=numpy.float32)
                ibuffer_concat.append(data)
            return numpy.concatenate(ibuffer_concat)

        # using numpy array.put
        def create_array_5():
            if USE_STATIC_BUFFER:
                ibuffer = STATIC_BUFFER
            else:
                ibuffer = numpy.empty(4096*20, dtype=numpy.float32)
            index = 0
            for x in xrange(4096):
                data = render(x)
                ibuffer.put(xrange(index, index+20), data)
                index += 20
            return ibuffer

        # using ctype array
        CTYPES_ARRAY = ctypes.c_float*(4096*20)

        def create_array_6():
            ibuffer = []
            for x in xrange(4096):
                data = render(x)
                ibuffer += data
            ibuffer = CTYPES_ARRAY(*ibuffer)
            return ibuffer

        def equals(a, b):
            for i, v in enumerate(a):
                if b[i] != v:
                    return False
            return True

        if __name__ == "__main__":
            number = 100
            # if random, don't try and compare arrays
            if not USE_RANDOM and not USE_STATIC_BUFFER:
                a = create_array_1()
                assert equals(a, create_array_2())
                assert equals(a, create_array_3())
                assert equals(a, create_array_4())
                assert equals(a, create_array_5())
                assert equals(a, create_array_6())

            t = timeit.Timer("testing2.create_array_1()", "import testing2")
            print 'from list:', t.timeit(number)/number*1000.0, 'ms'
            t = timeit.Timer("testing2.create_array_2()", "import testing2")
            print 'array: indexed:', t.timeit(number)/number*1000.0, 'ms'
            t = timeit.Timer("testing2.create_array_3()", "import testing2")
            print 'array: slicing:', t.timeit(number)/number*1000.0, 'ms'
            t = timeit.Timer("testing2.create_array_4()", "import testing2")
            print 'array: concat:', t.timeit(number)/number*1000.0, 'ms'
            t = timeit.Timer("testing2.create_array_5()", "import testing2")
            print 'array: put:', t.timeit(number)/number*1000.0, 'ms'
            t = timeit.Timer("testing2.create_array_6()", "import testing2")
            print 'ctypes float array:', t.timeit(number)/number*1000.0, 'ms'

    Timings using random numbers:

        $ python testing2.py
        from list: 15.0486779213 ms
        array: indexed: 24.8184704781 ms
        array: slicing: 50.2214789391 ms
        array: concat: 44.1691994667 ms
        array: put: 73.5879898071 ms
        ctypes float array: 20.6674289703 ms

    edit note: changed code to produce random numbers for each render to reduce object reuse and to simulate different vertices each time.
    edit note 2: added static buffer and forced all numpy.empty() calls to use dtype=float32
    note 1/Apr/2010: still no progress, and I don't really feel that any of the answers have solved the problem yet.
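    One direction that sidesteps the per-quad Python loop entirely, sketched under the assumption that the (tex_x, tex_y, x, y, z) per-vertex layout above is what the GL side expects: generate the corner values as whole arrays and fill a (quads, 4, 5) view of the buffer with column assignments.

        import numpy

        # Sketch: build all 4096 quads at once; no Python-level loop per quad.
        def create_array_vectorized(n_quads=4096):
            buf = numpy.empty((n_quads, 4, 5), dtype=numpy.float32)
            tex = numpy.random.random((n_quads, 4)).astype(numpy.float32)
            pos = numpy.random.random((n_quads, 4)).astype(numpy.float32)
            tl, tr, tt, tb = tex[:, 0], tex[:, 1], tex[:, 2], tex[:, 3]
            left, right, top, bottom = pos[:, 0], pos[:, 1], pos[:, 2], pos[:, 3]
            # lower-left, lower-right, upper-right, upper-left corners
            buf[:, 0, 0], buf[:, 0, 1], buf[:, 0, 2], buf[:, 0, 3] = tl, tb, left, bottom
            buf[:, 1, 0], buf[:, 1, 1], buf[:, 1, 2], buf[:, 1, 3] = tr, tb, right, bottom
            buf[:, 2, 0], buf[:, 2, 1], buf[:, 2, 2], buf[:, 2, 3] = tr, tt, right, top
            buf[:, 3, 0], buf[:, 3, 1], buf[:, 3, 2], buf[:, 3, 3] = tl, tt, left, top
            buf[:, :, 4] = 0.0  # z
            return buf.reshape(-1)

    This only helps if the per-quad values can be produced in bulk (here random numbers stand in for real vertex data, as in the timing harness); if each quad genuinely requires a Python-level render() call, the list-append-then-convert approach above may remain the fastest option.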


  • Using DateTime in a SqlParameter for Stored Procedure, format error

    - by Matt
    I'm trying to call a stored procedure (on a SQL 2005 server) from C#, .NET 2.0, using a DateTime as the value of a SqlParameter. The SQL type in the stored procedure is 'datetime'. Executing the sproc from SQL Management Studio works fine, but every time I call it from C# I get an error about the date format.

    When I run SQL Profiler to watch the calls, I copy-paste the exec call to see what's going on. These are my observations and notes about what I've attempted:

    1) If I pass the DateTime in directly as a DateTime, or converted to SqlDateTime, the field is surrounded by a PAIR of single quotes, such as @Date_Of_Birth=N''1/8/2009 8:06:17 PM''
    2) If I pass the DateTime in as a string, I only get the single quotes
    3) Using SqlDateTime.ToSqlString() does not result in a UTC-formatted datetime string (even after converting to universal time)
    4) Using DateTime.ToString() does not result in a UTC-formatted datetime string.
    5) Manually setting the DbType for the SqlParameter to DateTime does not change the above observations.

    So, my question then is: how on earth do I get C# to pass a properly formatted time in the SqlParameter? Surely this is a common use case; why is it so difficult to get working? I can't seem to convert DateTime to a string that is SQL-compatible (e.g. '2009-01-08T08:22:45').

    EDIT Re: BFree, the code to actually execute the sproc is as follows:

        using (SqlCommand sprocCommand = new SqlCommand(sprocName))
        {
            sprocCommand.Connection = transaction.Connection;
            sprocCommand.Transaction = transaction;
            sprocCommand.CommandType = System.Data.CommandType.StoredProcedure;
            sprocCommand.Parameters.AddRange(parameters.ToArray());
            sprocCommand.ExecuteNonQuery();
        }

    To go into more detail about what I have tried:

        parameters.Add(new SqlParameter("@Date_Of_Birth", DOB));

        parameters.Add(new SqlParameter("@Date_Of_Birth", DOB.ToUniversalTime()));

        parameters.Add(new SqlParameter("@Date_Of_Birth", DOB.ToUniversalTime().ToString()));

        SqlParameter param = new SqlParameter("@Date_Of_Birth", System.Data.SqlDbType.DateTime);
        param.Value = DOB.ToUniversalTime();
        parameters.Add(param);

        SqlParameter param = new SqlParameter("@Date_Of_Birth", SqlDbType.DateTime);
        param.Value = new SqlDateTime(DOB.ToUniversalTime());
        parameters.Add(param);

        parameters.Add(new SqlParameter("@Date_Of_Birth", new SqlDateTime(DOB.ToUniversalTime()).ToSqlString()));

    Additional EDIT The one I thought most likely to work:

        SqlParameter param = new SqlParameter("@Date_Of_Birth", System.Data.SqlDbType.DateTime);
        param.Value = DOB;

    results in this value in the exec call as seen in the SQL Profiler:

        @Date_Of_Birth=''2009-01-08 15:08:21:813''

    If I modify this to be

        @Date_Of_Birth='2009-01-08T15:08:21'

    it works, but it won't parse with the pair of single quotes, and it won't convert to a datetime correctly with the space between the date and time and with the milliseconds on the end.

    Update and Success First and foremost, thank you everyone for the answers. I post this for the sake of completeness and accuracy on SO - because I certainly do not do it for my pride... I had copy/pasted the code above after the request from below. I trimmed things here and there to be concise. Turns out my problem was in the code I left out, which I'm sure any one of you would have spotted in an instant. I had wrapped my sproc calls inside a transaction. Turns out that I was simply not doing transaction.Commit()!!!!! I'm ashamed to say it, but there you have it.

    I still don't know what's going on with the syntax I get back from the profiler. A coworker watched with his own instance of the profiler from his computer, and it returned proper syntax. Watching the very SAME executions from my profiler showed the incorrect syntax. It acted as a red herring, making me believe there was a query syntax problem instead of the much simpler and true answer, which was that I needed to commit the transaction!

    I marked an answer below as correct, and threw in some up-votes on others because they did, after all, answer the question, even if they didn't fix my specific (brain lapse) issue. Thanks again for the help.
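    For anyone landing here with the same symptom, a condensed sketch of the fixed call path, with the missing commit added (connectionString, sprocName and DOB are stand-ins from the question's context):

        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction transaction = conn.BeginTransaction())
            using (SqlCommand cmd = new SqlCommand(sprocName, conn, transaction))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("@Date_Of_Birth", SqlDbType.DateTime).Value = DOB;
                cmd.ExecuteNonQuery();
                transaction.Commit(); // the step missing from the original code
            }
        }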


  • postfix relaying all mail through office365 problems

    - by amrith
    This is a rather long question with a long list of things tried and travails, so please bear with me. The summary is this: I am able to relay email from Ubuntu through Office365 using postfix; the configuration works. It only works as one of the users; more specifically, the user who authenticates against Office365 is the only valid "from". More details follow.

    I have a machine in Amazon's cloud on which I run a bunch of jobs and would like to have statuses mailed over to me. I use Office365 at work, so I want to relay mail through Office365. I'm most familiar with postfix, so I used that as the MTA. The configuration is Ubuntu 12.04 LTS; I've installed postfix and mail-utils. For this example, let me say my company is "company.com" and the machine in question (through an elastic IP and a DNS entry) is called "plaything.company.com". hostname is set to "plaything.company.com"; so is /etc/mailname.

    On plaything, I have the following users registered: alpha, bravo, and charlie. I have the following configuration:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        inet_protocols = ipv4
        mailbox_size_limit = 0
        mydestination = plaything.company.com, localhost.company.com, , localhost
        myhostname = plaything.company.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost = [smtp.office365.com]:587
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    As the machine is called plaything.company.com, I went through the exercise of registering all the appropriate DNS entries to make Office365 recognize that I own plaything.company.com, which allowed me to create a user called [email protected] in Office365. In Office365, I set up [email protected] as having another email address of [email protected]. Then I made the following sender_canonical:

        [email protected]    [email protected]

    I created a sasl_passwd file that reads:

        smtp.office365.com    [email protected]:123456password123456

    (let's just say that the password for [email protected] is 1234...456).

    With all this set up, log in as alpha and:

        mail [email protected]
        Cc:
        Subject: test
        test

    and the whole thing works wonderfully. Email gets sent off by postfix, TLS works like a champ, postfix authenticates as daemon@..., and [email protected] in Office365 gets an email message.

    The issue comes up when logged in as bravo on the machine. The sender is [email protected] and Office365 says:

        status=bounced (host smtp.office365.com[132.245.12.25] said: 550 5.7.1 Client does not have permissions to send as this sender (in reply to end of DATA command))

    This is because I'm trying to send mail as bravo@... while authenticating with Office365 as daemon@.... The reason it works with alpha@... is that in Office365 I set up [email protected] as having another email address of [email protected].

    In Postfix Relay to Office365, Miles Erickson answers the question thusly:

        1. Don't send mail to Office365 as a user from your Office365-hosted e-mail domain. Use a subdomain instead, e.g. [email protected] instead of [email protected]. It wouldn't hurt to set up an SPF record for services.mydomain.com or whatever you decide to use.
        2. Don't authenticate against mail.messaging.microsoft.com as an Office365 user. Just connect on port 25 and deliver the mail to your domain as any foreign SMTP agent would do.

    OK, I've done #1; I have those records in DNS, but for the most part they are not relevant once Office365 recognizes that I own the domain. Here are those records:

        CNAME records:
        - msoid.plaything.company.com
        - autodiscover.plaything.company.com

        MX record:
        - plaything.company.com (plaything-company-com.mail.protection.outlook.com)

        TXT record:
        - plaything.company.com (v=spf1 include:spf.protection.outlook.com -all)

    I've tried #2, but no matter what I do, Office365 just blows away the connection with "not authenticated". I can even try a simple telnet to port 25 and attempt to send, and it doesn't work:

        250 BY2PR01CA007.outlook.office365.com Hello [54.221.245.236]
        530 5.7.1 Client was not authenticated
        Connection closed by foreign host.

    Is there someone out there who has this kind of configuration working, where multiple users on a Linux machine are able to relay mail using postfix through Office365? There has to be someone out there doing this who can tell me what is wrong with my setup ...
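    If Office365 will only accept the authenticated account as the sender, one workaround (sketched here, untested; the full daemon address is an assumption since the question redacts it) is to rewrite every local sender to that one account with a regexp-based sender_canonical map, so bravo's and charlie's mail also goes out as the daemon user:

        # main.cf
        sender_canonical_maps = regexp:/etc/postfix/sender_canonical_regexp

        # /etc/postfix/sender_canonical_regexp
        # Rewrite any local sender to the account postfix authenticates as.
        /^.+@plaything\.company\.com$/    daemon@plaything.company.com

    The sending program can still set a Reply-To header if replies need to reach the real user.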


  • Implementing an async "read all currently available data from stream" operation

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty.

    I wanted to build a class that allows me to receive input from a console's standard output Stream. Console output streams are of type FileStream; the implementation can cast to that, if needed. There is also an associated StreamReader already present to leverage. There is only one thing I need to implement in this class to achieve my desired functionality: an async "read all the data available this moment" operation. Reading to the end of the stream is not viable because the stream will not end unless the process closes the console output handle, and it will not do that because it is interactive and expecting input before continuing. I will be using that hypothetical async operation to implement event-based notification, which will be more convenient for my callers.

    The public interface of the class is this:

        public class ConsoleAutomator {
            public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead;

            public void StartSendingEvents();
            public void StopSendingEvents();
        }

    StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent, without loss of generality.

    The class uses these two fields internally:

        protected readonly StringBuilder inputAccumulator = new StringBuilder();
        protected readonly byte[] buffer = new byte[256];

    The functionality of the class is implemented in the methods below. To get the ball rolling:

        public void StartSendingEvents()
        {
            this.stopAutomation = false;
            this.BeginReadAsync();
        }

    To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called:

        protected void BeginReadAsync()
        {
            if (!this.stopAutomation) {
                this.StandardOutput.BaseStream.BeginRead(
                    this.buffer, 0, this.buffer.Length, this.ReadHappened, null);
            }
        }

    The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Remember that the goal here is to read all of the chunk and call event subscribers exactly once for each chunk. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream.

        private void ReadHappened(IAsyncResult asyncResult)
        {
            var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult);
            if (bytesRead == 0) {
                this.OnAutomationStopped();
                return;
            }

            var input = this.StandardOutput.CurrentEncoding.GetString(
                this.buffer, 0, bytesRead);
            this.inputAccumulator.Append(input);

            if (bytesRead < this.buffer.Length) {
                this.OnInputRead(); // only send back if we're sure we got it all
            }

            this.BeginReadAsync(); // continue "looping" with BeginRead
        }

    After any read which is not enough to fill the buffer (in which case we know that there was no more data to be read during the last read operation), all accumulated data is sent to the subscribers:

        private void OnInputRead()
        {
            var handler = this.StandardOutputRead;
            if (handler == null) {
                return;
            }

            handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString()));
            this.inputAccumulator.Clear();
        }

    (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision.)

    The good

    This scheme works almost perfectly:

    - Async functionality without spawning any threads
    - Very convenient to the calling code (just subscribe to an event)
    - Never more than one event for each time data is available to be read
    - Is almost agnostic to the buffer size

    The bad

    That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there.

    Solution?

    Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it.

    One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant.

    Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream.

    Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code.

    Is there an elegant way to get around the problem?

    Thanks for being patient enough to read all of this.

    Update: I definitely did not communicate the scenario well in my initial writeup. I have since revised the writeup quite a bit, but to be extra sure: the question is about how to implement an async "read all the data available this moment" operation.
My apologies to the people who took the time to read and answer without me making my intent clear enough.
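    One pragmatic workaround, sketched with no claim that it preserves the exactly-one-event-per-chunk goal: give up on detecting chunk boundaries (the stream genuinely does not expose them) and flush the accumulator after every read, so a chunk that exactly fills the buffer can no longer get stuck:

        private void ReadHappened(IAsyncResult asyncResult)
        {
            var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult);
            if (bytesRead == 0)
            {
                this.OnAutomationStopped();
                return;
            }

            var input = this.StandardOutput.CurrentEncoding.GetString(
                this.buffer, 0, bytesRead);
            this.inputAccumulator.Append(input);
            this.OnInputRead();     // always flush; never strand a full buffer
            this.BeginReadAsync();
        }

    The cost is that a large chunk may now arrive as several events, so subscribers must treat event boundaries as arbitrary.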


  • Connection to webservice times out first time

    - by Neo
    My application needs to connect to a web service. The WSDL file given by the client was converted to Java using the wsdl2java utility in Axis2 1.5.2. The problem occurs during the first connection to the web service. It gives me:

        java.net.SocketTimeoutException: Read timed out
            at jrockit.net.SocketNativeIO.readBytesPinned(Native Method)
            at jrockit.net.SocketNativeIO.socketRead(SocketNativeIO.java:46)
            at java.net.SocketInputStream.socketRead0(SocketInputStream.java)
            at java.net.SocketInputStream.read(SocketInputStream.java:129)
            at com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
            at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:331)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:789)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:747)
            at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
            at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:238)
            at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
            at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
            at org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1116)
            at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.readLine(MultiThreadedHttpConnectionManager.java:1413)
            at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1974)
            at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)
            at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1100)
            at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
            at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
            at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
            at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:346)
            at org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:558)
            at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:199)
            at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:77)
            at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:400)
            at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:225)
            at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:438)
            at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:402)
            at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:230)
            at org.apache.axis2.client.OperationClient.execute(OperationClient.java:166)
            at com.jmango.webservice.talker.WCFServiceStub.addSaleSupportRequest(WCFServiceStub.java:270)
            at com.jmango.domain.salessystem.talkerimp.RequestServiceInfoImp.addanewServiceRequest(RequestServiceInfoImp.java:58)
            at com.jmango.mobilenexus.service.MobileServiceImp.sendQueryforServiceInfo(MobileServiceImp.java:358)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
            at org.springframework.remoting.support.RemoteInvocationTraceInterceptor.invoke(RemoteInvocationTraceInterceptor.java:77)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
            at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
            at $Proxy8.sendQueryforServiceInfo(Unknown Source)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.caucho.hessian.server.HessianSkeleton.invoke(HessianSkeleton.java:180)
            at com.caucho.hessian.server.HessianSkeleton.invoke(HessianSkeleton.java:110)
            at org.springframework.remoting.caucho.Hessian2SkeletonInvoker.invoke(Hessian2SkeletonInvoker.java:94)
            at org.springframework.remoting.caucho.HessianExporter.invoke(HessianExporter.java:142)
            at org.springframework.remoting.caucho.HessianServiceExporter.handleRequest(HessianServiceExporter.java:70)
            at org.springframework.web.servlet.mvc.HttpRequestHandlerAdapter.handle(HttpRequestHandlerAdapter.java:50)
            at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875)
            at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807)
            at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571)
            at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:512)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:718)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:111)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
            at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:291)
            at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:776)
            at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:705)
            at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:899)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
            at java.lang.Thread.run(Thread.java:619)

    I tried searching the web for answers. Though one place mentioned it could be the firewall at the web service end that is blocking, I wasn't able to find a valid solution. Any help will be much appreciated.

    Running: Apache Tomcat 6.0, Axis2 1.5.2
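    If the first call is slow because something on the far side is warming up (SSL handshake, a cold application pool), one stopgap is to raise the client-side timeouts on the generated stub. A sketch using Axis2's client options (the stub name comes from the trace above; the two-minute value is an arbitrary assumption):

        import org.apache.axis2.client.Options;
        import org.apache.axis2.transport.http.HTTPConstants;

        public class TimeoutConfig {
            public static void widenTimeouts(WCFServiceStub stub) {
                Options options = stub._getServiceClient().getOptions();
                // Socket read timeout and connection timeout, in milliseconds.
                options.setProperty(HTTPConstants.SO_TIMEOUT, Integer.valueOf(120000));
                options.setProperty(HTTPConstants.CONNECTION_TIMEOUT, Integer.valueOf(120000));
            }
        }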


  • Is this question too hard for a seasoned C++ architect?

    - by Monomer
    Background Information

    We're looking to hire a seasoned C++ architect (10+ years dev, of which at least 6 years must be C++) for a high frequency trading platform. The job advert says STL and Boost proficiency is a must, with preference for modern uses of C++. The company I work for is a Fortune 500 IB (aka finance industry); it requires passes in all the standard SHL tests (numeric, vocab, spatial etc) before interviews can commence.

    Everyone on the team was given the task of coming up with one question to ask the candidates during a written/typed test. Please note this is the second test provided to the candidates, the first being the Advanced IKM C++ test, done in the offices, supervised and without internet access. People passing that do the second test. After roughly 70 candidates, my question has been determined to be statistically the worst performing - aka the least number of people attempted it; furthermore, even fewer were able to give meaningful answers. Please note, the second test is not timed; the candidate can literally take as long as they like (we've had one person take roughly 10.5 hrs).

    My question to SO is this: after SHL and IKM advanced C++ tests, backed up with at least 6+ years of C++ development experience, is it still OK not to be able to even comment about, let alone come up with some loose strategy for solving, the following question?

    The Question

    There is a class C with methods foo, boo, boo_and_foo and foo_and_boo. Each method takes i, j, k and l clock cycles respectively, where i < j, k < i+j and l < i+j.

        class C
        {
        public:
            int foo() {...}
            int boo() {...}
            int boo_and_foo() {...}
            int foo_and_boo() {...}
        };

    In code one might write:

        C c;
        .
        .
        int i = c.foo() + c.boo();

    But it would be better to have:

        int i = c.foo_and_boo();

    What changes or techniques could one make to the definition of C, that would allow similar syntax to the original usage, but instead have the compiler generate the latter? Note that foo and boo are not commutative.

    Possible Solution

    We were basically looking for an expression-templates-based approach, and were willing to give marks to anyone who had even hinted at or used the phrase or related terminology. We got only two people that used the wording, but they weren't able to properly describe how to accomplish the task in detail.

    We use such techniques all over the place, due to the use of various mathematical operators for matrix- and vector-based calculations, for example to decide at compile time whether to use IPP or hand-woven implementations for a particular architecture, and many other things. This particular area of software development requires microsecond response times.

    I believe I could/should be able to teach a junior such techniques, but given the assumed caliber of the candidates I expected a little more. Is this really a difficult question? Should it be removed? Or are we just not seeing the right candidates?
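    For readers wondering what such an answer might look like, here is a deliberately minimal proxy-object sketch of the idea - one possible shape of an expression-templates answer, not the interviewers' model solution; the method bodies are placeholders:

        class C
        {
        public:
            int do_foo() { return 1; }          // the real foo body (placeholder)
            int do_boo() { return 2; }          // the real boo body (placeholder)
            int foo_and_boo() { return 12; }    // fused: cheaper than foo()+boo()
            int boo_and_foo() { return 21; }

            // foo()/boo() return lightweight proxies instead of ints. Used
            // alone, a proxy converts to int by running the real method;
            // a proxy+proxy expression dispatches to the fused member.
            struct FooExpr
            {
                C* self;
                operator int() const { return self->do_foo(); }
            };
            struct BooExpr
            {
                C* self;
                operator int() const { return self->do_boo(); }
            };

            FooExpr foo() { return FooExpr{this}; }
            BooExpr boo() { return BooExpr{this}; }
        };

        // Exact-match overloads beat the built-in int +, so c.foo() + c.boo()
        // compiles into a single fused call; the two overloads preserve the
        // non-commutative ordering.
        inline int operator+(C::FooExpr f, C::BooExpr) { return f.self->foo_and_boo(); }
        inline int operator+(C::BooExpr b, C::FooExpr) { return b.self->boo_and_foo(); }

        int main()
        {
            C c;
            int i = c.foo() + c.boo();   // resolves to c.foo_and_boo()
            return i;
        }

    A production version would flesh this out (const-correctness, more operators, guarding against dangling proxies), but the dispatch trick above is the core of it.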


  • Can't obtain reference to EKReminder array retrieved from fetchRemindersMatchingPredicate

    - by Scionwest
    When I create an NSPredicate via EKEventStore's predicateForRemindersInCalendars: and pass it to EKEventStore's fetchRemindersMatchingPredicate:completion:, I can loop through the reminders array provided by the completion code block, but when I try to store a reference to the reminders array, or create a copy of the array into a local variable or instance variable, both arrays remain empty. The reminders array is never copied to them.

    This is the method I am using. In it, I create a predicate, pass it to the event store and then loop through all of the reminders, logging their title via NSLog. I can see the reminder titles during runtime thanks to NSLog, but the local arrayOfReminders object is empty. I also try to add each reminder into an instance variable of type NSMutableArray, but once I leave the completion code block, the instance variable remains empty.

    Am I missing something here? Can someone please tell me why I can't grab a reference to all of the reminders for use throughout the app? I am not having any issues at all accessing and storing EKEvents, but for some reason I can't do it with EKReminders.

        - (void)findAllReminders
        {
            NSPredicate *predicate = [self.eventStore predicateForRemindersInCalendars:nil];

            __block NSArray *arrayOfReminders = [[NSArray alloc] init];

            [self.eventStore fetchRemindersMatchingPredicate:predicate completion:^(NSArray *reminders) {
                arrayOfReminders = [reminders copy]; // Does not work.

                for (EKReminder *reminder in reminders) {
                    [self.remindersForTheDay addObject:reminder];
                    NSLog(@"%@", reminder.title);
                }
            }];

            // Always = 0;
            if ([self.remindersForTheDay count]) {
                NSLog(@"Instance Variable has reminders!");
            }

            // Always = 0;
            if ([arrayOfReminders count]) {
                NSLog(@"Local Variable has reminders!");
            }
        }

    The eventStore getter is where I perform my instantiation and get access to the event store:

        - (EKEventStore *)eventStore
        {
            if (!_eventStore) {
                _eventStore = [[EKEventStore alloc] init];

                // respondsToSelector indicates iOS 6 support.
                if ([_eventStore respondsToSelector:@selector(requestAccessToEntityType:completion:)]) {
                    // Request access to user calendar
                    [_eventStore requestAccessToEntityType:EKEntityTypeEvent completion:^(BOOL granted, NSError *error) {
                        if (granted) {
                            NSLog(@"iOS 6+ Access to EventStore calendar granted.");
                        } else {
                            NSLog(@"Access to EventStore calendar denied.");
                        }
                    }];

                    // Request access to user Reminders
                    [_eventStore requestAccessToEntityType:EKEntityTypeReminder completion:^(BOOL granted, NSError *error) {
                        if (granted) {
                            NSLog(@"iOS 6+ Access to EventStore Reminders granted.");
                        } else {
                            NSLog(@"Access to EventStore Reminders denied.");
                        }
                    }];
                } else {
                    // iOS 5.x and lower support if selector is not supported
                    NSLog(@"iOS 5.x < Access to EventStore calendar granted.");
                }

                for (EKCalendar *cal in self.calendars) {
                    NSLog(@"Calendar found: %@", cal.title);
                }

                [_eventStore reset];
            }
            return _eventStore;
        }

    Lastly, just to show that I am initializing my remindersForTheDay instance variable using lazy instantiation:

        - (NSMutableArray *)remindersForTheDay
        {
            if (!_remindersForTheDay)
                _remindersForTheDay = [[NSMutableArray alloc] init];
            return _remindersForTheDay;
        }

    I've read through the Apple documentation and it doesn't provide any explanation that I can find to answer this. I read through the Blocks Programming docs and they state that you can access local and instance variables without issues from within a block, but for some reason the above code does not work. Any help would be greatly appreciated; I've scoured Google for answers but have yet to get this figured out. Thanks everyone! Johnathon.
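    A likely explanation, offered as a hedged reading of the code above: fetchRemindersMatchingPredicate:completion: is asynchronous, so both count checks run before the completion block has fired; the arrays are filled eventually, just later than the NSLog checks. A sketch that does the work where the data actually exists:

        - (void)findAllReminders
        {
            NSPredicate *predicate = [self.eventStore predicateForRemindersInCalendars:nil];

            [self.eventStore fetchRemindersMatchingPredicate:predicate
                                                  completion:^(NSArray *reminders) {
                // This block runs later, on a background queue; the fetched
                // reminders are only valid from here on.
                dispatch_async(dispatch_get_main_queue(), ^{
                    [self.remindersForTheDay setArray:reminders];
                    NSLog(@"Fetched %lu reminders", (unsigned long)[reminders count]);
                    // Update the UI / notify observers here.
                });
            }];

            // Anything placed here still sees the old (empty) arrays.
        }

    The [_eventStore reset] call immediately after requesting access may also be worth removing, since the access-request completion blocks are asynchronous too.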


  • Stored proc running 30% slower through Java versus running directly on database

    - by James B
    Hi All,

    I'm using Java 1.6, JTDS 1.2.2 (also just tried 1.2.4 to no avail) and SQL Server 2005 to create a CallableStatement to run a stored procedure (with no parameters). I am seeing the Java wrapper running the same stored procedure 30% slower than using SQL Server Management Studio. I've run the MS SQL profiler and there is little difference in I/O between the two processes, so I don't think it's related to query plan caching.

    The stored proc takes no arguments and returns no data. It uses a server-side cursor to calculate the values that are needed to populate a table. I can't see how calling a stored proc from Java should add a 30% overhead; surely it's just a pipe to the database that SQL is sent down and then the database executes it... Could the database be giving the Java app a different query plan?

    I've posted to both the MSDN forums and the SourceForge JTDS forums (topic: "stored proc slower in JTDS than direct in DB"). I was wondering if anyone has any suggestions as to why this might be happening?

    Thanks in advance,
    -James

    (N.B. Fear not, I will collate any answers I get in other forums together here once I find the solution.)

    Java code snippet:

        sLogger.info("Preparing call...");
        stmt = mCon.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows");
        sLogger.info("Call prepared. Executing procedure...");
        stmt.executeQuery();
        sLogger.info("Procedure complete.");

    I have run SQL profiler, and found the following:

        Java app:  CPU: 466,514   Reads: 142,478,387   Writes: 284,078   Duration: 983,796
        SSMS:      CPU: 466,973   Reads: 142,440,401   Writes: 280,244   Duration: 769,851

    (Both with DBCC DROPCLEANBUFFERS run prior to profiling, and both produce the correct number of rows.) So my conclusion is that they both execute the same reads and writes; it's just that the way they are doing it is different. What do you guys think?

    It turns out that the query plans are significantly different for the different clients (the Java client is updating an index during an insert that isn't in the faster SQL client; also, the way it is executing joins is different (nested loops vs. gather streams, nested loops vs. index scans, argh!)). Quite why this is, I don't know yet (I'll re-post when I do get to the bottom of it).

    Epilogue

    I couldn't get this to work properly. I tried homogenising the connection properties (arithabort, ansi_nulls etc.) between the Java and Management Studio clients. It ended up that the two different clients had very similar query/execution plans (but still with different actual plan_ids). I posted a summary of what I found to the MSDN SQL Server forums, as I found differing performance not just between a JDBC client and Management Studio, but also between Microsoft's own command line client, SQLCMD. I also checked some more radical things, like network traffic, and wrapping the stored proc inside another stored proc, just for grins.

    I have a feeling the problem lies somewhere in the way the cursor was being executed, and it was somehow giving rise to the Java process being suspended, but why a different client should give rise to this different locking/waiting behaviour when nothing else is running and the same execution plan is in operation is a little beyond my skills (I'm no DBA!).

    As a result, I have decided that 4 days is enough of anyone's time to waste on something like this, so I will grudgingly code around it (if I'm honest, the stored procedure needed re-coding to be more incremental instead of re-calculating all data each week anyway), and chalk this one down to experience.

    I'll leave the question open; big thanks to everyone who put their hat in the ring. It was all useful, and if anyone comes up with anything further, I'd love to hear some more options... and if anyone finds this post as a result of seeing this behaviour in their own environment, then hopefully there are some pointers here that you can try yourself, and hopefully see further than we did. I'm ready for my weekend now!

    -James
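    For anyone chasing the same symptom, one low-cost experiment (a sketch, not a confirmed fix for this case) is to force the session SET options SSMS uses before calling the proc, since SQL Server caches separate plans per SET-option combination and ARITHABORT in particular commonly differs between SSMS and JDBC drivers:

        Statement s = mCon.createStatement();
        s.execute("SET ARITHABORT ON");   // match SSMS's default session option
        s.close();

        CallableStatement stmt = mCon.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows");
        stmt.execute();
        stmt.close();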


  • Designing interfaces: predict methods needed, discipline yourself and deal with code that comes to mind

    - by fireeyedboy
    Was: "Design by contract: predict methods needed, discipline yourself and deal with code that comes to mind"

    I like the idea of designing by contract a lot (at least, as far as I understand the principle). I believe it means you define interfaces first before you start implementing actual code, right?

    However, from my limited experience (3 OOP years now) I usually can't resist the urge to start coding pretty early, for several reasons: because my limited experience has shown me I am unable to predict what methods I will be needing in the interface, so I might as well start coding right away; or because I am simply too impatient to write out the whole interfaces first; or, when I do try it, I still wind up implementing bits of code already, because I fear I might forget this or that important bit of code that springs to mind when I am designing the interfaces.

    As you see, especially with the last two points, this leads to a very disorderly way of doing things. Tasks get mixed up. I should draw a clear line between designing interfaces and actual coding. If you, unlike me, are a good/disciplined planner, as intended above, how do you:

    1. ...know the majority of methods you will be needing up front so well? Especially if it's components that implement stuff you are not familiar with yet.
    2. ...resist the urge to start coding right away?
    3. ...deal with code that comes to mind when you are designing the interfaces?

    UPDATE: Thank you for the answers so far. Valuable insights! And... I stand corrected; it seems I misinterpreted the idea of Design By Contract. For clarity, what I actually meant was: "coming up with interface methods before implementing the actual components".

    An additional thing that came up in my mind is related to point 1): b) How do you know the majority of components you will be needing? How do you flesh out these things before you start actually coding? For argument's sake, let's say I'm a novice with the MVC pattern, and I wanted to implement such a component/architecture. A naive approach would be to think of: a front controller, some abstract action controller, some abstract view... and be done with it, so to speak. But, being more familiar with the MVC pattern, I know now that it makes sense to also have: a request object, a router, a dispatcher, a response object, view helpers, etc., etc.

    If you map this idea to some completely new component you want to develop, with which you have no experience yet, how do you come up with these sorts of additional components without actually coding the thing, and stumble upon the ideas that way? How would you know up front how fine-grained some components should be? Is this a matter of disciplining yourself to think it out thoroughly? Or is it a matter of being good at thinking in abstractions?


  • Exposing model object using bindings in custom NSCell of NSTableView

    - by Hooligancat
    I am struggling with what I would think would be a relatively common task. I have an NSTableView that is bound to its array via an NSArrayController. The array controller has its content set to an NSMutableArray that contains one or more NSObject instances of a model class. What I don't know how to do is expose the model inside the NSCell subclass in a way that is bindings-friendly.

    For the purpose of illustration, we'll say that the model is a person consisting of a first name, last name, age and gender. Thus the model would appear something like this:

        @interface PersonModel : NSObject {
            NSString * firstName;
            NSString * lastName;
            NSString * gender;
            int * age;
        }

    Obviously with the appropriate setters, getters, init etc. for the class. In my controller class I define an NSTableView, NSMutableArray and an NSArrayController:

        @interface ControllerClass : NSObject {
            IBOutlet NSTableView * myTableView;
            NSMutableArray * myPersonArray;
            IBOutlet NSArrayController * myPersonArrayController;
        }

    Using Interface Builder I can easily bind the model to the appropriate columns:

        myPersonArray --> myPersonArrayController --> table column binding

    This works fine. So I remove the extra columns, leaving one column hidden that is bound to the NSArrayController (this creates and keeps the association between each row and the NSArrayController), so that I am down to one visible column in my NSTableView and one hidden column. I create an NSCell subclass and put in the appropriate drawing method to create the cell. In my awakeFromNib I establish the custom NSCell subclass:

        PersonModel * aCustomCell = [[[PersonModel alloc] init] autorelease];
        [[myTableView tableColumnWithIdentifier:@"customCellColumn"] setDataCell:aCustomCell];

    This, too, works fine from a drawing perspective. I get my custom cell showing up in the column, and it repeats for every managed object in my array controller. If I add an object to or remove an object from the array controller, the table updates accordingly.

    However... I was under the impression that my PersonModel object would be available from within my NSCell subclass. But I don't know how to get to it. I don't want to set each NSCell using setters and getters, because then I'm breaking the whole model concept by storing data in the NSCell instead of referencing it from the array controller. And yes, I do need a custom NSCell, so having multiple columns is not an option.

    Where to from here? In addition to the Google and StackOverflow search, I've done the obligatory walk through Apple's docs and don't seem to have found the answer. I have found a lot of references that beat around the bush but nothing involving an NSArrayController. The controller makes life very easy when binding to other elements of the model entity (such as a master/detail scenario). I have also found a lot of references (although no answers) to doing this with Core Data, but I'm not using Core Data.

    As per the norm, I'm very grateful for any assistance that can be offered!
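    One direction worth trying, sketched here with heavy caveats - the key path is an assumption, NSCell copies its objectValue so PersonModel would need to adopt NSCopying, and none of this is tested: bind the column's value straight to the controller's arrangedObjects, so each cell's objectValue becomes the PersonModel for that row, which the NSCell subclass can then draw from.

        // In awakeFromNib: bind the column's value to the model objects themselves.
        [[myTableView tableColumnWithIdentifier:@"customCellColumn"]
            bind:NSValueBinding
            toObject:myPersonArrayController
            withKeyPath:@"arrangedObjects"
            options:nil];

        // In the NSCell subclass, objectValue is now the row's PersonModel:
        - (void)drawInteriorWithFrame:(NSRect)cellFrame inView:(NSView *)controlView
        {
            PersonModel *person = [self objectValue];
            NSString *name = [NSString stringWithFormat:@"%@ %@",
                                       [person firstName], [person lastName]];
            [name drawInRect:cellFrame withAttributes:nil];
        }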


  • How do I make Linux recognize a new SATA /dev/sda drive I hot swapped in without rebooting?

    - by Philip Durbin
    Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized: [root@fs-2 ~]# tail -18 /var/log/messages May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake } May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset May 5 16:54:45 fs-2 kernel: ata1: soft resetting link May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:54:55 fs-2 kernel: ata1: soft resetting link May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:05 fs-2 kernel: ata1: soft resetting link May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps May 5 16:55:40 fs-2 kernel: ata1: soft resetting link May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up May 5 16:55:45 fs-2 kernel: ata1: EH complete I tried a couple things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh but they didn't work: [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan -bash: echo: write error: Invalid argument [root@fs-2 ~]# [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l [snip] 0 new device(s) found. 0 device(s) removed. [root@fs-2 ~]# [root@fs-2 ~]# ls /dev/sda ls: /dev/sda: No such file or directory I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time, how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting? The operating system in question is RHEL5.3: [root@fs-2 ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.3 (Tikanga) The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS. 
Here is the lspci output: [root@fs-2 ~]# lspci 00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2) 00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3) 00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3) 00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1) 00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2) 00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1) 00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2) 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) 04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) Update: In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers to look more into the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported hardware-wise in terms of hot swap for that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us.
Here is the lspci output: [root@www-1 ~]# lspci 00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2) 00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3) 00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3) 00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1) 00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2) 00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1) 00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2) 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) 04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)
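    For what it's worth, the "write error: Invalid argument" above is most likely just the scan syntax: the sysfs scan file expects three space-separated fields (channel, target, LUN), not a bare "---". A hedged sketch, assuming the controller driver supports hotplug at all:

    # rescan host0 with wildcards for channel, target and LUN
    echo "- - -" > /sys/class/scsi_host/host0/scan
    # repeat for the other hostN entries, then check for the new disk
    dmesg | tail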

    Read the article

  • MySQL Query GROUP_CONCAT Over Multiple Rows

    - by PeteGO
    I'm getting name and address data out of generic question / answer data to create some kind of normalised reporting database. The query I've got uses group_concat and works for individual sets of questions but not for multiple sets. I've tried to simplify what I'm doing by using just forename and surname and just 3 records, 2 for 1 person and 1 for another. In reality though there are more than 300,000 records. Example of results with qs.Id = 1. QuestionSetId Forename Surname ------------------------------------------------------- 1 Bob Jones Example of results with qs.Id IN (1, 2, 3). QuestionSetId Forename Surname ------------------------------------------------------- 3 Bob,Bob,Frank Jones,Jones,Smith What I would like to see for qs.Id IN (1, 2, 3). QuestionSetId Forename Surname ------------------------------------------------------- 1 Bob Jones 2 Bob Jones 3 Frank Smith So how can I make the 2nd example return a separate row for each set of name and address information? I realise the current way the data is stored is "questionable" but I cannot change the way the data is stored. I can get sets of individual answers but not sure how to combine the others. My simplified Schema that I cannot change: CREATE TABLE StaticQuestion ( Id INT NOT NULL, StaticText VARCHAR(500) NOT NULL); CREATE TABLE Question ( Id INT NOT NULL, Text VARCHAR(500) NOT NULL); CREATE TABLE StaticQuestionQuestionLink ( Id INT NOT NULL, StaticQuestionId INT NOT NULL, QuestionId INT NOT NULL, DateEffective DATETIME NOT NULL); CREATE TABLE Answer ( Id INT NOT NULL, Text VARCHAR(500) NOT NULL); CREATE TABLE QuestionSet ( Id INT NOT NULL, DateEffective DATETIME NOT NULL); CREATE TABLE QuestionAnswerLink ( Id INT NOT NULL, QuestionSetId INT NOT NULL, QuestionId INT NOT NULL, AnswerId INT NOT NULL, StaticQuestionId INT NOT NULL); Some example data for only forename and surname. INSERT INTO StaticQuestion (Id, StaticText) VALUES (1, 'FirstName'), (2, 'LastName'); INSERT INTO Question (Id, Text) VALUES (1, 'What is your first name?'), (2, 'What is your forename?'), (3, 'What is your Surname?'); INSERT INTO StaticQuestionQuestionLink (Id, StaticQuestionId, QuestionId, DateEffective) VALUES (1, 1, 1, '2001-01-01'), (2, 1, 2, '2008-08-08'), (3, 2, 3, '2001-01-01'); INSERT INTO Answer (Id, Text) VALUES (1, 'Bob'), (2, 'Jones'), (3, 'Bob'), (4, 'Jones'), (5, 'Frank'), (6, 'Smith'); INSERT INTO QuestionSet (Id, DateEffective) VALUES (1, '2002-03-25'), (2, '2009-05-05'), (3, '2009-08-06'); INSERT INTO QuestionAnswerLink (Id, QuestionSetId, QuestionId, AnswerId, StaticQuestionId) VALUES (1, 1, 1, 1, 1), (2, 1, 3, 2, 2), (3, 2, 2, 3, 1), (4, 2, 3, 4, 2), (5, 3, 2, 5, 1), (6, 3, 3, 6, 2); Just in case SQLFiddle is down here are the 3 queries from the examples I've linked to: 1: - working query but only on 1 set of data. 
SELECT MAX(QuestionSetId) AS QuestionSetId, GROUP_CONCAT(Forename) AS Forename, GROUP_CONCAT(Surname) AS Surname FROM (SELECT x.QuestionSetId, CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename, CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname FROM (SELECT (SELECT link.StaticQuestionId FROM StaticQuestionQuestionLink link WHERE link.Id = qa.QuestionId AND link.DateEffective <= qs.DateEffective AND link.StaticQuestionId IN (1, 2) ORDER BY link.DateEffective DESC LIMIT 1) AS StaticQuestionId, a.Text, qa.QuestionSetId FROM QuestionSet qs INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId INNER JOIN Answer a ON qa.AnswerId = a.Id WHERE qs.Id IN (1)) x) y 2: - working query but undesired results on multiple sets of data. SELECT MAX(QuestionSetId) AS QuestionSetId, GROUP_CONCAT(Forename) AS Forename, GROUP_CONCAT(Surname) AS Surname FROM (SELECT x.QuestionSetId, CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename, CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname FROM (SELECT (SELECT link.StaticQuestionId FROM StaticQuestionQuestionLink link WHERE link.Id = qa.QuestionId AND link.DateEffective <= qs.DateEffective AND link.StaticQuestionId IN (1, 2) ORDER BY link.DateEffective DESC LIMIT 1) AS StaticQuestionId, a.Text, qa.QuestionSetId FROM QuestionSet qs INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId INNER JOIN Answer a ON qa.AnswerId = a.Id WHERE qs.Id IN (1, 2, 3)) x) y 3: - working query on multiple sets of data only on 1 field (answer) though. SELECT qs.Id AS QuestionSet, a.Text AS Answer FROM QuestionSet qs INNER JOIN QuestionAnswerLink qalink ON qs.Id = qalink.QuestionSetId INNER JOIN StaticQuestionQuestionLink sqqlink ON qalink.QuestionId = sqqlink.QuestionId INNER JOIN Answer a ON qalink.AnswerId = a.Id WHERE sqqlink.StaticQuestionId = 1 /* FirstName */ AND sqqlink.DateEffective = (SELECT DateEffective FROM StaticQuestionQuestionLink WHERE StaticQuestionId = 1 AND DateEffective <= qs.DateEffective ORDER BY DateEffective DESC LIMIT 1)
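    For reference, a hedged sketch of one likely fix (untested): keep the inner query from example 2 unchanged, but GROUP BY the set id instead of collapsing everything with MAX. GROUP_CONCAT ignores the NULLs produced by the CASE expressions, so each set keeps only its own forename and surname:

    SELECT x.QuestionSetId,
           GROUP_CONCAT(CASE x.StaticQuestionId WHEN 1 THEN x.Text END) AS Forename,
           GROUP_CONCAT(CASE x.StaticQuestionId WHEN 2 THEN x.Text END) AS Surname
    FROM ( /* the inner query from example 2, unchanged */ ) x
    GROUP BY x.QuestionSetId;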

    Read the article

  • Oracle data warehouse design - fact table acting as a dimension?

    - by Elizabeth
    THANKS: Both answers here are very helpful, but I could only pick one. I really appreciate the advice! Our data warehouse will be used more for workflow reports than traditional analytical reports. Our users care about "current picture" far more than history. (though history matters, too.) We are a government entity that does not have costs or related calculations. Mostly just counts of people within given locations and with related history. We are using Oracle, and I have found a distinct advantage in using the star join whenever possible and would like to rearchitect everything to as closely resemble the star schema as is reasonable for our business uses. Speed in this DW is vital, and a number of tests have already proven the star schema approach to me. Our "person" table is key - it contains over 4 million records and will be the most frequently used source in queries. It can be seen at the center of a star with multiple dimensions (like age, gender, affiliation, location, etc.). It is a very LONG table, particularly when I join it to the address and contact information. However, it is more like a dimension table when we start looking at history. For example, there are two different history tables that have a person key pointing to the person table. One has over 20 million records and the other has almost 50 million and grows daily. Is this table a fact table or a dimension table? Can one work as both? If so, is that going to be a big performance problem? Is it common to query more off of a dimension than a fact? What happens if a DIFFERENT fact table that uses the person table as a dimension is actually only 60,000 records (much smaller)? I think my problem is that our data and use of it does not fit with the commonly used examples of star schemas. CLARIFICATION: Some good thoughts have been added below, but perhaps I left too much out to really explain well. Here's some more info: We handle a voter database. We don't have any measures except voter counts by various groups: voter counts by party, by age, by location; voter counts by ballot type and election, by ballot status and election, etc. We do have a "voting history" log as well as an activity audit log (change of address, party, etc.). We have information on which voters are election workers and all that related information. I figure I'll get to the peripheral stuff later. For now I'm focusing on our two major "business processes": voter registration (which IS a voter) and election turnout. In the first, voter is a fact. In the second, voter is a dimension, along with party, election, and type of ballot. (and in case anyone is worried - no we don't know HOW people vote. Just that they do. LOL ) I hope that clarifies things a bit.

    Read the article

  • Rails & combo box change event: Help make this obtrusive javascript unobtrusive

    - by DJTripleThreat
    Ok so a friend of mine gave some help with a prototype/obtrusive solution to this but it's not quite there. Also, I want to make this unobtrusive instead of using the observe_field function that Rails gives me. I don't want to use Prototype either because I'm more familiar with jQuery. Here's my problem: I have an Event that can have multiple ServiceTypes and a ServiceType can belong to many Events. A many-to-many relationship between these two exists as an OfferedService. When creating an event, I have a drop down with a list of TimeAllotments that are something like 10 minutes, 12 minutes, 15 minutes, 20 minutes, 30 minutes etc. When the user selects one of these choices, I want a div tag to be filled with a list of ServiceTypes that are associated with this TimeAllotment. So for example, the user selects "10 minutes" and then the div repopulates with services that last 10 minutes. Here is what I have so far: ... some erb code etc and then <fieldset> <legend><%= f.label :time_allotment, "Size of the Appointment Slots:" %></legend> <div> <span class="field-group"> <div> <!-- TimeAllotment is a tabless model which is why this is done like so... --> <%= select("event_service", "time_allotment", TimeAllotment.all.collect {|ta| [ta.title, ta.value]}, {:prompt => true}) %> </div> </span> </div> <div style="clear:both;"></div> Services: <div> <span class="field-group"> <!-- this div right here needs to be repopulated when the above select changes. --> <div id="services"> <% for service_type in ServiceType.all %> <div> <%= check_box_tag "event_service[service_type_ids][]", service_type.id, false %> <%=h service_type.title %> </div> <% end %> </div> </span> </div> <div class="clear"></div> </fieldset> ok so right now ALL of the services are there to be chosen from. I want them to change based on what is selected in the combobox event_service_time_allotment. Can someone help me get pointed in the right direction? I have looked at Ryan's Railscasts for using jQuery but it's not helpful because he deals with ajax calls for the create action. This would be for the new or edit action. I have a new.js.erb but it doesn't get loaded when calling the new action. I'm super lost as far as getting jQuery to work with my application. I think that if someone can just show me how to make an alert pop up when I change the combo box, and how to return a dataset using ajax the right way, I think I can figure out the rest. Thanks, I know this is super complicated so any helpful answers will get an upvote.
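    Since the question explicitly asks for it, a minimal unobtrusive sketch in jQuery. The /service_types URL and the idea that it returns the checkbox list as an HTML fragment are assumptions for illustration, not part of the original app:

    // public/javascripts/application.js (loaded on the new/edit pages)
    $(document).ready(function() {
      $('#event_service_time_allotment').change(function() {
        alert('Selected allotment: ' + $(this).val()); // confirms the event fires
        // hypothetical Rails action that renders the matching services as HTML
        $.get('/service_types', { time_allotment: $(this).val() }, function(html) {
          $('#services').html(html);
        });
      });
    });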

    Read the article

  • Macports irssi & perl5 installation issues

    - by Dmitri DB
    Long time reader, first time poster. Big, appreciative thanks for everyone's collective questioning and answering here and at stackoverflow, it's helped me quite a lot over the time I've been learning answers through these sites! Apologies in advance if I didn't search hard enough on posts already up on this site to find out what I could do about this issue, but I thought I'd just reach out for the sake of trying at least once. I've experienced this issue while starting up my macports-installed version of irssi: 13:25 -!- Irssi: Error in script dispatch: 13:25 Can't locate lib.pm in @INC (@INC contains: /opt/local/lib/perl5/site_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.4 /opt/local/lib/perl5/vendor_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/vendor_perl/5.12.4 /opt/local/lib/perl5/5.12.4/darwin-multi-2level /opt/local/lib/perl5/5.12.4 /opt/local/lib/perl5/site_perl/5.12.3/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.3 /opt/local/lib/perl5/site_perl /opt/local/lib/perl5/vendor_perl .) at (eval 18) line 1. 13:25 BEGIN failed--compilation aborted at (eval 18) line 1. 13:25 Huh, strange. I looked into it a bit: [email protected] /opt/local/lib/perl5 ?- find . -name "lib.pm" -ls 14673887 16 -r--r--r-- 1 root admin 6853 25 Jun 23:39 ./5.12.4/darwin-thread-multi- 2level/lib.pm [email protected] /opt/local/lib/perl5 ?- l 5.12.4/darwin-thread-multi-2level total 1864 drwxr-xr-x 55 root admin 1870 28 Jun 19:28 . drwxr-xr-x 158 root admin 5372 28 Jun 19:28 .. -rw-r--r-- 1 root admin 177814 25 Jun 23:39 .packlist drwxr-xr-x 6 root admin 204 28 Jun 19:28 B -r--r--r-- 1 root admin 25714 25 Jun 23:39 B.pm drwxr-xr-x 64 root admin 2176 28 Jun 19:28 CORE drwxr-xr-x 3 root admin 102 28 Jun 19:28 Compress -r--r--r-- 1 root admin 3000 25 Jun 23:39 Config.pm -r--r--r-- 1 root admin 228094 25 Jun 23:39 Config.pod -r--r--r-- 1 root admin 409 25 Jun 23:39 Config_git.pl -r--r--r-- 1 root admin 38759 25 Jun 23:39 Config_heavy.pl -r--r--r-- 1 root admin 21174 25 Jun 23:39 Cwd.pm -r--r--r-- 1 root admin 63535 25 Jun 23:39 DB_File.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 Data drwxr-xr-x 5 root admin 170 28 Jun 19:28 Devel drwxr-xr-x 4 root admin 136 28 Jun 19:28 Digest -r--r--r-- 1 root admin 25185 25 Jun 23:39 DynaLoader.pm drwxr-xr-x 22 root admin 748 28 Jun 19:28 Encode -r--r--r-- 1 root admin 29731 25 Jun 23:39 Encode.pm -r--r--r-- 1 root admin 6736 25 Jun 23:39 Errno.pm -r--r--r-- 1 root admin 5445 25 Jun 23:39 Fcntl.pm drwxr-xr-x 5 root admin 170 28 Jun 19:28 File drwxr-xr-x 3 root admin 102 28 Jun 19:28 Filter -r--r--r-- 1 root admin 1819 25 Jun 23:39 GDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Hash drwxr-xr-x 3 root admin 102 28 Jun 19:28 I18N drwxr-xr-x 11 root admin 374 28 Jun 19:28 IO -r--r--r-- 1 root admin 1404 25 Jun 23:39 IO.pm drwxr-xr-x 6 root admin 204 28 Jun 19:28 IPC drwxr-xr-x 4 root admin 136 28 Jun 19:28 List drwxr-xr-x 4 root admin 136 28 Jun 19:28 MIME drwxr-xr-x 3 root admin 102 28 Jun 19:28 Math -r--r--r-- 1 root admin 2519 25 Jun 23:39 NDBM_File.pm -r--r--r-- 1 root admin 4208 25 Jun 23:39 O.pm -r--r--r-- 1 root admin 15563 25 Jun 23:39 Opcode.pm -r--r--r-- 1 root admin 21011 25 Jun 23:39 POSIX.pm -r--r--r-- 1 root admin 58962 25 Jun 23:39 POSIX.pod drwxr-xr-x 5 root admin 170 28 Jun 19:28 PerlIO -r--r--r-- 1 root admin 2515 25 Jun 23:39 SDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Scalar -r--r--r-- 1 root admin 10837 25 Jun 23:39 Socket.pm -r--r--r-- 1 root admin 41003 25 Jun 23:39 Storable.pm drwxr-xr-x 4 root admin 
136 28 Jun 19:28 Sys drwxr-xr-x 3 root admin 102 28 Jun 19:28 Text drwxr-xr-x 5 root admin 170 28 Jun 19:28 Time drwxr-xr-x 3 root admin 102 28 Jun 19:28 Unicode -r--r--r-- 1 root admin 14462 25 Jun 23:39 attributes.pm drwxr-xr-x 38 root admin 1292 28 Jun 19:28 auto -r--r--r-- 1 root admin 19892 25 Jun 23:39 encoding.pm -r--r--r-- 1 root admin 6853 25 Jun 23:39 lib.pm -r--r--r-- 1 root admin 11044 25 Jun 23:39 mro.pm -r--r--r-- 1 root admin 997 25 Jun 23:39 ops.pm -r--r--r-- 1 root admin 13945 25 Jun 23:39 re.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 threads -r--r--r-- 1 root admin 33283 25 Jun 23:39 threads.pm So, it sort of seems to me that the permissions which perl5 got installed with for these modules have gotten mixed up somehow? I'm not really a perl user beyond enjoying it for massive directory-recursive find/replace operations within text files, so I haven't much of an idea what the permissions here are supposed to look like, and I'm not really sure how to go about determining how MacPorts has gone and installed perl this way when it's otherwise worked without failure for years now. Does anyone have any recommendations for the sanest path towards rectifying this issue? Also, is there any interesting reason as to why the MacPorts default for the perl5 port installs 5.12.4, and not 5.16.0, which has to be explicitly installed via the perl5.16 port? Thanks again!
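    One thing worth checking before touching permissions (world-readable files like those above are normally fine): the error's @INC entries read darwin-multi-2level, while lib.pm sits under darwin-thread-multi-2level. If that isn't just a transcription artifact, irssi's embedded perl and the installed modules disagree on the architecture name (threaded vs. non-threaded build). A hedged diagnostic sketch:

    # which architecture does the MacPorts perl report?
    perl -V:archname
    # which directories does it actually search?
    perl -le 'print for @INC'
    # compare against what is on disk
    ls /opt/local/lib/perl5/5.12.4/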

    Read the article

  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can, any ideas on correcting/changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a Trunk/Release system in place. We have a large website that we develop on and we store it all in SVN. Here's what we had in mind: We have trunk and a release branch All work gets checked into Trunk. When a feature is deemed ready for the next release it is merged into a Release branch. We only have one release branch and just tag "Latest" when we do a push to live We hope to be able to get all the files changed from Latest to Head to give us a zip that we can upload (any ideas on an easy way to do this via scripting?) So we set all this up and were very happy with ourselves. Except it's not working and here's why. We work on lots of different features/fixes/problems at once and they don't all get nicely checked in feature complete (but always working at least). Then sometimes you have to wait for Clients to sign off. As a result you end up with revisions which are "ready for live" being scattered with ones which are "still being worked on" in trunk. That means that the completed revisions are not getting merged in sequentially but out of order. I thought SVN could handle this, clever little thing it is, but apparently not. Here's an example: Pete changes some CSS to make a new button look pretty (Revision 1) Dave adds some CSS to the bottom of the same CSS file as Pete's for a new feature (Revision 2) Dave's mod gets the nod so he merges it into Release and commits it with a log message mentioning revision number and bug tracking id. Pete adds more buttons to finish this mod, no CSS changes here though (Revision 3) Pete then merges his mods (Revisions 1 and 3) into the Head of Release (which has Dave's merge in it) but this overwrites Dave's CSS additions, which now disappear completely. This leads to the site being broken and the Release branch being pretty much useless. So we tried some other ideas like reverting Release back to "Latest" and then just merging in all the Revisions 1, 2 and 3 in order. This worked fine until we had Revision 4 which was not ready for live and Revision 5 which was. Suddenly we are getting ourselves in knots again with exactly the same problem! Ok so take three. Revert to Latest, merge in Revision 5, then do an update back to Head. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually but it's not something I want to do regularly, ideally I want to script our deployment but can't while Release is in such a mess. HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different non-sequential Revisions in Release. If it's not possible that's fine but how the heck are we meant to get stuff live easily? We can't branch for every single change, the site takes 30 minutes+ to check out; it would take too long. Side note, we are using TortoiseSVN so can we keep command line examples to a minimum in any answers? Latest version of TSVN and SVN Version 1.6 so we have the funky merge tracking etc. EDIT: An excellent blog post which deals with the dev/release cycle (although using Git, but still relevant); I thought everyone would like to read it if they found this question interesting. 
(http://nvie.com/git-model) EDIT 2: I wrote a blog post on how to show which branch you are working on in your website which others have asked me about (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!)
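    On the scripting question (keeping the command line to a minimum as requested; TortoiseSVN exposes the same operations through its merge and diff dialogs), a hedged sketch with placeholder repository URLs:

    # cherry-pick only the revisions cleared for live into a release working copy
    svn merge -c 5 http://server/repo/trunk
    # list the files that differ between the Latest tag and the release head,
    # as a starting point for building the upload zip
    svn diff --summarize http://server/repo/tags/Latest http://server/repo/branches/release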

    Read the article

  • Implementing a popularity algorithm in Django

    - by TheLizardKing
    I am creating a site similar to reddit and hacker news that has a database of links and votes. I am implementing hacker news' popularity algorithm and things are going pretty swimmingly until it comes to actually gathering up these links and displaying them. The algorithm is simple. Y Combinator's Hacker News: Popularity = (p - 1) / (t + 2)^1.5, i.e. votes divided by an age factor, where p is the votes (points) from users and t is the time since submission in hours. p is reduced by 1 to negate the submitter's vote. The age factor is (time since submission in hours plus two) to the power of 1.5. I asked a very similar question over yonder http://stackoverflow.com/questions/1964395/complex-ordering-in-django but instead of contemplating my options I chose one and tried to make it work because that's how I did it with PHP/MySQL but I now know Django does things a lot differently. My models look something (exactly) like this class Link(models.Model): category = models.ForeignKey(Category) user = models.ForeignKey(User) created = models.DateTimeField(auto_now_add = True) modified = models.DateTimeField(auto_now = True) fame = models.PositiveIntegerField(default = 1) title = models.CharField(max_length = 256) url = models.URLField(max_length = 2048) def __unicode__(self): return self.title class Vote(models.Model): link = models.ForeignKey(Link) user = models.ForeignKey(User) created = models.DateTimeField(auto_now_add = True) modified = models.DateTimeField(auto_now = True) karma_delta = models.SmallIntegerField() def __unicode__(self): return str(self.karma_delta) and my view: def index(request): popular_links = Link.objects.select_related().annotate(karma_total = Sum('vote__karma_delta')) return render_to_response('links/index.html', {'links': popular_links}) Now from my previous question, I am trying to implement the algorithm using the sorting function. An answer from that question seems to think I should put the algorithm in the select and sort then. I am going to paginate these results so I don't think I can do the sorting in Python without grabbing everything. Any suggestions on how I could efficiently do this? EDIT This isn't working yet but I think it's a step in the right direction: from django.shortcuts import render_to_response from linkett.apps.links.models import * def index(request): popular_links = Link.objects.select_related() popular_links = popular_links.extra( select = { 'karma_total': 'SUM(vote.karma_delta)', 'popularity': '(karma_total - 1) / POW(2, 1.5)', }, order_by = ['-popularity'] ) return render_to_response('links/index.html', {'links': popular_links}) This errors out into: Caught an exception while rendering: column "karma_total" does not exist LINE 1: SELECT ((karma_total - 1) / POW(2, 1.5)) AS "popularity", (S... EDIT 2 Better error? TemplateSyntaxError: Caught an exception while rendering: missing FROM-clause entry for table "vote" LINE 1: SELECT ((vote.karma_total - 1) / POW(2, 1.5)) AS "popularity... My index.html is simply: {% block content %} {% for link in links %} karma-up {{ link.karma_total }} karma-down {{ link.title }} Posted by {{ link.user }} to {{ link.category }} at {{ link.created }} {% empty %} No Links {% endfor %} {% endblock content %} EDIT 3 So very close! Again, all these answers are great but I am concentrating on a particular one because I feel it works best for my situation. 
from django.db.models import Sum from django.shortcuts import render_to_response from linkett.apps.links.models import * def index(request): popular_links = Link.objects.select_related().extra( select = { 'popularity': '(SUM(links_vote.karma_delta) - 1) / POW(2, 1.5)', }, tables = ['links_link', 'links_vote'], order_by = ['-popularity'], ) return render_to_response('links/test.html', {'links': popular_links}) Running this I am presented with an error hating on my lack of group by values. Specifically: TemplateSyntaxError at / Caught an exception while rendering: column "links_link.id" must appear in the GROUP BY clause or be used in an aggregate function LINE 1: ...karma_delta) - 1) / POW(2, 1.5)) AS "popularity", "links_lin... Not sure why my links_link.id wouldn't be in my group by but I am not sure how to alter my group by, django usually does that.
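    For reference, the ranking formula itself in plain Python (framework-agnostic); if the extra()/GROUP BY wrangling keeps fighting back, a workable fallback is computing this per page of results and sorting in Python:

    def popularity(points, age_hours):
        """Hacker News ranking: votes minus the submitter's own, damped by age."""
        return (points - 1) / (age_hours + 2) ** 1.5

    # a 5-point link submitted 4 hours ago
    print(popularity(5, 4))  # ~0.272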

    Read the article

  • Problem installing Ubuntu 10.04 64 bit side by side with Vista by using a bootable USB drive. What n

    - by Adam Siddhi
    What happened I decided to install Ubuntu 10.04 64 bit side by side with Vista Home Premium (I guess on another partition) with a USB stick. I found instructions on how to do this here: https://help.ubuntu.com/community/Installation/FromUSBStick To create the bootable USB drive I had to download a program called Unetbootin. That process was simple enough. All I had to do was just choose the disk image option, select the ubuntu-10.04-desktop-amd64.iso image, make sure it recognizes my USB drive and then press OK. It takes only like a few minutes to create a working bootable USB drive. Then I have to restart my computer, enter the BIOS, select my USB drive as the first boot drive, save options and continue with booting up. After this Ubuntu actually loads up. I think this is known as the Live version of Ubuntu so you can try it out before fully installing it. Anyway, on the Ubuntu 10.04 desktop I saw an installer. I clicked it and began the installation process. Just so you know, I tried installing it 2 times. I will explain what happened each time: The first time I tried installing Ubuntu 10.04 I got stuck at step 4 of 7. I remember selecting the last option in the window which was Specify Partitions Manually (Advanced). I made my partition for Ubuntu like 52 gigs. I clicked forward and a little pop up window appeared saying Please Wait. So the installation process stalled on this window so I closed out of it and quit the installation process. So at this point I was worried because I had already selected the partition size and assumed it started making it. Since it stalled I had to quit out though. Anyways, once again I reached step 4 of 7 and decided to select the first option which is Install them side by side choosing between them each startup. I figured this was the safe way to go. I did that and the pop up window saying Please Wait popped up again but lasted only like 10 seconds. Then I got to I guess step 6 where it asks you to enter your desired name and password. Did that and clicked forward. The Ubuntu 10.04 installation load screen appeared and the loading bar at the bottom started filling up. So I got to 83% and stalled during the Importing other profile information (I think it was called this. I had the option to do this during I think step 6) process. So at this point I decided to stop the installation process. I was getting very nervous. I tried to restart the computer but all that happened was that Ubuntu restarted. I finally got the computer to restart. I was pretty sure I had screwed something up big time by this point. As my computer was restarting I entered BIOS again and switched back to it booting from my main hard drive containing Vista. Saved it and continued the boot process. My worst fears were confirmed as Vista would not boot up. I mean I saw the little Microsoft Windows choppy animated green loading bar at the bottom of the screen and then boom! It decided to restart. When it restarted I had the option to run a memory test check to see if there was anything that needed to be repaired. That took like 20 minutes and at the end I saw that I did indeed have to repair something. I had to go through 2 repair processes. After each I had to restart the computer. The 2nd time it went through the repair process it said that it could not fully repair the damage. I was scared and restarted but Vista did load up. 
    I got to my desktop and saw a message saying something like "Repairs have been made, please restart for changes to take effect". I noticed that some Notification icons were missing and I could not hear volume in a video. Things were a bit funky. So I did restart and here I am. Now what?! So since I got back into Vista and thankfully have a working Internet connection I am trying to find answers to my problem (that is why I am writing this post). I am scared that I have partitioned my hard drive 2 times after researching Installing Ubuntu 10.04 and seeing this post http://techie-buzz.com/foss/ubuntu-10-04-lts-installation-guide.html The author shows screen shots of installing Ubuntu 10.04. He shows the image of step 4 of 7 with a caption at the bottom. I will recreate it below: Select a partitioning option. Unless you want to format all the hard drive and install Ubuntu afresh, select the last option and proceed. Questions If I have indeed partitioned my HD 2 times (which I am sure I have), how do I get to a point where I can see all my bad, unfinished Ubuntu partitions and get rid of them? How do I clean this big mess up? And how can I ensure that this mess will not happen next time I try installing Ubuntu 10.04? Thank you Adam
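    On the "how do I even see the partitions" part: boot the live USB again and inspect the disk before deleting anything. A hedged sketch (device names vary by machine):

    # list every partition, including any the failed installs created
    sudo fdisk -l
    # then delete the stray ext4/swap partitions graphically with GParted,
    # which is on the 10.04 live desktop under System > Administration
    sudo gparted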

    Read the article

  • Is SHA-1 secure for password storage?

    - by Tgr
    Some people throw around remarks like "SHA-1 is broken" a lot, so I'm trying to understand what exactly that means. Let's assume I have a database of SHA-1 password hashes, and an attacker with a state of the art SHA-1 breaking algorithm and a botnet with 100,000 machines gets access to it. (Having control over 100k home computers would mean they can do about 10^15 operations per second.) How much time would they need to find out the password of any one user? find out the password of a given user? find out the password of all users? find a way to log in as one of the users? find a way to log in as a specific user? How does that change if the passwords are salted? Does the method of salting (prefix, postfix, both, or something more complicated like xor-ing) matter? Here is my current understanding, after some googling. Please correct me in the answers if I misunderstood something. If there is no salt, a rainbow table attack will immediately find all passwords (except extremely long ones). If there is a sufficiently long random salt, the most effective way to find out the passwords is a brute force or dictionary attack. Neither collision nor preimage attacks are any help in finding out the actual password, so cryptographic attacks against SHA-1 are no help here. It doesn't even matter much what algorithm is used - one could even use MD5 or MD4 and the passwords would be just as safe (there is a slight difference because computing a SHA-1 hash is slower). To evaluate how safe "just as safe" is, let's assume that a single SHA-1 run takes 1000 operations and passwords contain uppercase, lowercase and digits (that is, 60 characters). That means the attacker can test 10^15 * 60 * 60 * 24 / 1000 ~= 10^17 potential passwords a day. For a brute force attack, that would mean testing all passwords up to 9 characters in 3 hours, up to 10 characters in a week, up to 11 characters in a year. (It takes 60 times as much for every additional character.) A dictionary attack is much, much faster (even an attacker with a single computer could pull it off in hours), but only finds weak passwords. To log in as a user, the attacker does not need to find out the exact password; it is enough to find a string that results in the same hash. This is called a first preimage attack. As far as I could find, there are no preimage attacks against SHA-1. (A brute-force attack would take 2^160 operations, which means our theoretical attacker would need 10^30 years to pull it off. Limits of theoretical possibility are around 2^60 operations, at which the attack would take a few years.) There are preimage attacks against reduced versions of SHA-1 with negligible effect (for the reduced SHA-1 which uses 44 steps instead of 80, attack time is down from 2^160 operations to 2^157). There are collision attacks against SHA-1 which are well within theoretical possibility (the best I found brings the time down from 2^80 to 2^52), but those are useless against password hashes, even without salting. In short, storing passwords with SHA-1 seems perfectly safe. Did I miss something?
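    The brute-force arithmetic above checks out under the question's stated assumptions; a quick sanity check in Python:

    # assumptions from the question: 10^15 ops/sec botnet, 1000 ops per SHA-1 hash,
    # and a 60-character alphabet
    tests_per_day = 10**15 * 60 * 60 * 24 / 1000   # ~8.6e16 candidates per day
    for length in (9, 10, 11):
        days = 60**length / tests_per_day
        print(length, round(days, 1), "days")
    # -> 9 chars: ~0.1 days (about 3 hours); 10: ~7 days; 11: ~420 days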

    Read the article

  • What are the Open Source alternatives to WPF/XAML?

    - by Evan Plaice
    If we've learned anything from HTML/CSS it's that declarative languages (like XML) work best to describe User Interfaces because: It's easy to build code preprocessors that can template the code effectively. The code is in a well-defined, well-structured (ideally) format so it's easy to parse. The technology to effectively parse or crawl an XML based source file already exists. The UI's scripted code becomes much simpler and easier to understand. It's simple enough that designers are able to design the interface themselves. Programmers suck at creating UIs so it should be made easy enough for designers. I recently took a look at the meat of a WPF application (i.e. the XAML) and it looks surprisingly familiar to the declarative language style used in HTML. It's apparent to me that the current state of desktop UI development is largely fractionalized, otherwise there wouldn't be so much duplicated effort in the domain of graphical user interface design (i.e. GTK, XUL, Qt, WinForms, WPF, etc.). There are 45 GUI platforms for Python alone. It seems reasonable to me that there should be a general purpose, open source, standardized, platform independent, markup language for designing desktop GUIs. Much like what the W3C made HTML/CSS into. WPF, or more specifically XAML, seems like a pretty likely step in the right direction. Now that the 'browser wars' are over should we look forward to a future of 'desktop GUI wars'? Note: This topic is relatively subjective in the attempt to be 'future-thinking.' I think that desktop GUI development in its current state sucks ((really)hard) and, even though WPF is still in its infancy, it presents a likely solution to the problem. Update: Thanks a lot for the info, keep it comin'. Here are the options I've gathered from the comments and answers. GladeXML Editor: Glade Interface Designer OS Platforms: All GUI Platform: GTK+ Languages: C (libglade), C++, C# (Glade#), Python, Ada, Pike, Perl, PHP, Eiffel, Ruby XRC (XML Resource) Editors: wxGlade, XRCed, wxDesigner, DialogBlocks (non-free) OS Platforms: All GUI Platform: wxWidgets Languages: C++, Python (wxPython), Perl (wxPerl), .NET (wx.NET) XML based formats that are either not free, not cross-platform, or language specific XUL Editor: Any basic text editor OS Platforms: Any OS running a browser that supports XUL GUI Platform: Gecko Engine? Languages: C++, Python, Ruby as plugin languages not base languages Note: I'm not sure if XUL deserves mentioning in this list because it's less of a desktop GUI language and more of a make-webapps-run-on-the-desktop language. Plus, it requires a browser to run. I.e., it's 'DHTML for the desktop.' CookSwing Editor: Eclipse via WindowBuilder, NetBeans 5.0 (non-free) via Swing GUI Builder aka Matisse OS Platforms: All GUI Platform: Java Languages: Java only XAML (Moonlight) Editor: MonoDevelop OS Platforms: Linux and other Unix/X11 based OSes only GUI Platforms: GTK+ Languages: .NET Note: XAML is not a pure Open Source format because Microsoft controls its terms of use including the right to change the terms at any time. Moonlight cannot legally be made to run on Windows or Mac. In addition, the only platform that is exempt from legal action is Novell. See this for a full description of what I mean.

    Read the article

  • BasicAuthProvider in ServiceStack

    - by Per
    I've got an issue with the BasicAuthProvider in ServiceStack. POST-ing to the CredentialsAuthProvider (/auth/credentials) is working fine. The problem is that when GET-ing (in Chrome): http://foo:pwd@localhost:81/tag/string/list the following is the result: Handler for Request not found: Request.HttpMethod: GET Request.HttpMethod: GET Request.PathInfo: /login Request.QueryString: System.Collections.Specialized.NameValueCollection Request.RawUrl: /login?redirect=http%3a%2f%2flocalhost%3a81%2ftag%2fstring%2flist which tells me that it redirected me to /login instead of serving the /tag/... request. Here's the entire code for my AppHost: public class AppHost : AppHostHttpListenerBase, IMessageSubscriber { private ITagProvider myTagProvider; private IMessageSender mySender; private const string UserName = "foo"; private const string Password = "pwd"; public AppHost( TagConfig config, IMessageSender sender ) : base( "BM App Host", typeof( AppHost ).Assembly ) { myTagProvider = new TagProvider( config ); mySender = sender; } public class CustomUserSession : AuthUserSession { public override void OnAuthenticated( IServiceBase authService, IAuthSession session, IOAuthTokens tokens, System.Collections.Generic.Dictionary<string, string> authInfo ) { authService.RequestContext.Get<IHttpRequest>().SaveSession( session ); } } public override void Configure( Funq.Container container ) { Plugins.Add( new MetadataFeature() ); container.Register<BeyondMeasure.WebAPI.Services.Tags.ITagProvider>( myTagProvider ); container.Register<IMessageSender>( mySender ); Plugins.Add( new AuthFeature( () => new CustomUserSession(), new AuthProvider[] { new CredentialsAuthProvider(), //HTML Form post of UserName/Password credentials new BasicAuthProvider(), //Sign-in with Basic Auth } ) ); container.Register<ICacheClient>( new MemoryCacheClient() ); var userRep = new InMemoryAuthRepository(); container.Register<IUserAuthRepository>( userRep ); string hash; string salt; new SaltedHash().GetHashAndSaltString( Password, out hash, out salt ); // Create test user userRep.CreateUserAuth( new UserAuth { Id = 1, DisplayName = "DisplayName", Email = "[email protected]", UserName = UserName, FirstName = "FirstName", LastName = "LastName", PasswordHash = hash, Salt = salt, }, Password ); } } Could someone please tell me what I'm doing wrong with either the SS configuration or how I am calling the service, i.e. why does it not accept the supplied user/pwd? Update1: Request/Response captured in Fiddler2 when only BasicAuthProvider is used. No Auth header sent in the request, but also no Auth header in the response. GET /tag/string/AAA HTTP/1.1 Host: localhost:81 Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8,sv;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: ss-pid=Hu2zuD/T8USgvC8FinMC9Q==; X-UAId=1; ss-id=1HTqSQI9IUqRAGxM8vKlPA== HTTP/1.1 302 Found Location: /login?redirect=http%3a%2f%2flocalhost%3a81%2ftag%2fstring%2fAAA Server: Microsoft-HTTPAPI/2.0 X-Powered-By: ServiceStack/3,926 Win32NT/.NET Date: Sat, 10 Nov 2012 22:41:51 GMT Content-Length: 0 Update2: Request/Response with HtmlRedirect = null. 
SS now answers with the Auth header, which Chrome then issues a second request for and authentication succeeds GET http://localhost:81/tag/string/Abc HTTP/1.1 Host: localhost:81 Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8,sv;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: ss-pid=Hu2zuD/T8USgvC8FinMC9Q==; X-UAId=1; ss-id=1HTqSQI9IUqRAGxM8vKlPA== HTTP/1.1 401 Unauthorized Transfer-Encoding: chunked Server: Microsoft-HTTPAPI/2.0 X-Powered-By: ServiceStack/3,926 Win32NT/.NET WWW-Authenticate: basic realm="/auth/basic" Date: Sat, 10 Nov 2012 22:49:19 GMT 0 GET http://localhost:81/tag/string/Abc HTTP/1.1 Host: localhost:81 Connection: keep-alive Authorization: Basic Zm9vOnB3ZA== User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8,sv;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: ss-pid=Hu2zuD/T8USgvC8FinMC9Q==; X-UAId=1; ss-id=1HTqSQI9IUqRAGxM8vKlPA==
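    For anyone hitting the same redirect: the 401 challenge in Update 2 is what you get once the auth feature's HTML redirect is disabled, so Basic auth can proceed instead of bouncing to /login. A sketch of that one change against the Configure method above, assuming the ServiceStack v3 AuthFeature.HtmlRedirect property:

    Plugins.Add( new AuthFeature( () => new CustomUserSession(),
        new AuthProvider[] {
            new CredentialsAuthProvider(),
            new BasicAuthProvider(),
        } ) {
        // null suppresses the redirect to /login; unauthenticated requests
        // then receive 401 + WWW-Authenticate so the browser can respond
        HtmlRedirect = null
    } );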

    Read the article
